> Mathematics can be seen as a logic game. You start with a set of assumptions and you come up with all the logical conclusions you can from that. Then, if someone else finds a situation that fits those assumptions, they can benefit from the pre-discovered logical conclusions. This means that if some conclusions require fewer assumptions, then those conclusions are more generally applicable

This is a really, really nice expression of something my mind's been hovering around for a while.



This is also part of why I am somewhat fascinated by the idea, and the current state, of Lean4 and mathlib. People put more and more formally verified proofs into mathlib, which in turn makes formally proving further theorems in mathlib easier.

If you start with nothing (like in the numbers game), simple proofs take a lot of ... just effort, because you have to specify a lot of rewrites and do a lot of manual work. In mathlib, however, systems like simp (the simplification system) or linarith ("there is a solution by linear arithmetic") seem to do a lot of the heavy, repetitive lifting by now.
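To give a feel for it, here's a minimal sketch of what those tactics look like in practice (assuming a Lean4 project with mathlib as a dependency; the toy examples are my own, not from mathlib):

    import Mathlib.Tactic

    -- Starting from nothing you would chain rewrites by hand; simp closes
    -- this automatically by applying simplification lemmas such as Nat.add_zero.
    example (n : Nat) : n + 0 = n := by simp

    -- linarith searches for a proof by linear arithmetic over the hypotheses.
    example (a b c : Int) (h1 : a < b) (h2 : b ≤ c) : a < c + 1 := by linarith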

It's a really interesting snowball effect. Sadly, everything I understand is most likely already in there, so I doubt I could contribute meaningfully, haha.


That's very interesting, I'm no mathematician but I should have a play around with it.

> Sadly, everything I understand is most likely already in there, so I doubt I could contribute meaningfully, haha.

I wouldn't be so sure - and even if so, remember there's enormous benefit to improving tooling around a system. If you want to be involved somehow, better devx, tutorials, output, packaging, and error messages all make a big difference to end users.

Edit -

As another thought, is there benefit in going through papers and translating that work into Lean4? I'm not really familiar enough with it, but if so, that may:

1. Find issues in current work, like Tao did in his own work

2. Add to a reusable body of work


Former computational mathematics major

You absolutely can contribute meaningfully

The maths world is incomprehensibly broad and deep, even if you just take the Erdős approach and go for interesting but shallow problems


It blows my mind to think of mathematics/logic almost like a huge cellular automaton. “axioms” don’t necessarily correspond to “truth”, to me they’re arbitrary constraints that can give rise to complexity. And sometimes the resulting systems can be useful


All of the puzzles of cosmology might actually be obvious if we had a few different fundamental theorems. But because we hit on some that almost work, and then built upon them a whole edifice of mathematics that is internally consistent and almost fits the universe, we keep beating on it, not realizing that backing up a little and then driving forward again at a slightly different angle might yield a simpler, more consistent, and more explanatory system.


Most mathematics has no application to science whatsoever. It's a huge parts bin which scientists delve into when they build their models. And then much of the work is in trying to shoehorn the mathematics into being tractable.

Mathematics also cannot prove its own internal consistency. This was famously shown by Gödel [1].

[1] https://en.wikipedia.org/wiki/Gödel%27s_incompleteness_theor...


Most mathematics originates from trying to solve physical or engineering problems. Typically, physicists have been at the forefront of mathematical research - this has only really changed significantly in the last few decades.

Also, mathematics as practiced is internally consistent. It is incomplete, though. That is how it stays clear of Gödel's result. Basically, Gödel's results showed that no matter how much we strive, there will always be propositions which might be true, but which we will not be able to prove are true. Unless of course we start using methods that sometimes prove false propositions, which we have not done.
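For reference, the usual statements (as I understand them), for any consistent, effectively axiomatized theory $T$ that interprets enough arithmetic:

    \text{First theorem: there is a sentence } G_T \text{ with } T \nvdash G_T \text{ and } T \nvdash \lnot G_T
    \text{Second theorem: } T \nvdash \operatorname{Con}(T)

The second one is the relevant point here: consistency is something we assume and have strong evidence for, not something the system can certify about itself.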


> Most mathematics originates from trying to solve physical or engineering problems.

Has this been true since the early 20th century? I have no feel for what constitutes "most" in the vast corpus of pure mathematics, so am not challenging your claim but rather am curious.


You're right, that might actually be wrong.

However, the claim I was actually thinking of, which I think is right, is that the maths used in the physical revolutions of the turn of the century (SR, QM, GR, and probably QFT, QED, and QCD as well) was invented by physicists, or by mathematicians working with physicists, for the express purpose of developing these theories, not the other way around.

Also, the foundations of mathematics and the first few thousand years of its development were indeed motivated by these kinds of concerns.


I wouldn't agree - consider the hyperbolic transforms used to describe spacetime "bending" in relativity:

https://en.wikipedia.org/wiki/History_of_Lorentz_transformat...

    In mathematics, transformations equivalent to what was later known as Lorentz transformations in various dimensions were discussed in the 19th century in relation to the theory of quadratic forms, hyperbolic geometry, Möbius geometry, and sphere geometry, which is connected to the fact that the group of motions in hyperbolic space, the Möbius group or projective special linear group, and the Laguerre group are isomorphic to the Lorentz group.
Mathematicians were following up on "what happens when you discard one of Euclid's axioms" and discovering there was an entire world of consistent hyperbolic geometry and more.

Some time later:

    In physics, Lorentz transformations became known at the beginning of the 20th century, when it was discovered that they exhibit the symmetry of Maxwell's equations. Subsequently, they became fundamental to all of physics, because they formed the basis of special relativity in which they exhibit the symmetry of Minkowski spacetime, making the speed of light invariant between different inertial frames.
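To make the "hyperbolic" connection concrete (my own summary, not from the linked article): writing the rapidity as $\varphi$ with $\tanh\varphi = v/c$, a boost is literally a hyperbolic rotation,

    \begin{pmatrix} ct' \\ x' \end{pmatrix}
      = \begin{pmatrix} \cosh\varphi & -\sinh\varphi \\ -\sinh\varphi & \cosh\varphi \end{pmatrix}
        \begin{pmatrix} ct \\ x \end{pmatrix}

which preserves $(ct)^2 - x^2$ the way an ordinary rotation preserves $x^2 + y^2$. That quadratic-form invariance is exactly the structure the 19th-century geometers had been studying.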
If you read mathematics histories, it's a common complaint that it's nigh on impossible to discover something new and esoteric that doesn't soon end up with a military application; the ongoing search for interesting but useless mathematics is akin to the search for the fountain of youth.

It is the case (IIRC) that quaternions arose directly from Hamilton's search for a better way to describe mechanical motions in three-dimensional space, i.e. created to be useful from the outset.


I think there's a lot of fascinating mathematical "dualism" in how many of those were developed at the same time by both "practical" mathematicians (such as physicists) and "theoretical" mathematicians. It is easy to argue that because the practical mathematicians had an easily defined "need" (hypothesis/experiment) they were the "leaders", with the arrow flowing from them to the theoretical mathematicians working with them, but there's just as much evidence in some of those cases that the theoretical mathematicians were already doing the theory-building on their own and had a "need" to find practical use cases/outlets. In some cases we know the theoretical mathematician sought out the physicist to try to find ways to test a theory, and was really the one building the hypotheses.

In other cases we know that, though both parties are generally credited with "deep" collaboration after the fact, they never really worked together: they did all of their work in parallel, and likely each would have completed just about the same work even if they had never crossed paths. Newton and Leibniz famously never corresponded until after both had published their own takes on the fundamental principles of The Calculus. Alonzo Church had already developed the Lambda Calculus before corresponding with Alan Turing on the fundamentals of computing, and Turing couldn't even share most of his practical work because it was still a state secret (and there was an ocean's distance in their correspondence anyway).

I think as often as not the "arrows" in the diagram point both directions at the same time: the practical needed the theorist to explain the patterns they were seeing and the theorist needed the practical to take the simple beautiful thing they were working on and make it practical and find the edge cases and complications.

That sort of "dualism" seems an interesting pattern in math.


Yes, I agree to some extent with the restricted claim, which only (slowly) started to break down in the 17th century in the west.

A lot of Indian mathematics was rather abstract going back to Vedic times, but since they didn't develop the concept of proof, it sadly had little impact on other mathematical practice (except as inspiration to Persian and Arab scholars), other than the famous cases of zero and positional notation. The mathematical documents I've seen from that practice have been in the form of essays.

I know little of Chinese or Mesoamerican mathematics and wonder where they were on this axis. It seems pretty likely that maths started in support of astronomy/planting predictions in the cultures I know of, so likely also for East Asia and the Americas, but whither thence did it go?


Is there no use for some kind of branch of mathematics that can prove false things?

You would think that with how much math there is, there would be a whole field working with uncertain proofs. I have no idea what for, but then again I'm not a math guy.


This has happened, and is happening all the time. Many groundbreaking theories in physics can be framed this way. The problem is that "slightly different angle" is a huge space, so scientists throw a lot of theories at it and see what sticks.


In other words, you're claiming that we may have built-in biases, invisible to us, which cause us to mistake certain premises for conclusions, and now we've got a definition of something like what a "unit" is, or what identity means, that works well enough to solidly discourage further investigation.

Yet if we just tried, oh, making the unit circle a unit... ellipse... all of the epiphenomenal complexity that comes from remediating the pervasively accumulated 0.01% error in that fundamental assumption would instantly vanish.


This reminds me of "The Road Not Taken" by Harry Turtledove. Awesome story!


I love this idea.

You might enjoy Stephen Wolfram's writing - it's exactly what you're talking about.


Axioms are not wrong if you can derive some math from them.

They may not correspond to anything in our world, but then we usually discover something that does.


The point wasn't that they're wrong, but that they are arbitrary. You could create a mathematical system with entirely different axioms than what we typically explore, and it would only be different in how usefully it maps onto real-world concepts.


Mathematics is humanity's longest running, largest-scoped, most complicated game.

It also happens to be useful, and you can dive into a lot of philosophy about that, which is all very interesting. The utility itself is a large thing on its own. But I think of that utility as something separate from the game itself. The game is just a game. You can do whatever you want with it. If you want to convert your cookbook to hexadecimal just for fun, you can. The fact that it is (broadly speaking) useless, that it will produce no new knowledge, and that it has, if anything, negative utility in general doesn't mean you can't do it.

That's the game.

You can also try to play the game to prove the Twin Prime conjecture. That's a much harder level.

This game is scalable to all ages and skill levels, has the best level variety, and can be done with anything from just your personal noggin, to a pencil & paper, to the largest computing cluster in the world. Technically all other games you play are a subset of this game; that may not always be a useful way to think of it, but it is technically true. And while there are a few rules, generally, nobody can tell you how to play it. You want to color pretty pictures? The game has lots of ways of doing that. You want to smash atoms together? The game can help with that. You want to simply count to the highest number you possibly can? Go for it. It's a very popular play with the younger players, but anyone can do it.


The same principle applies to programming. Functions that know less about their arguments are more generally applicable.
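As a quick illustrative sketch in Python (the names are my own, nothing standard):

    from typing import Iterable, TypeVar

    T = TypeVar("T")

    # Knows a lot about its argument: usable only on lists of ints.
    def first_int(xs: list[int]) -> int:
        return xs[0]

    # Knows almost nothing: any iterable, any element type.
    # Fewer assumptions about the input, so far more call sites qualify.
    def first(xs: Iterable[T]) -> T:
        for x in xs:
            return x
        raise ValueError("empty iterable")

    first([3, 1, 4])   # a list of ints
    first("hello")     # ...but also a string
    first({(1, 2)})    # ...or a set of tuples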


A good example of that is how the axiom of choice impacts measure/probability theory.

It implies the existence of some sets that cannot be Lebesgue measured (the Lebesgue measure is a generalization of length, volume, etc. to arbitrary sets, and also a generalization of probability)... but it's not possible to present a single concrete example of those non-measurable sets, only to prove that they exist.
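The classic sketch here is the Vitali set (standard textbook construction, summarized from memory), and it shows exactly where choice sneaks in:

    \text{On } [0,1), \text{ let } x \sim y \iff x - y \in \mathbb{Q}.
    \text{Use choice to pick one representative per class; call the resulting set } V.
    \text{The translates } V + q \ (\mathrm{mod}\ 1),\ q \in \mathbb{Q} \cap [0,1), \text{ are disjoint and cover } [0,1).
    \text{By countable additivity, } \mu(V) = 0 \text{ forces } \mu([0,1)) = 0, \text{ while } \mu(V) > 0 \text{ forces } \mu([0,1)) = \infty.

The only non-constructive step is "pick one representative per class", which is why you can prove such a V exists but never actually exhibit one.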

And it's possible to construct an alternative theory with the axiom of determinacy, in which every subset of R is measurable.

* https://en.wikipedia.org/wiki/Axiom_of_choice
* https://en.wikipedia.org/wiki/Axiom_of_determinacy
* https://en.wikipedia.org/wiki/Lebesgue_measure



