
This.

AI is a feature, not a product.


Germany doesn’t have hurricanes or wildfires. Take those out and I’d bet the grids are much more comparable.

PG&E has to de-energize lines to prevent fires. Hurricanes just blow them down.


The links actually cover this, since the EIA tracks major events in power disruptions and separates them in the graph. The US network is still orders of magnitude worse than Germany's.


Reading through this article is like reading a description of the Monty-Hall problem. [0]

It's as though the conclusion defies common sense, yet is provable. [1]

[0] - https://priceonomics.com/the-time-everyone-corrected-the-wor...

[1] - 2nd to the last paragraph: "The fact that you can achieve a constant average query time, regardless of the hash table’s fullness, was wholly unexpected — even to the authors themselves."


I always really enjoyed the Numb3rs lecture on Monty-Hall

https://www.youtube.com/watch?v=P9WFKmLK0dc


guys it's 2025, let's have a throw-down fight about the monty hall problem.


Monty Hall is solved. We need to fight over whether .999 repeating == 1.000 repeating


> We need to fight over whether .999 repeating == 1.000 repeating

Because we've already agreed that 9 repeating the other way is equal to -1.

It must be because if you add 1, you get an infinite string of zeroes.
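The "...999 = -1" claim can be sanity-checked with ordinary modular arithmetic: every finite truncation of ...999 behaves like -1 modulo the matching power of ten, because adding 1 carries all the way out. A quick sketch (my own illustration, not from the thread):

```python
# Truncate the 10-adic number ...999 to n digits: that's 10**n - 1.
# Adding 1 carries through every digit, leaving zeros, i.e. it acts as -1.
for n in range(1, 10):
    nines = 10**n - 1                      # 9, 99, 999, ...
    assert (nines + 1) % 10**n == 0        # ...999 + 1 is 0 mod 10^n
    assert nines % 10**n == (-1) % 10**n   # ...999 is congruent to -1
print("every truncation of ...999 is congruent to -1")
```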


You have probably seen these already, but just in case: I think you would really enjoy p-adic numbers: https://en.wikipedia.org/wiki/P-adic_number


Forgive my mathematical ignorance, but why would it be? Isn’t it just asymptotically close but not actually equal? What does the 0’s repeating give you that 1 does not?


The 0's don't give you anything. You could just say 1.

As for why they're equal, there are various proofs and explanations, but the simplest proof is probably:

1/3 + 1/3 + 1/3 = 1

0.333... + 0.333... + 0.333... = 1

0.999... = 1
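The thirds argument can be mirrored in code: exact rational arithmetic gives exactly 1, while finite decimal truncations of 1/3 show where the 9s come from (a sketch using only the standard library):

```python
from fractions import Fraction
from decimal import Decimal, getcontext

# Exact arithmetic: three thirds are exactly one, no rounding involved.
assert Fraction(1, 3) + Fraction(1, 3) + Fraction(1, 3) == 1

# Finite truncations of 1/3, tripled, give strings of 9s; the infinite
# decimal is the limit of these truncations, which is exactly 1.
getcontext().prec = 30
third = Decimal(1) / Decimal(3)   # 0.333...3 (30 digits)
print(third * 3)                  # 0.999...9 (30 nines)
```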


asymptotically close as you add 9s, but 9 repeating means you DO add an infinity of 9s, so it equals 1.

The reasoning that persuaded me initially was 1/3 is .333 repeating, 2/3 is .666 repeating, and 3/3 is .999 repeating.


All of the answers given at this point seem to rely on intuition and how things "should" work. Intuition like that is great, except when trying to approach something that seems like a paradox when applying just intuition.

An expression like "0.999 repeating" does not just fall from the sky, after which we go ahead and probe it to figure out what it is or means or what it's equal to. A mathematical construction is precisely defined. So one goes to the definition and asks: what does "0.999 repeating" mean? After several rounds of unpacking definitions and verifying certain properties of convergent real sequences, one arrives at the conclusion that "0.999 repeating" is in fact 1. They are one and the same. Not "asymptotically", not "approximately" – they are the same. They are just written out differently. Your handwriting and mine are probably different, but that doesn't mean the number 1 written by one of us is different from the number 1 written by the other.

This is a typical misunderstanding about mathematics. Everything in mathematics is defined by humans, and the definitions can be unpacked layer by layer. The natural world is full of things that just exist, fully formed, without human interaction. The natural sciences give us wonderful tools to probe those things and figure out what they are and how they are. That's excellent, but it's usually not the right tool for answering questions like in this thread.


This is likely oversimplified, but an intuitive approach is:

1/3 = 0.333 repeating

3/3 = 0.999 repeating

1 = 0.999 repeating


yeah but 1/3 = 0.333 recurring is an equivalent problem to the parent


x = .999r

10x = 9.999r

10x - x = 9.999r - .999r

9x = 9

x = 1

.999r = 1


If two numbers are different, you can always find another number strictly between them.

So what number is between .999 repeating and 1?


> So what number is between .999 repeating and 1?

That's easy. It's (1-0.999…)


If 0.999... is 1 then 1-0.999... is 0


Asymptotic analysis is only relevant if you have some series (or a function).

The series 0.9, 0.99, 0.999,... is asymptotically close to 1 and also asymptotically close to 0.9999... (with infinite 9s), since for any epsilon, I can find an index N after which all elements of the series are within epsilon of the target.

Since a single series can't have two limits, 1.0 should be equal to 0.999...

Note that real numbers are allowed to have infinitely many digits after the point; otherwise they wouldn't include things like 1/3.
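The epsilon argument above is easy to check mechanically with exact rationals: the gap between 1 and the partial sum with n nines is exactly 10^-n, so it eventually drops below any epsilon (an illustrative sketch; the function name is my own):

```python
from fractions import Fraction

def gap_after(n: int) -> Fraction:
    """Exact gap between 1 and the partial sum 0.99...9 with n nines."""
    partial = sum(Fraction(9, 10**k) for k in range(1, n + 1))
    return 1 - partial

# The gap after n nines is exactly 10^-n:
assert gap_after(3) == Fraction(1, 1000)

# So for any epsilon there is an index past which every partial sum
# is within epsilon of 1, which is the definition of the limit being 1.
eps = Fraction(1, 10**6)
assert gap_after(7) < eps   # and every later partial sum is closer still
```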


My favorite version:

    x = 0.999...
    x - x/10 = 0.9
    x = 1


> “Our brains are just not wired to do probability problems very well, so I’m not surprised there were mistakes,” Stanford stats professor Persi Diaconis told a reporter, years ago. “[But] the strict argument would be that the question cannot be answered without knowing the motivation of the host.”

This is wrong. Let’s label the goats A and B to simplify things (so we do not need to consider the positions of the doors). There are 3 cases:

1. You pick the right door. The other two doors have goats. The host may only choose a goat. Whether it is A or B does not matter.

2. You pick the door with goat A. The host may only choose goat B.

3. You pick the door with goat B. The host may only choose goat A.

The host’s intentions are irrelevant as far as the probability is concerned (unless the host is allowed to tell the contestant which door is correct, but I am not aware of that ever being the case). 2/3 of the time, you pick the wrong door. In each of those cases, the remaining door is correct.

The most strict argument is that yet another statistics professor got basic statistics wrong.
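The three-case argument is straightforward to check by simulation, under the standard assumption that the host always opens an unchosen door with a goat (the code and its parameters are my own illustration):

```python
import random

def play(switch: bool, rng: random.Random) -> bool:
    """One round of the standard Monty Hall game; True means the player wins."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # The host opens a door that is neither the player's pick nor the car.
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(0)
trials = 100_000
switch_wins = sum(play(True, rng) for _ in range(trials)) / trials
stay_wins = sum(play(False, rng) for _ in range(trials)) / trials
print(f"switch: {switch_wins:.3f}, stay: {stay_wins:.3f}")  # ~0.667 vs ~0.333
```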


I can assure you Diaconis didn’t get it wrong.

> "The problem is not well-formed," Mr. Gardner said, "unless it makes clear that the host must always open an empty door and offer the switch. Otherwise, if the host is malevolent, he may open another door only when it's to his advantage to let the player switch, and the probability of being right by switching could be as low as zero." Mr. Gardner said the ambiguity could be eliminated if the host promised ahead of time to open another door and then offer a switch.

The host’s intentions absolutely do matter, because the problem (as originally stated) doesn’t specify that the host always opens a door and offers a switch. Maybe he only offers a trade when you initially picked the good door.


The problem as stated does not give the host such an option. If it did, the host opening a door would imply that the player picked the right answer, and it would only happen 1/3 of the time.

Some of the comments aged fairly well, although not in the way that their authors intended:

> There is enough mathematical illiteracy in this country

> If all those Ph.D.’s were wrong, the country would be in some very serious trouble.

In the 1800s, Carl Friedrich Gauss lamented the decline in mathematical ability in academia. Despite academia having advanced mathematics further since then, there is still evidence of that decline. Professors tend to be good at extremely specialized things, yet they get the simple things wrong. I once had a calculus professor who failed to perform basic arithmetic correctly during his own lectures. All of the algebra was right, but his constants were wrong. This happened on multiple occasions.


The problem as stated, in the article you pulled your quote from, puts no limits on how the host decides whether or not to offer a switch:

> Imagine that you’re on a television game show and the host presents you with three closed doors. Behind one of them, sits a sparkling, brand-new Lincoln Continental; behind the other two, are smelly old goats. The host implores you to pick a door, and you select door #1. Then, the host, who is well-aware of what’s going on behind the scenes, opens door #3, revealing one of the goats.

> “Now,” he says, turning toward you, “do you want to keep door #1, or do you want to switch to door #2?”

All you know is that in this particular instance the host has opened a door and offered a switch. You cannot conclude that the host always opens a door and offers a switch.

The problem as stated allows the host to offer switches only when the contestant picked the door with the prize, or only when the moon is gibbous, or only when the tide is going out. Diaconis and Gardner are completely correct to point out that the problem as stated is under specified and that the intent of the host matters.


The problem as stated has the host open an incorrect door and offer the player a chance to change his choice. Inferring that another possible variation might exist does not change the fact that we are discussing the variation that was presented. Both you and Diaconis are wrong.


> The problem as stated has the host open an incorrect door and offer the player a chance to change his choice.

Correct, in this one particular instance. You cannot conclude from this particular instance that the host always opens the door and offers a change.

> Inferring that another possible variation might exist

is totally reasonable, while denying the possibility that the host might be able to choose his actions specifically to benefit or screw you over is an unwarranted leap.

The problem statement does not put constraints on the host. You cannot solve the problem by assuming that those constraints exist and then attack those like Diaconis who point out that those constraints don’t exist and that the thing that is unconstrained matters.


There is a genuine human language problem here, NOT A MATH PROBLEM, which accounts for two differing but self-consistent views. It is a legitimate difference in views because human language is genuinely ambiguous.

"You pick a door, the host opens another door and reveals a goat. Should you switch?"

Does this mean you are in one particular situation where the host opened a 2nd door, with a goat? Or does it mean the host always opens a 2nd door with a goat?

If the host always opens a 2nd door, showing a goat, you should switch to the third unopened door.

If all you know is that this time you picked a door and then the host revealed a goat, you don't know what to do. Maybe this host only opens goat doors after you pick the right door, in order to trick you into switching? In that case switching would be the worst thing you could do.

A host with that strategy is a special case, but special cases where a potential general solution (always switch) doesn't work are all you need to disprove the general solution. It cannot be a general solution if there is even one special case where it doesn't work.

Most people interpret the problem to mean the host always reveals goats.

But if the language isn't clear on that, then you do have a different problem, whose solution is really impossible to optimize for without some more information on general host behavior or strategies. Without that information, all you can do is flip a coin. Or always stay, or always switch. You have no means to improve your odds whatever you do.
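The two readings can be contrasted directly in a simulation: one host who always reveals a goat, and one (a deliberately adversarial assumption of mine, not part of the original column) who reveals a goat only when the player has already picked the car:

```python
import random

def play(switch: bool, always_reveals: bool, rng: random.Random):
    """One round; returns True/False for win/loss, or None if no offer is made."""
    car = rng.randrange(3)
    pick = rng.randrange(3)
    # The adversarial host only opens a door when the player already has the car.
    offered = True if always_reveals else (pick == car)
    if not offered:
        return None
    opened = rng.choice([d for d in range(3) if d != pick and d != car])
    final = next(d for d in range(3) if d != pick and d != opened) if switch else pick
    return final == car

def win_rate(switch: bool, always_reveals: bool, rng: random.Random) -> float:
    results = [play(switch, always_reveals, rng) for _ in range(100_000)]
    offered = [r for r in results if r is not None]
    return sum(offered) / len(offered)

rng = random.Random(1)
print(win_rate(True, True, rng))    # always-reveals host: switching wins ~2/3
print(win_rate(True, False, rng))   # adversarial host: switching wins 0%
```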


The problem statement does put constraints on the host, by specifying that the host opened an unselected door with a goat behind it, only to ask if the player wants to change his choice. The answer to the question of whether the player should change the choice is well defined. Other variations are irrelevant since they are different problems.

Your argument is equivalent to denying that 2 + 2 = 4 is correct because the author had the option to write something other than a 2 as an operand.


Nope. The probability theory doesn't work like that. When you argue that 2+2=4 you assume 2 and 2 are known and they are not.

A=you picked the car at first

B=the host opened the door

P(A|B) can be anywhere between 0 and 1.

In your calculations you assume that P(A|B)=P(A) which is correct ONLY if A and B are independent. Independence of A and B is not in the problem statement, you invented this clause yourself.


This is an excellent example of what I am saying. 2 + 2 = 4 was already written and you are insisting that it was not.

That said, the source material is this:

https://web.archive.org/web/20130121183432/http://marilynvos...

The problem is well defined in the source material and what others are interjecting here is another problem.


Where exactly does it state the independence of these two variables in the problem definition in the source material?


> All you know is that in this particular instance the host has opened a door and offered a switch. You cannot conclude that the host always opens a door and offers a switch.

And in this particular instance, it makes sense to switch.

I'm sorry, but the problem is well-formed and well-specified.


> And in this particular instance, it makes sense to switch.

Are you are accepting that the host might be someone who only opens a door with a goat when your first choice was the door with a car behind it, and still arguing that you should switch?


The question asks what the best choice in this situation is, not a different situation. The answer does not depend on whether any other situations exist.


We don't really know the situation if all we were told is that we picked a door, and the host showed us a goat behind another door.

If we know that the host will always show us a goat behind another door, then yes, we should clearly switch.

If the host typically lets us just open the door, but will show us a goat before we open the door if the show is running too fast and they need to kill time, then we should switch if offered.

If the host typically lets us open the door, but will show us a goat if the show is running too fast, or if the prize budget is running low and we picked the car, then we should switch if we think the previous games went quickly, but not if there were some slow games already.

If the host only shows a goat when the contestant picks the car, then we should never switch.

Many problem statements include that the host always shows a goat; and if it doesn't you can kind of assume it, because it's a well-known problem, but if it's a novel problem and unsaid, then how are you supposed to know? I haven't watched enough Let's Make a Deal to know if they always give a second choice. Reading the NYT article linked elsewhere in the thread, I am reminded that Monty Hall could offer cash to not open doors too, so with the problem as stated and Let's Make a Deal being referenced, I have to assume an antagonistic host, unless provided with more information on their behavior.

As stated, assuming unknown behavior of the host, we can't put a number on the probability of switching.

Also, to address another point you made elsewhere in the thread. In addition to specifying the host behavior, it should also be specified in the problem statement that the car and goat positions were determined randomly, or at least the car was random, and the two goats are considered equal and assigned as convenient.


Here is what Marilyn vos Savant had to say:

> So let’s look at it again, remembering that the original answer defines certain conditions, the most significant of which is that the host always opens a losing door on purpose. (There’s no way he can always open a losing door by chance!) Anything else is a different question.

https://web.archive.org/web/20130121183432/http://marilynvos...

What you are discussing is a different problem.


Sure, the answer to the question states that the host always opens a losing door.

The question does not state that. It's an assumption that was made in the answering.

If that's a premise, then yes, always switch.

If you only go by the question, there's not enough information.


That was how everyone interpreted the original column (according to thousands of letters sent to Marilyn vos Savant), and it was made explicit in a clarification made in a follow up to the column. What you are discussing is a different problem.


No, not in this version of it.

A=you picked the car at first

B=the host opened the door

P(A|B) can be anywhere between 0 and 1.

When you say "it makes sense to switch" you assume that P(A|B)=P(A) which is correct only if A and B are independent. Their independence is not given in the problem statement.


That they are independent is the only reasonable assumption; everything else is going out of your way to complicate the problem.

If they are NOT independent because Monty only shows you a goat behind the door if you've picked the wrong (or right) door, this is giving the game away. You don't need to guess, you always know what to pick with 100% certainty based on Monty's algorithm.

(Also, the game show doesn't work like this. And the text doesn't mention Monty's motivations, which in standard logic puzzle formulation must mean they are irrelevant, just as the phase of the Moon is also irrelevant and you must not take it into consideration)

If Monty picks randomly instead of always a goat, and he shows a car, the game has ended and no probabilities are involved, because you don't get to switch anymore; you've lost.

If Monty opens a door and there's a goat, we're within the parameters of the problem as stated (and you should switch!).


> That they are independent is the only reasonable assumption

No, this is not true. From the mathematical viewpoint Monty can have any strategy as long as it satisfies the problem statement. Which is, he DID open the door for whatever reason, the rest is uncertain. This is literally what Diaconis means when he says "the strict argument would be that the question cannot be answered without knowing the motivation of the host" -- yes, in the strict sense he is indeed correct. This thread started because ryao stated that Diaconis is wrong [1].

Now even if you try to play the card of "reasonable assumptions" and rule out "boring" strategies because they are "giving the game away" this still won't eliminate all "non-independent" cases. The space of possible probability distributions here is way bigger than your list above. I can come up with an infinite number of "reasonable non-independent" strategies for Monty.

For example:

1) He rolls a die before the game in his dressing room, secretly from the audience.

2) If he gets 6: he will open a door if you guessed incorrectly. If you guessed correctly he won't open the door.

3) If he gets 1-5: he will open a door if you guessed correctly. If you guessed incorrectly he won't open the door.

The situation is still the same: you've made your guess, then Monty opened the door with a goat and now you need to figure out whether to switch or not. It matches the problem definition stated above: the door was opened but we don't know why.

Let's see your chances if we assume Monty follows the dice approach:

event A: you've guessed correctly from the first try

event B: Monty opened the door

P(A|B): probability that you've guessed correctly given that Monty opened the door -- if it's less than 50% you should switch

P(A) = 1/3

P(B) = (1/6)×(2/3) + (5/6)×(1/3) = 7/18

P(AB) = (5/6)×(1/3) = 5/18

P(A|B) = P(AB)/P(B) = 5/7

So, in this case Monty doesn't "give up the game" -- there's still a significant random aspect to it. However, in this setup it's better for you to stay (5/7 chance of winning) rather than switch (2/7).

[1] https://news.ycombinator.com/item?id=43005371

UPD fixed a typo, it's 5/7 not 5/8
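The 5/7 figure checks out in a simulation of the dice strategy described above (the simulation itself is my own sketch):

```python
import random

def dice_host_round(rng: random.Random):
    """One round under the dice strategy; returns None if Monty keeps the
    doors shut, else True iff the player's original guess was the car."""
    car = rng.randrange(3)
    guess = rng.randrange(3)
    roll = rng.randint(1, 6)
    guessed_right = (guess == car)
    # On a 6, open a door only after a wrong guess; on 1-5, only after a right one.
    opened = (roll == 6 and not guessed_right) or (roll <= 5 and guessed_right)
    return guessed_right if opened else None

rng = random.Random(42)
outcomes = [dice_host_round(rng) for _ in range(200_000)]
opened_rounds = [o for o in outcomes if o is not None]
p_stay_wins = sum(opened_rounds) / len(opened_rounds)
print(f"P(first guess right | door opened) = {p_stay_wins:.3f}")  # close to 5/7
```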


You're right: I was focused on Monty picking a door with a goat depending on whether you had picked the right door. That would certainly give the game away, but indeed is not the only option.

However,

> Now even if you try to play the card of "reasonable assumptions" and rule out "boring" strategies because they are "giving the game away" this still won't eliminate all "non-independent" cases. The space of possible probability distributions here is way bigger than your list above. I can come up with an infinite number of "reasonable non-independent" strategies for Monty.

None of the assumptions you proceed to list are "reasonable". They introduce enough to the puzzle that they ought to be stated as part of the problem. Since they aren't, it's safe to assume none of those are how Monty picks the door.

Your "dice rolling" formulation of the puzzle is nonstandard. If you want to go with it, you must make it clear in the presentation of the puzzle. There are infinite such considerations; maybe Monty observes the phase of the Moon, maybe Monty likes the contestant, and so on... it wouldn't work as a puzzle!

Given no additional information or context, all we're left with is assuming Monty always opens a door with a goat behind it.

If we want to introduce psychology: I bet you almost all of the naysayers to vos Savant's solution to the puzzle are a posteriori rationalizing their disbelief: they initially disbelieve the solution to the standard puzzle, then when shown it actually works, they stubbornly go "oh, but the problem is underspecified"... trying to salvage their initial skepticism. But that wasn't why they reacted so strongly against it -- it was because their intuition failed them! I cannot prove this, but... I'm almost certain of it. Alas! Unlike with probabilities, there can be no formal proofs of psychological phenomena!


> Given no additional information or context, all we're left with is assuming Monty always opens a door with a goat behind it.

If you're playing against an opponent and trying to devise a winning strategy against him you can't just say "given no additional information or context, all we're left with is assuming his strategy is to always do X" and voilà: present a strategy Y that beats X.

In this case X is "always opens a door with a goat behind it" and Y is "always switch doors". This is fascinating but simply incorrect from the math standpoint.

> Your "dice rolling" formulation of the puzzle is nonstandard. If you want to go with it, you must make it clear in the presentation of the puzzle. There are infinite such considerations; maybe Monty observes the phase of the Moon, maybe Monty likes the contestant, and so on... it wouldn't work as a puzzle!

The "dice rolling" isn't a problem formulation; it's one of the solutions to that problem, i.e. specific values of X and Y that satisfy all the requirements. I present it to prove that more than one solution exists, and furthermore that not all solutions have Y="always switch", so you can't establish Y independent of X.

The key difference here is that I don't consider it as a "puzzle", whatever that means. I consider it to be a math problem. Problems of this kind are often encountered in both Game Theory and Probability Theory. It's perfectly fine to reason about your opponent's strategies and either try to beat them all or find an equilibrium: this is still math and not psychology.

You can argue that it's a puzzle instead and I don't mind. What I do mind however is saying that Diaconis was wrong. He specifically said "the strict argument would be..." meaning that his conclusions hold when you consider it as a math problem, not as a "puzzle". My whole point is to demonstrate that.


> The key difference here is that I don't consider it as a "puzzle", whatever that means. I consider it to be a math problem. Problems of this kind are often encountered in both Game Theory and Probability Theory. It's perfectly fine to reason about your opponent's strategies and either try to beat them all or find an equilibrium: this is still math and not psychology.

This was quite obviously a puzzle, of the "math problem" kind. It admits a pretty straightforward -- but counterintuitive -- solution, which made some admittedly smart people upset.

Everything else is smoke and mirrors.

> this is still math and not psychology.

If you read the responses to vos Savant's column, they are quite emotional. There was quite obviously an emotional response to it, of the "stubborn" and/or "must attack vos Savant's credentials" kind, too.


> Maybe he only offers a trade when you initially picked the good door.

That would be a rather convenient signal to the player.


Indeed. But the host's machinations can be arbitrarily more complex; maybe he offers the switch to contestants he finds attractive only when they’ve picked a goat, and contestants he finds unattractive when they’ve picked the car.


You're introducing bizarre ad hoc hypotheses.

This works as a logic puzzle. Assuming the host offers different doors depending on contestant attractiveness makes absolutely no sense. It's a bizarre assumption.

Maybe the goats can wander from door to door, or maybe there is no car, or maybe behind all of the doors there are tigers. Which would be absurd and unrelated to this puzzle.


The host's strategy is the core of the puzzle. The question is what information he has just conveyed to the player by opening the door, and that depends entirely on his mental state. If he was always going to open a door then the player should switch. If he is opening a door only if the player has picked the car then they should not switch. If he has bizarre ad hoc motivations then the correct decision depends on bizarre ad hoc considerations.

And, as CrazyStat has correctly pointed out, as stated in the linked article the host's strategy is an unknown. It could be bizarre. Although I'd still rather say vos Savant was correct in her reasoning; since the answer is interesting, it seems fairer to blame the person posing the question for getting a detail wrong.


The source material for the article says otherwise:

> So let’s look at it again, remembering that the original answer defines certain conditions, the most significant of which is that the host always opens a losing door on purpose. (There’s no way he can always open a losing door by chance!) Anything else is a different question.

https://web.archive.org/web/20130121183432/http://marilynvos...


You need to know why the host did that though. The host might have an adversarial strategy where they only open the door when using vos Savant's logic would make the player lose.

Her logic was really interesting, it is easy to see why she gets to write a column. But at the end of the day the problem is technically ill-formed.

> There’s no way he can always open a losing door by chance!

Yes there is: he might have picked a remaining door at random with the plan of saying "You lose!" if he finds the car. An actor with perfect knowledge can still have a probabilistic strategy. That doesn't really change the decision to switch, but it does have a material impact on the analysis logic.

She's correct that it's a necessary assumption to get the most interesting form of the problem. But that isn't what the questioner asked.


I doubt the person writing the question had the more nuanced version in mind. When the column was published, nearly everyone who wrote to Marilyn vos Savant had been in agreement that the host always opened a door with a goat as the problem was specified. This idea that the host did not was made after Marilyn vos Savant was shown to be correct and after she had already confirmed that it was part of the consensus.


Adding ad hoc hypotheses about the host's motivations turns this into a family of related problems, but not The Monty Hall problem.

For any given logic puzzle, you can safely assume anything not specified is outside the problem.

Here, what Monty had for lunch, whether he finds the contestant attractive, or some complex algorithm for his behavior is left unspecified and -- since this is a logic puzzle -- this must mean none of this matters!

Imagine if Monty opened a door with a goat only if he had had goat cheese for breakfast. Sounds ridiculous for the logic puzzle, right?

We can safely assume, like Savant, that Monty always picks a door with a goat, turning this into a logic puzzle about probability.

Anything else is going out of your way to find ambiguity.


Well sure, it doesn't appear that vos Savant was asked the Monty Hall problem. She seems to have been asked an ill-formed alternative problem and answered that instead. Then the interpretation of the ill-formed question with the most interesting assumptions about the host's behaviour became the Monty Hall Problem.

And the linked article (and by extension Mr. Diaconis & CrazyStat) was talking about the question that vos Savant was asked as opposed to the one where the assumptions to come to an answer are enumerated.

> We can safely assume, like Savant, that Monty always picks a door with a goat, turning this into a logic puzzle about probability.

No we can't. Otherwise we can safely assume any random axiom, like "The answer is always the 3rd door". You have to work with the problem as written.


The problem as written is that Monty opened a door with a goat.

Everything else is an unwarranted addition, unsupported by the text!


This is like being asked how to solve a (legal) Rubik cube configuration and then considering how close you can come to a solution if given an illegal configuration. It is not relevant since it is not what was presented. You can always make things more complex by considering variations that are not relevant to the original problem.


This was a real life gameshow with known rules. The host always opened a door, and always a door with a goat.


This was a real life game show where the host did not always open a door and offer a switch. In fact most of the time he did not offer a switch. See [1] (Ctrl-F cheating for the relevant paragraph).

[1] https://www.nytimes.com/1991/07/21/us/behind-monty-hall-s-do...


> [...] the problem (as originally stated) doesn’t specify that the host always opens a door and offers a switch. Maybe he only offers a trade when you initially picked the good door.

That would make no sense. Also, Monty always opens a door with a goat. The problem is well-formed, but most people misrepresent it in order to object to it.


> Also, Monty always opens a door with a goat.

Nope! That’s you adding a constraint that does not exist in the original problem.

> Was Mr. Hall cheating? Not according to the rules of the show, because he did have the option of not offering the switch, and he usually did not offer it.

From [1].

Constraints matter. Don’t play fast and loose with them.

[1] https://www.nytimes.com/1991/07/21/us/behind-monty-hall-s-do...


You have removed a constraint from the original problem. What could have happened does not matter since we are being asked about what did happen.

Imagine two people playing a game of chess. Various moves are played. Then you are asked a question about the state of the board. The answer depends on the state of the board. It does not depend on the set of all states the board could have taken had one player made different choices.


You have hallucinated a constraint that never existed and then accused me of removing it.


The problem text itself specifies that the host opens an unselected door that has a goat behind it.


Oh yes, the problem text specifies that in this particular instance the host opens a door and offers a switch. It does not specify that the host does this every time, which is the constraint in question.


It's typically considered unnecessary to specify that, because it comes from a game show where he always reveals one wrong door. Monty Hall was the first host of the show.


> because it comes from a game show where he always reveals one wrong door.

Nope!

> Was Mr. Hall cheating? Not according to the rules of the show, because he did have the option of not offering the switch, and he usually did not offer it.

Emphasis added.

https://www.nytimes.com/1991/07/21/us/behind-monty-hall-s-do...


This is about the only proper counter I've read so far. Why did it take so long to post?

This does indeed change the whole problem. I would argue that the problem as stated in vos Savant's column is different (and she says as much later on, "all other variations are different problems"), but I admit this makes me lose the supporting argument I've been using of "...and this is how Monty's game show worked". Point conceded.

I would also argue most people who objected to vos Savant's solution weren't considering Monty's strategy at all. They were objecting to the basic probabilities of the problem as stated by vos Savant, merely because they are counterintuitive (which can be summed up as "if you switch, you're betting you got the first guess of 1/3 wrong"), and everything else is an a posteriori rationalization.


I posted exactly the same thing in a reply to you way earlier in the thread [1]. It didn't take so long to post, you just weren't paying attention.

> I would also argue most people who objected [etc. etc.]

I don't care about any of those people, the discussion was about Persi Diaconis.

[1] https://news.ycombinator.com/item?id=43006185


> I posted exactly the same thing in a reply to you way earlier in the thread [1]. It didn't take so long to post, you just weren't paying attention.

I'm not asking why YOU took so long to respond or finding fault in your reasoning abilities, I'm saying there's been a lot of arguing in general in this sub-topic, and few people mentioned this fact -- which is the only relevant fact for challenging vos Savant's formulation of the problem (which matters because it's what sparked all this fuss).

> It didn't take so long to post, you just weren't paying attention.

This is the most dismissive possible thing to say, especially in response to a comment of mine where I'm conceding a point. I missed ONE other particular comment of yours, hence "I wasn't paying attention"? Wow. Sorry for not following your every response to everything.

> [...] the discussion was about Persi Diaconis.

I don't know nor care who Diaconis is, I just care about whether the Monty Hall problem truly was underspecified or not. This is about the Monty Hall problem, not about some person.


The problem text asks what is the better choice in this particular instance. It does not care about hypothetical other instances.


Ah, but probability is all about hypothetical instances, and how the host makes his decisions—or if he’s allowed to make a decision at all—is a key consideration in the calculation of the probability. If we don’t know how the host decides whether or not to offer a switch then we can’t calculate a probability and can’t decide which choice is better.


I see your point. You are arguing that the fact that the host did this could convey additional information that would affect the distribution. This criticism still does not seem valid to me because this argument can be used to alter the correct answer to a large number of problems.

Consider the question of whether John Doe did well on his mathematics examination. This would seem like a straightforward thing depending on the questions and his answers. We can assume they are provided as part of the problem statement. We could also assume that a definition for “did well” is included. We could then consider a situation where under chaos theory, his act of taking the examination caused a hurricane that destroyed his answer sheet before it had been graded. This situation was not mentioned as either a possibility or non-possibility. However, we had the insight to consider it. Thus, we can say we don’t know if he did well on his mathematics examination, even though there is a straightforward answer.

Another possibility is that the game show could have rigged things without telling us, with a 90% chance that the prize is behind door #1, a 9% chance that it is behind door #2, and a 1% chance that it is behind door #3. Which door was the initial choice would then decide whether the player should change the choice, rather than anything the host does. However, this was not told to us, but to avoid saying that choosing the other door is always the answer, we decide to question the uniformity of the probability distribution, despite there being no reason to think it is non-uniform. Thus, by assuming that the game show might have altered the probability distribution, we can say not only that the host’s intent does not matter, but also that we don’t know the answer to the question.
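As a quick sketch of that hypothetical rigged distribution (the 90/9/1 numbers are from the paragraph above, not from any actual show): with a host who always reveals a goat, switching wins exactly when the initial pick was wrong, so the prior alone decides the answer.

```python
# Hypothetical rigged prior from the paragraph above: P(car behind door i).
prior = {1: 0.90, 2: 0.09, 3: 0.01}

# If the host always reveals a goat from the other two doors, switching
# wins exactly when the initial pick was wrong:
for door, p in prior.items():
    print(f"pick door {door}: P(win by switching) = {1 - p:.2f}")
```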

To be clear, my counterpoint is that these considerations produce different problems and thus are not relevant.


You might live in a world where the host doesn't want to give you the car, and only opens a door and offers you the option of switching if your first choice was the door with the car behind it. In that world, you shouldn't switch. I don't think this form of the problem statement gives you any reason to believe that you aren't in that world.


Here is what Marilyn vos Savant had to say:

> So let’s look at it again, remembering that the original answer defines certain conditions, the most significant of which is that the host always opens a losing door on purpose. (There’s no way he can always open a losing door by chance!) Anything else is a different question.

https://web.archive.org/web/20130121183432/http://marilynvos...

What you are discussing is a different problem.


Yes, I am discussing a different problem, and I don't think the original problem formulation gives enough information to distinguish between the 2 problems.

The answer can add assumptions, which is fine. I'm not passing judgement on Marilyn vos Savant. I do object to claims that the problem statement is sufficient to have a single answer, and based on that, I'd object to claims that somebody in that situation would be wrong not to switch doors. I would object on exactly the same grounds to anyone who tells you "you're wrong, there's a 50% chance of getting a car" (I might object further, on the grounds that the most obvious interpretation which gives that answer is inconsistent with this form of the problem statement).


If you're discussing a different problem, then it's not the Monty Hall Problem, which we're discussing here.

It's a probabilities logic puzzle, it's not about psychological tricks. Anything of that sort is an extraneous ad hoc hypothesis that you're introducing.

The point is whether, upon the reveal of a goat, you should switch or stick to your original choice. Nothing else matters. What Monty had for breakfast doesn't matter. Whether he likes you or not doesn't matter.


Following your arguments throughout this thread, I think the piece that is confusing you is the framing of the problem as a game-show host, which primes you to think of the host being "fair" by default.

To understand how the framing might change how you interpret the problem, consider the following scenario: You are in a game of poker, and you have a flush with king high. Your opponent reveals all but one card from their hand, which shows they have 4 hearts, and they also reveal that their last card is an ace, but they don't reveal its suit. It's your turn to bet. Do you bet, or do you fold?

Now you could treat this as a simple statistics problem -- there are four possible aces they could have in their hand, and only one is a heart, so there's only a 1/4 chance they will beat you. But is the solution to this problem that there is a 3/4 chance of winning the pot? The problem text doesn't specify under what conditions your opponent will reveal which cards are in their hand. But somehow, saying it's a game of poker makes you think that they are probably more likely to reveal their hand if they are bluffing, so the true probability is not 3/4.

We are primed by this description of this person as your "opponent" to think about them making the decision adversarially. What if instead we say that that game of poker is part of a game show and your opponent is the host of the game show? Depending on the assumptions you make about your opponent's motivations, you must calculate the odds differently, and simply saying "3/4" is not unambiguously correct.


This was the problem as stated in the Marilyn vos Savant column that started the controversy:

> Suppose you’re on a game show, and you’re given the choice of three doors. Behind one door is a car, behind the others, goats. You pick a door, say #1, and the host, who knows what’s behind the doors, opens another door, say #3, which has a goat. He says to you, “Do you want to pick door #2?” Is it to your advantage to switch your choice of doors?

Diaconis is in fact correct that given just that information the problem cannot be solved. What is missing is a statement that the host will always reveal a goat and always offer you a chance to switch doors.

If the host can choose whether or not to make the offer, then if you happen to receive the offer when you are on the show, you cannot say anything about whether or not switching is to your advantage.

For instance suppose the show has given away a lot of cars earlier in the season and the producers ask the host to try to reduce the number of cars given away during the rest of the season. The host might then only offer switching when he knows the contestant has picked the car door.

He will still open a goat door first because that's more dramatic. He just won't offer to let you switch before going on to open either your door or the remaining door.


I don't think "motivation of the host" is a great way to accurately describe the issue that Diaconis is calling out, it is rather intended to be more intuitive.

To be precise, the reason the question is underspecified is that it doesn't say whether the probability of the host offering you a chance to choose again depends on which choice you make. If the host offers the choice more than twice as often when you pick right as when you pick wrong, then changing your pick is the incorrect choice.

Now, colloquially, it can make sense to assume the host always offers the choice, but practically, if we're looking at how to use statistics in a real-world situation, it isn't safe to always assume that probabilities are independent.
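One way to see this concretely is a small Monte Carlo sketch (the offer probabilities here are hypothetical parameters, not part of the original problem): the value of switching, conditioned on an offer being made, depends entirely on how the offer policy correlates with the first pick.

```python
import random

def switch_win_rate(p_offer_right, p_offer_wrong, trials=200_000):
    """Win rate for an always-switching contestant, conditioned on the host
    making the offer. The host offers a switch with probability p_offer_right
    when the first pick is the car, and p_offer_wrong otherwise; when he
    offers, he reveals a goat from the other two doors."""
    offers = wins = 0
    for _ in range(trials):
        picked_car = random.randrange(3) == 0  # first pick is the car w.p. 1/3
        p = p_offer_right if picked_car else p_offer_wrong
        if random.random() < p:
            offers += 1
            # switching to the other unopened door wins iff the pick was wrong
            wins += (not picked_car)
    return wins / offers

print(switch_win_rate(1.0, 1.0))  # classic rules: ~0.667
print(switch_win_rate(1.0, 0.0))  # offer made only when you're right: 0.0
```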


The question as stated does not permit such a choice by the host since if it were a choice, it had already been made.

This is like being presented with a nearly completed game of chess, asked whether the loser can lose in one move, and then arguing that the answer is more nuanced because other moves might have been taken that produced different end games rather than this particular one. We do not care about those other end games, since we are only considering this particular one.


> The question as stated does not permit such a choice by the host since if it were a choice, it had already been made.

Whether the choice was already made by the host makes no difference, what matters is what information about the hidden state can be derived from that choice.

Let's say the rules of the game are modified to say that the host never offers a re-selection when you have already selected a door with a goat. Then if the host has offered you a re-selection you should definitely not take it, because you already have the good prize. You know this because the re-selection offer provides information about what is behind the door you selected.

In fact, any time your choice of door has any statistical effect on whether a re-selection is offered, then a re-selection offer (or the lack of one) provides a small amount of information that modifies the expected value of choosing a new door.

> This is like being presented with a nearly completed game of chess, asked if the loser can lose in 1 move and then arguing that the answer is more nuanced because there might have been other moves taken that produced a different end games

It is absolutely nothing like that. That is not a question about statistics or probability.


> The question as stated does not permit such a choice by the host since if it were a choice, it had already been made.

I don't think you understand the concept of conditional probabilities correctly.

The fact that event B already happened doesn't make it any easier to compute P(A|B) nor it renders the P(B) useless.

On the contrary P(B) and P(AB) are key to solve this problem.
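For what it's worth, the conditional calculation under the standard rules (host always reveals a goat, breaking ties at random) is short enough to write out; this is just Bayes' theorem over the three door positions.

```python
from fractions import Fraction

# You pick door 1; the host, under the standard rules, opens door 3.
prior = {d: Fraction(1, 3) for d in (1, 2, 3)}
# P(host opens door 3 | car behind d):
likelihood = {
    1: Fraction(1, 2),  # car behind your pick: host opens 2 or 3 at random
    2: Fraction(1),     # car behind 2: host is forced to open 3
    3: Fraction(0),     # host never reveals the car
}
evidence = sum(prior[d] * likelihood[d] for d in prior)
posterior = {d: prior[d] * likelihood[d] / evidence for d in prior}
print(posterior)  # door 1 -> 1/3, door 2 -> 2/3, door 3 -> 0
```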


I think this explanation is just cope. Nothing about the problem should lead you to believe that the host is an evil genie purposefully trying to trick you. Attacking the framing device for the problem is the kind of post-hoc rationalization you make after failing at a probability test.


> Nothing about the problem should lead you to believe that the host is an evil genie purposefully trying to trick you.

Is it really unreasonable to assume that the host would like to keep the car? As I see it, that's the economic intuition behind why most people don't switch.


I would assume Monty Hall was paid the same amount either way.


I think we're disagreeing about how much it's reasonable to assume. I'm happier treating it as a self contained problem (in which case I'd say that the form quoted by CrazyStat is underspecified); but if you're familiar with the TV show it's based on, you can reasonably assume that he always opens a door with a goat and gives you a chance to switch.

My objection is to the claim that "most people get it wrong", if most people are being fed the underspecified problem. I think the gut reaction is not to switch, because in most comparable situations across human experience it would be a mistake (imagine a similar situation at a sketchy-looking carnival game rather than a TV show). They then try to justify that formally and make mistakes in their justification, but the initial reaction not to swap is reasonable unless they've been convinced that Monty Hall always opens a door with a goat and gives a chance to switch.


> My objection is to the claim that "most people get it wrong", if most people are being fed the underspecified problem. I think the gut reaction is not to switch, because in most comparable situations across human experience it would be a mistake (imagine a similar situation at a sketchy-looking carnival game rather than a TV show

This may have a role to play. However, there is a long history of people who aren't "going off their gut", including statisticians, getting this wrong with a very high level of confidence. It seems pretty clear that there is more than just an "underspecificity" problem. If you properly specify the problem, you will get similar error levels.


I agree, but I believe the reason for the errors is because people intuitively have a pretty good grasp of the game theory for the situation where someone is trying not to give you something they promised (and it's the sort of thing where IRL you shouldn't believe somebody trying to convince you to change your mind, so it's a useful bias to ignore parts of the problem even when it is fully specified). I believe that the statisticians then try to justify that, and end up making incorrect arguments.


> I agree, but I believe the reason for the errors is because people intuitively have a pretty good grasp of the game theory for the situation where someone is trying not to give you something they promised

Unfortunately this doesn't match reality. The vast majority of people who got the problem wrong when it was first published are not confused about the rules and insist that the chances are even (same chance to get a car switching as not switching). This doesn't match a theory that these people think the host is trying to trick the player in some way.

Additionally, you can reframe the problem and will still see significant error rates.


Here is the source material for the article:

https://web.archive.org/web/20130121183432/http://marilynvos...

It contains a clarification that the article omitted from the description:

> So let’s look at it again, remembering that the original answer defines certain conditions, the most significant of which is that the host always opens a losing door on purpose. (There’s no way he can always open a losing door by chance!) Anything else is a different question.


Yes, if the host opens a door, it will always be a losing door. Nobody is disputing that.

The part that is underspecified is: does the host always open a door, and if not, does the player's choice of a door impact whether the host opens one?


I think you should take the time to understand why this explanation matters. It reveals some important things about how people can make mistakes with statistics. Not understanding something doesn't make it "cope".


Did you not read the entire article on how scads of intelligent people got this wrong? And the explanation of why they got it wrong? It’s like following a map that carefully routes you around a sinkhole, and then stepping right into the sinkhole.


I read the entire article. They all had defective reasoning. The player picked an option with a 1/3 chance of being right and a 2/3 chance of being wrong. The host’s action did not change that. However, the host’s action did make the remaining door have a 2/3 chance of being right and a 1/3 chance of being wrong.


This is downvoted, but correct. Which door is right and which door is wrong doesn't get reshuffled when the host removes a wrong door, so even though there's only 2 doors left the chance isn't 50:50, it's still 33:67 - with the player having most likely chosen a wrong door.
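If anyone wants to check the 33:67 claim empirically, here's a minimal simulation, assuming the standard rules (the host always opens an unchosen door with a goat and always offers the switch):

```python
import random

def play(switch, trials=100_000):
    """Repeated rounds of the standard Monty Hall game; returns win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(play(switch=False))  # ~0.333: staying keeps your original 1/3
print(play(switch=True))   # ~0.667: switching wins whenever the first pick was wrong
```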


It's not correct. P(A|B)=P(A) only if A and B are independent.

Requiring independence in this case literally means "the host opens the door regardless of the player making the right or wrong choice first time". It's a core assumption in your calculations, without it the math is not correct.


Not this again... https://duckduckgo.com/?q=monty+hall+site%3Anews.ycombinator...

I need to write a blog post or something convincing everyone we need to stop talking about the Monty Hall problem and replace it with a new problem with all the ambiguities removed. (Unless ambiguity is the point, then Monty Hall is fine.)


There are no ambiguities in the Monty Hall problem. It's usually people who skim read and make assumptions about the challenge. No new problem is going to stop people from skim reading.

For example, going by that ddg search, one result is making a fuss about not knowing whether Monty opens a door at random and happens to show a goat, or purposefully opens a door with a goat behind it. But we do know: it's always on purpose; Monty never opened a door with a car behind it, which would have prematurely ended the bet. So there's no ambiguity.

The problem is cool because the right answer doesn't seem intuitively right, even though it can be formally shown to be right.


> But we do know: it's always on purpose, Monty never opened a door with a car behind it

We only know that if the problem tells us. Sometimes it doesn't.

> There are no ambiguities in the Monty Hall problem

The problem has been written up thousands of times. I'm sure that some writeups are sufficiently unambiguous, but many are not. For example, consider the two "variants" described by this comment https://news.ycombinator.com/item?id=8664550

> The host selects one of the doors with a goat from the remaining two doors, and opens it.

> The host chooses one of the remaining two doors at random and opens it, showing a goat.

This commenter was trying hard for semantic precision, and yet, I think if you encountered the first variant in isolation it would be perfectly reasonable to interpret it as "The host [randomly] selects one of the doors with a goat [although he might have selected the prize]" even though this is clearly not what the commenter was attempting. If you disagree, that only proves my point: this problem is prone to silly and wasteful semantic debate, rather than the interesting probability result it should be focused on.


I don't think it's reasonable to assume Monty picks a door at random, no.


The original wording was "You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat."

I think there's a good argument that the intended interpretation should be the favored one. If he doesn't use the knowledge to always reveal a goat, why did they bother specifying that he has the knowledge?

But it still doesn't quite explicitly say that he's not picking randomly.


What if he used the knowledge to decide whether or not to open a door and offer you the choice? I think that scenario is why most people instinctively want to not switch (even if they aren't consciously aware of it), so it's a pity to disregard it.


People instinctively want not to switch because the probabilities involved aren't intuitive.

Notice if Monty uses the knowledge of whether to open the door based on what you choose, he'd be giving the whole game away, so that's not it.


It seems to me you're answering your own question. There's only one reasonable interpretation; you have to go out of your way (unreasonably!) to find any ambiguity.


Nah, there's a lot of space between "this should be the favored interpretation" and "this is strictly implied" - especially in the context of a math puzzle!


I don't think there is. This is a simple brain teaser. It's fun because the right answer is counterintuitive, that's all.

It's not at all about "rules lawyering" the premise of the puzzle.


But this makes no sense. You're proving the "counterintuitive answer" with probability theory, but when confronted with the fact that your proof is not mathematically correct you say "this is a simple brain teaser". I don't see how that changes anything. If you rely on probability theory then you have to do it correctly; there's no special brain-teaser version of math.

It's like going around saying that "the counterintuitive answer to the equation x/x=1 is 17" and then "proving it" by dividing 17 by 17. When confronted with the fact that in math to solve an equation is to find ALL solutions and not just one, you say "It's not at all about rules lawyering the premise of the puzzle". Well, avoid dealing with the math then, because math is in fact all about the rules.


> But this makes no sense. You're proving the "counterintuitive answer" with the probability theory but when confronted with the fact that your proof is not mathematically correct you say "this is a simple brain teaser"

The answer IS mathematically correct. It's just counterintuitive and it made some PhDs trip.

Nobody here -- not even you, before -- was arguing it's mathematically incorrect. Some people, when told the right answer, claimed the problem is underspecified and admits more than one context that may change the answer, which is not at all the same as saying the answer is mathematically incorrect! Not even Diaconis, the person some of you are so eager to defend (for some bizarre reason) claims it's "mathematically incorrect"!

It seems you're grasping at straws now.


I do argue that. In probability theory you can't assume independence of two random variables unless it's a part of the problem statement because this assumption changes everything. Where I got my degree this would be considered an error and an incorrect solution to the problem.

It's not different from how in middle school you can incorrectly assume that "x is positive" when solving x^2=4 in R. An answer "x=2" is mathematically incorrect.


> I do argue that

Well, you're wrong. What's worse, the followup by vos Savant clarified this.

> In probability theory you can't assume independence of two random variables unless it's a part of the problem statement

In most logic puzzles you can safely assume an interpretation of the problem that makes sense and which doesn't go into extraneous tangents. Going "well, akshually, if we assume a spherical goat..." is usually a bad sign.

Frankly, all of this still reads as an a posteriori rationalization for finding the solution to the straightforward formulation of the puzzle counterintuitive.


> Frankly, all of this still reads as an a posteriori rationalization for finding the solution to the straightforward formulation of the puzzle counterintuitive

Luckily, when I was introduced to that problem many years ago it was presented correctly, and the answer, albeit counterintuitive, was perfectly clear. Developing an intuition for it was also rather easy (what is the chance I guessed incorrectly on the first try? That's the answer).

What's above is my reaction to the incomplete formulation of the problem and an incorrect answer that follows.

The reason I'm so annoyed by this is because probability theory is very fragile and only works when applied with absolute precision. If you follow your approach with the Two Envelopes Problem and make some "reasonable assumptions", you get a crazy answer (always switch). And people who are in the business of logic puzzles rather than probability theory wouldn't even know the difference.

Therefore I would rather discourage people from working on logic puzzles and suggest doing the actual math instead.


I imagine one reason people have a hard time with the Monty Hall problem is that they have learnt a rule that seems to fit but really doesn't. A person not trained at all in math might do better, as they haven't learnt that rule.

There's probably a name for that cognitive bias, but I don't know it.


What new problem without ambiguities do you suggest?


I'm still working on it, but I think a key idea is that 1) The "host" tells you he is going to eliminate one of the two losing options, and then allow you to choose from the two remaining options. 2) The host allows you, if you wish, to "protect" one of the options from being eliminated. If you choose to protect, you may still choose either of the remaining options after elimination occurs.

The correct solution is to protect one of the options and then choose the other option.

The real challenge is coming up with appropriate flavor text for this idea, but I think I'll get it eventually.
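A quick simulation of the variant described above (the rules are from the comment; the code is my sketch): protecting one option and then taking the other survivor should win about 2/3 of the time, for the same reason switching does in the original.

```python
import random

def play_protect(trials=100_000):
    """'Protect' variant: protect one of three options, the host then
    eliminates a losing, unprotected option, and you pick the surviving
    option you did NOT protect. Returns the win rate."""
    wins = 0
    for _ in range(trials):
        winner = random.randrange(3)
        protected = random.randrange(3)
        # Host eliminates a losing option that is not protected.
        candidates = [o for o in range(3) if o != winner and o != protected]
        eliminated = random.choice(candidates)
        # Strategy: choose the surviving option you did not protect.
        pick = next(o for o in range(3) if o not in (protected, eliminated))
        wins += (pick == winner)
    return wins / trials

print(play_protect())  # ~0.667
```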


A reference point in case you’re interested…

Not sure what it is today, but about 25 years ago, the standard 30-minute TV show had 22 minutes of content and 8 minutes of ads.

An hour long show was exactly double that, at 44 minutes of content and 16 minutes of ads.


On one of my streaming services the times are shown with ads, though I pay to avoid them. A 2-hour show mostly comes in at 90 minutes, just as you say.


I like the analogy of compression, in that a distilled model of an LLM is like a JPEG of a photo. Pretty good, maybe very good, but still lossy.

The question I hear you raising seems to be along the lines of, can we use a new compression method to get better resolution (reproducibility of the original) in a much smaller size.


> in that a distilled model of an LLM is like a JPEG of a photo

That's an interesting analogy, because I've always thought of the hidden states (and weights and biases) of an LLMs as a compressed version of the training data.


And what is compression but finding the minimum amount of information required to reproduce a phenomenon? I.e., discovering natural laws.


Finding minimum complexity explanations isn't what finding natural laws is about, I'd say. It's considered good practice (Occam's razor), but it's often not really clear what the minimal model is, especially when a theory is relatively new. That doesn't prevent it from being a natural law, the key criterion is predictability of natural phenomena, imho. To give an example, one could argue that Lagrangian mechanics requires a smaller set of first principles than Newtonian, but Newton's laws are still very much considered natural laws.


Maybe I'm just a filthy computationalist, but the way I see it, the most accurate model of the universe is the one which makes the most accurate predictions with the fewest parameters.

The Newtonian model makes provably less accurate predictions than Einsteinian (yes, I'm using a different example), so while still useful in many contexts where accuracy is less important, the number of parameters it requires doesn't much matter when looking for the one true GUT.

My understanding, again as a filthy computationalist, is that an accurate model of the real bonafide underlying architecture of the universe will be the simplest possible way to accurately predict anything. With the word "accurately" doing all the lifting.

As always: https://www.sas.upenn.edu/~dbalmer/eportfolio/Nature%20of%20...

I'm sure there are decreasingly accurate, but still useful, models all the way up the computational complexity hierarchy. Lossy compression is, precisely, using one of them.


The thing is, Lagrangian mechanics makes exactly the same predictions as Newtonian, and it starts from a foundation of just one principle (least action) instead of three laws, so it's arguably a sparser theory. It just makes calculations easier, especially for more complex systems; that's its raison d'être. So in a world where we don't know about relativity yet, both make the best predictions we know (and they always agree), but Newton's laws were discovered earlier. Do they suddenly stop being natural laws once Lagrangian mechanics is discovered? Standard physics curricula would not agree with you, btw; they practically always teach Newtonian mechanics first and Lagrangian later, also because the latter is mathematically more involved.
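To make the "same predictions" claim concrete: for a single particle with L = T - V, the Euler-Lagrange equation collapses back to Newton's second law (a textbook one-liner, sketched here in one dimension):

```latex
L(x, \dot{x}) = \tfrac{1}{2} m \dot{x}^2 - V(x), \qquad
\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x}
  = m\ddot{x} + \frac{dV}{dx} = 0
  \;\Longrightarrow\;
  m\ddot{x} = -\frac{dV}{dx} = F
```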


I will argue that 'has least action as foundation' does not in itself imply that Lagrangian mechanics is a sparser theory:

Here is something that Newtonian mechanics and Lagrangian mechanics have in common: it is necessary to specify whether the context is Minkowski spacetime, or Galilean spacetime.

Before the introduction of relativistic physics the assumption that space is euclidean was granted by everybody. The transition from Newtonian mechanics to relativistic mechanics was a shift from one metric of spacetime to another.

In retrospect we can recognize Newton's first law as asserting a metric: an object in inertial motion will in equal intervals of time traverse equal distances of space.

We can choose to make the assertion of a metric of spacetime a very wide assertion: such as: position vectors, velocity vectors and acceleration vectors add according to the metric of the spacetime.

Then to formulate Newtonian mechanics these two principles are sufficient: The metric of the spacetime, and Newton's second law.

Hamilton's stationary action is the counterpart of Newton's second law. Just as in the case of Newtonian mechanics: in order to express a theory of motion you have to specify a metric; Galilean metric or Minkowski metric.

To formulate Lagrangian mechanics: choosing stationary action as foundation is in itself not sufficient; you have to specify a metric.

So: Lagrangian mechanics is not sparser; it is on par with Newtonian mechanics.

More generally: transformation between Newtonian mechanics and Lagrangian mechanics is bi-directional.

Shifting between Newtonian formulation and Lagrangian formulation is similar to shifting from cartesian coordinates to polar coordinates. Depending on the nature of the problem one formulation or the other may be more efficient, but it's the same physics.


You seem to know more about this than me, but it seems to me that the first law does more than just induce a metric; I've always thought of it as positing inertia as an axiom.

There's also more than one way to think about complexity. Newtonian mechanics in practice requires introducing forces everywhere, especially for more complex systems, to the point that it can feel a bit ad hoc. Lagrangian mechanics very often requires fewer such introductions and often results in descriptions with fewer equations and fewer terms. If you can explain the same phenomenon with fewer 'entities', then it feels very much like Occam's razor would favor that explanation to me.


Indeed inertia. Theory of motion consists of describing the properties of Inertia.

In terms of Newtonian mechanics the members of the equivalence class of inertial coordinate systems are related by Galilean transformation.

In terms of relativistic mechanics the members of the equivalence class of inertial coordinate systems are related by Lorentz transformation.
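For concreteness, the two transformation groups mentioned above, in their standard textbook form (a boost with relative velocity $v$ along the shared $x$ axis):

```latex
\text{Galilean:}\qquad x' = x - vt, \qquad t' = t
\qquad\qquad
\text{Lorentz:}\qquad x' = \gamma\,(x - vt), \qquad
t' = \gamma\!\left(t - \frac{vx}{c^2}\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```

The Galilean transformation is recovered from the Lorentz transformation in the limit $c \to \infty$, where $\gamma \to 1$.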

Newton's first law and Newton's third law can be grouped together in a single principle: the Principle of uniformity of Inertia. Inertia is uniform everywhere, in every direction.

That is why I argue that for Newtonian mechanics two principles are sufficient.

The Newtonian formulation is in terms of F=ma; the Lagrangian formulation is in terms of interconversion between potential energy and kinetic energy.

The work-energy theorem expresses the transformation between F=ma and potential/kinetic energy. I give a link to an answer of mine on physics.stackexchange where I derive the work-energy theorem: https://physics.stackexchange.com/a/788108/17198

The work-energy theorem is the most important theorem of classical mechanics.
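A sketch of the standard derivation (the linked answer goes into more detail): integrate both sides of F = ma along the path, and substitute ds = v dt:

```latex
W = \int_{s_0}^{s_1} F \, ds
  = \int_{s_0}^{s_1} m a \, ds
  = \int_{t_0}^{t_1} m \frac{dv}{dt}\, v \, dt
  = \int_{v_0}^{v_1} m v \, dv
  = \tfrac{1}{2} m v_1^2 - \tfrac{1}{2} m v_0^2
```

With a conservative force, $F = -dV/ds$, the left-hand side equals $-(V_1 - V_0)$, giving $\Delta E_k + \Delta V = 0$: exactly the interconversion of potential and kinetic energy.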

About the type of situation where the energy formulation of mechanics is more suitable: when there are multiple degrees of freedom, the force and the acceleration of F=ma are vectorial. So F=ma has the property that there are vector quantities on both sides of the equation.

When expressing in terms of energy: as we know, the value of kinetic energy is a single number; there is no directional information. In the process of squaring the velocity vector the directional information is discarded; it is lost.

The reason we can afford to lose the directional information of the velocity vector: the description of the potential energy still carries the necessary directional information.

When there are, say, two degrees of freedom the function that describes the potential must be given as a function of two (generalized) coordinates.

This comprehensive function for the potential energy allows us to recover the force vector. To recover the force vector we evaluate the gradient of the potential energy function.

The function that describes the potential is not itself a vector quantity, but it does carry all of the directional information that allows us to recover the force vector.
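A minimal numerical sketch of that recovery, assuming a simple harmonic potential V(x, y) = ½k(x² + y²) (my example, not from the thread); the force vector is recovered as the negative gradient, here via central finite differences:

```python
# Recover the force vector from a scalar potential via the gradient:
# F = -grad V. Illustrated with V(x, y) = 0.5*k*(x^2 + y^2), whose
# analytic force is F = (-k*x, -k*y).

k = 2.0  # spring constant (arbitrary value for this sketch)

def V(x, y):
    return 0.5 * k * (x**2 + y**2)

def force(x, y, h=1e-6):
    """F = -grad V, approximated by central finite differences."""
    fx = -(V(x + h, y) - V(x - h, y)) / (2 * h)
    fy = -(V(x, y + h) - V(x, y - h)) / (2 * h)
    return fx, fy

fx, fy = force(1.0, -3.0)
print(fx, fy)  # close to the analytic (-2.0, 6.0)
```

The scalar V carries no direction itself, yet its gradient yields the full force vector — which is the point being made above.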

I will argue the power of the Lagrangian formulation of mechanics is as follows: when the motion is expressed in terms of interconversion of potential energy and kinetic energy there is directional information only on one side of the equation; the side with the potential energy function.

When using F=ma with multiple degrees of freedom there is a redundancy: directional information is expressed on both sides of the equation.

Anyway, expressing mechanics taking place in terms of force/acceleration or in terms of potential/kinetic energy is closely related. The work-energy theorem expresses the transformation between the two. While the mathematical form is different the physics content is the same.


Nicely said, but I think then we are in agreement that Newtonian mechanics has a bit of redundancy that can be removed by switching to a Lagrangian framework, no? I think that's a situation where Occam's razor can be applied very cleanly: we can make the exact same predictions with a sparser model.

Now the other poster has argued that science consists of finding minimum complexity explanations of natural phenomena, and I just argued that the 'minimal complexity' part should be left out. Science is all about making good predictions (and explanations); Occam's razor is more like a guiding principle to help find them (a bit akin to shrinkage in ML) rather than a strict criterion that should be part of the definition. And my example to illustrate this was Newtonian mechanics, which in a complexity/Occam's sense should be superseded by Lagrangian, yet that's not how anyone views this in practice. People view Lagrangian mechanics as a useful calculation tool to make equivalent predictions, but nobody thinks of it as nullifying Newtonian mechanics, even though it should be preferred from Occam's perspective. Or, as you said, the physics content is the same, but the complexity of the description is not, so complexity does not factor into whether it's physics.


Laws (in science, not government) are just relationships that are consistently observed, so Newton's laws remain laws until contradictions are observed, regardless of the existence of one or more alternative models which would predict them to hold.

The kind of Occam’s Razor-ish rule you seem to be trying to query about is basically a rule of thumb for selecting among formulations of equal observed predictive power that are not strictly equivalent (that is, even if they predict exactly the same actually observed phenomena instead of different subsets of subjectively equal importance, they still differ in predictions which have not been testable), whereas Newtonian and Lagrangian mechanics are different formulations that are strictly equivalent, which means you may choose between them for pedagogy or practical computation, but you can't choose between them for truth, because the truth of one implies the truth of the other, in either direction; they are exactly the same in substance, differing only in presentation.

(And even where it applies, it's just a rule of thumb to reject complications until they are observed to be necessary.)


Newtonian and Lagrangian mechanics are equivalent only in their predictions, not in their complexity - one requires three assumptions, the other just one. Now you say the fact that they have the same predictions makes them equivalent, and I agree. But it's clearly not compatible with what the other poster said about looking for the simplest possible way to explain a phenomenon. If you believe that that's how science should work, you'd need to discard theories as soon as simpler ones that make the same predictions are found (as in the case of Newtonian mechanics). It's a valid philosophical standpoint imho, but it's in opposition to how scientists generally approach Occam's razor, as evidenced eg by common physics curricula. That's what I was pointing out. Having to exclude Newtonian mechanics from what can be considered science is just one prominent consequence of the other poster's philosophical stance, one that could warrant reconsidering whether that's how you want to define it.


> Do they suddenly stop being natural laws once Lagrangian mechanics is discovered?

Not my question to answer, I think that lies in philosophical questions about what is a "law".

I see useful abstractions all the way down. The linked Asimov essay covers this nicely.



Well, JPEG can be thought of as a compression of the natural world of which the photograph was taken


And we can answer the question why quantization works with a lossy format, since quantization just drops accuracy for space but still gives us a good enough output, just like a lossy jpeg.

Reiterating again, we can lose a lot of data (have incomplete data) and have a perfectly visible jpeg (or MP3, same thing).
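A toy sketch of the idea — not JPEG's actual per-frequency quantization tables, just a uniform quantizer with an assumed step q — showing the accuracy-for-space trade with bounded reconstruction error:

```python
# Toy uniform quantizer: trade accuracy for space, as in JPEG's
# coefficient quantization step. Round-to-nearest keeps the
# reconstruction error within q/2.

def quantize(xs, q):
    return [round(x / q) for x in xs]   # small integers: cheap to store

def dequantize(codes, q):
    return [c * q for c in codes]       # lossy reconstruction

signal = [0.12, 3.9, -2.47, 7.01, 0.0, -5.55]
q = 0.5
codes = quantize(signal, q)
restored = dequantize(codes, q)

max_err = max(abs(a - b) for a, b in zip(signal, restored))
print(codes)     # [0, 8, -5, 14, 0, -11]
print(max_err)   # stays within q/2 = 0.25
```

Most of the data (the fractional detail) is gone, yet the restored signal is "good enough" — the same reason a heavily quantized JPEG is still perfectly viewable.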


This brings up an interesting thought too. A photo is just a lossy representation of the real world.

So it's lossy all the way down with LLMs, too.

Reality > Data created by a human > LLM > Distilled LLM


What you say makes sense, but is there the possibility that because it’s compressed it can generalize more? In the spirit of bias/variance.


Yeah but it does seem that they're getting high % numbers for the distilled models' accuracy against the larger model. If the smaller model is 90% as accurate as the larger but uses much less than 90% of the parameters, then surely that counts as a win.


The money doesn’t matter.

The goals don’t matter.

The people don’t matter.

The only thing that matters is how much regulatory red tape is involved.

My guess is that the paperwork will kill this. Read the announcement. Too much discussion about regulatory framework. In the US or China, all you need is some money and smart people. That's a very low barrier to moving forward.


In other words, to be successful you need to be able to break the law and lobby the government? That is indeed the USA mindset, or should I say United Corporations of America? I'm happy EU is not USA.


That’s absolutely asinine and not at all what I said.

The EU over-regulates things like tech, and that's why it won't be successful at having an AI tech scene. Over time, anyone good will migrate to the US or China where they can work faster and not have as many rules to deal with.

A simple example is hiring and firing people - it’s much easier to make personnel changes in the US than Europe. As a result, US companies can take more risks.


Yes, US has at-will firing, and healthcare tied to the employer, and so forth. Basically the US has made sure that the corporations have all the power, and the people have none of it. Does this make it easier to make companies? Well of course it does, just like slave trade made it easier to collect crops.

Unlike the US however, we in EU really like having basic human rights - such as mandatory minimum vacation time, healthcare that won't immediately disappear if you lose your job, or depend on the job, as well as not getting fired without cause, and without multiple warnings beforehand.

If the result of this means that we won't be successful in the AI tech scene, or that all the Musk-like slave owners migrate to US or China where they can abuse people however much they like, I'm pretty sure Europeans are not going to shed a tear over that.

I realize more and more that the main difference between Americans and Europeans is that Americans think from the perspective of a corporation, whereas Europeans think from the perspective of themselves, as human beings. We're not compatible, clearly, so there's no need to force us to be the same.


I disagree with "basic human rights". They aren't. And the reason they aren't is because one person's mandatory minimum vacation time is another person's liability.

Yeah, it's great for the employee - I totally agree. But if you run a startup, that's a huge cost.

So yes, we disagree on approaches, and that's fine. Not everyone needs to be like us, and if you reread my original comment, I never said they did. [0]

[0] - "My guess is that the paperwork will kill this."

------ side note:

I'm American. I spent 2 weeks in Europe last summer for vacation. I loved it. Food was great, Formula 1 was great. Overall a fantastic time.

But if I'm going to run a startup, I would never do it in Europe. An organic foods company - sure - that would be a great place to do it.


From experience, regulation as an explanation for EU startup competitiveness is overused so much it's almost meaningless. Can you point out specific laws that you consider existential for startups?

What I find matters way, way more is two factors:

- Concentration of capital. The US has an ecosystem of wealthy people that want to put their money somewhere. This is good for startups, but can also backfire as we can see in the news.

- Unified market. EU is not a single market, it's several dozen markets with different regulations, different languages, and different cultures. You can't sell the same B2C product with the same marketing in Germany, Spain, and Sweden as easily as you can in California, Ohio, and Texas.



First, your last point answers your first question: a non-unified market is an implicit result of too many regulations. Harmonizing them would create a more unified market. The US is efficient because it is more homogenous. That efficiency is one of the things that leads to capital formation.

So, I think you have causation backwards. Capital formation doesn't really happen because it's too difficult to build and grow things in Europe.

Look at tech in Silicon Valley - all that capital formation is years worth of growth and reinvestment.

Look at oil & gas Texas - again, all that capital comes from years of growth and reinvestment.

And what you learn in Silicon Valley you can generally apply to starting a company in Austin, Texas. What would happen if Mercedes wanted to move its company (HQ and all) to Spain? How much would it have to relearn from a regulatory perspective?


I agree that the announcement should've talked more about goals and performance than regulatory stuff ;-).

But I think there is a new understanding among the bureaucracy that regulation (alone, without innovation) will kill Europe´s competitiveness and that some acceleration and cutting of red tape is necessary.

Can't say with certainty that this will be successful. But that we, as a very young startup that is barely known outside of our AI Open Source niche, are part of this is already a sign in itself - a year ago I'd have never believed that this might be an option (and I also probably would've declined if someone had asked us to join an EU-funded project).

We will have engineers without a degree (but hundreds of thousands of HF downloads) working side-by-side with some of the top researchers + HPC centers.


I wish the effort well. Any change is welcome.


> China, all you need is some money and smart people

No way


And yet, when we look at the world around us… it seems… prescient.

Excellent book that still delivers 3 decades after it came out.


Seems the 2 biggest trends in phones are satellites and AI.

To me at least, everything else is just noise when a new phone comes out.


Well AI is a trend in everything right now.

And sats for mobile aren't really a big thing. Qualcomm worked on it and killed it, Samsung did for a while and killed it, Bullitt had it and it was a fringe thing before they collapsed. Only Apple really stood by it. And it's really only their moat, they're still not selling subscriptions.

Starlink direct to cell is only available on a fraction of sats, even those launched now. They won't be able to launch a full direct to cell fleet until starship comes online due to the added size and weight of the sats.

As an android user I still don't have any replacement for my Garmin inreach which is really a pain because Garmin put the prices up considerably. I was hoping to be able to replace it by now. Also, Bullitt's service is kinda a no go for emergencies because it uses a geostationary network so it won't work from valleys with mountains to the south (in the Northern hemisphere that is). Starlink, globalstar (apple) and Iridium (qualcomm, until they killed the deal) don't have this issue being low earth orbit.
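The valley problem above comes down to elevation angle: a geostationary satellite sits lower on the horizon the farther north you go. A rough sketch (my own geometry and standard radii, not figures from the thread), assuming the satellite is at the observer's longitude, i.e. due south in the northern hemisphere:

```python
# Rough elevation angle of a geostationary satellite above the local
# horizon, assuming it sits at the observer's longitude. Radii in km
# are standard textbook values.
import math

R_EARTH = 6378.0   # Earth's equatorial radius
R_GEO = 42164.0    # geostationary orbital radius (from Earth's center)

def geo_elevation_deg(lat_deg):
    """Elevation of the satellite above the horizon, in degrees."""
    lat = math.radians(lat_deg)
    rho = R_EARTH / R_GEO
    return math.degrees(math.atan2(math.cos(lat) - rho, math.sin(lat)))

for lat in (30, 47, 60, 70):
    print(f"lat {lat}: elevation {geo_elevation_deg(lat):.1f} deg")
```

At Alpine latitudes (~47° N) this puts the satellite only about 36° above the southern horizon, so any ridge higher than that blocks the link entirely — LEO constellations avoid this by passing overhead.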

I guess the emergency service isn't the big selling point everyone thought it was, and the capacity isn't there for much more than text messaging yet.


I also cannot wait for Apple to improve the product because it should force Garmin to actually charge reasonable prices for their service. There is a yearly fee just to keep the subscription "active" and then you pay for service on a rolling month basis. The extortionate per-message fees are reminiscent of early SMS days in the States.

The product and service itself is pretty reliable though. I've taken my Inreach in all sorts of places and had only rare issues with it.


> There is a yearly fee just to keep the subscription "active" and then you pay for service on a rolling month basis.

This is changing soon. They will now charge a keepalive fee per month. Previously you could pay $40 per year and just activate it for the months you needed it (the freedom plans). Those are gone unfortunately. For an irregular user it now goes from $40 to $120 per year to keep it active.

And yeah I wish there was a bit more competition here. Right now inreach is still the best for me even compared to PLBs though. Because inreach can do 10-minute breadcrumbs. So even when you have a bad fall the homefront can see where you were last.

A PLB requires the user to be conscious to turn it on and unfold the antenna. And Apple's solution requires aiming at the satellite.


For backcountry I would love to have another option (and be able to stream video and games lol). But I wouldn't drop the Garmin for life-saving reasons. I *might* if my Garmin watch were able to use the Garmin SOS. Phone + very long-lasting / hard-to-break watch might be enough for 90% of my nights out.


https://www.verizon.com/about/news/verizon-skylo-launch-dire...

The support is standards-based, and while they are all getting their feet wet with a preferred partner, with minimal messaging support focused on emergency use cases, there is enough traction to flesh out major use cases across all carriers, with out-of-the-box enablement at the mobile-platform layer integrated on supported newer devices with the requisite radio hardware.


> Well AI is a trend in everything right now.

Seriously, I am fascinated by all the advances in AI, but man, I don't need an AI assistant in my Adobe PDF reader.


Satellite limited to one country is useless for all of us living in the free world. AI on a phone is... well let's be honest Apple "Intelligence" is an utter gimmick.

Sadly there seems to have been zero investment in better user interfaces; settings screens have become a joke, and cameras seem to have stagnated.


And it was the cameras for the first 10 years or so.


Me to ChatGPT:

Assume you are a college instructor for a Freshman Computer Science course.

Your job is to take a pdf file from the internet and teach the topics to your students.

You will do this by writing paragraphs or bullet points about any and all key concepts in the PDF necessary to cover the topic in 2 hours of lectures

The pdf file is at https://arxiv.org/pdf/2501.09223

Build the lecture for me.


It can't read PDFs. If you ask it to, it generates code to read the first X characters of the PDF and does a bad job.

(Claude is much better at it.)


Yes it can - both via web searches and uploads (at least I'm doing it daily).

EDIT: This article says it's only in ChatGPT Enterprise, but it works for me on the free plan: https://help.openai.com/en/articles/10416312-visual-retrieva...


That article is referencing visuals embedded in PDFs. As a free user you wouldn't be able to ask ChatGPT to analyze a graph inside a PDF, only text.


Mobile controls!!!!!!

It’s actually playable on my iPhone. Gotta love that!

Edit: I still suck at it just as much as I did back on the old 2600, but it’s still fun.

