Why is crashing less often better in these cases? When it crashes, you know there is a problem with the code that needs to be fixed. When it doesn't, you might have nonsensical results that you're not aware of.
It depends on the exact context, but I think that in many cases it's better to present wrong data to the user than to crash the page, since crashing the page makes it useless. Let's say Facebook accidentally ships a bug in the code that computes the "like" count on about 1 in 5 news feed items. I could imagine two scenarios:
1.) The JS code crashes and the Facebook news feed page breaks for basically everyone. The flood of errors is reported to the error monitoring system and Facebook engineers frantically fix or roll back the problem to limit the amount of time Facebook is unusable.
2.) 1 in 5 news feed items shows "NaN people liked this", but Facebook is otherwise usable. The flood of errors is reported to the error monitoring system and Facebook engineers frantically fix or roll back the problem to limit the amount of time the weird "NaN people" message is shown.
Scenario #1 is a really bad outage; scenario #2 is a temporary curiosity/annoyance that most people won't even notice. Hopefully it's clear that scenario #2 is the better situation for everyone. But it really does depend on context: if the bug is "a bank's website shows incorrect account balances", then crashing the page is probably the better user experience.
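Scenario #2 falls out of how JS arithmetic fails soft: math on a missing value yields NaN instead of throwing, so rendering continues and only the affected widget shows nonsense. A minimal sketch, using a hypothetical `formatLikes` helper and `likeCount` field (not Facebook's actual code):

```javascript
// Hypothetical like-count formatter; `likeCount` is an assumed field name.
function formatLikes(item) {
  const count = item.likeCount + 1; // bug: likeCount may be undefined
  return `${count} people liked this`;
}

const goodItem = { likeCount: 41 };
const buggyItem = {}; // likeCount missing, e.g. dropped by a server-side change

// undefined + 1 evaluates to NaN rather than throwing, so no exception
// propagates and the rest of the feed still renders.
console.log(formatLikes(goodItem));  // "42 people liked this"
console.log(formatLikes(buggyItem)); // "NaN people liked this"
```

In a language that threw on the bad addition, that same bug would take down the whole render instead of one label.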
I'm certainly not saying JS got it right; JS doesn't give you the part of #2 where you're alerted to non-fatal errors in production. But I think the basic idea of error resiliency has plenty of merit.