Oh, 100%. If you actually browse one of the lists, a lot of people are nominated because they raised a bunch of investment to do X, not because they've actually done X. Venture capital sponsors brand the events, and Forbes in return selects their prospects for marketing. Lists plural, too: there are so many categories that I counted 1,230 30-under-30 winners last year.
The robustness principle is locally optimal. If you want your software to not crash for users, then yes, you should silently correct weird inputs and make sure your outputs follow everyone else's happy paths. If you want a globally optimal ecosystem of reliable and predictable behaviour, then you want everyone rejecting non-conforming inputs and outputting data that hits all the edge cases of the formats, to shake out non-compliant servers.
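The two stances can be sketched with a toy date parser (a hypothetical example, not from any particular codebase): the lenient version quietly normalizes sloppy input, which keeps the local user happy but lets the sender's bug live on; the strict version rejects anything that isn't exactly the agreed format, which is what forces the ecosystem to converge.

```python
from datetime import date

def parse_date_lenient(s: str) -> date:
    # Postel-style: quietly accept and normalize sloppy input,
    # masking the sender's non-conformance.
    parts = s.replace("/", "-").split("-")
    y, m, d = (int(p) for p in parts)
    return date(y, m, d)

def parse_date_strict(s: str) -> date:
    # Ecosystem-style: reject anything that isn't exactly YYYY-MM-DD,
    # surfacing the sender's bug immediately.
    y, m, d = s.split("-")
    if len(y) != 4 or len(m) != 2 or len(d) != 2:
        raise ValueError(f"not ISO 8601: {s!r}")
    return date(int(y), int(m), int(d))

parse_date_lenient("2024/1/5")    # accepted and silently corrected
parse_date_strict("2024-01-05")   # accepted
# parse_date_strict("2024/1/5")   # raises ValueError
```

In the lenient world every implementation grows its own set of quiet corrections, and "valid" drifts toward whatever the big implementations happen to tolerate; in the strict world the spec stays the source of truth.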
Hypothetically, if you took even a small random sample of present-day humans, you would see a wide range of, say, skin tones, whereas a small sample from an isolated, stable population should show low variety. That kind of difference tells you whether the population history involved a sudden drop or a long-sustained small size. Presumably this is done in a more sophisticated statistical way across a wide variety of genes, of course.
Apparently there is another bottleneck ~900K years ago that has decent fossil-record support, but the Toba one is more disputed.
If we're going by "all species die out in the end" then anything we transition to will die too: entropy will get the whole universe in the end. Past that there's nothing that says any species can't expand out, settle whatever planets they want and stick around to the end. I'm also pretty cool with my millions-of-years-from-now descendants looking back at me and thinking about how different they are from me. I would consider it an incredible success to go that long without catastrophically destroying ourselves.
I would not describe Herb as a memory safety skeptic. He's a skeptic of what is practically achievable w/r/t memory safety within the C++ language and community. Any 100% memory-safe evolution of the language is guaranteed to break so much code that it would see near-zero adoption. In that context I think it makes sense to ask what version of the language we can create that catches the most errors while actually getting people to use it.
On one hand, it often feels like the tech industrial complex is all-consuming here. When you fly, you can sit and watch 12 different billboard ads in a row for different companies all claiming to be the "foundation of enterprise AI". All the money is there. On the other hand, objectively most people are not tech bros. Even if literally all office workers were (and they're not), people need to eat, get around, shop, get their plumbing fixed, their teeth cleaned, etc. Your perception of normalcy is probably mostly dictated by your willingness to engage in normal-people activities, and perhaps your command of Spanish.
The excess return from investing the difference can be seen as the premium for the extra risk compared to paying down the principal. What makes sense for a given person has to be evaluated in that light, and with the understanding that the utility of money is not linear. I think a lot of people are willing to give up a little upside to counter the possibility of being unable to make their mortgage payments.
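As a rough illustration of where that premium comes from (all numbers are hypothetical assumptions, not from the comment above): compare a guaranteed return from paying down the principal against a higher but risky expected market return.

```python
# Hypothetical numbers for illustration: a 4% mortgage vs a 7% expected
# (but not guaranteed) market return, over 10 years, on $10,000 of spare cash.
principal = 10_000
mortgage_rate = 0.04   # paying down principal is a risk-free "return" at this rate
market_return = 0.07   # expected return, with real downside risk
years = 10

pay_down = principal * (1 + mortgage_rate) ** years  # guaranteed outcome
invest = principal * (1 + market_return) ** years    # *expected* outcome only
risk_premium = invest - pay_down  # expected compensation for bearing the risk
```

The expected gap is the premium; whether it's worth taking depends on how badly a bad decade in the market would hurt you, which is exactly the non-linear-utility point.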
I was able to find this comment, linking to the talk that coined(?) it: https://news.ycombinator.com/item?id=36091791 I guess, in short, you would say that learning a leaky simplifying abstraction actually increases the amount you have to learn.
> KeyBanc analyst Justin Patterson downgraded shares sector weight from an overweight rating as the company’s focus on long-term product initiatives weighs on near-term growth and valuation.
> He wrote that “meaningful financial benefits” from these initiatives could take “several quarters” to materialize.
It's kind of beyond parody that you can grow revenue 41% YoY and people still complain when you do anything at all with a horizon as long as 6-9 months.