depletedgherkin's comments

I would say two other big factors are easier access to junk food, as well as more people adopting a sedentary lifestyle.

To the second point, a lot of people from my culture still eat the same portions of food their ancestors did, while living very different lifestyles today. Eating a whole plate of rice as part of a meal is necessary if you're a laborer who's farming or fishing all day, but you'll pack on weight if you eat the same diet as an office worker.


>Any video that is 9 to 11 minutes long, I automatically skip because that's the sweetspot length for maximizing ad revenue.

I think that may be outdated? 10 minutes was the sweet spot a few years ago, but YouTube lowered the mid-roll ad threshold to 8 minutes in 2020, so around 8 minutes is the sweet spot now.


It's a pleasant tangy taste that's pretty nice if you grew up eating it. Chocolate from other places sometimes feels like it's missing something...


I'm in Gen Z and I think I got it. The Tootsie pop commercial with the owl used to play all the time on Canadian TV.


Onne... Two-hooo!


I think this website is the main place keeping the term "pwned" alive haha. It's pretty much died out among Gen Z.


The Gen Z equivalent is "taking the L."


I feel like changes in culture are a bigger factor than economic ones.

Around the world it's normally the poorer, not richer, countries that have more kids on average, so I don't think finances are the main story here. A lot of countries with excellent childcare policies, like the Nordics, also have pretty low birth rates. (Not knocking those policies, though; I'm sure the parents there do appreciate them.)


Poorer places usually have lower costs for things like real estate and child care. Richer places tend to get “cost disease”, especially in real estate.

This in turn means that people need higher-level careers, and usually two earners per household, to afford the cost of living.

Overall this all raises GDP but it results in an environment that is hostile to family formation.

It could be a feedback loop too. Fewer kids might coincide with more career-oriented lifestyles, which drive higher earnings but also bid up real estate and other costs, and so on.


In many parts of the world, it is common for grandparents to take care of the babies and even small kids. In other words, free daycare. Not so common in the US.


As people delay parenting, this option becomes less available. When people routinely had kids in their 20s, grandparents started at about 40-50, when they still had a lot of energy to help. In a world where people have kids in their 30s, the average grandparent gets their first grandchild between 60 and 70. That means a sizeable fraction of them are already too frail, or even dead, to contribute significantly to childrearing, and may need assistance themselves. This compounds the problem: people end up raising small children at the same time their elderly parents start to need more care.


I wonder whether the stats would change if we defined rich/poor by real-estate affordability instead. Many poor countries have extremely affordable housing, even for their poor. Be careful with their homelessness stats, though, because in many of these countries a homeless person can go and build a "crib" out of stuff they collect around. Many people live in these conditions, but it counts as a legitimate home.


I also wonder which widely accepted mathematical conjectures were later proven wrong (a hypothetical example would be a proof that P = NP, since most computer scientists today believe that P =/= NP).


Analytic number theory has seen a fair number of such conjectures. The first that comes to mind is the Pólya conjecture [0]. The conjecture stated that for any positive integer N > 2, there are at least as many positive integers less than N with an odd number of prime factors as there are with an even number. The smallest counterexample is N = 906,150,258.

[0] https://en.m.wikipedia.org/wiki/P%C3%B3lya_conjecture
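To get a feel for what it claims, here's a quick brute-force sketch in Python (my own, and way too slow to reach the real counterexample - that takes a sieve - but enough to show there's no small one):

    # Omega(n): number of prime factors of n, counted with multiplicity.
    def omega(n):
        count, d = 0, 2
        while d * d <= n:
            while n % d == 0:
                count += 1
                n //= d
            d += 1
        return count + (1 if n > 1 else 0)

    # Scan upward, tracking how many integers seen so far have odd vs.
    # even Omega. Returns the first N where odd < even, or None.
    def first_failure(limit):
        odd = even = 0
        for n in range(1, limit):
            if omega(n) % 2:
                odd += 1
            else:
                even += 1
            # The counts now cover the positive integers less than N = n + 1.
            if n >= 2 and odd < even:
                return n + 1
        return None

    print(first_failure(100_000))  # None: no counterexample this small

The failure only shows up around N ≈ 9 × 10^8, which is exactly why the conjecture looked safe for so long.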


For CS, CMU has a similar reputation to MIT and Caltech, even though it doesn't have as much name-brand recognition among the general public.


"Guns aren't lawful", after reading that line I thought she might be British, but it looks like she's American. Were gun laws stricter back then or something?


Yeah, the Sullivan Act. The Supreme Court struck it down last year.


Bit of an aside, but I wonder if the rise of LLMs will make new programming languages much slower to be adopted.

Like you said, you might have given up on F# without ChatGPT assistance, and the main reason ChatGPT is able to help with F# is all of the example code it's been trained on. If developers rely more and more on LLM aid, then a new language without strong LLM support might be a dealbreaker for widespread adoption. Models will only have enough data once enough hobbyists have published a lot of open-source code in the language.

On the other hand, this could also slow the adoption of new frontend frameworks, which could be a plus, since a lot of people don't like how fast-moving that field can be.


I heard somewhere that ChatGPT is surprisingly good at human-language translation even though it's not specifically trained for it. There seem to be just enough examples of e.g. Japanese that Japanese researchers use it to translate papers. I suspect that's largely true for programming languages too. I've had great success working with it in Clojure, even though there's relatively little published code compared to more popular languages.


ChatGPT is pretty good at translating Japanese into English. Its English-to-Japanese translations tend to sound somewhat stiff/formal/machine-generated, although it's less prone to hallucinations than DeepL on larger texts. I expect this is because it was trained on a much larger corpus of English-language texts than Japanese ones, which suggests the problem is not intractable.


Wouldn't you just need to publish a Rosetta stone-type translation for it to be able to digest the new language fully? E.g., here is how you do this in Python, and here is how you do it in the new language.
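As a rough sketch of what I mean (the "NewLang" syntax and all the snippets here are made up purely to illustrate the shape of the data):

    # A minimal few-shot "Rosetta stone" prompt: paired snippets showing
    # the same task in a known language and a new one.
    PAIRS = [
        {"task": "sum a list of numbers",
         "python": "total = sum(xs)",
         "newlang": "let total = fold (+) 0 xs"},
        {"task": "read a file into a string",
         "python": "text = open(path).read()",
         "newlang": "let text = File.slurp path"},
    ]

    def rosetta_prompt(pairs, request):
        lines = []
        for p in pairs:
            lines += [f"Task: {p['task']}",
                      f"Python:  {p['python']}",
                      f"NewLang: {p['newlang']}",
                      ""]
        lines.append(f"Task: {request}")
        lines.append("NewLang:")
        return "\n".join(lines)

    print(rosetta_prompt(PAIRS, "filter the even numbers out of a list"))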


The crazy thing that a lot of people don't realize is that all of that data generalizes to anything new you can throw at it. As long as there's enough space in the prompt to provide documentation, it can pick up a new language on the fly, but you could also fine-tune the model on the new info.
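For the fine-tuning route, the data can be as simple as a JSONL file of chat examples (this is the kind of chat-format JSONL OpenAI's fine-tuning expects; the NewLang snippet is again hypothetical):

    import json

    # One chat example per line: a plain-English request paired with the
    # answer written in the new language.
    examples = [
        {"messages": [
            {"role": "user", "content": "In NewLang, sum the list xs."},
            {"role": "assistant", "content": "let total = fold (+) 0 xs"},
        ]},
    ]

    with open("newlang_finetune.jsonl", "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")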


Which is what Phind tries to do (and mostly succeeds at). LLM + search engine is way smarter than just an LLM.


It could go the other way. LLMs might make porting code from one language to another easier, which would speed the adoption of newer and more niche languages. And the future of documentation and tutorials might be fine-tuning an LLM.


I've also wondered this - including whether we might see a breed of 'higher level' languages (i.e. much higher level than Python) that can then be 'AI compiled' into highly efficient low-level code.

i.e. the advantages of an even-higher-level Python that's almost like pseudocode, with assembly-level speed and Rust-level safety, where some complexity can be abstracted out to the LLM.


I disagree. ChatGPT is helpful here because F# is a paradigm shift for this otherwise experienced programmer. The programmer probably knows juuussstt enough F# to guide the LLM.


I mean, why is F# the goal? Could we write a better F# with the help of AI?

As an example, why not write in F# and let an 'AI-compiler' optimise the code...

The AI-compiler could then make sure all the code is type-safe, add in manual memory management to avoid the pitfalls of garbage collection, add memory safety, etc. - all the hard bits.

And then, if we gave the AI-compiler those sorts of responsibilities, we can think about how this would impact language design in the longer term.

None of this is possible with current-generation LLMs, but it might be where we end up.


This doesn’t require AI, it requires higher-level languages than we have that can express intent more directly.


In current-generation languages there is definitely a trade-off between productivity (i.e. speed of writing) and features such as runtime speed and memory safety.

So far we haven't been able to close this gap fully with current compilers and interpreters (e.g. Python still runs slower than C).

It seems like that gap could be closed through, for example, automated refactoring into a Rust-like language during compilation, or directly into more efficient bytecode/ASM that behaves identically.

And surely, if that is a possibility, it would affect language design (e.g. if you can abstract away some complexity around things like memory management).

