The best-functioning society that I experienced was when 90% of the people were (Presbyterian) Christians. We replaced that with something very, very dysfunctional.
At least for me that fits. I have quite enough graduate-level knowledge of physics, math, and computer science to rarely be stumped by a research paper or anything an LLM spits out. That may get me scorn from those tested on those subjects. Yet, I'm still an effective ignoramus.
It does if you're connected. I've seen many incompetent and under-skilled people given high-ranking positions in tech companies simply because they knew someone.
I can't see how AGI can happen without someone making a groundbreaking discovery that allows extrapolating way outside of the training data. But to do that, wouldn't you need to understand how the latent structure emerges and evolves?
We don't understand how the human brain works, so it's not inconceivable that we could evolve an intelligent machine whose workings we don't understand either. Arguably we don't really understand how large language models work either.
LLMs are also not necessarily the path to AGI. We could get there with models that more closely approximate the human brain. Humans need a lot less "training data" than LLMs do. Human brains and evolution are constrained by biology/physics but computer models of those brains could accelerate evolution and not have the same biological constraints.
I think it's a given that we will have artificial intelligence at some point that's as smart as or smarter than the smartest humans. Who knows when exactly, but it's bound to happen within, let's say, the next few hundred years. What that means isn't clear. Just because some people are smarter than others (and some are much smarter than others) doesn't mean as much as you'd think. There are many other constraints. We don't need to be super smart to kill each other and destroy the planet.
LLMs are also anthropocentric simulations, like computers, and are likely not a step towards holistic universally aligned intelligence.
Different alien species would have simulations built on their own computational, sensory, and communication systems, which are also not aligned with holistic simulation at all, despite both ours and the hypothetical species being products of the holistic universe.
Ergo, maybe we are unlikely to crack true AGI unless we crack the universe.
I’ve never used DuckDB, but I was surprised by the 30 GiB of memory. Many years ago when I used to use EMR a lot, I would go for > 10 TiB of RAM to keep all the data in memory and only spill over to SSD on big joins.
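For what it's worth, DuckDB exposes both the memory cap and the spill location as settings, so the 30 GiB figure is tunable rather than fixed. A minimal sketch (the limit and the temp path here are hypothetical values, not recommendations):

```sql
-- Cap DuckDB's in-memory working set; beyond this it spills to disk.
SET memory_limit = '30GiB';

-- Point spill files at fast local storage (e.g. an SSD) for big joins/sorts.
SET temp_directory = '/mnt/ssd/duckdb_tmp';
```

Whether keeping everything in RAM (the EMR approach) beats spilling to SSD mostly comes down to how often the workload actually exceeds the limit.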
Does it take into account ESL? I know that at least at my daughter's school, kids with English-speaking parents score very well. The others score very low, on average. I notice when I pick up my daughter that the ESL kids group together and only speak their native language, though. So it's not surprising.
The article does not take into account ESL. In fact, greater participation of ESL kids or those with disabilities appears to be a major reason the NAEP test scores have changed over time:
> In addition, since 1996, main NAEP assessments have been providing accommodations to allow more students with disabilities and students who were not fluent in English to participate. Traditionally, the long-term trend assessments have not provided such accommodations. However, in 2004, it was possible to provide accommodations and assess a greater proportion of students.