Given that it has suppressed an entire planet's worth of innovation, prosperity, possibility, and experience, I would say it has negatively impacted all of us in ways you can't fathom.
Covid measures, the enshittification of the internet. Chat control in Russia has resulted in people being exploited by thugs across the whole country. How can you observe the world without immediately seeing these?
For the same reason most people are a bit “thick”: stuff just won’t go into their brains or register with them. They have the minds of children but are in adult bodies with adult voices.
Don't worry, with generative AI, everyone's social media will look 10x more awesome and it will all be fake. Even wealthy people will take it to a new level.
One thing which isn't discussed enough is how bad the tsunami is environmentally. The coastline of Japan is absolutely littered with crap from the 2011 tsunami. All manner of chemicals were sucked into the nearby sea, along with tires, carpets, and hard and soft plastics, absolutely everywhere. It almost looks unrecoverable.
I know that seems like a non-issue in the grand scale of things, but tsunamis of the past didn't do this. Most pollution was biodegradable.
I can't imagine what the coast of LA would look like after a similar event :(
It's more of a concern in the Bay Area, where the geological composition is sand, mud, and basic river delta sediments--not the most stable stuff.
E.g.: during a hypothetical earthquake in the Bay Area, the sand liquefies, which causes underground structures (think of a lower-level parking garage) to move upward due to buoyancy while the surrounding soil mass collapses. After the earthquake, the liquefied sand diffuses into the surrounding area, so the sand beneath the underground structure is lost and the structure may settle back to its original position or even collapse.
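To make the buoyancy point concrete, here's a minimal back-of-the-envelope sketch. All numbers (footprint, depth, mass, soil density) are hypothetical and purely illustrative; the idea is just that a mostly hollow buried box displaces far more liquefied soil than it weighs, so it tends to float upward.

```python
# Rough check: does a buried parking garage "float" when the surrounding
# sand liquefies? All numbers below are hypothetical illustrations.

G = 9.81                      # gravitational acceleration, m/s^2
RHO_LIQUEFIED_SOIL = 1900.0   # density of liquefied sand/water mix, kg/m^3 (assumed)

def uplift_force(displaced_volume_m3: float) -> float:
    """Buoyant force from the liquefied soil displaced by the structure (N)."""
    return RHO_LIQUEFIED_SOIL * G * displaced_volume_m3

def structure_weight(mass_kg: float) -> float:
    """Weight of the buried structure (N)."""
    return mass_kg * G

# Hypothetical two-level underground garage: 50 m x 30 m footprint, 6 m deep.
volume = 50 * 30 * 6          # displaced volume, m^3
mass = 4.0e6                  # structure mass, kg (assumed; mostly hollow concrete box)

if uplift_force(volume) > structure_weight(mass):
    print("Liquefied soil pushes the structure upward (it tends to float).")
else:
    print("The structure stays put.")
```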
I recently saw Hinton give a talk where he very, very, very excitedly and confidently gave us an example to demonstrate how incredibly intelligent and creative LLMs are.
He asked an LLM a question, but he didn't give us the answer right away. He let us have time to answer it ourselves. Personally, I knew the answer instantly. He then gave us the answer, sort of assumed no one would've known it, and used it as justification for how smart these systems are. It honestly didn't feel very convincing to me, and I'd be surprised if it wasn't a topic covered somewhere on the internet before. With all due respect to Mr. Hinton, I felt it showed his age a bit.
What is difficult about Hinton's statements is that he can't really give evidence to back up these sorts of claims. How do you measure how much a person knows, and how do you objectively measure how much an LLM knows? How smart is an LLM? You can't really know. It seems almost rhetorical. How many notes are in a saxophone?
We can make observations but that's not a great way to measure anything precisely.
There is a limit to language, and I think this is one of those topics where that limit is touched or even breached. I don't even know if "intelligence" is a sufficient word to describe what's going on with these systems. It's the best word we have, but it doesn't seem to adequately describe what we're observing.
>> How do you measure how much a person knows, and how do you objectively measure how much an LLM knows?
Here’s a very basic example of where an LLM is clearly more capable than a human: language translation. I would bet $10k at 10:1 that there are no humans who can reliably translate to and from as many languages as an LLM can.
It is very easy to measure knowledge: test the subject.
Personally, I can’t ever imagine scoring higher on a general knowledge test than a contemporary LLM.
Also, I don’t know of any humans that can run as fast as a car so I don’t know why any of this is surprising or farfetched.
I'm not saying that they can't be more capable, I'm saying the guy can get a little overly excited about things which are hard to measure or quantify.
We're observing these systems and making up our own interpretations about how good they are at certain tasks, but it's not really easy to measure how much better or worse these things can be overall.
Your example about language translation is a good example of where these things aren't really "better", just different. I speak multiple languages, and while these systems are fantastic, they can fail in ways a professional translator wouldn't, and they don't seem to automatically know they've failed and should fix themselves.
The car example is also great because it again proves my point. We can easily measure a car and a person and work out that a car is faster, but we can also see that a car can't walk. So it's faster, but it's also entirely different.
>> I'm saying the guy can get a little overly excited about things which are hard to measure or quantify.
Let's back this up a little bit. We've got Marvin Minsky who comes along and destroys the perceptron. Then we have decades of knowledge systems that go nowhere. All the while Geoff Hinton is tirelessly working on neural networks. Finally, after decades of hard work, the fruits of his labor are recognized with ImageNet.
And then a bunch of people in a comment section criticize the guy for getting "a little overly excited" about the stunning range of neural networks that validates his life's work.
>> Here’s a very basic example of where an LLM is clearly more capable than a human: language translation. I would bet $10k at 10:1 that there are no humans who can reliably translate to and from as many languages as an LLM can.
See, translation is exactly the kind of domain where there are no good measures of performance and where performance is open to subjective interpretation, and a lot of it. That's because we don't know what is a "good translation" and, crucially, machine translation systems and language models have not helped us find out.
The way machine translation systems are evaluated is generally by a metric based on similarity to an arbitrarily chosen "gold standard" translation. What that means in practice is that we have some corpus of parallel texts, we train a machine translation system on part of the corpus, and then test it on the held-out test set. The way we test is that we take each sentence (say) in a text translated by the system and compare it, as a bag-of-words or a set of n-grams, to the reference translation. If there is a high amount of overlap, the system scores highly. That's the way BLEU scores work, and similar metrics like ROUGE.
It is important to note how arbitrary this metric is: out of all possible translations we choose one to be the "reference" translation and compare machine translations to it. The only accepted alternative is eyeballing, where we give the machine translation to a bunch of humans and ask them how they feel about it.
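To illustrate the kind of overlap metric I mean, here's a minimal toy sketch of modified n-gram precision, the core ingredient of BLEU (this leaves out the brevity penalty and corpus-level aggregation; the sentences are made up):

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams in a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(candidate, reference, n):
    """Fraction of candidate n-grams that also appear in the reference,
    clipping each n-gram's count at its count in the reference."""
    cand_counts = Counter(ngrams(candidate, n))
    ref_counts = Counter(ngrams(reference, n))
    overlap = sum(min(count, ref_counts[gram]) for gram, count in cand_counts.items())
    total = sum(cand_counts.values())
    return overlap / total if total else 0.0

reference = "the cat sat on the mat".split()
candidate = "the cat is on the mat".split()

for n in (1, 2):
    print(f"{n}-gram precision: {modified_precision(candidate, reference, n):.2f}")
```

Note that a perfectly good translation worded differently from the chosen reference would score poorly here, which is exactly the arbitrariness I'm describing.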
My point is that we don't know how to measure knowledge, and language models are trained to maximise similarity, not knowledge. So there's no way to go from observations of their behaviour to a measure of their knowledge. All you can say about a language model is that it is good, or bad, at generating text that's similar to its training corpus. Everything else is an assumption.
Good god, people, we measure knowledge all the time with testing. We have a difficult time measuring intelligence but we have no problem measuring someone’s knowledge about the major events that led up to the Battle of Waterloo.
Just give the participants the final exam from my French 3 class, but also in 100 different language combinations. I bet you do worse than ChatGPT.
>> Good god, people, we measure knowledge all the time with testing.
In humans. Not in machines.
You're proposing to use a test of human knowledge as a test of computer knowledge, when the question in the first place is whether a computer can have knowledge at all. It's like giving an IQ test to a frog and concluding that the frog has no IQ because it can't answer the questions, only reversed: the machine answers the questions, therefore it has knowledge. Who cares about mechanisms, who cares how the answers are generated, if I see answers, that's knowledge.
Well, that is a pre-scientific way to look at the world. I observe the sun, it looks like it's moving around the Earth, therefore the sun revolves around the Earth. No room left for critical inquiry or understanding of the cause of phenomena. We have a test? Bash it against anything and we'll get some answers, and then we'll claim that they're the right answers because that's the right test, since it gave us the right answers. And all that, not for some mysterious physical phenomenon that we're not responsible for, but for a machine, created and programmed by humans, and we know exactly how.
No no. That's not good engineering, and it's not good science: it doesn't explain the how, and it doesn't explain the why.
Sometimes I do get the feeling that he is just slowing down while innovation and change keep going at the same rate. He seems to parrot the same sort of ideas over and over again.
I find he can get really fixated on certain examples, things he has seen an LLM do, and he can't seem to get beyond them, as if the singularity were at hand. Whereas a 22-year-old I work with just accepts the technology and tries to get work done with it.