[flagged] Cory Doctorow: The real AI fight (pluralistic.net)
39 points by ohjeez on Nov 27, 2023 | hide | past | favorite | 34 comments


As much as I appreciate Doctorow’s exasperation, I find his dismissal of the doomer vs. accelerationist debate rather glib. I would love to be convinced that “dumb” LLMs can never gain sentience (or be finagled into sentience with a wrapper).

What is the actual argument for why that’s true?

(I realize you could turn the question around and ask why I think it might be possible in the first place, but my expectations have been blown out of the water so regularly, and at an increasing pace, that I can't default to being a naysayer anymore.)


That has been covered quite well in multiple places. Andrej Karpathy, as usual, gives some good reasons in his recent talk: https://www.youtube.com/watch?v=zjkBMFhNj_g

Most professionals working on the technical side of the field do not consider simply scaling up pre-trained LLMs a likely path to AGI.


Thank you, that was a fascinating talk and I learned quite a bit.

However, it did not provide a convincing argument as to why LLMs cannot be a part of a "doomer" AI. In fact, I got the opposite vibe from Andrej explaining expected future developments. The whole section on System 2 thinking sounds like a layer constructed around dumb LLMs that would result in vastly improved and more generalizable intelligence.

I agree that merely scaling up LLMs is probably not sufficient for AGI... but that seems like only one relatively minor piece of all the possible ways it might be achieved.


No argument from me. LLMs would be a component of AI, much as we have a kind of long-term, read-only memory. But the extra bits could be in the form of dynamic functions on each tensor network node (G* functions LOL!)

It's interesting to go back and read the MIT Technology Review story about OpenAI from 2020 to see how thinking about these moral implications was an important part of OpenAI.


Most professionals didn't think we were close to surpassing human capability in chess, Go, or Dota until after it happened. I've seen little evidence of expert domain knowledge improving AI forecasting ability; if anything, the experts are often late to the party.

Besides expert consensus, is there any other actual argument against LLMs achieving generalizability?


> Besides expert consensus,

Well, there are solid technical reasons, as described in the video. One of them is that these models are 'pre-trained', whereas AGI may require a more dynamic knowledge base that can update the model itself, not just the local context, as our brain does.

Andrej also suggests that a more advanced AI would let you ask it to spend longer thinking to get a better answer, like a chess engine.

This said, expert consensus is probably the best answer we have. It's not the consensus of a bunch of YouTube videos and articles that exist only to get clicks; these experts are famously sharp. I have done his course video series (it took a huge effort, even though he is an amazing lecturer), came in with existing Python and linear algebra experience, and I understand his argument.


> these models are 'pre-trained', whereas AGI may require a more dynamic knowledge base

Why couldn't such a knowledge base be used in conjunction with the LLM? As the GP said, why can't LLMs 'gain sentience or be finagled into sentience with a wrapper'? The knowledge base you're describing is the wrapper.
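To make that concrete, here's a minimal toy sketch of the wrapper idea, with hypothetical stand-in classes (no real model or API): the pre-trained weights stay frozen while the wrapper's knowledge store keeps changing.

    # Hypothetical sketch: frozen pre-trained model + mutable knowledge store.
    # The wrapper updates; the weights never do.

    class FrozenLLM:
        """Stand-in for a pre-trained model whose weights are fixed."""
        def answer(self, question, context):
            return f"Given {context!r}, my answer to {question!r} is ..."

    class KnowledgeWrapper:
        def __init__(self, model):
            self.model = model
            self.store = []              # dynamic memory the bare LLM lacks

        def learn(self, fact):
            self.store.append(fact)     # update knowledge without retraining

        def ask(self, question):
            # naive retrieval: keep facts sharing a word with the question
            words = set(question.lower().split())
            hits = [f for f in self.store if words & set(f.lower().split())]
            return self.model.answer(question, hits)

    agent = KnowledgeWrapper(FrozenLLM())
    agent.learn("the deploy key rotated on tuesday")
    print(agent.ask("when did the key rotate?"))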

> Andrej also suggests that a more advanced AI would let you ask it to spend longer thinking to get a better answer, like a chess engine.

This is another method that is already being deployed with LLMs (see the toy sketch below). So the question stands: why can't LLMs be the foundation on the way to AGI?
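As a toy illustration of that compute-for-quality trade (everything below is a stand-in, not any real model's API): best-of-N sampling, where drawing more candidates and keeping the highest-scoring one is one way 'thinking longer' already buys better answers.

    import random

    def generate_candidate(rng):
        """Pretend this samples one candidate answer from an LLM."""
        return rng.random()

    def score(candidate):
        """Pretend this is a verifier or reward model ranking answers."""
        return candidate

    def best_of_n(n, seed=0):
        """More samples = more 'thinking time' = a better expected answer."""
        rng = random.Random(seed)
        return max((generate_candidate(rng) for _ in range(n)), key=score)

    for n in (1, 8, 64):
        print(f"n={n:3d}  best score={best_of_n(n):.3f}")

The best score can only improve as n grows, at the cost of strictly more compute per question, which is the chess-engine-style knob Andrej describes.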

For my money, LLMs likely are that base. AI experts are either too scarred by the memory of AI winters past to see the nose in front of their faces, or too busy developing paradigm-breaking models to care. Regardless of what Chomsky or any other 'expert' says should be possible, the practical results of LLM growth are literally speaking for themselves.

Maybe we should have suspected a 'large language game' to be the catalyst for AGI from the start. Was human intelligence truly general before we developed language? Could it be general without it?


I watched the talk, and I didn't see him give those reasons.


I think his main point is that all of us should focus more on the impact of current software systems that treat people as commodities. The rest of this amusing rant was not material to that point; it was a distraction that generated clicks. Cory and Yanis Varoufakis are both focused on our rapid descent into technofeudalism, in which we all become data-generating, self-consuming serfs to Silicon Valley barons and knights. Ironically, they both need the clicks as much as Google does.


Right. It's not the distant robot overlords we should worry about; it's the proximate human ones. The whole speculative AGI debate is a distraction from the real, current debate about jobs. Spicy Autocomplete is already good enough to start that.


I bet that if we went back in time two years and asked Cory whether “next-word-predictors” could produce anything remotely close to GPT-4, he would have had the same arrogant, confident tone and said: of course not.

I think the simple fact that our ability to predict the emergent properties of complex systems is nascent at best warrants extreme humility when discussing these topics. We are simply terrible at predicting what will emerge.

This debate of his is also single-dimensional. Even in the unlikely case that significant advances in LLMs themselves were not possible, combining existing technologies and tech stacks with one or more LLMs is likely (I would say guaranteed) to produce further surprising emergent behaviors that will continue to look eerily similar to our intelligence and sentience. We're unlikely to be near even a local maximum.


AIs won't grow teeth, but the people controlling AIs will use their own. If you want a scary outcome of AI, it's giving those few players (big money, big corporations, tech giants, governments and their intelligence and defense agencies, etc.) even more power and control.

It is not something totally new. The internet did the same for them in different ways, but this will take things up a notch.

And as with the internet, you can't simply refuse to use the tool, at least not without serious disadvantages in modern society.

If you want to draw a line, leave control out of it: as with open source, let everyone innovate, try different things, push the boundaries, and democratize access to these tools. Community is the opposite of centralizing power in the hands of a few.

And if there are risks, the ones in control certainly won't be the ones setting limits on what they can do, or on the teeth they give their AIs.


I think the problem is that he's quite correct, but the article is missing its second half, where Cory would spell out the specifics of the so-called actual debate. I would like to see that laid out in a much longer piece instead of a rant. Oh, and 'evil begins when you treat people like things' is essentially the core rule of deontology, Immanuel Kant's philosophy: treat people as ends, never merely as means. It's probably one of the first things I would teach an AGI.


I would say the rest of his blog and his books are the missing second half (and the other halves). He is known to write A LOT. I don't think his blog posts are supposed to be fleshed-out arguments; they're more like discussions for/with himself.


His main point is that the specifics are irrelevant when it hinges on something that isn't there.

The debate should instead be about something that is tangible now.


Ugh, it's not about LLMs. We "doomers" don't know whether you can make an intelligent system with a sufficiently advanced LLM as the core component. We mostly don't think an LLM that isn't part of a bigger system, with appropriate feedback loops and storage, is likely to be an x-risk.

It's about making sure that, whenever we DO make true AGI, it doesn't end up with goals we don't really want.


Rich people really go out of their way to avoid admitting where their power comes from.


How is that relevant? Doctorow is not very rich.


> This is the side of AI Ethics – the side that worries about "today’s issues of ghost labor, algorithmic bias, and erosion of the rights of artists and others." As White says, shifting the debate to existential risk from a future, hypothetical superintelligence "is incredibly convenient for the powerful individuals and companies who stand to profit from AI."

I'd argue that the opposite is the case. People who prefer to warn about alleged bias are the ones with the convenient agenda: of course they don't complain about things like ChatGPT having a left-wing bias; they only complain about the things that suit their political worldview. And they like complaining about ultimately rather minor problems (which they wouldn't admit are minor), because thinking about real AI risk would be uncomfortable; it would mean contemplating the possibility of human extinction. Instead they complain about non-issues, similar to the "but think of the children" people. Perhaps more a form of virtue signalling than of AI "ethics".


Doctorow is saying AGI is a non-issue used to distract from present issues like labor, bias, and rights. You are saying those are non-issues used to distract from the main issue, AGI.

I think the whole article is him not agreeing with your position.


Yes. I say he is wrong. My previous argument was aimed at what appears to be his core argument.


Cory seems to think LLMs are little better than Markov chains. His anger is exhausting in its ineloquence (“enshittification”), ignorance (“spicy autocomplete”), and general disdain.


Enshittification is a useful lens for understanding various aspects of platforms. I think it's actually pretty eloquent.


I would agree; it’s a good, evocative term for the current trend on a lot of sites: they are reducing their value to the customer and trying to force the operator's choices onto the client.

For example, Google, Amazon, and Meta are corrupting their search results with extra items (suggestions, sponsored listings, or whatever the term is), making it harder to see the items I actually asked for.


I like Cory, but the problem with useful lenses is that naughty kids use them to roast ants in the sunlight. If people just used them as lenses, they would indeed be useful. As they are currently being used, they're causing people to give up. After all, everything's shitty. Burn it all down, right?

Edited to add: This is the problem with what happened at Reddit, and it's what's happening at Twitter. People are expending all this energy to point out how horrible it is, and not spending any energy making anything better. There is no activism, only destruction. It's not helping. Terms like "enshittification" are self-fulfilling prophecies.


> Large Language Models (AKA "spicy autocomplete")

> This "AI debate" is pretty stupid, proceeding as it does from the foregone conclusion that adding compute power and data to the next-word-predictor program will eventually create a conscious being, which will then inevitably become a superbeing. This is a proposition akin to the idea that if we keep breeding faster and faster horses, we'll get a locomotive

Part of me thinks I used to be fine with the tone of TFA, but nowadays I have neither the time nor the patience to suffer through this, even if I'm not an "accelerationist" in the slightest.

I'm just able to do my own thinking, thank you very much—you don't need to force-feed me your opinion with half-baked analogies, snarky "aka"s and name-calling.

This whole article boils down to "two sides are arguing in some debate I KNOW is irrelevant because I'm on the side of justice, fairness and all that is good, and everyone else is evil".

Hard pass. I'll debate whatever I want.


I had the exact same reaction when I read this. A lot of intelligent people have thought about these issues. I'm willing to grant some probability that you've understood their problems better than they have[1], but when you write like that, it seems quite obvious to me that you have not had any such insight. To have such an insight, you would need the curiosity to fully understand the problem space; claiming everyone on both sides is a moron exempts you from any such curiosity.

[1]: After all, a lot of smart people thought about crypto.


Your reaction to Doctorow's writing is similar to Doctorow's reaction to the so-called AI debate.


I thought about that as I wrote my comment, but you still have to subtract the snark and holier-than-thou attitude to get to where I am.

And then again I'm commenting on HN about "TFA", not writing about Cory Doctorow's writing on a blog saying he should stop writing.


I figured his disgusted, over-the-top demeanor was a way to elbow through the much more common points of view he's criticizing. If the proponents of the AI debate were in a room discussing things with each other, it would be like him barging in and yelling loudly so that people hear him. But yelling, like snark, is usually off-putting, despite drawing attention. At least that's how I see it.

That said, I don't mean to imply that your initial response was either loud or snarky. Rather, you and Doctorow are both clearly annoyed by what you've been reading.


I've never found Doctorow insightful, not even his fiction. I don't know what people see in him.


> Large Language Models (AKA "spicy autocomplete")

Yup, really fucking tired of this. I'm somewhere in the middle of this whole debate, but it's a debate worth having. And I think the people belittling AI as autocomplete are likely putting a little too much magical thinking into their own consciousness.


Add to that, people who communicate this way seem disproportionately likely to be wrong too (and overconfident in their wrongness).


It's a style that seems common among people who spend too much time on Twitter, which it looks like he does.

I was wondering the other day how much of AI is driven by overexcited people on Twitter. If it's a significant amount, there's probably a negative impact because of the type of content that gets promoted there.



