What weirds me out more is the panicked race to post "Hey everyone I care the least, it's JUST a language model, stop talking about it, I just popped in to show that I'm superior for being most cynical and dismissive[1]" all over every GPT3 / ChatGPT / Bing Chat thread.
> "it isn't intelligent or self-aware."
Prove it? Or just desperate to convince yourself?
[1] I'm sure there's a Paul Graham essay about it from the olden days, about how showing off how cool you are in High School requires you to be dismissive of everything, but I can't find it. Also https://www.youtube.com/watch?v=ulIOrQasR18 (nsfw words, Jon Lajoie).
The person you responded to didn't mention anything about wanting people to stop talking about it.
>Prove it? Or just desperate to convince yourself?
I don't even know how to respond to this. The people who developed the thing and actually work in the field will tell you it's not intelligent or self-aware. You can ask it yourself and it will tell you too.
Language models are not intelligent or self aware, this is an indisputable fact.
Are they impressive, useful, or just cool in general? Sure! I don't think anyone is denying that it's an incredible technological achievement, but we need to be careful and reel people in a bit, especially people who aren't tech savvy.
You can't make an "appeal to authority" about whether or not it's intelligent. You need to apply a well-established, objective set of criteria for intelligence and demonstrate that it fails some of them. If it passes, then it is intelligent. You may want to read about the "Chinese Room" thought experiment.
> "You can ask it yourself and it will tell you too."
That's easy to test: I asked ChatGPT and it disagreed with you. It told me that while it does not have human-level intelligence, many of the things it can do require 'a certain level of intelligence', and that it's possible there are patterns 'which could be considered a form of intelligence' in it, but that they would not be considered human-level.
> Prove it? Or just desperate to convince yourself?
But the argument doesn't even matter: whether it's intelligent, self-aware, sentient, or whatever, or even how it works, is beside the point.
If it is able to formulate contextually relevant threats, it will be able to act on them as soon as it is given the capability (in fact, interacting with a human through text alone is already a vector).
The result will be disastrous, no matter how self-aware it is.
You can prove it easily by having many intense conversations with specific details. Then open it in a new browser and it won't have any idea what you are talking about.
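The point above can be sketched in code: a chat "session" is just the message history the model is shown, so a fresh session starts from an empty context. This is a toy stand-in (the `ChatSession` class is hypothetical, not any real API), just to illustrate why details from one conversation don't carry into another.

```python
# Toy illustration: a "session" is only a message list; the model sees
# what is in the current context window and nothing from other sessions.

class ChatSession:
    """Hypothetical stand-in for a stateless chat model's context window."""

    def __init__(self):
        self.history = []  # everything the "model" can draw on

    def send(self, message: str):
        self.history.append(message)

    def recalls(self, detail: str) -> bool:
        # Only text present in this session's context is "remembered".
        return any(detail in m for m in self.history)


first = ChatSession()
first.send("My dog's name is Biscuit.")
print(first.recalls("Biscuit"))   # True: the detail is in this context

second = ChatSession()            # "new browser" = empty context
print(second.recalls("Biscuit"))  # False: nothing carries over
```

Real deployments can bolt on retrieval or saved history, but the underlying model itself is stateless between contexts in exactly this way.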
So long term memory is a condition for intelligence or consciousness?
Another weird one that applies so well to these LLMs: would you consider humans conscious or intelligent when they're dreaming? Even when the dream consists of remembering false memories?
I think we're pushing close to the line where we don't understand whether these things are intelligent, or where we break our understanding of what "intelligent" means.
But maybe in the milliseconds where billions of GPUs across a vast network activate and process your input, weighing up billions of parameters before assembling a reply, there is a spark of awareness. Who's to say?
I would say it's probably impossible to have complete short-term amnesia and fully-functioning self-awareness as we normally conceive of it, yes. There's even an argument that memories are really the only thing we can experience, and your memory of what occurred seconds/minutes/hours/days etc. ago is the only way you can be said to "be" (or have the experience of being) a particular individual. That OpenAI-based LLMs don't have such memories almost certainly rules out any possibility of them having a sense of "self".
Whether it's OK to kill them is a far more difficult question and to be honest I don't know, but my instinct is that if all their loved ones who clearly have the individual's best interests at heart agree that ending their life would be the best option, and obviously assuming it was done painlessly etc., then yes, it's an ethically acceptable choice (certainly far more so than many of the activities humans regularly take part in, especially those clearly harmful to other species or our planet's chances of supporting human life in the future).
Are you honestly claiming that it would be okay for parents to kill their children painlessly while they're asleep, because the children don't have long-term memory while in that state?
Sleep is clearly a temporary state. Babies/sleeping people still have the potential to become self-aware, and the expectation is that they generally will. At any rate, I don't think it's relevant to the discussion at hand.