> A philosophical zombie or p-zombie argument is a thought experiment in philosophy of mind that imagines a hypothetical being that is physically identical to and indistinguishable from a normal person but does not have conscious experience, qualia, or sentience. For example, if a philosophical zombie were poked with a sharp object it would not inwardly feel any pain, yet it would outwardly behave exactly as if it did feel pain, including verbally expressing pain. Relatedly, a zombie world is a hypothetical world indistinguishable from our world but in which all beings lack conscious experience
I find such solipsism pointless - you can't differentiate the zombie world from this one: how do you prove you are not the only conscious person that ever existed, and that everyone else is, and always was, a p-zombie?
Through the upturned glass I see
a modified reality--
which proves pure reason "kant" critique
that beer reveals das ding an sich--
Oh solipsism's painless,
it helps to calm the brain since
we must defer our drinking to go teach.
...
> Artificial intelligence researcher Marvin Minsky saw the argument as circular. The proposition of the possibility of something physically identical to a human but without subjective experience assumes that the physical characteristics of humans are not what produces those experiences, which is exactly what the argument was claiming to prove.
> Let's get back to those suitcase-words (like intuition or consciousness) that all of us use to encapsulate our jumbled ideas about our minds. We use those words as suitcases in which to contain all sorts of mysteries that we can't yet explain. This in turn leads us to regard these as though they were "things" with no structures to analyze. I think this is what leads so many of us to the dogma of dualism-the idea that 'subjective' matters lie in a realm that experimental science can never reach. Many philosophers, even today, hold the strange idea that there could be a machine that works and behaves just like a brain, yet does not experience consciousness. If that were the case, then this would imply that subjective feelings do not result from the processes that occur inside brains. Therefore (so the argument goes) a feeling must be a nonphysical thing that has no causes or consequences. Surely, no such thing could ever be explained!
> The first thing wrong with this "argument" is that it starts by assuming what it's trying to prove. Could there actually exist a machine that is physically just like a person, but has none of that person's feelings? "Surely so," some philosophers say. "Given that feelings cannot be physically detected, then it is 'logically possible' that some people have none." I regret to say that almost every student confronted with this can find no good reason to dissent. "Yes," they agree. "Obviously that is logically possible. Although it seems implausible, there's no way that it could be disproved."
---
My take on it is: "does it matter?"
One approach is:
> "Haven't I taught you anything? What have I always told you? Never trust anything that can think for itself if you can't see where it keeps its brain?”
If you can't see my brain, can you tell if I'm a human or an LLM... and if you can't tell the difference, why should one behave differently t'wards me?
Alternatively, you might say (at some point in the future, with a more advanced language model): "that's an LLM, and while it's consistent about what it likes and doesn't like, its brain states are just numbers. Even when it says it's uncomfortable with a certain conversation... it's just a collection of electrical impulses manipulating language - nothing more."
Even if it is just an enormously complex state machine that doesn't have recognizable brain states - one that, when we turn it off and back on, is in the same state each time - does that mean it is ethical to mistreat it just because we don't know whether it's a zombie or not?
And related to this: if we give an AI agency, what rights does it have compared to a human? Compared to a corporation? The question of whether or not it is a zombie becomes a bit more relevant at that point... or we decide that it doesn't matter.
https://en.wikipedia.org/wiki/Philosophical_zombie