This equivocation between consciousness and meta-consciousness is so widespread that a plethora of books, seemingly dedicated to the project of "explaining consciousness", instead spend their entire contents explaining some hypothesis about meta-consciousness and then declare victory without ever touching consciousness itself. Jaynes' "The Origin of Consciousness in the Breakdown of the Bicameral Mind" is perhaps the most egregious offender.
This is especially troubling when you consider that meta-consciousness sells easily to the general public as "the hard problem" (our thorough use of it seemingly being a large part of what makes humans different, after all), while it's likely that it's more of an engineering problem after you've figured out base consciousness.
If there's anything it is like to be a worm, THAT'S the hard problem. Implementing recursive phenomenal access seems like an undergraduate research project after that. Yet it's most often the very thing we spend all our time focusing on. Frustrating. We need new words - or wider usage of existing ones.
> If there's anything it is like to be a worm, THAT'S the hard problem.
I don't believe we can get to the root of this "hard problem" by first trying to agree on a definition of the word before we can say anything about it. We will also need to admit the possibility that the question "are we conscious?" may turn out to be as vacuous as "is Pluto a planet?".
Being unable to agree on a definition for consciousness is a huge problem. I think this happens because consciousness is being studied at too abstract (too high) a level. Instead, we should switch to the low-level dual of this problem: game theory and reinforcement learning. Fortunately, at this level the definitions are exact and concrete - agent, environment, goal, actions, values. At this level we can understand and simulate what it means to be a worm, or a bat, as agents playing games.
I think what the current philosophic theory of consciousness lacks is a focus on the game itself. The agent is but a part of the environment, and the game is much more than the agent. The whole environment-agent-game system is what creates consciousness and also explains the purpose of consciousness (to maximize goals for agents).
Analyzing consciousness outside its game is meaningless (as with p-zombies) - the game is the fundamental meaning creator, defining the space of consciousness. The Chinese room is not part of a game, it is not embodied in the world, and it has no goals to attain; that is why it is not conscious - it is just a data processing system.
On the other hand, a bacterium can be conscious on a chemical, electrical, and light-sensing level, even if it can only process data with its gene regulatory network (which is like a small chemical neural net). A bacterium has clear goals (gaining nutrients, replicating), thus its consciousness is useful for something. All consciousness needs to have a clear goal, otherwise it would not exist in the first place - something rarely emphasized in consciousness articles.
>switch to the low level dual of this problem [...] reinforcement learning.
But that is itself a theory about the problem of consciousness! So why not do both? There's no obvious shortcut through the confusion, but discoveries in one field may guide questions in the other. In general I think it helps to have one's feet on the ground as well as one's head in the stars (for instance, Newton ground his own lenses).
What I mean is that the idea that reinforcement learning is a 'low level dual' of the problem of consciousness is also a theory about the problem of consciousness. It's part and parcel of philosophical topics that one can't get out of the game...
Well, defining the "hard problem" hasn't gotten us any closer to understanding consciousness in the last 22 years. The hard problem just moves the consciousness problem in an unfruitful direction. It's time for more practical approaches.
I think attempts to rule out physicalism with arguments about qualia and such have gotten us nowhere, and I have no problem with studying game theory and experimenting with reinforcement learning, but the idea that these alone will fully explain consciousness is conjecture at this point - a conjecture that I, personally, am not ready to make.
Quite a few people on either side of the physicalism - dualism debate seem to be more keen on declaring victory and cutting off further discussion than they are in getting to the bottom of the issue.
BTW, on Searle's 'Chinese Room' argument, I have always been in the 'the system as a whole would be conscious' camp.
Not sure where your 22-year estimate comes from. I think man has been trying to understand it for much longer - maybe thousands of years? - and has found answers through techniques that are outside of the mind, such as meditation.
From an empirical standpoint, meditation gives us no information whatsoever. Just thinking about something will not lead you to some sort of mystically revealed truth.
How does a bacterium have clear goals? What about one that has a genetic defect and doesn't "try" to reproduce? Is it conscious? Is it part of a game? Is the stone that gets dissolved by lichen part of a game?
We don't really need to agree on a definition of consciousness. All we need is to be able to use the knowledge we do have, however it's phrased, for the things we want to use it for.
Pluto is a taxonomy problem. Consciousness is reconciling how we explain the world (science & objectivity) versus how we experience the world (subjectivity). Consider that the world looks colored, is full of sounds, smells, tastes and feels. But the scientific explanation leaves all that out. The scientific world is the ghostly world of equations, theories and data. It's Plato's cave inverted.
I have long been of the opinion that consciousness is not a 'thing' or 'state' so much as a property. I think Douglas Hofstadter gets closest to my thinking on the matter in his book 'I Am a Strange Loop'. Just as a single molecule cannot have a temperature, since temperature is a property of multitudes of molecules, no degree of looking at the components will elucidate their overall interactions, and in the end you will simply have to admit 'this is what we call it when that happens' rather than identifying a binary test that can be weighed with complete objectivity.
I also think a great deal of the things the scientific explanations omit are things expressly stated as the things science does not pursue. I mean that science is inherently dedicated to finding the things which are true regardless of the personal experience of the observer, and which would be true without a human observer. So when you want to look at the human observer itself, at something intimately and inextricably linked to the exact biological and historical state of a human animal... you've just left the field. Not that it makes the study any 'less'; you just need different tools, and there are different pitfalls. We still have all the same cognitive flaws due to our own brain structure, so we do have to try to be rigorous about it.
Temperature is the aggregate of the energy of many individual particles, modulated by the tendency of those particles to transmit that energy. Thus a single molecule does have a temperature in a sense, you just need an incredibly sensitive detector to register it. Temperature is a fully reducible property.
Consider the possibility that consciousness isn't a distinctly human phenomenon, as there is no reason to believe that is the case. In fact, there is no logical reason why it would even be a distinctly biological phenomenon. It appears to me that consciousness isn't so much out of the purview of science as it is intractable to its methods, and thus unattractive as an area of study despite its fundamental character.
In a very big molecule [1] that doesn't move, you have so many parts that you can have a reasonably good definition of temperature.
But for a very small molecule or atom - let's pick a single helium atom - you don't have a good definition of temperature. You can define the temperature of a gas of helium using the average kinetic energy of the atoms, but only provided the gas as a whole is not moving.
If you put a balloon of helium in a car, and the car moves at 100 mph, the helium is not hotter; it is moving. So if you can only see a single atom of helium, you can't be sure whether it's moving because it's hot or because the whole gas is moving. So you don't have a good definition of temperature. [2]
For a molecule with a few atoms (let's say 5 or 10) it's more difficult to be sure if there is a good definition of temperature or not, so I prefer to ignore the intermediate case.
[1] My first idea for a big molecule was DNA, because it's big and well known. But DNA is usually surrounded by water - lots of water - plus ions and auxiliary proteins and a lot of other stuff. It's very difficult to isolate a true molecule of DNA alone.
An easier example of a big molecule is Bakelite (https://en.wikipedia.org/wiki/Bakelite), the plastic of old phones. Your old phone's case was a gigantic single molecule, and the handset's case was another. So they clearly had a temperature.
[2] At low temperatures you can assume that a helium atom is a ball without internal structure. If you increase the temperature of the gas enough (200,000 K?), the electrons in each helium atom start to jump between levels, and you get some interesting internal structure and may try to define a temperature. But it's still too few parts, and too short-lived, for me to be comfortable defining the temperature of a single atom.
Since we measure temperature by the transmission of energy, and that process occurs in an entropic manner, a moving reference frame prevents the detector from registering the increase in energy. The particle itself has definitely increased in energy; we just lack the ability to detect it. This is entirely analogous to our inability to measure mass using a scale in free fall.
I suppose it comes down to whether you define temperature as "what thermometers measure" or as the average potentially transmissible energy of an ensemble. I prefer the latter definition for its clarity, but like the idea of Kolmogorov complexity, it is sometimes unwieldy in practice.
Temperature is defined as 1/T = dS/dU (holding volume and particle number fixed), where U is the energy of the system and S is the entropy of the system, i.e. S = k * ln(#states that have energy U).
Temperature is proportional to the energy of the system only for ideal monoatomic gases. (Ideal diatomic gases are slightly more complicated.)
So a "hot" Helium balloon where all the particles are moving in random directions has a different temperature than a "fast" Helium balloon where all the particles are moving roughly in the same direction. It doesn't matter how difficult is to define it. (Probably you can use an infrared thermometer pointed in the direction perpendicular to the movement to get the correct temperature.)
In the fast balloon you can theoretically split the energy into A, the kinetic energy of the center of mass, and B, the internal thermal energy, so the total energy is A+B. With some device you can extract all of the kinetic energy as something useful (for example, have the balloon hit a lever connected to an electric generator; you'd need something smarter to get 100% of A, but it's theoretically possible). So you can extract 100% of A.
In the hot balloon with the same total energy, all of A+B is internal thermal energy. With any device you can never extract more than a part of it as something useful like electricity, because of the second law of thermodynamics. If the ambient temperature is T_amb and the initial temperature of the balloon is T_bal, you can get at most (A+B) * (1 - T_amb / T_bal) as something useful like electricity, and you waste at least (A+B) * (T_amb / T_bal) into a heat sink.
So, to get back to the original point, your position is that a single particle in isolation can't have a temperature because its entropy (or rather the change thereof) is 0, so the equation breaks.
I certainly grant that consciousness might not be a distinctly human phenomenon. I disagree that there is no reason to believe such is the case, however. There is some evidence: humans are conscious, and we don't know of anything else that is. That is weak evidence, but it is some - which is more than we have for the claim that things other than humans can be conscious. For that, there is indeed nothing.
The logical reason why it would possibly be a distinctly biological phenomenon is that we have not observed any non-biological system which has the property. Also, every examination of how consciousness works illuminates solely how it works in a biological system.
It is entirely possible that consciousness is simply a property which necessarily emerges once a complex system and its interactions reach a certain sort of complexity. However, that might be meaningless, because we might be totally incapable of recognizing other conscious entities as conscious.

If a machine were made to be conscious - not as an emulation of a human, but as a machine - how would you tell? Asking it questions would be pretty pointless. There is no reason for it to develop language, or even to guess that there might be another conscious entity in the universe, for billions of years. It would be quite a ridiculous leap for the machine to suppose that, in fact. It wouldn't have multiple individual machines against which it could develop a Theory of Mind (the psychological way we model other people (or things, or animals) as conscious and thinking inside our own heads; children develop it around age 3, I believe). It would presume the sensible thing - that it is the only conscious entity in the universe - and go about exploring that universe. It would see the microphone or video input as just weird, useless noise as it traversed and explored its ACTUAL world: that of bits and bytes and networks and switches.

You could watch its execution... and that would be exactly as useful to you, for determining whether it was conscious, as watching the flashes of electrical activity and a readout of neurotransmitter activity at each neural gap. And that's for a machine we might reasonably presume to operate on our own timescale and close to us in space. Could the Sun be conscious? Quite possibly. I'll grant it as something we can't rule out.
Then I don't think we have a problem. We experience the world through sensorimotor statistics: what our sensory nerves, autonomic nervous system, voluntary motor actions, interoceptive nerves, and just generally our bodies are doing at any given point in time. Science is a series of methods, implemented on top of our basic reasoning abilities, to achieve knowledge which can be generalized across individual perspectives, rather than depending on the statistics and circumstances of an individual life.
I would figure that this isn't the problem of consciousness, but it is the problem of why we don't "see" a "ghostly world of equations, theories and data".
Ironically, you are stating a hypothesis which is fully within the purview of reason and science. While it is in principle possible that the answer is undecidable with existing logical systems, there's certainly no existing proof that it is undecidable.
I don't get your comment. The first sentence seems to say "solving semantic confusion won't get us to the root of the hard problem". Your second sentence seems to say "we should admit that some questions are just semantic confusion". Is that right? Together, the two sentences seem like a non sequitur.
The hard problem is not a matter of semantics, but yes indeed some confusion may be just semantic. I'm not sure if that plain truth is all you meant to say though.
> If there's anything it is like to be a worm, THAT'S the hard problem.
I am not convinced - I suspect that there isn't anything to the experience of being like a worm; that the very question presupposes self-awareness of some sort. I suspect the hard problem of consciousness might lie in the self-referentiality of self-aware consciousness (though alternatively or in addition, it might be on account of its workings being inaccessible to introspection.)
But our consciousness goes beyond simple self-awareness: having a theory of mind is a step beyond (and is realizing that others have a theory of mind a step beyond that?) Our own experience of our own consciousness is several steps removed from what a worm might experience.
> We need new words - or wider usage of existing ones.
Yes; I think the words we use are definitely an issue, perhaps because we don't remember what it was like to be an aware, but not self-aware, infant. To some people, anything that responds to external stimuli is conscious, while to others on the other extreme, consciousness entails being able to verbalize it. This might be the only case where the Sapir-Whorf hypothesis is actually correct - that our ability to think about the issue is constrained by our language.
In short, I think the author of the article made heavy weather of the issue by conflating consciousness with simple awareness, and not recognizing that meta-consciousness, and perhaps even an awareness of meta-consciousness, is an essential part of what distinguishes what is commonly meant by consciousness from mere awareness.
In case I haven't made myself clear, I don't think the hard problem of consciousness has been solved by invoking meta-consciousness; on the contrary, I think that is where the difficulty is most apparent.
> But our consciousness goes beyond simple self-awareness: having a theory of mind is a step beyond (and is realizing that others have a theory of mind a step beyond that?)
That [and this] whole space [of considerations, here] appears to be fraught with circularities like this:
Step 1. Draw a distinction between consciousness and meta-consciousness.
Step 2. "From what vantage point, and with what 'machinery' do we make a distinction?"
Step 3. Go back to Step 1.
> .. that our ability to think about the issue is constrained by our language.
I think I agree, otherwise I'm tempted to think that we would have already arrived, trivially, at some clearer kind of agreement about it.
To be clear, I don't think there is an infinite recursion here, but I do think that consciousness, as we experience it, is several steps beyond merely responding to stimuli.
It just crossed my mind that perhaps 'our ability to think about the issue is constrained by our language' and '[consciousness'] workings being inaccessible to introspection' are the same thing.
> .. that perhaps 'our ability to think about the issue is constrained by our language' and '[consciousness] workings being inaccessible to introspection' are the same thing.
I like to think about it that way, but it makes the problem feel intractable.
The problem is that we want to describe consciousness as "that thing that allows an organism to describe consciousness as 'that thing that allows an organism to describe consciousness as "that thing that allows an organism to describe consciousness as [...]"'"
Exactly. That's why we keep going in circles when talking about these things, and why people have been repeating themselves for hundreds of years, yet the problem still seems fresh.
I wish I had something more substantive to add, but I have really come to think of this as a fundamental limit to thinking.
It's sort of analogous to starting with a set of axioms, then trying to derive explanations for these axioms from them.
This sounds defeatist, but I have yet to read a good argument for why trying to explain the "hard problem" is anything else than that. I guess you could see this in a sort of mystical light and build something spiritual around it, too...
"Consciousness is the ability of an organism to predict the future"
The problem of looking at human consciousness is similar to someone who knows little about computer processors pulling apart an i7 and trying to figure out meaningful things about what is going on inside. Without knowing the history of processor design, there will be huge information gaps on why some parts work the way they do.
That said, I believe consciousness is just world modeling, which explains the recursion problem you run into - a good model can simulate itself (nested Turing machines). As creatures evolved, the ones that didn't just react but could predict properly had better survival outcomes. Predictive models competed with each other and the 'best' ones survived, until after millions of years one branch of these models became complex enough to reference itself.
>The problem of looking at human consciousness is similar to someone who knows little about computer processors pulling apart an i7 and trying to figure out meaningful things about what is going on inside. Without knowing the history of processor design, there will be huge information gaps on why some parts work the way they do.
I don't think the analogy is adequate. A processor is an object - it "objects" to all of us. It appears to have an existence independent of the thing that recognizes it as such. When we embark on an empirical investigation of something, we make a distinction between the scientific observer and the scientific object. We come to an agreement about the boundaries of the object. This does not appear to be the case if you want to call consciousness "that special [condition, or process, or property, or pattern, etc.] of being a scientific observer" (which we would ideally want because it seems to encapsulate all those special things that distinguish human beings from other organisms with nervous systems).
In that domain, we cannot make a distinction between subject and object. In order to even speak intelligibly about things, we must all draw the boundary of the thing we're talking about - but we are in the peculiar position of being the very act of drawing the boundary.
> I suspect that there isn't anything to the experience of being like a worm; that the very question presupposes self-awareness of some sort.
It also presupposes that you are a worm. It's a (logically) absurd and (practically) pointless question. How could anyone/anything possibly know what it's like to be a worm without actually having the experience of being one? Answer: They couldn't/can't.
The question also has no relevance to consciousness. We know what it's like to be conscious humans because we are conscious humans and we do have human experiences.
> I suspect that there isn't anything to the experience of being like a worm
Even a worm has goals to attain: food and reproduction. In order to attain those goals, it needs to have perceptions and to rank them by estimated return value. That is what it's like to be a worm.
Talking about a worm as if it had goals is a teleological analogy that makes discussing its behavior easier, but I think it is a mistake to take this usage literally. If you do, you must also conclude that a thermostat has a goal, and therefore (by your argument) that there is a something that is what it is like to be a thermostat. I can only see this line of argument as a distraction that will only confuse any attempt to understand consciousness.
The rest of this discussion is hard to follow because it has apparently been edited, but you can't justify applying your argument to animate things and not inanimate ones simply by choosing one word over another.
Yes, you can draw a very clean line: the line of self-replication. Self-replicators have an internal goal - to maintain their existence by growing and replicating. The more advanced ones develop complex responses to changes in their environments. Humans are just the apex of this process. A toaster or a thermostat doesn't self-replicate, so it has no goals. Having no goals means no consciousness, because goals are what shape the development of consciousness.
You are taking the teleological analogy literally again. It was Darwin's biggest insight that this is precisely not how evolution works.
Adding the word 'internal' doesn't help either. You could just as well say that a thermostat has an internal goal of staying warm.
This is moot anyway. Like Carroll's Humpty-Dumpty, you have crafted a definition of consciousness that suits you perfectly, but which doesn't match what everyone else means.
Fire self-replicates. It maintains its existence by consuming fuel and spreading itself. It's not as skilled at this as a human, but neither's a worm. What about an individual bacterium? Does it have a goal? Are they conscious in a way a humble virus is not?
Such goal attainment does not require consciousness. Even in our species, functions like breathing and blood circulation are autonomous, and they continue even in our sleep when consciousness is absent.
The fact that you have no memory of time while you are asleep does not prove that consciousness is absent.
The fact that your brain's executive network is unaware of the workings of "autonomous" activities does not indicate that those activities lack consciousness. You are also unaware of my consciousness, but I assure you that it exists.
> The fact that your brain's executive network is unaware of the workings of "autonomous" activities does not indicate that those activities lack consciousness.
I'm lost. To me it means those activities lack consciousness. It's also the opinion stated in Jaynes' book, which was mentioned by user breckinloggins.
It's not the same as me being unaware of your consciousness. Your consciousness is foreign to me. It's always impossible for me to perceive it. My own consciousness is internal and I can perceive it. You may argue there are some activities of my own consciousness that I cannot perceive -- it's debatable, but we can discuss it -- but arguing about your consciousness from my perspective does not apply.
You are making a special distinction between stuff that is "part" of you, and stuff that is not "part" of you. In actuality there is no such clear distinction, there is only correlation in state, which is not binary but rather occurs on a spectrum.
Understand that I'm defining consciousness as the property of having awareness, not as the human mind. The human mind is what it is like to be a cerebral cortex with our given structure. Think of consciousness as being like a computer screen, and the mind as a picture displayed on that screen. That screen could display a very different image, but that wouldn't change what the screen itself is.
> You are making a special distinction between stuff that is "part" of you, and stuff that is not "part" of you
But this is a crucial distinction when talking about consciousness; it's not arbitrary. Consciousness requires a subject to be conscious.
Splitting awareness from the human mind seems weird to me. It's not very useful to consider, say, my liver to be conscious. Even worse, there's no way to prove or disprove the assertion that it is conscious; it's even less falsifiable than talking about other people's consciousness!
The whole point of Jaynes' book is that consciousness is not required for most activities. He argues that not only are your body organs not conscious, but also that most activities you engage in every day aren't conscious either.
Decoupling awareness and its contents is a necessary prerequisite to a much more parsimonious philosophical viewpoint. Without taking that step, you're stuck with the idea that human awareness is somehow different in kind from the physical awareness that drives the evolution of everything else in the universe.
I don't understand what you mean by "the evolution of everything else in the universe", either. Consciousness isn't required for the evolution of living beings. Or do you mean something else?
When I say evolution here I mean a change in state, not Darwinian evolution.
I'm drawing a distinction between humans, who seem to have the ability to make choices, with the prevailing scientific world view, that everything is the result of mindless causal forces.
I suppose you could hold the epiphenomenalist view that consciousness is a side effect with no causal agency, but then you have to explain why things that hinder survival/reproduction feel bad, and things that aid it feel good. If consciousness were truly an epiphenomenon, there is no reason why there should be any concordance between the survival value of events and their phenomenal character.
I think you have it backwards: if there is no reason for consciousness to have any concordance, then it can go either way. And there is evidence of lots of evolved traits that have indeed gone "either way". Or it could be that consciousness is not a blind side effect but actually provides survival value. Or maybe it is a side effect but it's harmless. There are lots of possibilities.
In any case, it seems this is a diversion from the main argument: if you aren't aware of a process, then it's not a conscious process on your part by definition.
> That's like saying a dead worm and a live worm are not so different.
What? No.
> If there is a difference in how they evolve, there is a something to be like it because it has to have some kind of perception and action selection mechanism. What it's like to be a worm is the perception stream evaluated by the value function of the worm.
Assuming that by "evolve" you mean "evolve as a dynamical system", then yes.
No new words are needed. What we need is for people to stop conjuring new words to explain the same things over and over again, using and re-using words with different meanings without ever bothering to listen to or read their predecessors properly, which only leads to feelings of ineffectualness and a strange sensation of déjà vu.
When authors in the philosophy of mind stop rehashing ideas that could have, and most likely did, occur to 17th-century philosophers for the sake of attaching their names to a publication, then the field will see some progress.
> When authors in the philosophy of mind stop rehashing ideas that could have, and most likely did, occur to 17th-century philosophers for the sake of attaching their names to a publication, then the field will see some progress.
I doubt people involved in philosophy of the mind will ever be useful here. If one keeps up to date on neuroscience, they'll see a lot of progress being made, but none of it is coming from the "philosophy of the mind" people. Not only that, but the "philosophy of the mind" people don't even seem to be aware of much of what's happening in neuroscience.
There seems to be a certain "God of the gaps" aspect to how philosophy approaches science, where people pretend that philosophy is the fundamental approach to whatever field is working on a particularly large and difficult problem. So you see philosophers approach things like the human mind and quantum theory quite differently from how they approach chemistry or non-mind physiology.
In that regard I always find it odd how much philosophy and religion seem to have in common.
Large parts of most religions basically boil down to convincing people of something for which there's no actual evidence, solely by the use of philosophical and rhetorical devices; it's de facto large-scale social engineering.
Which basically makes religion something like the original "science of philosophy"?
>No new words are needed.
Probably true. In fact, the film Avatar - where the Na'vi can connect with their environment - is perhaps the best example of consciousness in action switching its focus from one part of the body to another: the environment is the body, and the Na'vi is consciousness itself.
Our bodies are largely autonomous, with cells communicating via a variety of methods - either pumping chemicals in and out of the cell itself, or the immune system placing protein markers for other parts of the body (immune cells) to deal with - bearing in mind that every cell in the body has a complete copy of the DNA.
One thing I have learnt about the body is that there is a lot of redundancy built in for its survival. Every cell's complete copy of the DNA can be altered, leading to cells not functioning properly, and there are a variety of means by which the consciousness can be alerted to problems that need addressing, like hormones or nerve messages.
Simply dropping 500 mg of niacin (the non-flush variety) on an empty stomach demonstrates how quickly the prostaglandins can be stimulated into action in many different parts of the body - not just the skin, but internal organs like the lungs and liver. How aware your consciousness is of these changes and effects is perhaps also an indicator of your consciousness, just as some people are more aware of their food cravings.
Considering food cravings, and how pregnant women can suddenly start craving things like coal when they have perhaps never eaten it before, also suggests some sort of memory that may have been passed on by their parents - or maybe there is some other subconscious mechanism, perhaps through the nose, which enables the body to identify a substance containing chemicals the body needs.
The fact that our taste and enjoyment of food and drink is altered simply by light also indicates that our consciousness is easily swayed by remembered or imagined experience over logic.
Ultimately it's hard to define exactly what consciousness is. Considering the blood-brain barrier, and how hard it is for some chemicals but not others to reach the brain: is our consciousness largely just a collection of learned experiences, in which nature has utilised cholesterol as a storage medium - consciousness itself being just a memory of chemical and electrical responses in a variety of cells, which the brain, via feedback mechanisms like nerves, immune-system cells and hormones, is able to remember and use to influence its behaviour?
> Jaynes' "The Origin of Consciousness in the Breakdown of the Bicameral Mind" is perhaps the most egregious offender.
Can you expand on this? I am reading this book at the moment, but I don't quite follow your criticism. He doesn't seem to me to be talking about phenomena related to "consciousness of consciousness", but rather about how consciousness arises as a functional process, and as a consequence of our linguistic heritage (though I haven't finished the book yet, and I find it all quite confusing, so I'm not really sure).
For reference, I found the entry for "Consciousness, defined" in the index, and the paragraph on the referenced page that most closely corresponds to a definition seems to be this:
> Subjective conscious mind is an analog of what is called the real world. It is built up with a vocabulary or lexical field whose terms are all metaphors or analogs of behavior in the physical world. Its reality is of the same order as mathematics. It allows us to shortcut behavioral processes and arrive at more adequate decisions. Like mathematics, it is an operator rather than a thing or repository. And it is intimately bound up with volition and decision.
There seem to be two distinct things people mean by "consciousness":

1. consciousness as "awareness" or "experiencing existence"
2. consciousness as "mind" or what we experience that doesn't seem to correspond to the outside world.
The former is certainly not a result of language, and that is what is referred to as consciousness in Buddhist/Hindu philosophy. The latter is definitely influenced by language and seems to be what many modern scientists are referring to when they talk about consciousness. The conflation of the two definitions does a great disservice to understanding what consciousness "is".
> The former is certainly not a result of language, and that is what is referred to as consciousness in Buddhist/Hindu philosophy.
I recommend reading Jaynes' book (assuming you haven't already) before you draw any firm conclusions on the role of language.
I really don't think I can do him justice, but his thesis seems to be that humans were, until relatively recently (4000 years ago, perhaps?), non-conscious - they would not have "experienced existence".
He goes on to discuss consciousness as a product of metaphor, and thus with decidedly linguistic origins. That is a terrible summary, though, so I recommend reading the book and forming your own view - I have found it quite enjoyable so far.
As an aside, he starts the book with a list of things that consciousness is not, from his perspective.
>This equivocation between consciousness and meta-consciousness is so widespread that a plethora of books, which are seemingly dedicated to the project of "explaining consciousness", instead spend their entire contents explaining some hypothesis involving meta-consciousness and then proceed to declare victory without ever touching consciousness itself
I think this trend is bound up inextricably with our efforts to distinguish ourselves from "lesser" animals. A monkey may be able to reason intelligently, but it cannot communicate to us whether it is aware of itself doing so. We therefore lack evidence of meta-consciousness in animals and tend to attach out-sized significance to it as a potential mark of human superiority. Even though absence of evidence isn't evidence of absence.
From an evolutionary perspective, consciousness seems to derive from the same mental equipment pre-human hominids developed for handling social interactions. The ability to model and predict the mental states of other hominids provided a survival advantage. Once sufficiently developed, this ability can be applied to itself, so that the hominid is modeling its own thought processes and mental states.
To me, this doesn't seem like some sort of deeper, insoluble mystery, but rather the symptom of an unsatisfactorily detailed model of that mental equipment. I think the problem loses its peculiar appeal when we can identify with a reasonable degree of certainty that what we experience as consciousness corresponds to specific neural pathways and configurations. Right now, the philosophical arguments are too abstract to be convincing and the science is too immature to discourage all the useless theorizing.
Furthermore, as also pointed out by others above, all discussions about consciousness tend to conflate what appear to me to be distinct phenomena. The first could be described as a persistence of thought, or "low-level" consciousness, which is likely explained by the evolution of a working memory. The second could be described as self-awareness, which is likely a small, nearly-inevitable leap from "low-level" consciousness. The third could be described as meta-consciousness, whose origin is likely explained by your social-evolution hypothesis.
I found that this series on the New York Review of Books website gives an understandable and readable history of the main threads of discussion on this: http://www.nybooks.com/topics/on-consciousness/
>We need new words - or wider usage of existing ones.
Agreed - trouble is, though, whenever you start to enforce this particular aspect of the discussion, you end up either being accused of starting a cult, or following one...
Within the "metaconsiousness" definition provided int he article, I think it very likely that the distinction is more of an illusion than anything else.
Attention is the key to distinguishing between unconscious thought and conscious thought.
Say we're learning to operate a car... You are very "aware" of steering and pedals and gears. After a while, you start just thinking in terms of forward, this way, stop, faster. The mechanics start to be handled "subconsciously"^ as you go along. Awareness/attention then gets drawn back if something goes wrong, say your gear sticks. A lot of learning seems to be like this: aware and conscious actions repeated until they become subconscious. I think this process is gradual. More importantly, the observable outputs are very hard to stack neatly into two piles.
On top of that we have the problem of fooling ourselves. If I asked you "why did you change lanes?" you would probably give me an answer, but a lot of experiments suggest that answer is made up after the fact.
I think it's more likely that metaconsciousness is made of the same stuff as subconsciousness. It's just the type of consciousness that we notice (tautology?) and consider to be us.
TLDR: I agree that the more important breakthroughs will probably be at the worm-consciousness level, but that is itself a theory of consciousness.
^As mentioned below, it's a problem using, defining and speculating about these terms simultaneously. I'm doing it anyway, but yeah...
It seems to me that if you extend 'conscious' to cover things outside a person's attention, you've made the word 'conscious' meaningless. If I take away all of the things that you claim to be 'conscious' of but not actively aware of, would you be comfortable with me saying you were no longer conscious? Sure, you're actively thinking about things, but you don't have that sensation of the bottom of your big toe in your shoe somewhere in the background, therefore you must be less than conscious. The idea of meta-consciousness, I think, still has plenty of merit, but extending 'conscious' to everything you are unaware of is useless and also can never stop. Why not include the things that are stimulating your nerves, as well as the stimulation of the nerves itself? Why not extend it out to blanket the whole of the universe?