> If there's anything it is like to be a worm, THAT'S the hard problem.
I don't believe we can get to the root of this "hard problem" by first trying to agree on a definition of the word before we can say anything about it. We will also need to admit the possibility that the question "are we conscious?" may turn out to be as vacuous as "is Pluto a planet?".
Being unable to agree on a definition for consciousness is a huge problem. I think this happens because consciousness is being studied at too abstract (high) a level. Instead, we should switch to the low-level dual of this problem - game theory and reinforcement learning. Fortunately, at this level the definitions are exact and concrete - agent, environment, goal, actions, values (a toy sketch of this vocabulary follows below). At this level we can understand and simulate what it means to be a worm, or a bat, as agents playing games.
I think what current philosophical theories of consciousness lack is a focus on the game itself. The agent is but a part of the environment, and the game is much more than the agent. The whole environment-agent-game system is what creates consciousness, and it also explains the purpose of consciousness (to help agents maximize their goals).
Analyzing consciousness outside its game (as with p-zombies) is meaningless - the game is the fundamental meaning creator, defining the space of consciousness. The Chinese room is not part of a game, it is not embodied in the world, and it has no goals to attain; that is why it is not conscious - it is just a data-processing system.
On the other hand, a bacterium can be conscious on a chemical, electrical and photo level even if it can only process data with its gene regulatory network (which is like a small chemical neural net). A bacterium has clear goals (gaining nutrients, replication), so its consciousness is useful for something - all consciousness needs to have a clear goal, otherwise it would not exist in the first place - something rarely emphasized in consciousness articles.
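To make those terms concrete, here is a minimal sketch of the agent/environment/goal/action/value vocabulary as a toy reinforcement-learning loop. All of the names and numbers (the 1-D track, the food position, the reward scheme, the learning constants) are illustrative choices of mine, not anything taken from a particular paper:

```python
# Toy sketch: a "worm" on a 1-D track learns, by tabular Q-learning, to crawl
# toward food. Everything here is illustrative, not a model of a real worm.
import random

TRACK_LENGTH = 10
FOOD_POS = 9          # the goal: reach the food at the end of the track
ACTIONS = [-1, +1]    # the agent can crawl left or right

def step(state, action):
    """Environment dynamics: return (new_state, reward, done) for one move."""
    new_state = max(0, min(TRACK_LENGTH - 1, state + action))
    reward = 1.0 if new_state == FOOD_POS else -0.01  # small cost per move
    return new_state, reward, new_state == FOOD_POS

# Values: the agent's running estimate of how good each action is in each state.
q = {(s, a): 0.0 for s in range(TRACK_LENGTH) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy policy: mostly exploit learned values, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        new_state, reward, done = step(state, action)
        best_next = max(q[(new_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = new_state

# After training, the greedy policy is "crawl toward the food" in every state.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(TRACK_LENGTH)})
```

Even in this toy, the point above is visible: the values the agent learns only mean anything relative to the game the environment defines.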
> switch to the low-level dual of this problem [...] reinforcement learning.
But that is itself a theory about the problem of consciousness! So why not do both? There's no obvious short cut through the confusion, but discoveries in one field may guide questions in the other. In general I think it helps to have one's feet on the ground as well as one's head in the stars (for instance Newton ground his own lenses).
What I mean is that the idea that reinforcement learning is a 'low-level dual' of the problem of consciousness is also a theory about the problem of consciousness. It's part and parcel of philosophical topics that one can't get out of the game...
Well, defining the "hard problem" hasn't gotten us any closer to understanding consciousness in the last 22 years. The hard problem just moves the consciousness problem in an unfruitful direction. It's time for more practical approaches.
I think attempts to rule out physicalism with arguments about qualia and such have gotten us nowhere, and I have no problem with studying game theory and experimenting with reinforcement learning, but the idea that these alone will fully explain consciousness is conjecture at this point - a conjecture that I, personally, am not ready to make.
Quite a few people on either side of the physicalism-dualism debate seem to be more keen on declaring victory and cutting off further discussion than on getting to the bottom of the issue.
BTW, on Searle's 'Chinese Room' argument, I have always been in the 'the system as a whole would be conscious' camp.
Not sure where your 22-year estimate comes from. I think man has been trying to understand it for much longer - maybe thousands of years? - and has found answers through techniques that are outside of the mind, such as meditation.
From an empirical standpoint, meditation gives us no information whatsoever. Just thinking about something will not lead you to some sort of mystically revealed truth.
How does a bacterium have clear goals? What about one that has a genetic defect and doesn't "try" to reproduce? Is it conscious? Is it part of a game? Is the stone that gets dissolved by lichen part of a game?
We don't really need to agree on a definition of consciousness. All we need is to be able to use the knowledge we do have, however it is phrased, for the things we want to use it for.
Pluto is a taxonomy problem. Consciousness is about reconciling how we explain the world (science & objectivity) with how we experience the world (subjectivity). Consider that the world looks colored, is full of sounds, smells, tastes and feels. But the scientific explanation leaves all that out. The scientific world is the ghostly world of equations, theories and data. It's Plato's cave inverted.
I have long been of the opinion that consciousness is not a 'thing' or a 'state' so much as a property. I think Douglas Hofstadter gets closest to my thinking on the matter in his book 'I Am A Strange Loop'. Just as a single molecule cannot have a temperature, since temperature is a property of multitudes of molecules, no amount of looking at the components will elucidate their overall interactions; in the end you will simply have to admit 'this is what we call it when that happens' rather than identifying a binary test that can be weighed with complete objectivity.
I also think a great deal of what scientific explanations omit is expressly stated as being outside what science pursues. I mean that science is inherently dedicated to finding the things which are true regardless of the personal experience of the observer and which would be true without a human observer. So when you want to look at the human observer itself, and at something which is intimately and inextricably linked to the exact biological and historical state of a human animal... you've just left the field. Not that it makes the study any 'less'; you just need different tools and there are different pitfalls. We still have all the same cognitive flaws due to our own brain structure, so we do have to try to be rigorous about it.
Temperature is the aggregate of the energy of many individual particles, modulated by the tendency of those particles to transmit that energy. Thus a single molecule does have a temperature in a sense; you just need an incredibly sensitive detector to register it. Temperature is a fully reducible property.
Consider the possibility that consciousness isn't a distinctly human phenomenon, as there is no reason to believe that is the case. In fact, there is no logical reason why it would even be a distinctly biological phenomenon. It appears to me that consciousness isn't so much out of the purview of science as it is intractable to its methods, and thus unattractive as an area of study despite its fundamental character.
In a very big molecule [1] that doesn't move, you have so many parts that you can have a reasonably good definition of temperature.
But for a very small molecule or atom - let's pick a single helium atom - you don't have a good definition of temperature. You can define the temperature of a gas of helium using the average kinetic energy of the atoms, but you must assume that the gas as a whole is not moving.
If you put a balloon with helium in a car, and the car moves at 100 mph, the helium is not hotter, it is just moving. So if you can only see a single atom of helium, you can't be sure whether it's moving because it's hot or because the whole gas is moving. So you don't have a good definition of temperature. [2]
For a molecule with a few atoms (let's say 5 or 10) it's more difficult to be sure if there is a good definition of temperature or not, so I prefer to ignore the intermediate case.
[1] My first idea of a big molecule was DNA, because it's big and well known. But DNA is usually surrounded by water, lots of water, plus ions, auxiliary proteins, and a lot of other stuff. It's very difficult to isolate a true molecule of DNA alone.
An easier example of a big molecule is Bakelite (https://en.wikipedia.org/wiki/Bakelite), the plastic of old phones. Your old phone's case was a gigantic single molecule, and the handset's case was another. So they clearly had a temperature.
[2] At low temperatures you can assume that a helium atom is a ball without internal structure. If you increase the temperature of the gas enough (200,000 K?), the electrons in each helium atom start to jump between levels, so you get some interesting internal structure and might try to define a temperature. But it's still too few parts and too short-lived for me to feel comfortable defining the temperature of a single atom.
Since we measure temperature by the transmission of energy, and that process occurs in an entropic manner, a moving reference frame prevents the detector from measuring the increase in energy. The particle itself definitely has increased in energy; we just lack the ability to detect it. This is entirely analogous to our inability to measure mass using a scale in free fall.
I suppose it comes down to whether you define temperature as "what thermometers measure" or as the average potentially transmissible energy of an ensemble. I prefer the latter definition for its clarity, but like the idea of Kolmogorov complexity, it is sometimes unwieldy in practice.
Temperature is defined as 1/T = dS/dU, where U is the energy of the system and S is the entropy of the system, i.e. S = k * ln(#states that have energy U).
The temperature is proportional to the energy of the system only for ideal monoatomic gases. (Ideal diatomic gases are slightly more complicated.)
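For what it's worth, here is a quick sketch of how that proportionality falls out of the definition above. This is just my recap of the standard textbook argument, assuming a classical ideal monoatomic gas whose number of accessible microstates grows roughly as U^(3N/2) for large N:

```latex
% Microcanonical definition of temperature, as stated above:
%   1/T = \partial S / \partial U,  with  S = k_B \ln \Omega(U)
% Assumption: classical ideal monoatomic gas of N atoms, \Omega(U) \propto U^{3N/2}.
\[
  S(U) \approx \tfrac{3N}{2}\, k_B \ln U + \text{const}
\]
\[
  \frac{1}{T} = \frac{\partial S}{\partial U} = \frac{3N k_B}{2U}
  \quad\Longrightarrow\quad
  U = \tfrac{3}{2}\, N k_B T
\]
% For diatomic gases \Omega grows faster with U (rotational degrees of freedom
% also store energy), so the prefactor changes and the simple 3/2 relation no
% longer holds.
```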
So a "hot" Helium balloon where all the particles are moving in random directions has a different temperature than a "fast" Helium balloon where all the particles are moving roughly in the same direction. It doesn't matter how difficult is to define it. (Probably you can use an infrared thermometer pointed in the direction perpendicular to the movement to get the correct temperature.)
In the fast balloon you can theoretically split the energy into A: the kinetic energy of the center of mass, and B: the internal thermal energy, so the total energy is A+B. With some device you can extract all the kinetic energy as something useful (for example, make the balloon hit a lever connected to an electric generator; you would need something smarter to get 100% of A, but it's theoretically possible). So you can extract 100% of A.
In the hot balloon with the same total energy, all of it (A+B) is internal thermal energy. No device can ever extract more than a part of it as something useful like electricity, because of the second law of thermodynamics. If the ambient temperature is T_amb and the initial temperature of the balloon is T_bal, you can get at most (A+B) * (1 - T_amb / T_bal) as something useful like electricity, and you waste at least (A+B) * (T_amb / T_bal) into a heat sink.
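To make the asymmetry concrete, here is a toy calculation with made-up numbers (the energies and temperatures are purely illustrative, not measurements of anything):

```python
# Toy comparison (illustrative numbers only): two balloons carrying the same
# total energy, split differently between bulk motion (A) and internal heat (B).
T_amb = 300.0  # K, ambient / heat-sink temperature

def carnot_fraction(t_hot):
    """Maximum fraction of heat at t_hot convertible to work (second-law bound)."""
    return 1 - T_amb / t_hot

# "Fast" balloon: A joules of center-of-mass kinetic energy plus a thermal
# part B sitting at a made-up temperature slightly above ambient.
A, B, T_fast = 80.0, 20.0, 310.0
work_fast = A + B * carnot_fraction(T_fast)  # all of A, plus the Carnot share of B

# "Hot" balloon: same total energy A + B, all of it thermal, at a made-up T_hot.
T_hot = 400.0
work_hot = (A + B) * carnot_fraction(T_hot)

print(f"fast balloon: up to {work_fast:.1f} J of useful work")  # ~80.6 J
print(f"hot balloon:  up to {work_hot:.1f} J of useful work")   # 25.0 J
```

With these numbers the fast balloon yields roughly 80 J of usable work against 25 J for the hot one, even though both carry 100 J in total.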
So, to get back to the original point, your position is that a single particle in isolation can't have a temperature because its entropy (or rather the change thereof) is 0, so the equation breaks.
I certainly grant that consciousness might not be a distinctly human phenomenon. I disagree that there is no reason to believe it is, however. There is some evidence: humans are conscious, and we don't know of anything else that is. That is weak evidence, but it is some - which is more than we have for the claim that things other than humans can be conscious. For that, there is indeed nothing.
The logical reason why it would possibly be a distinctly biological phenomenon is that we have not observed any non-biological system which has the property. Also, every examination of how consciousness works illuminates solely how it works in a biological system.
It is completely possible that consciousness is simply a necessary property which emerges once any complex system and its interactions reach a certain sort of complexity. However, that might be meaningless. I mean, we might be totally incapable of recognizing other conscious entities as conscious. If a machine were made to be conscious, not as an emulation of a human, but as a machine - how would you tell? Asking it questions would be pretty pointless. There is no reason for it to develop language, or even to guess that there might be another conscious entity in the universe, for billions of years. It would be quite a ridiculous leap for the machine to suppose that, in fact. It wouldn't have multiple individual machines against which it could develop a Theory of Mind (referring to the psychological way we model other people (or things or animals) as being conscious and thinking inside our own heads; children develop it around age 3, I believe).

It would presume the sensible thing, that it is the only conscious entity in the universe, and go about exploring the universe. It would see the microphone or video input as just weird, useless noise as it traversed and explored its ACTUAL world, that of bits and bytes and networks and switches. You could watch its execution... and it would be exactly as useful to you, for determining whether it was conscious, as watching the flashes of electrical activity and a readout of the neurotransmitter activity at each neural gap. And that's for one we might reasonably presume to operate on our own timescale and close to us in space. Could the Sun be conscious? Quite possibly. I'll grant it as something we can't rule out.
Then I don't think we have a problem. We experience the world through sensorimotor statistics: what our sensory nerves, autonomic nervous system, voluntary motor actions, interoceptive nerves, and just generally our bodies are doing at any given point in time. Science is a series of methods, implemented on top of our basic reasoning abilities, to achieve knowledge which can be generalized across individual perspectives, rather than depending on the statistics and circumstances of an individual life.
I would figure that this isn't the problem of consciousness, but it is the problem of why we don't "see" a "ghostly world of equations, theories and data".
Ironically, you are stating a hypothesis which is fully within the purview of reason and science. While it is in principle possible that the answer is undecidable with existing logical systems, there's certainly no existing proof that it is undecidable.
I don't get your comment. The first sentence seems to say "solving semantic confusion won't get us to the root of the hard problem". Your second sentence seems to say "we should admit that some questions are just semantic confusion". Is that right? Together, the two sentences seem like a non sequitur.
The hard problem is not a matter of semantics, but yes indeed some confusion may be just semantic. I'm not sure if that plain truth is all you meant to say though.