A funny thing is that even though we're pretty good at passing a text-based Turing test, and we can generate very convincing human-sounding speech, we still don't have something that can pass the audio-based Turing test. Natural pausing and back-and-forth give the AI away.
And when we pass that, we can just add an optical component and judge that the AI has failed because its lack of facial expression gives it away[1], moving the goalpost one communication component at a time. But in any case we can just add the audio (or, for that matter, facial expression) component to the Chinese room thought experiment, and the Turing test remains equally invalid.
Although I am scrutinizing Turing’s philosophy and, no doubt, I am personally much worse at doing philosophy than Turing, I firmly hold the belief that we will never be able to judge the intelligence (and much less the consciousness) of a non-biological (and probably not even a non-animal, or even a non-human) system. The reason, I think, is that these terms are inherently anthropocentric. When we find a system that rivals human intelligence (or consciousness), we will simply redefine these terms such that the new system no longer fits. I think that has already started, and we have done the same thing multiple times in the past (heck, we even redefined the term planet when we discovered the Kuiper belt), instead favoring terms like capability when describing non-biological behavior. And honestly, I think that is for the better. Intelligence is a troubled term; it is much better to be accurate when we are describing these systems (including human individuals).
---
1: Though in honesty, I will be impressed when machine learning algorithms can interpret and generate appropriate human facial expressions. It won’t convince me of intelligence [and much less consciousness] though.