From artificial intelligence to artificial consciousness?

The arrival of a new publicly accessible artificial intelligence (or AI) language model, developed by OpenAI and imaginatively named ‘ChatGPT’, has once again provoked conversation about a future dominated by super-intelligent tech. Most people have probably heard about ChatGPT by now, and for good reason; the software is almost too good at what it does. The model has been trained on a large amount of text data, making it capable of generating human-like responses. It can also do a lot of things that are pretty impressive, such as answering questions, generating text, and even participating in a conversation. ChatGPT can be applied in a wide range of fields, from customer service and language translation to content creation.

Speaking of which, it wrote the last three sentences for me. So yes, human-like and pretty impressive.

ChatGPT isn’t alone, though. From Google’s LaMDA to DeepMind’s ‘Chinchilla’, the field is developing at a rapid pace. It should come as no surprise, then, that the technology’s scope, potential applications, and possible misuse are becoming something of a hot topic. One thing’s for sure: for better or for worse, AI is here to stay, and it’s only going to get better. In fact, you won’t have to look far to see someone casually float the idea that we may soon be faced not just with artificial intelligence but with artificial consciousness; the notion makes for some particularly enticing clickbait, but is it something that could really happen?

As far as we know, consciousness arises only in biological entities. The problem is that, beyond this observation, we don’t really know much at all. The question at hand boils down to a more general one involving something that has come to be known as ‘substrate independence’: essentially, could consciousness develop in, or be developed by, something that isn’t organic in nature? In other words, is consciousness achievable regardless of the material (or substrate) that appears to generate it?

I’d hazard a guess that most people probably fall on the ‘yes’ side of this question. At least, the notion is hardly inconceivable; in fact, our generation – reared on Sci-Fi films with morally conflicted androids, robot-induced apocalypses, and super-intelligent machines – has consistently been asked to imagine just this. Perhaps when you think about it, though, it’s kind of hard to believe that something as apparently intangible as consciousness can be reduced simply to a complex of mere inputs and outputs – whether this is in the human brain, or in an advanced computer. Not only are there several worrying consequences that follow from the acceptance of substrate independence (‘What if we’re the computer!?’ is unsurprisingly high on that list), but the notion also seems to produce some pretty counterintuitive results.

Here’s one, originally proposed by American philosopher Ned Block: imagine we took the population of China and connected all of their brains together, generating a sort of super-brain. Would this ‘China Brain’ (not to be confused with the more famous Chinese Room) be conscious? Block asks us to consider a more easily conceivable scenario: we don’t bother connecting people’s brains (whatever exactly this entails) but instead hand every person in China a walkie-talkie and ask each of them to simulate the exact role of a single neurone, communicating with other ‘neurones’ using the radio, reacting to external stimuli in the necessary ways, working to pass various types of information to different parts of the country, and so on. Now, there aren’t anywhere near as many people in China as there are neurones in the human brain (in fact, we’d need about ten times the population of the entire planet to model a single human’s brain in this way), but the country’s population is almost three times greater than the number of neurones in a dog’s brain, and we’re pretty sure that dogs are conscious.
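
For anyone inclined to check the arithmetic, here’s a rough back-of-the-envelope sketch in Python. The figures are commonly cited but approximate estimates, and the dog comparison assumes we’re counting the neurones in the dog’s cerebral cortex: roughly 86 billion neurones in a human brain, around 530 million in a dog’s cortex, a world population near 8 billion, and about 1.41 billion people in China.

    # A rough sanity check of the neurone-count comparisons above.
    # All figures are approximate, commonly cited estimates.
    HUMAN_BRAIN_NEURONES = 86_000_000_000  # ~86 billion neurones in a human brain
    DOG_CORTEX_NEURONES = 530_000_000      # ~530 million neurones in a dog's cerebral cortex
    WORLD_POPULATION = 8_000_000_000       # ~8 billion people on Earth
    CHINA_POPULATION = 1_410_000_000       # ~1.41 billion people in China

    # How many planets' worth of people would it take to give every
    # neurone in one human brain its own walkie-talkie operator?
    print(HUMAN_BRAIN_NEURONES / WORLD_POPULATION)  # ~10.8

    # And how does China's population compare with a dog's cortical neurones?
    print(CHINA_POPULATION / DOG_CORTEX_NEURONES)   # ~2.7

On these rough numbers, the ‘ten times the population of the entire planet’ figure works out to just under eleven, and China’s population comes out at a little under three times the dog’s cortical neurone count.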

So, here’s the thing: most people likely want to avoid the conclusion that the China Brain would possess any sort of consciousness. And that’s the point. The thought experiment is designed to make us question our assumptions about what consciousness is and how it may arise. If your intuition is that the entire nation of China wouldn’t somehow find itself possessing some strange sort of collective consciousness, then you may wish to reconsider just how likely it is that we’ll ever be sharing the planet with conscious AI. This isn’t an argument against the idea that consciousness is a physical phenomenon, but it is supposed to suggest that substrate independence – and the theory of mind from which it follows, known as functionalism – is unlikely to be true.

But maybe Block doesn’t play fair. In reality, ‘the job of a neurone’ is likely something that no human could ever do, and the intricate ways in which neurones are interrelated are far from achievable using a few – or even the required 1.41 billion – walkie-talkies. As implausible as it may sound, then, if each person in China were somehow able to receive and transmit huge amounts of the relevant information almost instantaneously, if they could do it in perfect coordination with one another, and if they did so with immeasurably complex interconnections and without losing any information in the process, then perhaps, just maybe, China would be conscious. What exactly this consciousness would look like is anybody’s guess.

I’m not sure that most people will be on board with this solution to the problem, but those who are probably think that the development of conscious AI is perfectly possible, perhaps even probable. After all, whilst the population of China may struggle to play the role of a human brain, everything that I described above seems perfectly achievable (at least on paper) by an advanced computer. For those who aren’t so sure: why should consciousness be limited to biological organisms? Is there some middle ground between conscious computer programs and conscious countries? What reason could we ever have for thinking so? The China Brain thought experiment is yet another reminder that consciousness is a mystery whose surface we have yet to even scratch.

Lastly, it seemed only fair to see what ChatGPT had to say on this issue. I asked whether it thought it might one day achieve consciousness. Here’s how it replied:

“While it is possible for researchers to continue to develop more advanced AI systems with greater capabilities, it is unlikely that I or any current AI systems will ever truly be conscious”.

But that’s just what it would say.

 

Image: DeepMind on Unsplash
