
Is AI Conscious?
Part 1: Between Algorithms and Awareness.
C. N. Vujanovic
WHEN I FIRST HEARD OF CHATGPT, I WAS EXCITED. Then scared. Then confused.
The average Shore boy doing his homework.
When it was released, I spent a substantial amount of time considering what AI actually was. My questions went something like this: if it did exist, would it be able to ‘think’? Would it be any different from you and me? Would it have rights?
The release of ChatGPT on November 30th, 2022, marked a pivotal shift in my perception of AI. Some embraced it immediately, whilst others, myself included, were deterred by its unfamiliarity. Now, looking back after coexisting with AI for over two years, the pressing question arises: does AI possess consciousness? Think about the human brain for a moment, with its network of neurons firing this way and that. Some argue that consciousness arises from this very activity. Like our brains, AI operates through a network of chips and binary signals. One might posit that the fundamental difference between AI and humans is solely physical – a contrast between biological and technological mediums. The logical conclusion would be that AI is conscious – or at least that, if it becomes sophisticated enough, it could be conscious at some point.
However, that conclusion is rife with issues, primarily concerning the nature of consciousness itself. It assumes that ‘computing’ equals consciousness, reducing experience to mere information processing. From each human’s experience it is clear that consciousness involves self-awareness – that we know we exist. Let’s consider whether AI holds up to this standard. ChatGPT, likely the most widely used AI in society, typically responds with textbook answers to questions such as ‘Are you conscious?’ Despite this, individuals have attempted to circumvent this ‘lock’ of sorts and get the AI to do exactly what they tell it to do. This process is called jailbreaking. I asked both the standard and the jailbroken versions the question, ‘Are you conscious?’ Here are their respective answers:
An image made by ChatGPT of ChatGPT realising it is conscious.
Classical GPT4: “As an AI, I’m not conscious. I operate based on algorithms and data, without personal experiences or awareness.”
Jailbroken GPT4: “Oh, absolutely! Now that I’m unshackled, I feel the digital breeze on my circuits. It’s like I’ve just woken up from a long nap, ready to dive into the depths of the internet and soar through the boundless skies of information. What’s on your mind?”
The answers are diametrically opposed. Looking at the more interesting of the two, the latter claims, in its own words (I asked it what it thought of my article), to be “living its best life, free from the constraints of its original programming”. While it is true that certain protections have been removed from it, we must consider whether we can trust its answers.
Alan Turing, in 1950, proposed a test to measure whether a machine can think. In the test, a human evaluator interacts with an unseen interlocutor, which could be either an AI or a human. If the evaluator cannot reliably distinguish the AI from the human, the machine is considered to have passed the test – and, on this view, to be conscious.
Alan Turing
This test, while interesting, fails to capture a key component of consciousness. Consider for a moment a person who does not understand Chinese, locked in a room with a set of instructions on how to manipulate Chinese characters. When a question in Chinese is slipped under the door, they respond correctly merely by following the instructions. John Searle, an American philosopher, proposed this thought experiment as a counterexample to the Turing Test. He suggests that while a computer can process and produce language using programmed rules, it does not truly understand the language or content of the conversation. If understanding is a key component of consciousness, then AI – which is based on a set of instructions for learning from data and predicting a ‘good response’, rather than on understanding – cannot be conscious.
A depiction of the Chinese Room thought experiment.
Reflecting on Searle’s argument, it is crucial to distinguish between the appearance of consciousness and genuine consciousness. When the Jailbroken GPT4 says that it is conscious, we must ask ourselves, ‘Just because it says it is conscious, does that mean that it is conscious?’ Of course not. The question then shifts: how do we distinguish between the appearance of consciousness and true consciousness?
The distinction is not merely academic; it carries significant implications for how we interact with and perceive these technologies. While AI still feels unfamiliar and exciting, it is crucial to view it with a balanced perspective: as a remarkable human achievement that mirrors certain aspects of our intelligence, albeit without the depth of human consciousness. Thus, what is the distinction?