The gray-haired among us may recall a memorable scene from the movie 2001: A Space Odyssey, in which the supercomputer HAL resists being disconnected by its operators. The artificial contraption's voice was expressive as it pleaded to remain active. But by ceasing to obey orders and showing a certain defiant autonomy, it had terrified those it was meant to serve, and they came to see its disconnection as a necessary step. Here was artificial intelligence, rebelling against its owners. Could something similar take place in our brave new future, beyond cinematic fiction?
According to a survey of artificial intelligence engineers, many believe that sooner or later we will see the arrival of systems operating at a level similar to human reasoning, able to perform a wide range of cognitive tasks. What we don't know is whether those systems will be able to make more rational decisions than we can. Artificial language models have been observed to display irrationality, just as humans do. In two trial runs, an advanced generative artificial intelligence model similar to GPT-4o flipped its opinion of Russian President Vladimir Putin between positive and negative.
Faced with this dichotomy, the question is: how does a GPT think and make decisions, based on the hundreds of billions of parameters it uses internally? Some experts believe that a certain level of complexity might confer some autonomy on a system, meaning that we might not fully know everything it is doing. But what happens if, in addition to this technical complexity, or thanks to it, the system spontaneously gains consciousness? Is that even possible?
Some scientists believe that consciousness, a subjective state of mind, is no more than an epiphenomenon, something collateral to the functioning of the brain, as unnecessary and insignificant as the noise of an engine or the smoke from a fire. But others believe that, far from lacking any important purpose, consciousness functions as a mirror of the imagination created by the brain itself, one that necessarily contributes to deciding and controlling behavior. We still do not know how the brain makes consciousness possible, but one of the leading theories that attempts to explain it, integrated information theory, holds that consciousness is an intrinsic and causal property of complex systems such as the human brain. In other words, consciousness arises spontaneously in systems when they reach a certain structural and functional complexity. This means that if engineers were able to build an artificial system as complex as the human brain, or equivalent to it, that system would be spontaneously conscious, even though, as with the brain itself, we would not understand how this is possible.
If this were to happen, it would raise a host of questions. The first: how would we know that a computer or artificial device is conscious, and how would it relate to us? Would it be through audio or text on a screen? Would it require a physical body, equivalent to that of a person, to manifest itself and interact with its environment? Could conscious devices and entities exist (or do they already exist?) in our universe without any way of communicating with us? Could a conscious artificial device surpass human intelligence and make more rational and better decisions than we can?
But that's not all. As in the case of HAL, there are other, more terrifying questions. Would an artificial conscious system develop, as our brain does, a sense of self and agency? In other words, could it feel capable of acting voluntarily and influencing its environment, regardless of the instructions it receives from its creators? And while we're at it, could such a system be more persuasive than humans in influencing, for example, economic decisions? Could it commit misdeeds or vote for a political party? Or, more positively, could it encourage us to take care of our health by eating a better diet, to improve the environment, to be more supportive of one another, or to avoid ideological polarization and sectarianism?
The era of AI emotions
Going even further, could a system of this kind eventually have feelings? How would we know if this had happened, when we cannot see ourselves reflected in facial expressions or in an image whose quality and sincerity we might evaluate, as we do when we try to understand the feelings of other human beings, for example by telling a fake smile from a real one? And perhaps more importantly, might those feelings, if an AI had them, influence its decisions? Would they play as important a role as ours do? In that sense, are we constructing a kind of artificial human, complete with ethical and legal responsibilities? Or would those responsibilities fall to the AI's creators? Could an artificial conscious system be worthy of a Nobel Prize if it discovered a solution to gender-based violence or a cure for Alzheimer's? Would a conscious machine argue with us as another person might? Could we influence its decisions, even if they were incompatible with our own?
In 1997, Rosalind Picard, a U.S. engineer at MIT, published Affective Computing, an early attempt to consider and evaluate the importance of emotions in artificial intelligence. Her central message was that for computers to be genuinely intelligent, and to interact with us naturally, we must equip them with the ability to recognize, understand, and even have and express emotions. She also delivered this message as a guest speaker at one of our summer courses at the Menéndez Pelayo University in Barcelona.
The problem was, and remains, that emotions are reflexive, automatic and almost always unconscious changes (involving hormones, the skin's electrical resistance, heart rate, and so on) that take place in our bodies in response to significant thoughts or circumstances such as illnesses, accidents, losses, successes, and failures. Feelings, on the other hand, are conscious perceptions (fear, love, envy, hate, vanity, and so on) that our brain creates when it registers those bodily changes. Today, many years after the publication of Picard's book, we can conceive of implementing in artificial intelligence the unconscious physical changes that are equivalent to human emotions. But, for the reader's peace of mind, we are still very far from being able to ensure that those changes give rise in such systems to the same kind of feelings that we humans have. Were that to happen, it would change everything.