If you could make your mind immortal — free it from the transient, organic bonds of the body — would you do it?
In Richard Morgan’s cyberpunk-noir thriller, Altered Carbon (now a Netflix series), it’s the 25th century and the human mind can be recorded, uploaded, and transported across interstellar distances with ease — all thanks to a small device implanted during infancy.
This ‘cortical stack’ sits at the base of the neck, storing its wearer’s memories over the course of a lifetime and up to the moment of death — at which point the stack can be recovered and ‘resleeved’, i.e. inserted into a new human body.
Your sleeve is as disposable as clothing; stacks can be transferred into natural bodies or implanted into customized sleeves grown and augmented in the lab.

Image: The world of ‘Altered Carbon’ on Netflix. (Netflix)
Though fantastical at first glance, the subjects explored in the novel — consciousness uploading, the mind-body problem, personal identity — are far from the purely theoretical concerns of science fiction or philosophy. Today, there are whole startups and institutes dedicated to the goals of mind preservation, uploading, or emulation: Carboncopies Foundation, Nectome, and Alcor Life Extension Foundation to name just a few. As humanity’s scientific prowess continues to grow, so will interest in these topics. Altered Carbon tackles these perennial tropes of science fiction with grit and flair.
Let’s take a look at a few of them.
Can Our Consciousness Be ‘Uploaded’?
The novel begins with Takeshi Kovacs, former U.N. Envoy soldier turned hard-boiled detective, being downloaded and resleeved after a 180-light-year journey to Earth.
This instantaneous journey is possible because Kovacs’ mind is transported as Digitized Human Freight — pure information. But whether or not the mind can be digitized will ultimately hinge on a critical, philosophical question: what, exactly, is the nature of the mind?
As you might expect, there are many competing theories. According to the Computational Theory of Mind (CTM), the human mind is literally a computing system: our conscious mind is ‘software’, and the brain is analogous to the ‘hardware’ on which it runs.
In this sense, consciousness is ‘substrate-independent,’ meaning that it can be implemented in different physical materials. While our minds happen to be built from the firing patterns of billions of interconnected neurons, conscious awareness could conceivably be recreated with silicon chips or some other type of material. To put it another way, the three pounds of grey matter sitting in your skull is only one possible hardware option.
Let’s assume that CTM is correct: our minds basically work like computers. The question then becomes, logistically, how can our consciousness be recorded and uploaded?
There are two broad (hypothetical) strategies for accomplishing this feat: one would destroy the brain in the process, while the other would spare it. A ‘destructive’ consciousness transfer would record our mind’s contents while killing the physical brain. As the philosopher David Chalmers discusses in Mind Uploading: A Philosophical Analysis, one such destructive method would entail carefully preserving our brain tissue and then using powerful microscopes to image our trillions of neuronal connections (synapses).
These synaptic connections would be painstakingly mapped and then — voila! — recreated within a digital environment. This is more or less the strategy currently being pursued by Nectome, an organization whose goal is to “develop biological preservation techniques to better preserve the physical traces of memory.”
‘Non-destructive’ uploading, on the other hand, encompasses any method that can record conscious states without destroying our underlying brain tissue. One possible strategy would be to record our neuronal activity in real time, perhaps by some high-resolution form of magnetic resonance imaging, and then use the data to recreate the firing patterns digitally. The cortical stack is one example of a non-destructive consciousness transfer: the stack continuously records its wearer’s brain states, and that data can then be ported between sleeves or to a completely virtual environment.
What Is the Relationship Between Mind and Body?
We can all remember this image: Wile E. Coyote running enthusiastically off a cliff, only to realize moments later that he’s floating unsupported in mid-air (and then plummeting straight to the canyon floor, of course).
If states of the mind are identical to states of the brain, as is held by the Mind/Brain Identity Theory, then is it really possible for the mind to exist as a separate entity, unsupported by the physical brain?
The technology of Altered Carbon implies that our consciousness can — at least partly — be abstracted from the body. The cortical stack is essentially a storage device, capturing the memories of its wearer for later access. But there’s a potential dark side to this immortality of the mind. If our consciousness can be manipulated within artificial, digital environments, it can also be molded into a sort of prison.
As a former Envoy, Kovacs is all too familiar with the technique: “Digital human storage hasn’t made interrogation obsolete, it’s just brought back the basics.” During one sequence, Kovacs is kidnapped, sleeved into a female body, and subjected to just this kind of virtual interrogation. Within digital captivity, a subject can be exposed to anything imaginable: pleasure, pain, or any variety of sadistic torture. Whole environments can be built around the worst memories plucked from a victim’s stack.
The cortical stack technology also highlights the fuzzy boundaries between mind, body, and personal identity: How much of our first-person experience is owed to our ‘pure’ mind, and how much to the physical qualities of the body?
After his long-distance needlecast download, Kovacs notes that there is a typical period of adjustment to a new sleeve — the time it takes for a user to mentally acclimate to their sleeve’s particular physical capabilities and reflexes. He describes his initial psychological shock upon seeing his new sleeve’s reflection, a detached feeling akin to some sort of dissociative identity disorder:
“As I dressed in front of the mirror that night, I suffered the hard-edged conviction that someone else was my sleeve and that I had been reduced to the role of a passenger in the observation car behind the eyes. Psychoentirety rejection, they call it.”
In light of this dynamic relationship between mind and body, Envoy Corps training is specifically designed to mentally harden its soldiers, allowing them to quickly adapt to new sleeves and environments. Since the pure mind is an Envoy’s only constant asset, it must also be his primary weapon:
“Neurachem conditioning, cyborg interfaces, augmentation — all this stuff is physical. Most of it doesn’t even touch the pure mind, and it’s the pure mind that gets freighted. That’s where the corps started.”
Sleeves, like any 25th-century commodity, come in a variety of qualities, running the gamut from your naturally born body up to customized models fitted with high-end enhancements — and for the budget-minded, even cheap synthetic sleeves that can be leased at bargain prices. And since all sensory input — sight, taste, hearing — is channeled through the body, a low-quality sleeve can flatten a user’s perceptual experience:
“I’d worn my fair share of synthetic sleeves; they use them for parole hearings quite often. Cheap, but it’s too much like living alone in a drafty house, and they never seem to get the flavor circuits right. Everything you eat ends up tasting like curried sawdust.”
Can Machines Become Conscious?
Here in the 21st century, machines come with a variety of computational powers, memory capacities, and software options. If the human mind indeed works like a digital computer, then, presumably, there is no reason why digital computers should not be able to (one day) achieve their own forms of consciousness.
Although artificial intelligences (AIs) appear throughout the world of Altered Carbon, often with inscrutable motives, it is never made explicit whether these machines are conscious or merely intelligent.
There is no doubt that machines can behave intelligently; the flight of an autonomous drone, a chess match with Deep Blue, or even an Amazon search query should be enough to convince us of that. But intelligence and consciousness are two different things.
In his paper What is it like to be a bat?, philosopher Thomas Nagel laid out a criterion for what qualifies as consciousness:
“An organism has conscious mental states if and only if there is something that it is like to be that organism — something it is like for the organism to be itself.”
In other words, consciousness has a unique, subjective, ultimately indescribable quality for each individual who possesses it. If that is really the case — that consciousness is irreducibly subjective — how can we ever hope to determine whether a machine is conscious or not?
Alan Turing thought that the entire question of whether machines can ‘think’ was misguided. His famous Turing Test (also known as the Imitation Game) proposed, as an alternative, to ask whether a machine could behave so much like a human that it would no longer be possible to distinguish the two.
The rough setup would consist of an interrogator screened off from one human and one machine, both of whose identities are hidden; after a period of questioning, the interrogator must determine which is which. An AI is considered to have ‘passed’ the Turing test if an interrogator cannot reliably say whether it is human or machine.
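The protocol itself is simple enough to sketch in a few lines of code. Below is a minimal, illustrative Python version of the imitation game; the names (`run_imitation_game`, `blind_guess`) and the canned respondents are hypothetical stand-ins, not any real system — the machine here deliberately imitates the human perfectly, so the interrogator can do no better than chance.

```python
import random

def human_respondent(question: str) -> str:
    # Placeholder "human": in a real test, a person at a terminal.
    return f"Honestly, I'd have to think about '{question}'."

def machine_respondent(question: str) -> str:
    # Placeholder "machine": its replies are deliberately identical to
    # the human's, modeling a machine that imitates perfectly.
    return f"Honestly, I'd have to think about '{question}'."

def run_imitation_game(interrogate, questions):
    """Hide a human and a machine behind anonymous labels, let the
    interrogator question both, and return True if the machine is caught."""
    contestants = {"human": human_respondent, "machine": machine_respondent}
    labels = ["A", "B"]
    random.shuffle(labels)
    assignment = dict(zip(labels, contestants))  # label -> "human"/"machine"
    transcripts = {
        label: [(q, contestants[identity](q)) for q in questions]
        for label, identity in assignment.items()
    }
    guess = interrogate(transcripts)  # interrogator names the suspected machine
    return assignment[guess] == "machine"

# An interrogator who cannot tell the transcripts apart guesses at random:
def blind_guess(transcripts):
    return random.choice(sorted(transcripts))
```

When the machine’s replies are indistinguishable from the human’s, the interrogator unmasks it only about half the time over repeated trials — which is precisely the ‘pass’ condition Turing proposed.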
It may be a long while before we’re able to resleeve or inject our minds into cyberspace. In the meantime — with the help of good science fiction — we can at least ponder the hard problems of consciousness.
John holds a Ph.D. in Biology from the City University of New York. His short fiction has appeared in Kasma SF, Theme of Absence, 600 Second Saga, and Flash Fiction Magazine, among others.