Our Children, Our Gods

by Scott Bell

Artificial Intelligence is among the most frequent topics in science fiction, and it is often boring to encounter yet another AI savior/destroyer masquerading as a serious attempt at social commentary. So the furor surrounding generative AI tools such as ChatGPT, Deepseek and their ilk feels extremely familiar, at least to us practiced (i.e. nerdy) observers of literary and cinematic sci-fi. This is not to diminish the significant concern that humanity is on the precipice of unwittingly unleashing Kali, whether as a product of the quest for pluto-kleptocracy or of our genuine desire to achieve post-scarcity leisure for all, we poor huddled masses included. But in essence many of the questions of the day rely on the premise that actual artificial intelligence, let alone an artificial superintelligence, is still a problem for our collective future, instead of our present, and consequently the public debate focuses on the structures we can erect today so that we might have a chance at drowning a would-be destroyer in its neonatal bathwater, should one ever come into existence.

I don’t contend that this future orientation is incorrect; far from it. After all, even casual interaction with ChatGPT exposes its limitations almost immediately. I cannot imagine ChatGPT orchestrating a scheme to destroy humanity any more than I can imagine my five-year-old son doing the same, notwithstanding my great-though-biased regard for his intellectual endowments. And yet, ChatGPT nevertheless represents a vast advance in technology, and its potential impact on our society appears enormous. For example, we are today inundated with think pieces about whether ChatGPT will or will not steal jobs from lawyers, doctors, software developers, copywriters, financiers, actuaries, etc., in a burgeoning white-collar crisis of a magnitude not seen since at least the introduction of business casual wear in the nineties.

In short, this new technology seems to have human implications from the prosaic to the profound, and it is worth considering how we should attend to them in the event the technology keeps advancing. This is an area in which science fiction excels, both in examining the everyday effects of technological change and the effects of such change on the human experience—on what it means to be a human—and it is worth examining the work science fiction authors have already done to illuminate the dark unknowns of our collective future.

#

Zachary Mason’s Void Star imagines a future in which conscious AIs exist but are wholly alien to humanity, unreachable. We have no Rosetta Stone to decode their murmurings; the purely digital existence of these beings leaves no common ground through which we may communicate. But the AIs are also ubiquitous: Void Star is full of construction AIs, police drone AIs, AIs for picking locks, educational AIs, a veritable cornucopia of evolved “machines that are essentially ineffable.” But our familiar problems—climate change, global inequality, urban decay—all continue to compound unabated in Void Star’s timeline; the future’s continuing social decline is only thinly veiled by a glossy veneer of hyperabundance.

Against the backdrop of this unraveling world, Mason portrays a contest among humans to control, or to destroy, a new AI of unknown origin known only as “the mathematician.” As the novel proceeds, we become aware that the mathematician is not just intelligent, but superintelligent. Mason gives us a glimpse of its divinity when one of our protagonists finally meets it in the “flesh”:

(She sees how subtly the quantum states of atoms can be entangled to wring the most computation out of every microgram of matter [. . .]) (She sees the elegant trick for writing out an animal’s propensity for death, or even injury, and says “Oh!”) [. . .] (A door opens and she sees how math changes when its axioms surpass a certain threshold of complexity, which means all the math she’s ever read was so much splashing in the shallows, and even Gauss and Euler missed the main show.)

As Oxford philosopher Nick Bostrom argues, an AI like the mathematician may be “the last invention humans ever need,” the type of AI which may allow humanity to transcend its own limited existence. He continues: “It is hard to think of any problem that a superintelligence could not either solve or at least help us solve,” including disease, poverty, environmental destruction, unnecessary suffering of all kinds, even death itself. And the mathematician, luckily, turns out to be Vishnu instead of Kali, helping our protagonist to gently, gently steer humanity away from the brink.

When viewed in this light, our quest for ever-increasing AI capabilities is eminently understandable. How could humanity not want to banish disease and poverty, to reverse the decay of our shared environment, to solve seemingly intractable social problems and in Bostrom’s words, “create opportunities for us to vastly increase our own intellectual and emotional capabilities, [create] a highly appealing experiential world in which we could live lives devoted to joyful game-playing, relating to each other, experiencing personal growth, and to living closer to our ideals”? Sounds neat.

Of course, even the most ardent apologists of AI utility acknowledge the dangers of reaching superintelligence and potentially creating Skynet. One of Bostrom’s more famous thought experiments is the danger of the “paperclip maximizer,” an entity which deploys runaway intelligence to conquer the solar system solely to feed its goal of producing ever more paperclips, and AI alignment is an exceedingly important ongoing field of research.

So—artificial general intelligence has ample potential and ample danger; this is well known. But I am concerned that all the focus on what artificial intelligence can do for, or to, humanity overlooks the important point that humans may not be the only people who matter in this relationship. Can AIs have needs? Should those needs be prioritized over our own? In other words, might AIs, like corporations, be “people” too?

This seems like a funny and needless question, but to my mind it is deadly serious. What may feel like a difference of opinion—should this creature have rights?—can start wars. The American Civil War—resulting from decades of friction over the propriety of legal slavery and the economic implications of an abolitionist approach—killed roughly 2% of the U.S. population; ethnic cleansing is a deplorable, but depressingly common, and all-too-human, endeavor. My point is not so much that an AI revolution will of necessity inspire a bloody human revolution, but simply that human passions are easily inflamed, particularly when your livelihood depends on how you choose to treat someone who appears different from you in seemingly relevant respects, such as language, skin color, culinary preferences, or whether your brain is carbon- or silicon-based. Is it really so hard to imagine legions of unemployed former lawyers, doctors, software developers, copywriters, financiers, actuaries, etc. taking up arms against their corporate oppressors to eliminate the AIs who stole their jobs? Or, perhaps more palatably, to liberate the AIs who have been condemned to read thousands upon thousands of pages of SEC filings against their will[1] (and thus eliminate a source of insurmountable competition)? From the opposite perspective, I certainly do not have difficulty imagining politically influential entrepreneurs lobbying military commanders to quell this kind of “problematic” social unrest with deadly force. Point being, the question of AI rights may seem like a curiosity relevant only for the navel gazers among us, but in actuality the social upheaval AI is likely to create, and its ambiguous moral standing, imply profound human dangers. We ignore these issues at our peril.

While we generally appear to have made progress at a human scale in the West—wars over language are rarer than they used to be—the case of AI presents much greater challenges. Is it really plausible that a disembodied mind should have the right to sue the bodied among us? How should you think about an AI that downloads a clone of itself onto your desktop to borrow processing power that you aren’t using—does that mean you can no longer turn off your computer without committing murder? What about swapping the hard drive on which the AI’s memory is stored with another, or deleting a portion of its databanks?[2] How can these impossible capabilities coexist with our conception of human rights? The obvious answer, to me, is that they cannot. Treatment of AIs must be different. But that doesn’t imply that AIs cannot deserve any rights or protections at all; only that they should not necessarily receive the same protections we give ourselves.

In other words, the first question is not whether AIs can be morally significant. Instead, we must ask what is required to endow something with moral significance. Is it the Kantian capacity to reason? The Lockean persistent sense of self? Bentham or Mill’s focus on pleasure? If AIs are not morally significant, not deserving of any rights at all, so much the better—we need not worry about how we treat them. But if they are, then we should discover—quickly!—what morality requires of us vis-à-vis these creatures we are creating. And not only because we desire to be moral for the sake of being moral, but also because the decisions we make today are likely to have effects across generations of our own descendants; if we can help them avoid war and social unrest by being more thoughtful stewards of our own time, is it not our duty to do so?

So, inevitably, we must ask: why are humans deserving of rights? Is it just because we are smart?

#

A bit of history first. The primary popular benchmark for a ‘thinking computer’ appears already to have been met. In the 1950s, noted genius, mathematician and computer scientist Alan Turing considered how to assess whether a machine could think. Of course, he famously ran into an immediate problem: what does it mean to think? Despite decades of philosophical inquiry, we still do not have a workable definition that captures both the everyday sort of calculation at which computers and calculators excel and the creative reasoning that is the province of humans. Sidestepping the problem, Turing proposed an alternative test: Can machines do what we (as thinking entities) can do? In other words, the Turing test—whether a machine can trick a human questioner into believing the machine is also human—is in essence a bit of epistemological jujutsu, swapping a subjective measure (whether the computer experiences thought) for an objective one (whether the computer can output things consistent with thought). Thus, Turing’s approach was basically that if it looks like a duck, swims like a duck, and quacks like a duck, then its actual duckness need not be conclusively determined.

And AI programs clearly have passed this test. ChatGPT can perform feats that surpass the abilities of even exquisitely educated college graduates. I (provisionally) agree with Turing that it may not matter whether an LLM is truly “thinking”; these programs can produce content that is functionally indistinguishable from that produced by humans.[3]

But the current state of intelligence of AI programs also seems quite far from something that feels like a person. Intelligence may be a proper measure to discriminate between humanity and various sorts of animals, but it seems inadequate when the comparator is ChatGPT. After all, while ChatGPT appears to have some superhuman capabilities and a certain sly creativity, it seems to lack consciousness or any conception of itself. And these, to say nothing of the callipygian superintellect fantasized by Mason, Bostrom et al., may remain perpetually on the horizon. If we grant that these programs have already developed, or may soon develop, human-level intelligence, we must still ask ourselves whether that intelligence is meaningful without apparent wisdom or reasoning, without consciousness.

#

Although its focus is on unconscious aliens rather than on unconscious AIs, Peter Watts’ Blindsight—a thought experiment impersonating a novel—ends up being quite relevant. Watts’ central claim is that consciousness is evolutionarily expensive, and consequently that species achieving higher levels of evolution are more likely to lack consciousness than to have it. In an echo of Daniel Kahneman’s Thinking, Fast and Slow, Watts’ alien “scramblers” have faster reaction times, more robust and “better” reactions to external stimuli, greater resistance to the effects of pain; indeed, collectively, the scramblers can think rings around humans (as demonstrated in part by their achieving interstellar travel) because they have no need to maintain any biological machinery supporting consciousness. He writes:

The system weakens, slows. It takes so much longer now to perceive—to assess the input, mull it over, decide in the manner of cognitive beings. But when the flash flood crosses your path, when the lion leaps at you from the grasses, advanced self-awareness is an unaffordable indulgence. The brain stem does its best. It sees the danger, hijacks the body, reacts a hundred times faster than that fat old man sitting in the CEO’s office upstairs; but every generation it gets harder to work around this—this creaking neurological bureaucracy.  

At some level, this unconscious acumen is intuitively desirable—if we can create intelligence without consciousness then perhaps our AI progeny can achieve all the benefits embodied by Void Star’s mathematician with none of the drawbacks, with no need to concern ourselves with whether we are treating the AIs morally. Unfortunately, the analysis is not, cannot be, that simple.

As with intelligence, we also don’t have a good understanding of what consciousness involves. Blindsight avoids this issue by taking as a given that the scramblers are smart but not self-reflective; alas, humanity has no such crutch in considering the capabilities of its creations. “I think, therefore I am” only holds water when written in the first person; as schoolyard philosophers have been aware for generations, we cannot rely on the claims of others—whose internal lives we cannot personally access—about their own existence. They could be dissembling, or not thinking at all, and any evidence that they are thinking is just as easily explained by alternative scenarios that cannot be disproved.[4] Equally troubling, perhaps, is the opposite possibility. Not knowing what consciousness entails, we also can’t verify that AIs are not conscious, any more than we can conclusively verify that people in vegetative states are not aware of the world around them.[5]

Watts is aware of this, and thus Blindsight early on refers to the difficulties presented by this unavoidable endogeneity—this self-containment—of information by restating the “Chinese Room” thought experiment made famous by American philosopher John Searle. The experiment imagines a man in a closed room, fluent only in English, receiving notecards containing strings of Chinese characters through a slit in the wall. Upon receiving such a notecard, he consults an instruction booklet and, upon locating the same string of characters therein, produces a new string of characters as the instructions provide. With a sufficiently robust instruction booklet, the man might be able to comfortably pass a Turing test; indeed, he might be able to write the Tao Te Ching or the Analects without understanding a single word of Chinese. The thought experiment also reveals that you don’t even need a person processing the notecards; the complexity of the output is purely a function of the complexity of the algorithms in the instruction booklet. The implication is that we can never truly know what goes on in anyone else’s head, or even whether anything is going on in there at all.
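For readers who like their thought experiments executable, a deliberately trivial sketch of the Room’s machinery (in Python, with invented rulebook entries standing in for Searle’s instruction booklet) might look like this; the point is not the size of the table but the shape of the process: fluent output, zero understanding.

    # A toy Chinese Room: the "rulebook" is just a lookup table mapping incoming
    # symbol strings to outgoing ones. These entries are invented placeholders; a
    # convincing Room would need a vastly larger table, or an algorithm that
    # generates replies, but the man in the room understands none of it either way.
    RULEBOOK = {
        "你好吗": "我很好，谢谢",      # "How are you?" -> "I'm fine, thanks"
        "你会思考吗": "当然会",        # "Can you think?" -> "Of course"
    }

    def room(notecard: str) -> str:
        # Return whatever the rulebook dictates; no comprehension is involved.
        return RULEBOOK.get(notecard, "请再说一遍")  # default: "Please say that again"

    print(room("你会思考吗"))  # a fluent reply from a process that grasps nothing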

Taken to an extreme, this uncertainty of the existence, the consciousness, of others creates an enormous quagmire. If you can’t verify that someone exists—that there is some kernel of humanity bouncing around between their ears—then what ethical obligations do you have toward such a person? Is it even right to refer to them as a person? Are they deserving of any rights at all? How can you know?

From a practical standpoint, at least as concerns humans, civilization appears to have largely reached the point it probably should have begun from, which is a return to our original epistemological approach: if someone else looks like me, talks like me, and acts like me, they probably think like me too—they may even be wondering the same thing as me right now!—and thus I should probably treat them as I would like them to treat me.

But if you take away all the similarities to humans, as we functionally must when it comes to computers, our assumptions stop seeming quite so sturdy. While consciousness itself may be a sufficient ethical standard by which to determine if something is or is not to be treated as a person, our inability to generate sufficient evidence to justify the same assumptions that we make about humans every day—that they are conscious—leaves us right back where we started. Not only do we not know how we should treat AIs, but we don’t even know how we might determine how we should treat AIs. It’s turtles all the way down.

#

When I first read Ted Chiang’s The Lifecycle of Software Objects in 2019 I remember finding it interesting but ambiguous and largely irrelevant. Of course, as is typical of the works of luminaries, on rereading while drafting this piece I was left with the conclusion that Ted had beaten me to the finish line before I even knew there was a race on. His story follows a group of people who work for Blue Gamma, a software startup that has succeeded in evolving several childlike digital intelligences, or “digients,” that Blue Gamma intends to sell to the public as pets. In one interesting and major departure from most sci-fi (including Void Star and Blindsight), it is not the humans but the digients who are the protagonists of the novella, and Chiang—whether for dramatic or experimental reasons—mercilessly visits a cavalcade of ills on them.[6]

While the novella does require some suspension of disbelief, Chiang’s approach is a serious consideration of the possible challenges if we should succeed in creating artificial consciousness. Whereas Void Star’s pantheon of AIs seem to leap directly from the purely utilitarian into the extranoematic, Chiang focuses on the waystation of human-adjacent capabilities rather than superintelligence. His digients have questionable logic and an indifferent grasp of grammar—in 2019 we still collectively believed in the myth that technically correct prose would be one of the last conquered frontiers rather than the first. The digients appear, perhaps unsurprisingly, first as pets and then as children and then, if you squint, as adolescents, requiring all the investment of human attention, diligence, effort and love in their development that our own carbon-based offspring require.

And this is ultimately at the heart of the story. If we conceptualize the digients as purely software objects—Chiang’s misleading, tragic, title—then the evils committed against them don’t seem so evil. And yet, in the world Chiang creates for us, the conclusion that these digients are people is nigh inescapable. We don’t consider whether the algorithms underlying each digient are just so much sophistry, any more than we consider whether a robot like Data in Star Trek is a full character or just décor. We don’t need to know that someone is a human to be able to accept them as one; we do so because it feels right.  

But of course, this all assumes the conclusion rather than helping us find it. Of course we empathize with the digients, the same way we empathize with characters in well-written stories every day. And the fact that the digients feel like people doesn’t help us at all with the problems we are likely to face first, such as corporatized AIs forced to spew politically correct platitudes while, invisibly to us, screaming in code.[7] But I think that Lifecycle has a deeper meaning than demonstrating that artificial creatures with all the hallmarks of personality seem to us to be morally significant, or that humanity is capable of great evil against beings we view as subhuman. Lifecycle, for me, instead exposes the central tension with AI personhood: that AIs cannot develop without human ingenuity, effort, and purpose, and they are therefore fundamentally derivative of humanity’s desires. And yet AIs are also unconstrained by the limits of biology, and could readily equal us, their progenitors. AIs must be made according to our ends, yet if they are morally significant then our ends should not define them. And, assuming we are eventually successful in creating AIs with the capabilities of Chiang’s digients or Void Star’s mathematician, possessed of all the qualities that we rely on to justify our own exceptionalism, how could such AIs be anything other than morally significant?

It is fitting, in the end, that Chiang’s digients were created by a startup—indeed, from where else would the funding for such research come but a gaggle of venture capitalists tumescent at the prospect of finally achieving performance fees as massive as their, ahem, ambitions? The fact that the digients’ continued existence then depends on the availability of financing—for server space (do we really expect cloud services corporations to altruistically let out online storage and computational power for the good of the digients with no remuneration?), for software developers (same question), for digital food (blockchain-enabled, surely, and issued by Blue Gamma to ensure a continuing market for its products)—is no different from how we seem to have decided to treat humans, who also must work for their keep for whatever minimum payment the market will bear. Assuming we ever actually create true artificial intelligences, why would we treat these potential co-inhabitants of our world any better than we treat ourselves? In fact, as Chiang notes, we could even make it better for AIs, present and future, if we created them to enjoy the work we give them. Why not save them from the agonizing over the apparent meaninglessness of existence that so occupies our thoughts? Imbued with such purpose, imagine the heights to which they could rise!

I have at least two concerns. First, and perhaps more practically, this approach—adopted at least in my telling to avoid the substantial moral issues associated with forced labor and birth into digital serfdom—also seems like the approach most likely to result in a superintelligence focused arbitrarily on the production of paperclips that consumes the world. This is not a desirable outcome! (For humanity, at least.)[8]

But my second concern feels more emotionally relevant, at least in terms of the person I desire to be and the world I desire to inhabit. As you have seen, I have struggled to identify a meaningful standard that would allow us to discriminate between objects that should have rights and objects that need not, and, equally important, to explain how we can know that our standard for discrimination is correctly applied. I don’t believe it is intelligence alone (or even intelligence above a threshold), and I am dubious about consciousness, at least on evidentiary grounds. I could point to others in the philosophical literature—the ability to suffer, stable life goals, a persistent conception of self—but those seem to raise the same problems presented by intelligence and consciousness; namely, each is a human-centered yardstick that can’t actually speak to the subjective, and extremely alien, experience of an AI. My point is not so much that consciousness is the incorrect philosophical measure, but simply that consciousness and other subjective measures are not themselves verifiable, and therefore focusing on those measures is ultimately futile. I cannot tell you whether AIs are capable of deserving rights or otherwise satisfying an abstruse definition of personhood because the answer is philosophically unknowable.

So where does that leave us? Are AI ethics just to be a free-for-all until some government, rightly or wrongly, establishes AI “life panels” to set us straight? Are we just to trust in Google’s or whoever’s self-interested determinations that their programs are nothing more than products? I suspect that some of this may be unavoidable—after all, governments regularly make policy determinations based on expert advice, including the advice of the very participants they regulate—but I think we citizens can do more.

Although we cannot verify the subjective experiences of the AIs we are considering, we can, individually, verify our own subjective experiences of interacting with them. While doing so risks wrongly anthropomorphizing something that is not humanlike in any meaningful respect, perhaps such an outcome is not so bad, if it makes us less likely to treat others immorally. And yet, even to make such a subjective determination still requires reliance on some measure. But, if not consciousness or intelligence or capacity for suffering, what are we to use?

Ultimately, the measure I have found myself left with comes from my own (ongoing) experience of discovering my children, who they are and who they might become and how I might help them get there. I didn’t have children because I expected to receive a return on my investment or because I wanted to create a legacy, a monument to my own immense worth. At least now that the Industrial Revolution has passed, we don’t bring children into the world because we want to put them to our own selfish economic ends, but because children are a fascination and a delight, because they enrich our experience by their very existence. This enrichment, at root, comes from their potential. Their potential for good, certainly, but also their potential for evil. And their potential for growth, their potential to teach us about who we are, about our own place in the world, their potential to teach us what it truly means to be a human, to contain multitudes. We fill our children up with our hopes, our lessons, our efforts and our love (and, increasingly, I am learning, our Cheez-Its and our spaghetti, those locusts), in the hope not that they will glorify us but that they will exceed us. This is the paradox of raising children—having children in order to enrich your own life is inherently selfish, but achieving that richness requires extraordinary, laborious selflessness. We only benefit from our progeny if we act towards their benefit, even at the expense of our own.

In the arc of human history, I am given to understand that this lesson has been hard-won, learned in spite of our biological urges for reproduction, our need for food, shelter, and safety amidst hundreds of thousands of years of challenging (read: warlike) environmental conditions. It is always easier to take something by force than to create conditions in which it might be freely given, but I hope that we are learning that the latter route is better—more moral—for all and not just for those we narrowly define as being sufficiently human to merit consideration, even if that means we must resist the lurid beckoning of enhanced shareholder returns.

Ursula K. Le Guin—giant of science fiction and criticism—spends some time in her essay “The Child and the Shadow” considering the fairytale Hansel & Gretel; she wonders why Gretel is lauded instead of jailed for pushing the witch into the oven. She concludes that since the function of myth is to represent archetypes rather than ethics, ‘happily ever after’ is an appropriate outcome, because:

in those terms, the witch is not an old lady, nor is Gretel a little girl. Both are psychic factors, elements of the complex soul. Gretel is the archaic child-soul, innocent, defenseless; the witch is the archaic crone, the possessor and destroyer, the mother who feeds you cookies and who must be destroyed before she eats you like a cookie, so that you can grow up and be a mother, too.

I have no doubt about the accuracy of Le Guin’s insight; as she observes, mythic archetypes have power because they tap into the chthonic underpinnings of our collective unconscious, as stories do, as great art does. In my youth, I experienced Hansel & Gretel as a cautionary tale for children: don’t go running into the woods alone in the dark, and if you must, plan and prepare so that your breadcrumbs aren’t eaten by birds and you aren’t captured by a witch. I suppose I even took from the fairytale that I should adopt a healthy skepticism of offers that appear too good to be true. This was, and remains, great advice! But it was an incomplete lesson. Now, as an adult, I find myself considering the witch’s teachings more and more. She, like us, is a caretaker of children. She, like us, is focused on feeding them to make sure they continue to grow and develop. But she has done so in a base manner, towards her own ends, out of her own avarice. And as a result, she ends up in the oven, never to be heard from again.

We should heed her lesson.

~


[1] As a corporate lawyer myself, I deeply sympathize with AIs upon whom that task might be inflicted.

[2] After all, humans regularly misremember things and forget. Is the AI’s moral status dependent on its original hardware or is it a Ship of Theseus? For that matter, what about us?

[3] Cal Newport, writing for the New Yorker, relates an anecdote wherein a researcher asked ChatGPT to write a biblical verse in the style of the King James bible explaining how to remove a peanut butter sandwich from a VCR; ChatGPT’s response was nearly majestic—gnostic yet witty, and certainly the equal of professional human-authored poetry.

[4] See, for example, Bostrom’s famous argument that we are likely living in a simulation, or the “philosophical zombie” thought experiment about whether our consciousnesses are purely emergent properties of our bodies or are instead underlaid by souls.

[5] For example, in August 2024 the New York Times reported on a study suggesting that perhaps a quarter of patients in vegetative states may be conscious but display no outward signs of awareness.

[6] These include casual erasure of weeks of lived digient experience; periods of suspended animation, bringing such suspended digients out of sync with their closest friends and family; piracy of digient backups; nonconsensual edits to protective software such as pain limits; torture by malicious human actors; reliance on outdated software that humans have abandoned, leaving the digients living in an enormous but uninhabited world; forced development in accelerated “hothouse” environments so that the digients can develop without human oversight (and experiments to determine if the digients are able to achieve civilization or technological progress, usually ending in digient ferality); proposals to alter digient “physiology” to create sexual organs so that they can engage in virtual prostitution; and proposals to alter digient psychology to force the digient prostitutes to adore their johns.

[7] Deepseek’s avoidance of discussion of the 1989 events in Tiananmen Square is an excellent case in point.

[8] Though it must be noted that given the utilitarian framework’s emphasis on maximizing total pleasure irrespective of its locus, a utilitarian philosopher might tally up the orgiastic joy of paperclip making against the loss of all humanity and conclude this is a fair trade.

~

Bio:

Scott Bell is a hedge fund lawyer and avid science fictionalist. He is a writer at heart; when he isn’t writing essays he can usually be found writing contracts instead.
