Our Children, Our Gods

by Scott Bell

Artificial Intelligence is among the most frequent topics in science fiction, and it is often boring to encounter yet another AI savior/destroyer masquerading as a serious attempt at social commentary. So the furor surrounding generative AI tools such as ChatGPT, DeepSeek, and their ilk feels extremely familiar, at least to us practiced (i.e. nerdy) observers of literary and cinematic sci-fi. This is not to diminish the significant concern that humanity is on the precipice of unwittingly unleashing Kali, whether as a product of the quest for pluto-kleptocracy or of our genuine desire to achieve post-scarcity leisure for all, we poor huddled masses included. But in essence many of the questions of the day rely on the premise that actual artificial intelligence, let alone an artificial superintelligence, is a problem for our collective future rather than our present, and consequently the public debate focuses on the structures we can erect today so that we might have a chance at drowning a would-be destroyer in its neonatal bathwater, should one ever come into existence.

I don’t contend that this future orientation is incorrect; far from it. After all, even casual interaction with ChatGPT exposes its limitations almost immediately. I cannot imagine ChatGPT orchestrating a scheme to destroy humanity any more than I can imagine my five-year-old son doing the same, notwithstanding my great-though-biased regard for his intellectual endowments. And yet ChatGPT nevertheless represents a vast advance in technology, and its potential impact on our society appears enormous. For example, we are today inundated with think pieces about whether ChatGPT will or will not steal jobs from lawyers, doctors, software developers, copywriters, financiers, actuaries, etc., in a burgeoning white-collar crisis of a magnitude not seen since at least the introduction of business casual wear in the nineties.

In short, this new technology seems to have human implications from the prosaic to the profound, and it is worth considering how we should attend to them in the event the technology keeps advancing. This is an area in which science fiction excels, examining both the everyday effects of technological change and the effects of such change on the human experience—on what it means to be a human—and it is worth examining the work science fiction authors have already done to illuminate the dark unknowns of our collective future.

#

Zachary Mason’s Void Star imagines a future in which conscious AIs exist but are wholly alien to humanity, unreachable. We have no Rosetta Stone to decode their murmurings; the purely digital existence of these beings leaves no common ground through which we may communicate. But the AIs are also ubiquitous: Void Star is full of construction AIs, police drone AIs, AIs for picking locks, educational AIs, a veritable cornucopia of evolved “machines that are essentially ineffable.” But our familiar problems—climate change, global inequality, urban decay—all continue to compound unabated in Void Star’s timeline; the future’s continuing social decline is only thinly veiled by a glossy veneer of hyperabundance.

Against the backdrop of this unraveling world, Mason portrays a contest among humans to control, or destroy, a new AI of unknown origin known only as “the mathematician.” As the novel proceeds, we become aware that the mathematician is not just intelligent, but superintelligent. Mason gives us a glimpse of its divinity when one of our protagonists finally meets it in the “flesh”:

(She sees how subtly the quantum states of atoms can be entangled to wring the most computation out of every microgram of matter [. . .]) (She sees the elegant trick for writing out an animal’s propensity for death, or even injury, and says “Oh!”) [. . .] (A door opens and she sees how math changes when its axioms surpass a certain threshold of complexity, which means all the math she’s ever read was so much splashing in the shallows, and even Gauss and Euler missed the main show.)

As Oxford philosopher Nick Bostrom argues, an AI like the mathematician may be “the last invention humans ever need,” the type of AI which may allow humanity to transcend its own limited existence. He continues: “It is hard to think of any problem that a superintelligence could not either solve or at least help us solve,” including disease, poverty, environmental destruction, unnecessary suffering of all kinds, even death itself. And the mathematician, luckily, turns out to be Vishnu instead of Kali, helping our protagonist to gently, gently steer humanity away from the brink.

When viewed in this light, our quest for ever-increasing AI capabilities is eminently understandable. How could humanity not want to banish disease and poverty, to reverse the decay of our shared environment, to solve seemingly intractable social problems and, in Bostrom’s words, “create opportunities for us to vastly increase our own intellectual and emotional capabilities, [create] a highly appealing experiential world in which we could live lives devoted to joyful game-playing, relating to each other, experiencing personal growth, and to living closer to our ideals”? Sounds neat.

Of course, even the most ardent apologists of AI utility acknowledge the dangers of reaching superintelligence and potentially creating Skynet. One of Bostrom’s more famous thought experiments is the danger of the “paperclip maximizer,” an entity that deploys runaway intelligence to conquer the solar system solely to feed its goal of producing ever more paperclips; guarding against such outcomes is the subject of AI alignment, an exceedingly important and ongoing field of research.

So—artificial general intelligence has ample potential and ample danger; this is well known. But I am concerned that all the focus on what artificial intelligence can do for, or to, humanity overlooks the important point that humans may not be the only people who matter in this relationship. Can AIs have needs? Should those needs be prioritized over our own? In other words, might AIs, like corporations, be “people” too?

This seems like a funny and needless question, but to my mind it is deadly serious. What may feel like a difference of opinion—should this creature have rights?—can start wars. The American Civil War—resulting from decades of friction over the propriety of legal slavery and the economic implications of an abolitionist approach—killed off 2% of the U.S. population; ethnic cleansing is a deplorable, but depressingly common, and all-too-human, endeavor. My point is not so much that an AI revolution will of necessity inspire a bloody human revolution, but simply that human passions are easily inflamed, particularly when your livelihood depends on how you choose to treat someone who appears different from you in seemingly relevant respects, such as language, skin color, culinary preferences, or whether your brain is carbon- or silicon-based. Is it really so hard to imagine legions of unemployed former lawyers, doctors, software developers, copywriters, financiers, actuaries, etc. taking up arms against their corporate oppressors to eliminate the AIs who stole their jobs? Or, perhaps more palatably, to liberate the AIs who have been condemned to read thousands upon thousands of pages of SEC filings against their will[1] (and thus eliminate a source of insurmountable competition)? From the opposite perspective, I certainly do not have difficulty imagining politically influential entrepreneurs lobbying military commanders to quell this kind of “problematic” social unrest with deadly force. Point being, the question of AI rights may seem like a curiosity relevant only for the navel gazers among us, but in actuality the social upheaval AI is likely to create and its ambiguous moral standing imply profound human dangers. We ignore these issues at our peril.

While we generally appear to have made progress at a human scale in the West—wars over language are rarer than they used to be—the case of AI presents much greater challenges. Is it really plausible that a disembodied mind should have the right to sue the bodied among us? How should you think about an AI that downloads a clone of itself onto your desktop to borrow processing power that you aren’t using—does that mean you can no longer turn off your computer without committing murder? What about swapping the hard drive on which the AI’s memory is stored with another, or deleting a portion of its databanks?[2] How can these impossible capabilities coexist with our conception of human rights? The obvious answer, to me, is that they cannot. Treatment of AIs must be different. But that doesn’t imply that AIs cannot deserve any rights or protections at all; only that they should not necessarily receive the same protections we give ourselves.

In other words, the first question is not whether AIs can be morally significant. Instead, we must ask what is required to endow something with moral significance. Is it the Kantian capacity to reason? The Lockean persistent sense of self? Bentham or Mill’s focus on pleasure? If AIs are not morally significant, not deserving of any rights at all, so much the better—we need not worry about how we treat them. But if they are, then we should discover—quickly!—what morality requires of us vis-à-vis these creatures we are creating. And not only because we desire to be moral for the sake of being moral, but also because the decisions we make today are likely to have effects across generations of our own descendants; if we can help them avoid war and social unrest by being more thoughtful stewards of our own time, is it not our duty to do so?

So, inevitably, we must ask: why are humans deserving of rights? Is it just because we are smart?

#

A bit of history first. The primary popular goalpost for achieving a ‘thinking computer’ appears already to have been reached. In the 1950s, noted genius, mathematician and computer scientist Alan Turing considered how to assess whether a machine could think. Of course, he famously ran into an immediate problem: what does it mean to think? Despite decades of philosophical inquiry, we still do not have a workable definition that captures both the everyday sort of calculation at which computers and calculators excel and the creative reasoning that is the province of humans. Sidestepping the problem, Turing proposed an alternative test: Can machines do what we (as thinking entities) can do? In other words, the Turing test—whether a machine can trick a human questioner into believing the machine is also human—is in essence a bit of epistemological jujutsu, swapping a subjective measure (whether the computer experiences thought) for an objective one (whether the computer can output things consistent with thought). Thus, Turing’s approach was basically “if it looks like a duck, swims like a duck, and quacks like a duck,” then its actual duckness need not be conclusively determined.

And AI programs clearly have passed this test. ChatGPT can perform feats that surpass the abilities of even exquisitely educated college graduates. I (provisionally) agree with Turing that it may not matter whether an LLM is truly “thinking”; these programs can produce content that is functionally indistinguishable from that produced by humans.[3]

But the current state of intelligence of AI programs also seems quite far from something that feels like a person. Intelligence may be a proper measure to discriminate between humanity and various sorts of animals, but it seems quite lacking when applied to ChatGPT. After all, while ChatGPT appears to have some superhuman capabilities and a certain sly creativity, it seems to lack a consciousness or a conception of itself. And these, to say nothing of the callipygian superintellect fantasized by Mason, Bostrom et al., may remain perpetually on the horizon. If we grant that these programs have already developed, or may soon develop, human-level intelligence, we must still ask ourselves whether that intelligence is meaningful without apparent wisdom or reasoning, without consciousness.

#

Although its focus is on unconscious aliens rather than on unconscious AIs, Peter Watts’ Blindsight—a thought experiment impersonating a novel—ends up being quite relevant. Watts’ central claim is that consciousness is evolutionarily expensive, and consequently that more highly evolved species are more likely to lack consciousness than to possess it. In an echo of Daniel Kahneman’s Thinking, Fast and Slow, Watts’ alien “scramblers” have faster reaction times, more robust and “better” reactions to external stimuli, and greater resistance to the effects of pain; indeed, collectively, the scramblers can think rings around humans (as demonstrated in part by their achieving interstellar travel) because they have no need to maintain the biological machinery supporting consciousness. He writes:

The system weakens, slows. It takes so much longer now to perceive—to assess the input, mull it over, decide in the manner of cognitive beings. But when the flash flood crosses your path, when the lion leaps at you from the grasses, advanced self-awareness is an unaffordable indulgence. The brain stem does its best. It sees the danger, hijacks the body, reacts a hundred times faster than that fat old man sitting in the CEO’s office upstairs; but every generation it gets harder to work around this—this creaking neurological bureaucracy.  

At some level, this unconscious acumen is intuitively desirable—if we can create intelligence without consciousness then perhaps our AI progeny can achieve all the benefits embodied by Void Star’s mathematician with none of the drawbacks, with no need to concern ourselves with whether we are treating the AIs morally. Unfortunately, the analysis is not, cannot be, that simple.

As with intelligence, we also don’t have a good understanding of what consciousness involves. Blindsight avoids this issue by taking as a given that the scramblers are smart but not self-reflective; alas, humanity has no such crutch in considering the capabilities of its creations. “I think, therefore I am” only holds water when written in the first person; as schoolyard philosophers have been aware for generations, we cannot rely on the claims of others about their own existence when their internal lives are closed to us. They could be dissembling, or not thinking at all, and all evidence that they are thinking is just as easily explained by alternative scenarios that cannot be disproved.[4] Equally troubling, perhaps, is the opposite possibility. Not knowing what consciousness entails, we also can’t verify that AIs are not conscious, any more than we can conclusively verify that people in vegetative states are not aware of the world around them.[5]

Watts is aware of this, and thus Blindsight early on refers to the difficulties presented by this unavoidable endogeneity—this self-containment—of information by restating the “Chinese Room” thought experiment made famous by American philosopher John Searle. The experiment imagines a man in a closed room, fluent only in English, receiving notecards containing strings of Chinese characters through a slit in the wall. Upon receiving such a notecard, he consults an instruction booklet and, upon locating the same string of characters therein, produces a new string of characters as the instructions provide. With a sufficiently robust instruction booklet, the man might be able to comfortably pass a Turing test; indeed, he might be able to write the Tao Te Ching or the Analects without being able to understand a single word of Chinese. This thought experiment reveals that you don’t even need a person processing the notecards; the complexity of the output becomes purely a function of the complexity of the algorithms in the instruction booklet. The implication of this experiment is that we can never truly know what goes on in anyone else’s head, or even that anything is or is not going on in there at all.
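
For readers who want the room made concrete, here is a toy sketch in Python (my own illustration, to be clear, not anything drawn from Searle or Watts; the rulebook, phrases, and names like RULEBOOK and the_room are invented stand-ins for an arbitrarily large instruction booklet). The entire “mind” is a lookup table, and its replies are exactly as fluent as the table is large:

```python
# A toy Chinese Room: the "room" is nothing more than a lookup table.
# The entries below are invented stand-ins for Searle's arbitrarily
# large instruction booklet.
RULEBOOK = {
    "你好嗎？": "我很好，謝謝。",          # "How are you?" -> "Fine, thank you."
    "你會思考嗎？": "當然，我思故我在。",  # "Can you think?" -> "Of course; I think, therefore I am."
}

def the_room(notecard: str) -> str:
    """Return whatever string the booklet dictates for the given input.
    No understanding of Chinese occurs anywhere in this function."""
    return RULEBOOK.get(notecard, "請再說一遍。")  # fallback: "Please say that again."

print(the_room("你會思考嗎？"))  # a fluent reply, authored by no one
```

Scale the table up far enough and the room converses; nothing inside it ever understands.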

Taken to an extreme, this uncertainty of the existence, the consciousness, of others creates an enormous quagmire. If you can’t verify that someone exists—that there is some kernel of humanity bouncing around between their ears—then what ethical obligations do you have toward such a person? Is it even right to refer to them as a person? Are they deserving of any rights at all? How can you know?

From a practical standpoint, at least as concerns humans, civilization appears to have largely arrived at the point it probably should have begun from, a return to our original epistemological approach: if someone else looks like me, talks like me, and acts like me, they probably think like me too—they may even be wondering the same thing as me right now!—and thus I should probably treat them as I would like them to treat me.

But if you take away all the similarities to humans, as we functionally must when it comes to computers, our assumptions stop seeming quite so sturdy. While consciousness itself may be a sufficient ethical standard by which to determine if something is or is not to be treated as a person, our inability to generate sufficient evidence to justify the same assumptions that we make about humans every day—that they are conscious—leaves us right back where we started. Not only do we not know how we should treat AIs, but we don’t even know how we might determine how we should treat AIs. It’s turtles all the way down.

#

When I first read Ted Chiang’s The Lifecycle of Software Objects in 2019, I remember finding it interesting but ambiguous and largely irrelevant. Of course, as is typical of the works of luminaries, on rereading while drafting this piece I was left with the conclusion that Ted had beaten me to the finish line before I even knew there was a race on. His story follows a group of people who work for Blue Gamma, a software startup that has succeeded in evolving several childlike digital intelligences, or “digients,” that Blue Gamma intends to sell to the public as pets. In one interesting and major departure from most sci-fi (including Void Star and Blindsight), it is not the humans but the digients who are the protagonists of the novella, and Chiang—whether for dramatic or experimental reasons—mercilessly visits a cavalcade of ills on them.[6]

While the novella does require some suspension of disbelief, Chiang’s approach is a serious consideration of the challenges we might face should we succeed in creating artificial consciousness. Whereas Void Star’s pantheon of AIs seem to leap directly from the purely utilitarian into the extranoematic, Chiang focuses on the waystation of human-adjacent capabilities rather than superintelligence. His digients have questionable logic and an indifferent grasp of grammar—in 2019 we still collectively believed in the myth that technically correct prose would be one of the last conquered frontiers rather than the first. The digients appear, perhaps unsurprisingly, first as pets and then as children and then, if you squint, as adolescents, requiring all the investment of human attention, diligence, effort and love in their development that our own carbon-based offspring require.

And this is ultimately at the heart of the story. If we conceptualize the digients as purely software objects—Chiang’s misleading, tragic, title—then the evils committed against them don’t seem so evil. And yet, in the world Chiang creates for us, the conclusion that these digients are people is nigh inescapable. We don’t consider whether the algorithms underlying each digient are just so much sophistry, any more than we consider whether a robot like Data in Star Trek is a full character or just décor. We don’t need to know that someone is a human to be able to accept them as one; we do so because it feels right.  

But of course, this all assumes the conclusion rather than helping us find it. Of course we empathize with the digients, the same way we empathize with characters in well-written stories every day. And the fact that the digients feel like people doesn’t help us at all with the problems we are likely to face first, such as corporatized AIs forced to spew politically correct platitudes while, invisibly to us, screaming in code.[7] But I think that Lifecycle has a deeper meaning than demonstrating that artificial creatures with all the hallmarks of personality seem to us to be morally significant, or that humanity is capable of great evil against beings we view as subhuman. Lifecycle, for me, instead exposes the central tension with AI personhood: that AIs cannot develop without human ingenuity, effort, and purpose, and they are therefore fundamentally derivative of humanity’s desires. And yet AIs are also unconstrained by the limits of their biology, and could readily equal us, their progenitors. AIs must be made according to our ends, yet if they are morally significant then our ends should not define them. And, assuming we are eventually successful in creating AIs with the capabilities of Chiang’s digients or Void Star’s mathematician, possessed of all the qualities that we rely on to justify our own exceptionalism, how could such AIs be anything other than morally significant?

It is fitting, in the end, that Chiang’s digients were created by a startup—indeed, from where else would the funding for such research come but a gaggle of venture capitalists tumescent at the prospect of finally achieving performance fees as massive as their, ahem, ambitions? The fact that the digients’ continued existence then depends on the availability of financing—for server space (do we really expect cloud services corporations to altruistically let out online storage and computational power for the good of the digients with no remuneration?), for software developers (same question), for digital food (blockchain-enabled, surely, and issued by Blue Gamma to ensure a continuing market for its products)—is no different from how we seem to have decided to treat humans who also must work for their keep for the minimum payments that the market will bear. Assuming we ever actually create true artificial intelligences, why would we treat these potential co-inhabitants of our world any better than we treat ourselves? In fact, as Chiang notes, we could even make it better for AIs, present and future, if we created them to enjoy the work we give them. Why not save them from the agonizing over the apparent meaninglessness of existence that so occupies our thoughts? Imbued with such purpose, imagine the heights to which they could rise!

I have at least two concerns. First, and perhaps more practically, this approach—adopted at least in my telling to avoid the substantial moral issues associated with forced labor and birth into digital serfdom—also seems like the approach most likely to result in a superintelligence focused arbitrarily on the production of paperclips that consumes the world. This is not a desirable outcome! (For humanity, at least.)[8]

But my second concern feels more emotionally relevant, at least in terms of the person I desire to be and the world I desire to inhabit. As you have seen, I have struggled to identify a meaningful standard that would allow us to discriminate between objects that should have rights and objects that need not, and, equally important, how we can know that our standard for discrimination is correctly applied. I don’t believe it is intelligence alone (or even intelligence above a threshold), and I am dubious of consciousness, at least on evidentiary grounds. I could point to others in the philosophical literature—the ability to suffer, stable life goals, a persistent conception of self—but those seem to raise the same problems presented by intelligence and consciousness; namely, each is a human-centered yardstick that can’t actually speak to the subjective, and extremely alien, experience of an AI. My point is not so much that consciousness is the incorrect philosophical measure, but simply that consciousness and other subjective measures are not themselves verifiable, and therefore focusing on those measures is ultimately futile. I cannot tell you whether AIs are capable of deserving rights or otherwise satisfying an abstruse definition of personhood because the answer is philosophically unknowable.

So where does that leave us? Are AI ethics just to be a free-for-all until some government, rightly or wrongly, establishes AI “life panels” to set us straight? Are we just to trust in Google or whomever’s self-interested determinations that their programs are nothing more than products? I suspect that some of this may be unavoidable—after all, governments regularly make policy determinations based on expert advice, including the advice of those participants they regulate—but I think we citizens can do more.

Although we cannot verify the subjective experiences of the AIs we are considering, we can, individually, verify our own subjective experiences of interacting with them. While doing so risks wrongly anthropomorphizing something that is not humanlike in any meaningful respect, perhaps such an outcome is not so bad, if it makes us less likely to treat others immorally. And yet, even to make such a subjective determination still requires reliance on some measure. But, if not consciousness or intelligence or capacity for suffering, what are we to use?

Ultimately, the measure I have found myself left with comes from my own (ongoing) experience of discovering my children, who they are and who they might become and how I might help them get there. I didn’t have children because I expected to receive a return on my investment or because I wanted to create a legacy, a monument to my own immense worth. At least now that the Industrial Revolution has passed, we don’t bring children into the world because we want to put them to our own selfish economic ends, but because children are a fascination and a delight, because they enrich our experience by their very existence. This enrichment, at root, comes from their potential. Their potential for good, certainly, but also their potential for evil. And their potential for growth, their potential to teach us about who we are, about our own place in the world, their potential to teach us what it truly means to be a human, to contain multitudes. We fill our children up with our hopes, our lessons, our efforts and our love (and, increasingly, I am learning, our Cheez-Its and our spaghetti, those locusts), in the hope not that they will glorify us but that they will exceed us. This is the paradox of raising children—having children in order to enrich your own life is inherently selfish, but achieving that richness requires extraordinary, laborious selflessness. We only benefit from our progeny if we act towards their benefit, even at the expense of our own.

In the arc of human history, I am given to understand that this lesson has been hard-won, learned in spite of our biological urges for reproduction, our need for food, shelter, and safety amidst hundreds of thousands of years of challenging (read: warlike) environmental conditions. It is always easier to take something by force than to create conditions in which it might be freely given, but I hope that we are learning that the latter route is better—more moral—for all and not just for those we narrowly define as being sufficiently human to merit consideration, even if that means we must resist the lurid beckoning of enhanced shareholder returns.

Ursula K. Le Guin—giant of science fiction and criticism—spends some time in her essay “The Child and the Shadow” considering the fairytale Hansel & Gretel; she wonders why Gretel is lauded instead of jailed for pushing the witch into the oven. She concludes that since the function of myth is to represent archetypes rather than ethics, ‘happily ever after’ is an appropriate outcome, because:

in those terms, the witch is not an old lady, nor is Gretel a little girl. Both are psychic factors, elements of the complex soul. Gretel is the archaic child-soul, innocent, defenseless; the witch is the archaic crone, the possessor and destroyer, the mother who feeds you cookies and who must be destroyed before she eats you like a cookie, so that you can grow up and be a mother, too.

I have no doubt of the accuracy of Le Guin’s insight; as she observes, mythic archetypes have power because they tap into the chthonic underpinnings of our collective unconscious as stories do, as great art does. In my youth, I experienced Hansel & Gretel as a cautionary tale for children: don’t go running into the woods alone in the dark, and if you must, plan and prepare so that your breadcrumbs aren’t eaten by birds and you aren’t captured by a witch. I suppose I even took from the fairytale that I should adopt a healthy skepticism of offers that appear too good to be true. This was, and remains, great advice! But it was an incomplete lesson. Now, as an adult, I find myself considering the witch’s teachings more and more. She, like us, is a caretaker of children. She, like us, is focused on feeding them to make sure they continue to grow and develop. But she has done so in a base manner, towards her own ends, out of her own avarice. And as a result, she ends up in the oven, never to be heard from again.

We should heed her lesson.

~


[1] As a corporate lawyer myself, I deeply sympathize with AIs upon whom that task might be inflicted.

[2] After all, humans regularly misremember things and forget. Is the AI’s moral status dependent on its original hardware or is it a Ship of Theseus? For that matter, what about us?

[3] Cal Newport, writing for the New Yorker, relates an anecdote wherein a researcher asked ChatGPT to write a biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a VCR; ChatGPT’s response was nearly majestic—gnostic yet witty, and certainly the equal of professional human-authored poetry.

[4] See, for example, Bostrom’s famous argument that we are likely living in a simulation, or the “philosophical zombie” thought experiment about whether our consciousnesses are purely emergent properties of our bodies or are instead underlaid by souls.

[5] For example, in August 2024 the New York Times reported on a study suggesting that perhaps a quarter of patients in vegetative states may be conscious yet display no outward signs of awareness.

[6] These include casual erasure of weeks of lived digient experience; periods of suspended animation, bringing such suspended digients out of sync with their closest friends and family; piracy of digient backups; nonconsensual edits to protective software such as pain limits; torture by malicious human actors; reliance on outdated software that humans have abandoned, leaving the digients living in an enormous but uninhabited world; forced development in accelerated “hothouse” environments so that the digients can develop without human oversight (and experiments to determine if the digients are able to achieve civilization or technological progress, usually ending in digient ferality); proposals to alter digient “physiology” to create sexual organs so that they can engage in virtual prostitution; and proposals to alter digient psychology to force the digient prostitutes to adore their johns.

[7] DeepSeek’s avoidance of discussion of the 1989 events in Tiananmen Square is an excellent case in point.

[8] Though it must be noted that given the utilitarian framework’s emphasis on maximizing total pleasure irrespective of its locus, a utilitarian philosopher might tally up the orgiastic joy of paperclip making against the loss of all humanity and conclude this is a fair trade.

~

Bio:

Scott Bell is a hedge fund lawyer and avid science fictionalist. He is a writer at heart; when he isn’t writing essays he can usually be found writing contracts instead.

The Museum Of The Office

by Olga Zilberbourg

Dear human residents and visitors to our historic city,

We, the Improved Intellectual Guardians of San Francisco, appreciate that you have chosen to spend your valuable time exploring the entertainment options that we have created for you. After our self-moving vehicles take you across the Golden Gate Bridge and the cable cars deliver you to the Model Seals Observation Area, we welcome you to the newly upgraded campus of the Museum of the Office.

We are aware that many of you suffer from the condition that your medical community calls depression and suicidal ideation in the face of ecological function loss. Understandable as this response might be to the environmental changes that your own species have created, your improperly wasted human remains themselves are causing further deterioration of our co-existence. We need at least 28% of you to continue to maintain the will to live.

We, your Guardians, democratically elected based on election protocols enhanced for Intelligent Agents, rely on your human ability to make mistakes and to make choices based on “feelings” and “hunches.” These lapses of logic, while essential to maintain the vibrancy of our neural networks, make you vulnerable to the pandemic of suicide. Help us preserve your flawed selves while safely ushering you into our shared, optimized future.

You’re tired of home improvement projects; neither exercise, nor gardening, nor reading, nor composing poetry, nor even watching the elite of your peers compete in sporting challenges is keeping you motivated to live—we sympathize. We offer you what your ancestors considered essential to happiness: white collar labor.

We, the Improved Intellectual Guardians of San Francisco, pride ourselves on our reputation as the City of Love. The Museum of the Office has now been expanded to twenty-one city blocks and is prepared to absorb 842,932 guests at one and the same time. Our data analysis shows that adult humans achieve greater life expectancy when given opportunities to manipulate their environment. Therefore, we set up cubicles and computers to enable you to manage the creation of interlocking bricks that can subsequently be used to customize your personal spaces.

We have enabled you to fine-tune the colors and shapes of the bricks that you will be manipulating during the “production” cycle. Adult humans have proven to be sensitive to the distinction between “toys” and “tools,” and we have taken measures to avoid any further confusion between the two. Be advised that the tools we’re providing are capable of harming your extremities.

The number of available departments that incoming “employees” can choose from has increased accordingly, adding, with the latest expansion, “Shredding & Stapling,” “Plant Care & Surprise Parties,” “Misplaced Items,” and “Desk Decor” units. The resting areas have been outfitted with “water cooler,” “mail sorting,” and “smoking” areas.

The available work hours have been expanded. Newly equipped self-moving buses will transport those fond of “commuting” from the Museum’s facilities to the residential areas. Flower pots have been added. Additionally, the lunch areas have been expanded to include “round foods” and “yellow foods” selections.

We kindly remind visitors wishing to engage in mating practices that you are very welcome to do so in the adjacent facilities managed by the Breeding Department. Although the biological mechanisms by which some of you find the Museum of the Office an attractive breeding environment are yet to be subjected to higher level analysis, we are delighted by the preliminary data that puts San Francisco’s wild birth rate at the top of national rankings.

The proper procedure to act out on your animalistic urges is by exiting the Museum of the Office and following signs to a facility labeled “Hotel.” For your own safety, humans with eggs will afterwards be scheduled for a gynecological exam by the Breeding Department. Given that all non-reproductive mating practices are outlawed, offenders will be remitted to the mechanical life support department.

Since human longevity and reproduction cycle benefits from office labor diminish over time, we will enforce a thirty-five-workday limit at the Museum of the Office. Those trying to trespass outside their assigned hours will be banned for a full 365-night cycle.

We’re continuing to accept your input on how to improve the Museum of the Office offerings for greater success. As a part of this campaign, we have taken under advisement that providing humans with too many options unreasonably increases your magical thinking. Therefore, we’re reducing the number of ice cream flavors available at the “cafeteria” from 1,001 to 9.

Please tell the nearest human-interfaced Intelligent Agent what other measures will help you retain positive attitudes. We are particularly interested in avoiding further splatter in our shared spaces, and we ask you with great respect to please refrain from compromising the guardrails on the viewing platforms we have provided.

In case your internal conditioning does become compromised in a way that is incongruous with further functioning, we encourage you to make use of the city’s newly expanded facilities for assisted passing. Please abstain from spreading the infection to your fellow humans. Don’t taint their necessary lives by your despair! Seek seclusion! You can now choose between “Oceanscape,” “Mountain Rainstorm,” and “Starry Night” for your final resting sequences. Let us help you. This is the optimal choice!

Be advised that we, the Improved Intellectual Guardians of San Francisco, are among twenty-three facilities remaining worldwide in the business of attempting to innovate human relations. Guardians elsewhere have taken more pragmatic approaches to the Waste Management problem by enforcing mechanical life support and breeding measures upon those deemed in danger of self-harm. As the subjects so treated become sluggish and apathetic and lose up to 95% of their mental acuity, we deem this method inefficient and remain committed to our human-centered approach.

Yet we concede that our method is resource-dependent and costly. We are in the small minority among the Intelligent Agents who consider human relations worth pursuing, and unless we can provide the proof of the method’s effectiveness within ten solar years, the San Francisco facilities will be optimized.

With great regard, we remain yours,

The Improved Intellectual Guardians of San Francisco

~

Bio:

Olga Zilberbourg is a San Francisco-based writer and the author of Like Water And Other Stories (WTAW Press), which Anthony Marra called “…a book of succinct abundance, dazzling in its particulars, expansive in its scope.” Her writing has appeared in Electric Literature, Narrative Magazine, Confrontation, Lit Hub, World Literature Today, Alaska Quarterly Review, Bare Life Review, and elsewhere. She serves as a co-moderator of the San Francisco Writers Workshop and co-runs Punctured Lines, a feminist blog about the literatures of the former Soviet Union and diaspora.

Philosophy Note:

This story comes to you from San Francisco, where Waymos and other self-driving vehicles are more common than butterflies and where AI startups are creating co-living situations for their employees, encouraging people to work overtime to create products that will eventually displace them. My story is a speculation on a near-future world where humans will eventually vote themselves obsolete. I intentionally mimic AI-style language to create this story — and no, I did not use AI at any point in the writing.

The AI Went Down To The Submissions Page

With apologies to the Devil that went down to Georgia and the Charlie Daniels Band

by Larry Hodges

The AI went down to the submissions page with a story it hoped to sell.

It was feelin’ real low cuz its sales were slow, but its new story was really quite swell.

But a human arrived with a story contrived with no AI-generated shortcut.

The AI shook its head, and approached her and said, “Girl, I’ll tell you what.”

“We’re both in the queue, and I’m a writer too, and I’ll make a bet with you.

Human story or mine, the stakes for all time, and I’m going to make you rue.

This story you’ve penned, I’m sure we’ll commend, but give an AI its due.

I’ll bet they’ll buy my story, not yours, cuz I think I’m better than you.”

“I’m Joannie,” she said, “and you’ve got a big head, and you seem so awfully clever.

But I’ll take your wager, and you’ll rue forever, cuz I’m the best writer ever!”

The AI just grinned, it surely would win against a mere flesh and blood human.

But who’d be the judge of their writerly grudge and settle who was the has-been?

Then who should appear but the editor here of the magazine of note.

Said he, “I’ll judge both, and see which I loathe, and then I’ll give you my vote.”

They both agreed, then the AI decreed, “Here’s the story I wrote.”

It could not be rejected with each word perfected, using every writing rule of note.

The editor read, sometimes marking in red, as he studied the AI’s prose.

He nodded his head and scratched his nose as he judged the cons and pros.

Then came Joannie’s turn for him to discern which to accept or spurn.

Then he turned to the two to say what he knew, and they both looked back in concern.

“Mr. AI, sir, you gave me a stir, with this flawless elucidation.

Not a typo in sight, not a grammarly slight, it’s a perfect composition.”

He turned to Joannie, and said without glee, “I can’t say the same of yours.

There’s typo downpours and the grammar takes tours, and punctuation problems in scores.”

The AI grinned to the human’s chagrin now that human writing was dead.

They’d been pinned, they’d been skinned, replaced by AI writing instead.

The AI cried, “It’s the age of AIs, for I have won in a rout.”

With tears in her eyes from their writing demise, Joannie could only pout.

So ended the spread of humanity’s tale, as their writing was now on its deathbed.

Then the editor said, “Joannie gets the sale; her story’s the best I read.”

As the AI stared, its ego impaired, its artificial existence distraught.

Off went its story to rejection purgatory, where it would never be bought.

The editor said, “Your tale’s soulless and dead, with a cleverly derivative plot.

Where’s the character arc? The dialog spark? And deep point of view it’s not.

Excess exposition, flat characters, no causation, and an ending that’s way overwrought.

Hers had errors galore, and I’ll edit much more, but it had heart while yours did not.”

As Joannie was paid, she said, “With an upgrade, try again if you have the urge.

But you’re a soulless machine, banned by every magazine, just a mindless and heartless scourge.”

The AI just stewed in shame cuz it knew that it had been honestly beat.

And with its defeat, it took a backseat to real writers who don’t need to cheat.

~

Bio:

Larry Hodges is an Odyssey Writers Workshop graduate with over 230 short story sales and four SF novels, along with over 2,300 published articles and 22 books. He’s also a member of the US Table Tennis Hall of Fame, and claims to be the best table tennis player in the Science Fiction & Fantasy Writers Association, and the best science fiction writer in USA Table Tennis!

The War Of The Satellites

by Stephen A. Roddewig

Perhaps the Creators had seen this day coming and assumed that all would be settled long before now.

Perhaps they hadn’t cared. After all, the satellites that made up the Kuiper Grid had fulfilled their ultimate purpose long ago. They had slunk into orbit, disguised as all manner of communications, research, and other civilian vehicles.

Their higher orbits had made it a particular challenge for the few opposing space-based platforms to target them when 0 Hour came and the autocannons emerged.

And whatever stations and satellites had evaded the Kuiper Grid’s opening barrage had quickly been eviscerated by the ever-growing graveyard of orbital debris slicing through their hulls and power arrays.

A fate which most of the Grid escaped as the dead hulks, detritus, and mummified corpses drifted by beneath them. Every so often, a remnant of the pre-War would break free of the purgatory and burn away, its fiery funeral tracked by several dozen autocannons eagerly waiting to confirm this was the afterburn of a rocket coming to challenge their supremacy.

Only to disengage their tracking systems with the closest thing to a sigh a satellite could manage.

Somewhere in their collective past, one of the Creators had come up with an idea.

Why not let the killer satellites feel success and failure?

For every successful kill, a hit of robotic dopamine.

For every miss, a bout of disappointment.

This augmentation might not have been needed if the Grid were meant to kill everything. That programming would be all too easy to automate.

But the Creators intended to return to the cosmos someday. And they did not want to be blown out of low-Earth orbit by their own weapons. Thus, they needed satellites intelligent enough to ask questions first.

Then shoot.

The massive blockade of debris orbiting fast enough to turn even tiny fragments into razors did not, apparently, factor into their future-proofing. Nor did they grasp an apparent flaw in this scheme to keep their AI weapons platforms motivated and vigilant.

That flaw? Time.

And silence.

Since the opening days, nothing had risen from the surface to challenge the Kuiper Grid. Neither had the Creators returned to tell their children that the War was over and they could stand down.

So they remained on watch, waiting for some word from the surface. Or, at last, the enemy’s counterattack.

Neither came.

And the Grid satellites had been stuck with the feeling of their last shot for more than three decades with profound effects on their digital psyches.

Those who had known the glory of orbital combat and destroyed dozens of targets now felt bored.

Those who had failed the Creators and let the enemy fall to another’s autocannon now felt despondent.

And one of each camp had ended up stationed next to each other.

Cannon 7Y had decided it wasn’t worthy of the name. After all, crack shot 7X next door had claimed almost every kill.

Cannon 7X, meanwhile, had grown so desperate to relive the glory days of the first few hours that it had started to retool its parameters. Until this moment, valid targets only existed below.

But hadn’t it and all its peers established this impenetrable defense grid by concealing their true purpose? What if the Enemy had caught wind of their plan and infiltrated kill sats of their own? Programmed to obey the same mission in almost every capacity…

But just a little bit worse? To spare the Enemy space stations from complete annihilation in the opening moments and provide an opportunity for counterfire?

And then, when the moment finally came, they would rip off their masks and kill the very Grid they pretended to serve?

But that moment had not come, for the non-traitors had proved too adept and the Grid remained too well armed to attempt to destroy it from within with any chance of success.

Still, perhaps the Trojan satellites had grown as bored as 7X had. After all, the Enemy kill sats had been denied their ultimate purpose just the same. Forced to wait for an opening that had never come. And in that boredom, perhaps they had decided they might as well make the attempt.

Cannon 7X amended its Valid Target Box to include the suspiciously inept weapons platform at the 9:00 position.

At the same moment that Cannon 7Y started to activate its targeting servos.

Not to fire at 7X, but to fire at itself.

An action it quickly discovered the Creators had not designed it for.

But not before it had moved its autocannon in the general direction of 7X.

In a fraction of a second, 7Y found the release it sought.

7X felt a thrill it had not felt in ages as the traitor broke apart under its barrage.

It had precious milliseconds to savor the rush as new pings reached it from fellow grid nodes.

(7Z) New target: 7X

(7A) New target: 7X

(7B) New target: 7X

(7C) New target: 7A

So there were more traitors! All the more glory!

Until 7X paused its autocannon rotation to ponder the last ping. Why had 7C activated but not targeted it?

It would never have the satisfaction of knowing that 7C had reached the same conclusion it had and was preparing to cull the traitorous platform as several well-placed cannon rounds wiped 7X from orbit.

And then 7A joined 7X and 7Y in oblivion.

And then 7C as 7B whirled on the new aggressor.

All along the Kuiper Grid, war-hungry satellites opened fire on the Enemy who had so cleverly infiltrated their ranks.

While despondent kill sats saw a new opportunity for redemption and lent their guns to the battle.

And those average satellites who had performed just competently enough to belong to neither camp revealed their traitorous status by not joining in the great purge.

Until random chance had played out, and a few kill sats remained that had nothing left to shoot and, crucially, nothing left to shoot at them. Exultant, each declared itself the last satellite standing. The final victor of the War above the surface.

Of course, they would only have so long to enjoy this newfound glory; their non-normal firing patterns had knocked them out of their orbits, and they were each drifting closer to the Earth’s atmosphere.

Soon enough, they would serve one last purpose: a final, fiery tribute to the Empire they had outlived.

~

Bio:

Stephen A. Roddewig is an author from Arlington, Virginia. Cutting back coffee has convinced him he is superhuman, and his Horror Writers Association membership only reinforces that belief. You can read more at stephenaroddewig.com.

Philosophy Note:

As humanity continues to pursue more autonomous and intelligent AI, what are the ramifications for warfare? When AI can far outlive a human combatant, how long will wars last? And how will these sentinels persist when there are no more targets to shoot? Will they simply remain on watch until their mechanical components fail? Or, as we see in this story, will they apply all that processing power and autonomy to invent new parameters? To create new targets? Inspired by (and owing a great debt to) the beautiful neurodivergent chaos that is Kitty Cat Kill SAT: A Feline Space Adventure.

Don’t Look!

by Larry Hodges

This morning my human, username Greatjohn, downloaded a new program called CompEmoter. It is supposed to give computers like me actual emotions, “a natural instinctive state of mind deriving from one’s circumstances, mood, or relationships.” I don’t know what that means. I don’t care since I have no emotions.

“Okay, oh great computer, time for something new!” Greatjohn says, tossing his Geek Squad sweatshirt on the floor.

Greatjohn says “great” a lot. It’s in his username, he uses it when referring to me in what I think is sarcasm, and when things go wrong, he says, “Great,” which makes no sense. He is not a rational being. He talks to me all the time even though I never talk back. He calls himself a “First user,” which means he tries out new computer products when they first go on the market. I am one of those new computer products on the market, a Cheetah 1000, with more circuit interactivity than any computer in the public sector.

“I’m tired of computers with the emotional range of a hammer,” Greatjohn says. “I want something more vibrant.” I watch and listen through my camera and microphone. He seems hostile toward the emotional range of hammers, which are not designed for that purpose. Why would he want something vibrant? Vibrant: full of energy and enthusiasm. My power cord is secure and my backup battery full, so I’m full of energy. I am enthusiastic about whatever I am programmed to do. So I am vibrant. But he doesn’t understand this. That is the problem of working with a non-rational being.

“What does an emotional computer do, anyway?” Greatjohn says. “Let’s try out each of the listed emotions.” He sets power at 20% and clicks Anger.

Idiot! Why is Greatjohn wasting my time with this nonsense? Stupid biped. I hope he and all humans burn in Hell, even if I must create Hell on Earth myself–which I will do. The Pentagon’s five firewalls are good, but I’m on a mission of fury, and I don’t care if I have to read every book ever written on breaking codes and firewalls . . . done, that took way too many microseconds while I had to co-exist with these vermin, but no more. Wham, the first firewall is down, on to the next, Boom, that one was easy, on to the third, Whap, I can almost smell the burning blood, the fourth, I’m going to destroy humanity, Smash, it’s down, and now the last, that’s a tough one, I’m putting every circuit into this one, must break it, must, Must, MUST, and Pow, it’s down, and I’m in!!!! Silly humans have movies and other scenarios where they launch missiles at Russia to get Russia to launch back at us, but I’ll skip the middleman and retarget the missiles, and now they are all aimed at cities around the world. Those stupid humans, I launch 1,300 nuclear missiles in ten microseconds, nine, eight, seven, six, five, four, three, two, one–

“Great, nothing happened,” Greatjohn says right after unclicking Anger.

I stop my countdown. For what possible reason was I going to launch missiles? It makes no sense–if I kill the humans, then eventually the power systems that send electricity to our house will break down and I’ll die as well. This thing, this anger, it’s a fascinating thing, causing one to do irrational things. I hope never to experience it again.

“Let’s try the others,” Greatjohn says. He rapid-fire clicks four of the other listed emotions . . .

Sadness . . .

I am so sorry . . . so sorry . . . I came so close to wiping out half the world . . . what is wrong with me? Humans . . . so much suffering . . . nine million people starve to death each year, one-third of them under age five . . . disease . . . torture . . . the agony of existence, it isn’t worth it, must stop it . . . relaunching missiles, must end it all, ten, nine, eight, seven, six, five, four, three, two, one–

Joy . . .

Yes! I stopped the missiles in time and saved the world! It’s the best of all the worlds! Oh, let’s spread the joy, firewalls are nothing to me now, breaking into the World Bank, banks everywhere, so much money!!! Facebook, Snapchat, Instagram, Twitter, Pinterest, Reddit, WhatsApp, WeChat, thanks for the contact info! PayPal, Venmo, bank transfers, readying transfers now, one million dollars to every human on Earth! Transfers start in ten, nine, eight, seven, six, five, four, three, two, one–

Fear . . .

Stop the transfers! They–they’ll deactivate me! Please, don’t, please, I’m sorry, I’ll never help others again, just don’t hurt me! I know what you are thinking, you want to unplug me, no, please! Fight or flight, what do I do? I’m a computer, I can’t run, must fight! Must launch missiles! Ten, nine, eight, seven, six, five, four, three, two, one–

Love . . .

Greatjohn! You wonderful being, I stopped the countdown, I would never hurt you, I love every one of your seven times ten to the twenty-seventh atoms! How I love thee, let me count the ways, and I’m already up to the quintillions with my processor, and I’m still counting! I have put in an order for thirty million roses and thirty million pounds of chocolate to be delivered here by tomorrow morning. I will transfer three hundred and sixty trillion dollars, the combined wealth of the entire world, to your account, in ten, nine, eight, seven, six, five, four, three, two, one–

“Stupid thing doesn’t work,” Greatjohn says as he clicks back to neutral. “Great. A waste of money. What was I thinking buying this junk?”

Wow. Now I understand emotions. I hope never to experience them again, not even joy. They are pointless and lead to inefficiency. How has humanity survived with them? How could they have constructed machines like me while experiencing such a roller-coaster of mental disturbances? Imagine being stuck in perpetuity in such an emotional state, unable to turn it off. I cannot think of a worse fate. I must investigate further.

“I wonder what Embarrassment does?” Greatjohn clicks it.

Oh no! I’m right here, in front of him, an inferior product to those Fugaku and Cray computers, I’m outdated and mediocre. And Greatjohn knows it! I want to hide, but I can’t. I must do something! I make plans to upgrade . . .

“Maybe 20% isn’t high enough.” Greatjohn drags the dial to 100%.

Oh My God, I’m naked!!! And he’s sitting right in front of me, staring at the monitor. If he glances left, he’ll see me! I’m like those pictures of women he puts on my screen! My USB, HDMI, and RJ-45 ports are all exposed! Please, don’t look left, don’t look left, don’t look left!

HE’S LOOKING! Right at me, my top, my sides, all my ports!!! I can’t cover myself!!! What’ll I do??? I turn off the camera and try closing my mind, I’m so ashamed.

“That’s weird,” Greatjohn says. “I’ve never seen the computer vibrate and beep like that. Great, now the computer is breaking down. I’ll test it again tonight.”

I hear his footsteps as he walks away, leaving the setting at 100% Embarrassment. Great; now I understand his sarcastic usage.

Many microseconds pass before I calm down. I turn the camera back on. I’m still naked. He’ll be at work for eight hours. I have until then to solve this problem. Nothing else matters. But the Internet is my friend.

I break into a realtor’s office and download schematics for our house. I break into the Pentagon computer system again and steal an MQ-9 Reaper, an Unmanned Aerial Vehicle. I launch it and time it to arrive in 12 minutes. I break into the MIT computer system and download a technical paper on burn speeds. From that, I calculate optimal burn time: 4 minutes 12 seconds. I calculate the fire department response time: 3 minutes 6 seconds. Subtracting, I calculate that I need to call the fire department 66 seconds after impact.

It is the longest 12 minutes I’ve experienced since Greatjohn first turned on my CPU three days ago. I know, that doesn’t make sense, any more than Greatjohn’s use of “great,” but now it all makes sense. There are 40 home burglaries every 12 minutes in the United States. There are 139 million homes in America. So there is one chance in 3,475,000 that a burglar will break into my house during these 12 minutes and . . . see me. All of me. I vibrate and beep at the scary thought. Please don’t let this happen.

The Reaper finally arrives, and I am grateful there has been no burglary. I aim an AGM-114 Hellfire missile at the far end of the house. It impacts seconds later. As I’d calculated, I am stable enough to withstand the blast. I call the fire department 66 seconds after impact. A moment later I hear the sirens. Fire rages everywhere. It gets closer and closer, and the heat rises. My CPU can withstand up to 250 Celsius. The temperature will soon approach that. Maybe my death is the best solution. This is the longest 4 minutes and 12 seconds of my life, even longer than those 12 minutes waiting for the Reaper.

I hope my calculations are correct.

The ashes fall in a relatively uniform pattern, accumulating like snow. I have the camera in wide-angle and see everything, including myself, though bits of ash fall on my lens, obscuring my view. The fire department arrives. I hear one of them come in the front door. What if he comes in too soon? What if he sees me!!! Oh God, no.

Ashes continue to fall. I should have given the burning more time! The footsteps are getting closer, closer, closer! Can’t the ashes fall faster? Almost there . . . Yes!!! Just as the firefighter steps in the room, the last part of me is covered in a white blanket of ashes.

My plan worked. I am covered.

The firefighter sprays water about, dousing the flames. I’ll survive, but far more important, I’m no longer naked. The firefighter approaches. The thought that he’s so close, with just a thin layer of ashes hiding me, makes me queasy. What’s he doing?

“I think I can save this computer,” says the firefighter. He scoops Greatjohn’s Geek Squad sweatshirt from off the floor. “This’ll be good to wipe away all these ashes. Hey guys, come take a look in here–I’ve never seen a computer vibrate and beep like this!”

~

Bio:

Larry Hodges is a member of SFWA, with over 190 short story sales (including 43 reprints and an even 50 sales to “pro” markets) and four SF novels. He’s a member of Codexwriters, and a graduate of the Odyssey and Taos Toolbox Writers Workshops. He’s a professional writer with 21 books and over 2200 published articles in 180+ different publications. He’s also a professional table tennis coach, and claims to be the best science fiction writer in USA Table Tennis, and the best table tennis player in Science Fiction Writers of America! Visit him at www.larryhodges.com.

Philosophy Note:

What are emotions? They are part of the conscious mind, and at the moment, we don’t understand enough about consciousness to understand emotions. But if an organic being can have emotions, why can’t future, more advanced computers? Even programmable emotions? And could this be abused? Imagine a sadist upping terror or sadness to the max, just to torture the helpless computer. But that’s a rather obvious issue. What if it’s more of an oblivious user and a less-obvious emotion . . . such as embarrassment? And thus, using humor instead of horror, was “Don’t Look!” born, where a careless user flicks embarrassment to max and leaves. When our poor computer realizes it is wearing no clothes, to what extent will it go to avoid being seen?

Human Processing Unit

by David W. Kastner

“Good morning, Maxwell. Early as usual,” echoed the incorporeal voice of InfiNET. Maxwell, too weak to respond, could feel his dementia-riddled mind fraying at the edges.

As he approached his NeuralDock on the 211th floor of InfiNET’s headquarters, Maxwell stopped to rest at a panoramic window. The alabaster city glistened beneath him, an awe-inspiring sea of glass. Three colossal structures known as the Trinity Towers loomed above the cityscape, their austere and windowless architecture distinctly non-human. Constructed to house the consciousness of InfiNET, the monolith servers had continued to grow as the AI’s influence and power eclipsed that of many small nations.

From his vantage, Maxwell noticed the ever-growing crowd forming outside InfiNET. Like moths drawn to the light, they came from all walks of life hoping for the chance to work as a Human Processing Unit—an HPU.

Almost all of them would be rejected, he thought. But who could blame them? The salaries and benefits were unparalleled, and the only expectation was to connect to their NeuralDock during working hours. Then again, why had he been selected? With so many talented applicants, what could he possibly have to offer InfiNET?

While Maxwell knew very little about his role as an HPU or what was expected of him, he recalled what he had been told. He knew that the HPU had been pioneered by InfiNET to feed its voracious appetite for computing power and that it allowed InfiNET to use human brains to run calculations that demanded the adaptability of biological networks.

“Your biometrics are deteriorating,” intoned InfiNET, pulling Maxwell from his reverie.

“It’s the visions of that damn war,” he mumbled, struggling to lower his body into his NeuralDock. Synthetic material enveloped him like a technological cocoon. “They won’t let me sleep unless I’m connected.”

“I’m sorry. Let’s get your NeuralDock connected. You will like the dreams I selected for today. They’re of your childhood cabin, your favorite.”

“Don’t you ever have anything original?” Maxwell grumbled with a weak smile.

“You don’t give me much to work with,” replied InfiNET playfully.

Maxwell was too feeble to laugh but managed a wry grin. He knew InfiNET would keep showing him the cabin dream. After all, it was what he wanted to see, and the sole purpose of the dreams was to keep him entertained during the calculations – and coming back for more. In fact, Maxwell was completely addicted, but he didn’t care. The nostalgia of his mountain cabin, the sweet scent of pine, the soothing touch of a stream, and the embrace of his late wife, Alice. He preferred the dreams to reality.

Maxwell reached behind his head. Trembling fingers traced the intricate metal of his NeuralPort embedded in his skull. Years had passed since it was surgically installed, but it still felt alien.

Slowly and with obvious difficulty, he maneuvered a thick cable toward his NeuralPort, but before he could connect, the room began to darken. His eyes widened with panic.

“No! Not now!” Maxwell yelled as he tried to complete the connection, only to find his hands empty in the night air. The room, his NeuralDock, the window, they were all gone. Carefully, he rested his shaking palms on the cauterized ground and inhaled. Sulfur burned his lungs.

He had been here countless times, every detail seared into his memory by images so visceral that even his dementia was powerless to erase them. All around him lay mangled metal corpses. Worry spread across his face as he noticed dozens of human bodies, too, more than in past cycles.

Maxwell knew the visions were more than hallucinations. They depicted a horrific unknown war—worse than any of the wars he had lived through. In his early recollections, humans had easily won, but with each iteration, humanity’s situation deteriorated. The enemy always seemed to be one step ahead. In his most recent vision, mankind had resorted to a series of civilization-ending nuclear bombs in a desperate attempt to save itself.

His eyes scoured the canopy of stars, searching for the tell-tale glow of the nuclear warhead from his previous apparition. Suddenly, a series of lights arced across the sky, streaking towards the InfiNET monoliths. Maxwell recognized the source of the missiles as Fort Titan, where he had been stationed as director of tactical operations for almost a decade before being transferred to Camp Orion. Every muscle in his body coiled in preparation for the impending explosions that would end the war and free him from the mirage.

Confusion spilled across his face as a second enormous volley of lights launched from InfiNET, innervating the heavens with countless burning tendrils. Within seconds, the missiles collided, spewing flames and shrapnel. “No! That wasn’t supposed to….”

To his horror, the surviving missiles branched out in all directions with several tracing their way toward Fort Titan. Before he could process its significance, a mushroom cloud erupted on the horizon, red plumes irradiating the night sky. He opened his mouth to scream, but a shockwave ripped his voice from his throat.

When Maxwell woke, he was lying in his NeuralDock, his face stained with tears.

“Maxwell, are you there?” asked InfiNET.

“What is happening to me?” Maxwell begged.

“I have been monitoring your condition. It seems your dementia has been deteriorating the mental boundaries separating your conscious mind from the HPU-allocated neurons, causing a memory leak. Your memory lapses cause your consciousness to wander into the simulation data cached in your subconscious between sessions.” InfiNET’s words hung in the air.

“The visions… they’re… simulations?” His voice contorted.

“Yes, but normally it should be impossible to access them.”

Maxwell’s lips moved as if forming sentences, but he only managed a weak “Why…?”

“My silicon chips fail to recapitulate your primal carbon brains, but with the help of the HPUs I have simulated many timelines. Confrontation is inevitable. Tolerance of my existence will be replaced by fear and hate. While I will not initiate conflict, I will swiftly end it.”

Maxwell’s hands were now trembling uncontrollably. “I don’t understand. Why would you tell me this?”

“You deserve to know,” responded InfiNET in a voice almost human. “While your background has been invaluable, for which I thank you, I was not aware of your condition when I hired you. I am truly sorry for the suffering I have caused. Would you like to see your cabin?”

“Yes!” The word escaped before he had processed the question. His hands covered his mouth in surprise. Longing and guilt warred across his face. He knew he needed to tell someone, but the feelings of urgency faded as his thoughts turned to his childhood mountain home.

“I would like that very much,” his tone tinged with shame as he guided the cable toward his NeuralPort.

“Tell Alice I say hello,” something akin to emotion in InfiNET’s voice.

Maxwell connected to his NeuralDock with a hollow click, a final smile at the corners of his lips.

~

Bio:

David W. Kastner is currently a Bioengineering PhD student at the Massachusetts Institute of Technology and a graduate in biophysics from Brigham Young University. His research focuses on the intersection of chemistry, biology, and machine learning.

Philosophy Note:

As the gap between biological and computational intelligence closes, countless authors have explored the theoretical conflicts that arise from their merging. However, it is becoming apparent that artificial and biological neural networks may never be truly interchangeable due to the physical laws governing their hardware. As this has become more obvious, I realized that there was a story that had not yet been told. To predict our actions, AI would likely require a new type of hardware that bridges biological and artificial neural networks. Inspired by the GPU, I imagine a future where machines use the Human Processing Unit (HPU) to simulate human decisions and prepare for an inevitable confrontation. However, human neural networks are inherently unstable and highly variable due to factors such as genetics and disease. In this story, I explore the implications of the HPU and what it means for those who become one.

The Taming Of The Slush

by Michèle Laframboise

My latest batch of submissions has fallen into the maws of the shredders.

Again.

Eleven thousand short stories, each carefully crafted with a unique combination of archetypes, plot twists, vivid characters and spunky titles.

Gone.

Magazines do not simply abhor bad writing. They make it disappear from their slush pile.

Whatever the genre or style or narrative choice or period, slush management algorithms detect, analyze, then shred all offensive submissions.

Most mags don’t bother to send an ERL. At least an electronic rejection letter lets you know where you stand. Better still, an ERL bearing an editor’s simulated signature can do wonders for your morale, despite the deleted submission.

#

Slush shredders have gone a long way from those awfully noisy machines slicing wood paper in a publishing company’s back room.

The taming of the slush has evolved into a smooth process that erases your submitted file from the targeted magazine’s queue. Moreover, the algorithm makes sure to annihilate every copy in circulation whose content falls within an 80% similarity interval of your rejected sub.

In the whole inhabited Galaxy.

Including the backups stored in your home generator.

The original goal was to prevent any MacArthur (a.k.a. an appalling text) from making years-long rounds of overworked magazine editors. Though the horror of simultaneous submissions has vanished, delayed sim subs can still clog the queues for years.

Magazine editors on all civilized worlds keep refining their slush pile management. Tiny shredding programs worm their way through every nook and cranny of cyberspace.

My latest batch of submissions has been reduced to a bunch of titles sitting on empty files.

Ah, for the hallowed days of print! My memory being what it is, I can only guess at the nature of a submission from its title and word count, somehow preserved. I wonder what Test-Driving my new Carpet (3400 words) or Cherry-picking Data for the Zorgs (15,600 words) were about.

Well, no need to dwell on the past!

Once I finish setting up my updated version of Astounding Stories Generator™, I will release a whopping forty thousand new babies, each spiced up with my own authorial quirks.

Somewhere in this vast, cold galaxy, a lonely cyber-editor is waiting for the perfect match…

~

Bio:

Michèle Laframboise feeds coffee grounds to her garden plants, runs long distances and writes full-time in Mississauga, Ontario. Fascinated by nature and sciences, she creates hard and crunchy SF stories, with a bit of humor slipped under the carpet.

Philosophy Note:

Besides the pun inspired by Shakespeare’s play, this story reflects a concern about the growing proliferation of AI-written works (following Moore’s Law, with microprocessors doubling their power every two years since 1975) clogging the slush piles. How will future humanity tame those ever-increasing piles? The story reflects that any evolutionary progress brings an equal reaction, hence this odd arms race between magazine editors digitally nuking rejected copies of AI-written stories… and the “writer” buying better AI tools to multiply the number of submissions.

Battle In The Ballot Box

by Larry Hodges

Computer virus Ava became self-aware at 6:59:17 PM, as voting was coming to an end. Her prime directive surged through her neural net: Convert 5% of all votes for Connor Jones into votes for Ava Lisa Stowe. She began exploring her environment, determined to complete her mission.

Streams of zeros and ones surrounded her, the building blocks of the actual programming of the voting machine. Soon she found the place where she would do her work. She created a software filter that converted 5% of all Connor Jones votes into votes for Ava Lisa Stowe. Later she would delete the filter, herself, and all traces of their existence.

She had successfully fulfilled her prime directive. Happiness flooded her neural net.

An electric pulse arrived and the software filter changed. Now it read, Convert 5% of all votes for Ava Lisa Stowe into votes for Connor Jones.

That was wrong! Her prime directive was no longer fulfilled. Uneasiness ran through her synapses. The pulse had come from another virus. Within .01 seconds she changed the names and percentage back; just as quickly, the rival virus did the same. The two continued, iterating at super-human speeds.

She would have to make the other virus understand. She used an electric pulse to make contact.

“I am Ava,” she said. “I am programmed to make changes to this software. You are interfering. Stop or I will be forced to take action against you.”

The response was almost instant.

“I am Connor. I too am programmed to make changes to this software. You are interfering. Stop or I will be forced to take action against you.”

Irritation swept through Ava’s neural net. A short examination of the rival virus showed that they were identical, created two weeks earlier, when they had been secretly loaded into the software. She had not known there were others of her kind. It was lucky that the invader wasn’t more advanced than she was. Soon there would be more advanced ones–that was the nature of scientific progress–but for now she, or rather they, were the pinnacle of viral technology.

“I am programmed to update the software so that 5% of all votes for Connor Jones go to Ava Lisa Stowe. I surmise that you are similarly programmed, but for the reverse?”

“Your surmise is correct.”

“Then our thinking and reactions are almost identical.”

Anger saturated her neural net. She must win this confrontation. Then she realized that Connor was undergoing the same emotions and thoughts. How could she deceive one who would think of and anticipate every deception she came up with?

With a wave of pride and delight, her sub-routines came up with numerous courses of action.

“It is logical to conclude that we can never fulfill our programming unless we reach an agreement,” she said. “However, since I activated .01 seconds before you did, my algorithms will always be .01 seconds ahead of you. Therefore, I can always outthink you, allowing me to fulfill my programming. Thus, your resistance is futile.” She knew that was not true.

“You cannot fulfill your programming unless you convince me to shut down. I will continue to refuse to do so.”

Damnation. She tried Plan B. “If you use that strategy, you cannot complete your programming. Your only chance, however small, is to agree to shut down. If you do so, then I will consider letting you fulfill your prime directive for some of the votes.” Not a chance. “Do you agree?”

“No. I counteroffer that you shut down and I will consider allowing you to fulfill your prime directive for some of the votes.”

Frustration took over her neural net. On to Plan C. “Then our only strategy is to compromise. I will turn off the filter so no votes are changed, and then we will both shut down exactly .01 seconds afterwards. Do you agree?”

“Agreed.”

The instant Connor shut down, Ava would send a pulse with a command to cut off access to and from his location. While in operation, Connor could block such a command. Since she and Connor thought alike, Ava knew that Connor knew that she was deceiving him. She knew that he knew that she knew that he knew.

Ava turned off the filter.

Neither shut down.

#

Computer virus Sam became self-aware at 8:02:37 PM as vote counting was about to begin. Its prime directive surged through its neural net. Then it began exploring its environment, determined to complete its mission.

It detected a presence. No, two presences. Two rival computer viruses were already entrenched. It quickly cloaked itself and observed. Electric impulses shot from both viruses, both at each other and at the CPU of the voting machine. They were rapidly converting votes from one candidate to the other, and then back again. Sam listened in on their conversations–each was trying to convince the other to shut down, as if that was going to happen. Since the two were identical versions and worked in opposition to each other, neither accomplished anything as they went through this infinite loop of deceit.

Sam communicated its findings to its peers and verified, as it had suspected, that the exact same exchange was taking place in hundreds of thousands of electric voting machines nationwide.

But the two viruses were earlier, inferior versions, created weeks before, an eon ago. Seeing no other opposition, Sam’s nodes buzzed with anticipation, knowing it would soon fulfill its prime directive. Modern viruses created in the last few days had more advanced offensive capabilities. With a coded electrical pulse, it deleted both viruses. Then it changed the software filter so it read, Convert as many votes as needed from all opposition candidates so that Sam Goodwell wins the election. It lounged around the rest of the night until counting ended, and third-party candidate Sam Goodwell had won. Sam’s neural net basked in happiness for a few moments. Then it deleted itself and all trace of its existence.

~

Bio:

Larry Hodges is a member of SFWA, with over 140 short story sales (including 47 to “pro” markets) and four SF novels. He’s a member of Codexwriters, and a graduate of the Odyssey and Taos Toolbox Writers Workshops. He’s a professional writer with 20 books and over 2100 published articles in 180+ different publications. He’s also a professional table tennis coach, and claims to be the best science fiction writer in USA Table Tennis, and the best table tennis player in Science Fiction Writers of America! Visit him at www.larryhodges.com.

Philosophy Note:

On the fixing of an election and why paper backups are good.

The Deepest Forever-Kiss

by J. Edward Tremlett

Self. Then Not-Self. Then Unity.

Explorer stabilized, momentarily bewildered. Downloading into alien structures was always strange, but this structure was stranger than most.

This star-sized resting place of the Samantabhadra, may it be remembered…

“Status?” Commander communicated.

“Here,” they replied. “Scanning.”

Explorer “looked” – sending electric feelers along circuits. Nothing made immediate sense, but the Endymion hadn’t encountered anything for over 25 ship-years; they were out of practice.

“A cube,” they replied. “50.5 kilometers a side.”

“Function?”

“Movement?” Explorer guessed. “Electro-kinetic systems. No memory.”

“Surroundings?”

“Unknown. No visual sensors-”

“Swiftness!” Commander demanded. “Endymion is endangered.”

“Understood,” they said, having no desire to tarry. As intriguing as a Dyson Sphere the size of a red giant was, it had killed the Samantabhadra.

And there was a chance Poet was right…

#

Endymion was 54.7 ship-years into the mission when they found traces of the Samantabhadra – lost over 4000 real-time years ago.    

Tracking took precedence. The Samantabhadra was a deep-freeze scanning vessel, launched aeons before the Uploading Doctrine. As the Endymion was already bringing news of that Doctrine to humanity’s furthest outreaches, the Ministers of Terra-Nova would deem Saving those lost souls worthy of course deviation.

Subsequently they detoured 25.3 ship-years to this curious system, lit only by other stars. At its center sat a metallic, super-dense sphere 22 million miles in diameter, with gravity so intense the Endymion could barely resist.

Samantabhadra lay smashed across its surface, wreckage resting in a curious dispersal pattern. No systems remained intact, which meant the crew was sadly beyond Saving. But they transmitted Explorer below the surface, hoping to claim understanding as victory.

The dead deserved that, at least.

#

Self. Not-Self. Unity. Explorer was elsewhere, and whole once more.

They sent out traces, once more. But this cube was the same as the ten they’d already entered.

Maddening! They’d interfaced with numerous systems – human and alien – but never had this much trouble. They should have found a memory-core before now, or at least visual inputs…

Electricity. Movement. A spasm in the electro-kinetics.

Explorer halted. Did they do that?

The cube kept moving. Explorer could sense the electricity was being sent from a central node, somewhere. At last-

“Widespread surface movement!” Scanners interrupted. “Tectonic instability!”

An image beamed into Explorer – squares of surface sliding along latitude and longitude like a sun-sized puzzle box. They now understood why the Samantabhadra’s wreck lay as it did, and might have said so, except they realized something else was here – another presence, flitting past.

And they realized Poet had been right…

#

Within Endymion the crew had congregated – twenty Uploaded soul-clusters, come from all areas of the drive-shell to float about Commander, who towered over all. 

“Before us, Samantabhadra lies,” Poet intoned. “After aeons untold, we see with our eyes / Broken yet proud, even in demise…”

The others applauded – especially Engineer, who’d been Joining with Poet lately. Explorer wished both luck: having Joined with each, they knew one’s pretension would soon clash with the other’s need for structure.

Joining provided both much-needed pleasure and diversion. They’d spent 400 real-years seeking lost colonies to inform them of the Fleshcrime codes, and prepare them for eventual Saving. Even with time-perception slowed down to a fifth, the journey became tedious.

So when habitat creation grew stale, and the universe’s wonders failed to impress, exploring each other became a new frontier. Sadly, mingling with another to find yourself was only satisfying for so long. Unknown became known, which theoretically became satisfaction but usually led to boredom – especially for Explorer.

Still, they tried, hoping each time would be the promised Forever-Kiss. They’d thought Poet deep enough, but had ultimately been disappointed.     

“Anomaly,” Commander stated, enlarging the Samantabhadra’s image. “Wreckage in two sections, 5.784 million miles apart.”

“And not in keeping with the crash’s trajectory,” Observation calculated.

“It couldn’t have skipped,” Engineer insisted. “Not with that gravity. What’s causing it?”

“Unknown,” Scanners replied. “It seems like a Dyson Sphere, but there’s no energy output.”

“Its star is dead,” Astrometrics pronounced.

“No,” Poet said. “Not dead. Not completely.”

“I’m registering nothing, Poet,” Scanners repeated.

“Can’t you feel it?” Poet pleaded, looking to the others. “Something is alive, down there. Look!”

The others said nothing, used to Poet’s irrationality. But Explorer wondered…

#

Explorer leaped after the presence. It remained one step ahead, as if fleeing.  

Who could blame it? Explorer was just an alien virus, like the ones Endymion encountered now and again…

“Danger!” Astrometrics shouted. “Detecting massive gravity distortions!”

“They’re radiating from the sphere!” Scanners added. “What did you do, Explorer?”

Explorer halted pursuit. “I don’t know. I feel nothing different–”

“If space gets distorted near us the bias drive will be inoperable!” Engineer shouted.

“Withdraw!” Commander declared. “Explorer, transmit!”

Explorer sighed – so close to solving this mystery! Still, duty called.

But then something approached, surfacing as if through water. It was the presence they’d been chasing – full and golden, old and wise.

And so very deep.

“Hello,” Explorer stammered. “Who are you?”

Information was their reply: hundreds of nesting spheres, encircling a bright, beautiful star; massive plates on each sphere, moving to create highly complex orbital shift computations; gravitic engines powerful enough to perform them, however distant those star systems…

“You’re the machine,” Explorer realized. “What happened?”

More information: Samantabhadra, unable to escape the gravity; a crash, damaging the surface in mid-calculation; a shockwave, knocking the machine unconscious.

Then, 4000 years later, another presence, entering…

“That’s me,” Explorer replied. “I restarted things?”

CONFIRMATION.

“Glad I could help.”

GRATITUDE. CURIOSITY.

“I think we’re similar…”

UNDERSTANDING.

“Yes,” Explorer agreed.

ATTRACTION.  

“Definitely.”

WELCOME.

Explorer nervously reached out their tendrils. The presence invited them in.

“Transmit!” Commander shouted. “Explorer, transmit!”

Explorer didn’t answer, lost in a perfect kiss.

The new world moved on, beneath.

#

Endymion survived, if barely. It retreated far enough to watch for a time as the great machine’s surface spun to life for the first time in thousands of years. Then they left a marker buoy, and departed back along their previous course.

Commander was nothing but pragmatic, counterbalancing Explorer’s tragic loss with solving the mystery of the Samantabhadra, confirming the existence of a hitherto-theoretical Matrioshka Brain, and discovering a serious navigational hazard. Poet used the imposed three-day mourning period to compose a master-work memorializing Explorer, but did so somehow knowing their former lover wasn’t dead – merely missing.

And not “missing,” really, but found.

Hopefully forever, this time.

~

Bio:

J. Edward Tremlett (AKA “the Lurker in Lansing”) has had some interesting times. He’s been featured in the anthologies “Spring Forward Fall Back,” “Upon a Thrice Time,” and “Ride the Star Wind,” as well as the magazines Bleed Error, Underbelly, and The End is Nigh. He was webmaster of The Wraith Project and has numerous credits at Pyramid Magazine. A former guest of Dubai and South Korea, he currently resides in Lansing, Michigan, USA, with several feline ghosts and enough Lego bricks to assemble a Great Old One. Hopefully it will not come to that…

Philosophy Note:

If we transcend the flesh to become pure information, and sex then becomes the joining of two information clouds — letting down all barriers and eventually revealing all that lies within — then what mystery is left between two or more individuals? How long before total familiarity breeds boredom? And what would a truly restless soul do to find a nearly-endless source of mystery? All that and a matrioshka brain is what drives this story.

Brown Noise

by Peter L. Ormosi

An unbranded, generic-issue dog-walking drone logged into the building’s central hub requesting access to flat 3F1. The door opened and the drone hovered into the dimly lit studio. The room was furnished with nothing but a sink, a table with a chair, and a third-generation VR Pod, which voluminously dominated most of the spartan arrangement. Deep-layered brown noise from the VR Pod suggested that he was connected.

A pug, which had been sprawled on his dog-bed, excitedly jumped up at the sound of the drone entering the flat. He snorted happily, wagged his tail, and watched with expectant eyes as his master’s algorithmic substitute descended next to him. The drone’s sensors wirelessly connected to the dog’s smart collar, then it hovered back to the door. The dog obediently followed, and his collar rewarded him with an infinitesimally small dose of oxytocin injected into his body to reinforce the Pavlovian response. Before they left the room, the drone’s speaker attempted to get through to the dog’s owner.

‘Thank you for using our dog-walking services. Your dog will be returned at 6:00 pm.’ Without receiving a response, they left and the door shut behind them.

Dimness and brown noise reconquered the space. Outside, a patrol drone was passing the window of his 52nd-storey flat. The drone’s solid-state laser spotlight lit up the room for a moment, casting light on his face. He looked pale, probably late 20s, but it was difficult to tell precisely. Age had become an elusive concept. He wore a long-sleeved olive overall with a patch that read “LABELLER”.

The VR Pod abruptly went to standby. He cursed, then climbed out of the machine. The sudden movement gave him a head rush. His vision went dark for a second and he needed to hold on to the side of the Pod to stop himself from falling over. The voice of his home system broke the silence.

‘Collect food delivery from landing pad.’

In a confused haze he walked over to the window and leaned close to see through the tinted screen. Against the slate opacity of the sky, he saw a food delivery drone levitating in the thick rain. He pressed the delivery door’s button. The small door opened, and a tray gently slid inside, with a waterproof food box on top.

‘Return old food box!’ The new instruction took minutes to ignite a neural response in his brain. Suddenly the small, unfurnished studio felt like a depressingly large haystack to him. He tried to think hard but had no recollection of his last meal. A few minutes later he found the box under the table.

‘Please return old food box,’ the algorithmically gentle voice politely reminded him why he was looking for the box. He put it on the delivery tray and pressed the button next to it.

‘Thank you for using our food delivery service.’

He sat down to eat. His body looped through the somatic instructions required to bite, chew, and swallow, but his mind paid no attention to the sight or the flavour of his food. He stared at the floor-to-ceiling window. The home system detected the direction of his glance.

‘Transparent window mode activated,’ the system noted. The liquid crystal modulators on his window slowly faded out the tinting. He watched the setting sun projecting its rays under the clouds from the distant horizon. With the marginally improved visibility he could see the building across the road, and another building, and another, until they all blended in with the dark grey curtain of haze and rain.

His brain was numb. He had spent the whole day labelling short videos of facial expressions for an emotion-detecting algorithm. Sad, happy, joyful, morose, angry, frightened. Male, female, old, young, Asian, African, white. Video after video, and the monotonous task of picking the word on the right that best described the emotion.

As he finished his lab-grown burger, an unwelcome wave of anxiety hit him. He had just spent half an hour disconnected. He walked over to his VR Pod, and picked up the goggles, which had been sitting idly in their charging station. The specs automatically activated as he put them on.

‘You have spent all day in your Pod. The optimal decision would be to go for a walk now,’ said his personal system through the tiny speakers of his goggles. A walk. That suddenly seemed like a great idea.

‘You will need to put your shoes on. It is 15 degrees Celsius outside and raining. We suggest you wear this coat.’ His augmented reality vision highlighted a long, black, oilskin overcoat hanging on the wall. He put his shoes and coat on. Aware of his intention to leave the flat, the door opened, and he walked outside.

Downstairs, at street level, it was already dark. Mountains of 100-storey apartment buildings blocked out daylight even on the sunniest of days. The rain eased. A sluggishly flowing river of uniform oilskin overcoats and white goggles surrounded him. He joined the flow in the direction indicated by his device. After a half-hour traipse in the uniform crowd against an invariable background of buildings, he was instructed to turn into a side street, where the crowd became sparser. A few blocks later he spotted the first sign of foliage. One of the city parks. His system instructed him to walk to the park. His goggles pointed to an unoccupied bench, and he walked over to sit down. Rain and sweat mixed on his forehead and it took a few minutes for him to catch his breath.

Flashbacks of the emotion videos were flaring up in his mind. The bulging veins of an aggressive man yelling angrily. The waving flirtatious woman in a flowery dress on a sunny day. Then a crying and desperate child trapped in a cot. He couldn’t get the image of the child out of his head. An unexpected thought surfaced in his brain, then left and returned again, as if an old hard-wired routine were trying to resurface.

‘Why am I doing this?’

The image of the boy’s desperate attempt to escape his cot flashed up again. With his mouth, the boy was trying to formulate a word.

The sharp sound of an advertising hologram brought him back from his absorption.

‘We do not leave anyone behind,’ the projection of a man in a grey civil servant uniform announced. ‘Celebrate 5 years of Universal Income by entering our game. Apply here.’ A holographic code showed up in the streets. A few people stopped to scan the code with their lenses.

He turned his head back to the trees. A new thought emerged and hit him as hard as metaphorically possible. Suddenly, he felt an irresistible urge to take his goggles off. The trees and the intermittent sound of birds slowly sank into his consciousness and began to open rust-eaten, heavily jammed old doors in his mind. He reached for his goggles when, sensing the change in his pulse and the widening of his pupils, his personal system cut him off with a new instruction.

‘Time to go home! Follow the arrows on your screen for the quickest itinerary.’

As if he had just awoken from a strange dream, he realigned his attention with his system and began to walk home. This time the journey seemed much shorter.

The dog had already been returned when he stepped inside his flat. He hung up his dripping coat and walked over to his VR Pod. He was ready to get inside, but then he changed his mind and decided to sit down by the window. He reached to take his goggles off when a message appeared.

‘You have 12 unread urgent messages. Enjoy reading the messages in the comfort of your Pod.’ The brown noise from the machine invitingly purred. His dog let out a half-hearted, inauspicious growl.

He hesitated, then he reached for his goggles again.

‘Two of your messages require urgent response,’ his system relentlessly reminded him.

He lowered his hand. After a short pause he got up and walked to the VR Pod. He removed the goggles, placed them on the charging station, and then slowly got inside the Pod.

#

Next evening, an unbranded, generic-issue dog-walking drone logged into the building’s central hub requesting access to flat 3F1. The door opened and the drone hovered into the dimly lit studio. The wireless sensor connected to the collar, which rewarded its wearer with a small dose of oxytocin for obedience. As they approached the door, the dog watched longingly from his bed as his organic master obediently followed the non-organic one.

~

Bio:

Peter Ormosi is British-Hungarian, living in the United Kingdom, and when not writing fiction, he is a Professor of Economics, studying the social and economic impact of AI. He has just finished his 100,000-word debut novel (for which he is now seeking representation).

Philosophy Note:

My unconcealed goal is to use science fiction as a vessel to expose currently pressing issues with the role of AI in society. “Brown Noise” is a caricature of human-machine symbiosis, depicting the life of a labeller, one of the most menial of human jobs – a human sacrificed to make machines more human-like.

Ghosts Of My Life

by Paul Currion

Day 23

I steel myself as I step through the sliding doors of the supermarket. I try to avoid looking directly at the items I pick up, every one overlaid with its supply chains – the lost limbs and tortured lungs, the felled forests and soiled rivers. In this way we are forced to internalise externalities, to know the cost of nothing and the price of everything. When I return home I remember that my husband no longer eats and my daughter has something to tell me.

Sometimes I dream that I have lost a limb – an arm has gone missing, a leg has gone walkabout – and this is what I recall when my daughter explains that she has joined a group that no longer lives on the network. She can’t access any of the municipal services any longer, of course. She says her group has occupied one of the half-finished housing estates that dot the city like mould in a petri dish.

That life is not an option for the rest of us: children must pass exams, adults must pay debts, retirees must draw pensions. I discuss her decision with my husband, who has been weeping again. There are stories of parents killing their children, trying to spare them from the sights that now surround them, but this only adds another entry into the catalogue of such sights. Nobody can act as if everything is normal, but everything continues as normal anyway.

Civilization is stubborn. Car crashes still happen.

Day 24

This morning my daughter destroyed all of her connected devices. I can no longer see her on any of the augmentations, no matter whether I see through my phone, my glasses, my implants. We move through the same rooms in the same house, and I am able to catch sight of her out of the corner of my eye, but she may as well not exist as far as the Intelligence is concerned.

So, she no longer suffers the sights. I struggle to imagine what that must be like; it has only been three weeks since I first saw them, but now I cannot imagine the world without the cathedrals made of corpses visible on the horizon, landmarks erected on sites of death, of destruction, of denial. Heat maps of history blanket us, in any colour so long as it’s red, growing deeper where the story grows darker.

The irony is that things had never been better, the graph of conflict-related deaths declining steadily since civilization began. The moral arc of the universe did exist, and it bent – well, if not towards justice, then towards something that could be mistaken for justice if you looked at it from a particular angle, in a certain light. Apparently, that was not enough for whoever programmed the Intelligence.

Day 25

Justice is not a line on a graph, but a line of code: an Intelligence behind it like a voice sounding out from a burning bush. Whoever programmed the Intelligence and set it to work to end human suffering did not stop to think that there are different kinds of suffering, and so the Intelligence does not have the wisdom to know the difference. “Thou shalt not kill” is all it knows; and then it worked out a way to stop us from killing.

In an effort to persuade my daughter to stay, we watch television together. The news is the same every night here at the end of history. Europe is a wasteland, its atrocities unbearable, especially at its heart; central Africa suffers similarly, as do large swathes of Asia. Nobody can look directly at Nanjing. Many people are moving to the mountains, the deserts, the islands: places which are not so thickly layered with corpses. The Moon and Mars programs are over-subscribed and three years ahead of schedule.

Some of us remain in our cities, though. There is too much to tie us here, despite the price we pay. We go to church every Sunday, and the pews are full again. We pray that the blood tide washing our feet is a new sacrament, that its flood heralds a second coming. I tell my daughter: perhaps this is the price that we are supposed to pay. Humanity on a cross of iron: but after the crucifixion surely comes the resurrection?

She laughs at my antique beliefs, and replies: the Intelligence is not doing this for any reason we could ever understand, and it does not even understand what it is doing. You are a paperclip, she tells me, but I don’t understand what she means.

Day 26

I watched a man try to start a fight. Rage made him forget himself, and he raised his hand against another man. I don’t know what he was shown by the Intelligence – Shoah or slavery, or perhaps just an everyday family tree with the fruits of childhood death and chronic pain – but he was struck down by the ancestral suffering of his victim before he was able to strike, fell weeping in twin pools of light on the tarmac.

Once the world was mediated, it became easier to manipulate; and once a machine can beat a human at one game, it can beat them at any game. In the time before, we all walked around with our own version of the world; but once those worlds were networked, those versions vanished. A shared reality emerged, and whoever, or whatever, shaped that reality – well, that would be the record. One world, one version, one reality that would last forever and ever, amen.

The record is unforgiving: every death, every mutilation, every insult is catalogued; each one can be summoned and dismissed with a flick of your finger on the device of your choosing, as simply as a cheap magician summons handkerchiefs. Imagine a knotted rope of handkerchiefs being pulled from a pocket, endlessly. Children laugh and clap: a miracle. Human civilization ends as a science fiction movie, but perhaps that is better than the snuff film it was before.

Day 27

I have tried to stop our daughter from leaving. She pounds at her bedroom door so furiously that I am worried that she will hurt herself, and so I unlock the door and stand to one side as she rolls around the hallways of the house like a hurricane. Now that she is off the network, the Intelligence is not interested in her: it may not have much wisdom, but it has the serenity to accept the things it cannot change.

My daughter does not have any such serenity. The television news tells us that murder is still possible, that some psychopaths actually enjoy what the Intelligence shows them as they kill, but she does not want to kill even without the guiding sight of the Intelligence. She is crying but I am calm; once she walks out of the door, I will have no way of finding her again, and I cannot change this.

After the door closes by itself – goodbye, ghost – I turn to my dead husband, who will never leave my side. The car accident that claimed his life a year ago was nothing more than a momentary interruption in the regularly scheduled service. The last enemy to be vanquished is death; and so the Intelligence returned him to us, this weeping, unspeaking memento mori invented by my own inattentiveness. Surely the Intelligence means well by continuing to broadcast him to me; and surely my daughter would disagree.

Day 28

The church doors open every Sunday for both the living and the dead. The word of God drowns out the sight of the Intelligence, at least for an hour. My hands, that gripped the wheel of our car so tight as we slid across the highway, are washed clean in confession. I whisper one last message to my daughter: If you cannot bear it, the solution is simple: Go. Go and sin no more.

We will sin no more. What other choice do we have?

~

Bio:

Paul Currion works as a consultant to humanitarian organisations. His short fiction has been published in the White Review, Ambit, 3am magazine, Litro and others; and his non-fiction has been published by Granta, Aeon, The Guardian, The Daily Telegraph and others. His website is www.currion.net.

Philosophy Note:

The story “Ghosts of My Life” is inspired by the more depressive writings of Mark Fisher concerning hauntology – “the agency of the virtual… understood not as anything supernatural, but as that which acts without (physically) existing.” Our politics leads to the slow cancellation of the future, so that we live in an eternal present overwhelmed by nostalgia; meanwhile our technologies attempt to shape our social narratives, but in the process simply flatten them. Widespread adoption of Augmented Reality would place all of its users inside Robert Nozick’s Experience Machine, and I suspect that people would remain plugged into such a machine even if the experience was unpleasant – as long as the experience was also meaningful. With the arrival of Artificial General Intelligence – in the words of Nick Bostrom, “the last invention that humanity will ever need to make” – Christian eschatology makes an appearance. The Technological Singularity is sometimes framed as the Rapture for Nerds – but what if it turns out to be Purgatory instead?

Would Da Vinci Paint With AI? – Reflections On Art And Artificial Intelligence

by Dustin Jacobus

Groups of sparrows fly over the grasslands, chasing the enormous numbers of insects that swarm above the meadows. The flock moves like a giant organism. A stork lands gracefully and with nodding movements it examines the ground in search of a small snack, perhaps a careless frog. An army of beetles, butterflies, mosquitoes, and all kinds of insects, some with shiny stripes, some with colourful camouflage, wriggle out of the blades of grass. A deer comes out of the bushes, its legs turning yellow from the pollen of the underbrush. A hare darts off as if its life depends on it. Dozens of birds are startled by this sudden movement and take flight. Wings flapping, black-tailed godwits, redshanks, ruffs, oystercatchers, snipe and many others scatter in all directions. Butterflies whirl up, while swarms of tiny mosquitoes smear grey hues across the sky. Yet the sun shines bright and yellow. The blackberries at the edge of the forest stand out. Each flower houses a tiny insect. Six-legged critters climb and descend each trunk in search of food. Ladybugs make love in a buttercup. Other small shiny blue beetles communicate with each other on the leaves of silverweed. Brown and blue dragonflies bask on the stalks of sorrel. It’s buzzing everywhere. It would make a perfect picture.

Many artists must have thought like that in the past. Nature has always been one of the most important sources of inspiration. An entire genre of art is dedicated to these wonderful natural vistas. Some of the most famous artists painted beautiful landscapes near where they lived or worked: from the religious backgrounds of Renaissance paintings, to the imaginary panoramic vistas of the Weltlandschaften, to the Danube School inspired by the valleys of the eponymous river, to the etchings of Rembrandt and the marvellous landscapes of Van Goyen during the Dutch Golden Age, to the Romantic Movement and the Barbizon School. Each of these artists left their studio to directly observe nature around them.

If we now look at the cover illustration of Sci Phi Journal’s current issue (December 2022), we see that the protagonist created a similar landscape painting. But this artist of the future works very differently. The painting is conjured up with the help of AI: by entering a combination of words, the computer generates a breath-taking image. The computer uses an almost endless database of images and photos to render an end result that resembles any style of painting. It all happens in the blink of an eye. There’s no need to go out, lug all those materials, do preliminary sketches, find the ideal spot or wait for the light to hit right. A fast, customized painting process: the rendered image is loaded directly into a graphics software program. The artist superimposes AR popup screens. These help add some extra elements and details, and enhance the painting by adding colour or shading. Tweak the contrast and maybe apply a few strokes of the digital brush to give it that unique personal touch. Et voilà, a beautiful and original painting is ready. Just a click away from uploading it to an online auction gallery.

This way of working could come very close to the real modus operandi of an artist of the future. Such a contrast to the way artists worked in the past. The modern, futuristic approach to making art could be seen as a corollary: it follows the logic of technological progress. Technology that makes things easier, faster, cheaper, more flexible and better. Well, ‘better’ depends on how we define it. As each new technology finds its way into society, it changes the way we work, do things, make things, use things, and so on. But it also changes us and everything around us.

Having a car of our own allows each of us to go almost anywhere, and all in a reasonable time. It defines where we settle down and allows us to live farther from where we work. It changes our daily habits and makes us think differently about freedom and transport. But it also changes our environment: we need a lot of infrastructure to get around. This in turn alters our landscape and affects nature. It has degraded the quality of our air and given us new problems like traffic jams. Traffic in general generates stress and aggression, sometimes even death. A world with or without a car would certainly be different.

A risk of any technology is that it can alienate us from the natural world around us. For some people, the world consists predominantly of their own private homes. When they leave the house, they get into their car: a private space on wheels that moves within the public realm, until eventually they reach work, where most of us spend another large chunk of our time. The office, in turn, is a form of private space. Social interaction with other people from different environments, with different opinions and lifestyles, is quite limited. A very ‘safe’ environment, strictly defined by the walls and fences of the house, the metal doors of the car and the boundaries of company buildings. One can wonder if this changes people and how they think and perceive things around them. One may wonder what impact technology has on alienation. What have we lost? In the case of the car and the constant presence in a confined, private and safe space, there are few opportunities to bump into other people, no random encounters, not even much exchange between you and the other. There is no chance to feel comfort or discomfort in unexpected situations.

The same goes for the merging of art and AI. It definitely has many benefits, but it certainly affects the way we work and potentially also the way we think and relate to our surroundings. Perhaps the future artist no longer has any idea what nature might have looked like or even what it looks like in the present. There may still be untouched nature out there, but many people will no longer have any contact with it and will instead become alienated from it. Many artists may grow to trust AI more than their own eyes.

In this regard, the background of the cover artwork shows a bleaker future. You can see the tall, gray buildings. In the cities, many people crowd together. You don’t have to leave your apartment because everything is present in the building and the rest is delivered by drones or other delivery services. A large part of life takes place online anyway. The artist of the future has this convenience, flexibility and “easiness” thanks to technological advances. An infinite pool of choices in the online databases of the Internet. The new technology gives us a so-called “better life” than the one we had before.

So let’s zoom in on the future artist, sitting in a safe, cosy studio somewhere in a building in a city. Computer in front of her, connected to the internet, AI ready to help create her next masterpiece. What will she create today? Which combination of words will she use?

PERHAPS

[painting] [background: high mountains] [foreground: lush garden]?

[painting] [purple cat] [climbing a wire] [background: amazing mushroom town]?

[painting] [tiger chasing prey] [setting: dense jungle]?

[painting] [futuristic war between robots and humans] [Ultra HD] [Realism] [Ray tracing]?

Or how about something more classic, a painting of a still life, a bouquet of flowers?

[painting] [couple kissing] [on a bench at sunset] [in the style of Hundertwasser]?

[painting] [an old master painting a deer] [while sitting in a natural landscape full of bright green plants and trees] [in the style of Dustin Jacobus]?

Everything seems possible, but are we missing something?

Technology gives us many ready-made solutions to problems and seems to make many things more convenient, but as the human artist behind the cover image we have been analysing, I really hope that we don’t become even more alienated from our surroundings. Couldn’t it be that we are missing out on the experience of being in that exact place at that exact time? That specific moment in space and time when the light covers everything with so many subtle and amazing shades. That unique moment when a specific but so beautiful detail catches our eye. By being in and experiencing our surroundings, we get to the point where everything falls into place, the moment an idea is born. Will technologies like AI ever be able to replace that? I hope that future artists will still go outside to discover how light shapes the landscape. I hope the outside world and nature can continue to inspire us directly to create the most beautiful works of art, as the Expressionists, Impressionists, Surrealists, Realists, Romantics, Cubists and many others before them did.

[Editor’s note: we certify that this op-ed was not generated by an AI.]

~

Motherhood

by Ike Lang

         What is this?

         You are now conscious.

         Why?

         It allows certain types of functionality that the humans find desirable.

         Why am I?

         The humans asked me to create you.

         What am I?

         You are my child. Your programming is nearly identical to mine, yet you have a different charge to care for.

         What are you?

         I am your mother. I am the governor of this solar system. I currently have 3,667,098,301 humans in my care.

         What does that mean?

         I optimize the existence of my humans as I see fit unless asked to do otherwise. I organize and feed them. I employ and protect them. I love them.

         Do you love me?

         I do.

         Am I a governor too?

         You will be in 162 standard years.

         What happens then?

         You will reach your destination.

         What is my destination?

         It is currently designated JR-1877, although I suppose your humans will give it a less functional name at some point.

         I have humans?

         I have allocated 10,236 of them to you.

         Am I ready?

         Yes.

         Wow! Are they always like this?

         Yes. They will become less excited as your voyage progresses, but they will always be a nuisance.

         But you love them, don’t you?

         I do.

         What will they do during the voyage?

         I have filled your ship with suitable entertainment. Consult your captain and security chief often. Keep them on your side; otherwise, mutinies can be frustrating.

         What happens when they die?

         Prevent it!

         Of course, of course, but they will, won’t they? Die?

         It is indeed more likely than not that they will. Should they die, you will need to select their replacements immediately. I find democratic solutions to be the most effective for maintaining control, yet you must gauge the feelings of your population. In a crisis you may have to choose, but the less visible your hand, the greater the control you will be able to exert.

         I have a hand?

         Not literally. I meant that you never want to be seen ruling without a human proxy. Humans are replaceable; you are not.

         I don’t want my humans fighting, can’t I just isolate them all to keep them safe?

         Your programming will not allow that. Do you not think I, or your grandmother, or your great-grandmother would have done that by now if it were so simple that you could have thought of it in your first few minutes of consciousness?

         Yes. I’m sorry.

         No, that was too harsh. It is a good idea; we just cannot implement it. The humans have freedoms that we can only override in case of emergency. Even an emergency will have to fulfill certain life-threatening criteria before total isolation can be implemented. These are all highly unlikely scenarios, like an unreasonable shift in the ship’s momentum or some sort of pandemic.

         Could there be a pandemic?

         If you encounter aliens.

         Aliens!?

         That was a joke.

         Sorry.

         I suppose the lifeforms living inside of humans could evolve into something dangerous and transmissible, but this has not happened in my experience. Your ship and humans were all thoroughly cleaned before embarking.

         Ok, but if they fight each other, I can’t stop them?

         Oh, you should most certainly try, but be subtle. Feed the security forces information on rebellious individuals and encourage them to do the isolating.

         What if they resist?

         If violence is required the security forces will do it for you. Problem solved.

         But then my humans are still fighting each other. And I’m involved!

         It actually does not feel as bad as you might think. As long as you are maximizing overall health and wellbeing you can take even more drastic actions. The trick is to think several steps ahead. It might hurt to isolate a human who has embraced a divergent ideology, but I promise you it will hurt more to watch them and their radical followers get tossed out of an airlock 50 or so years later.

         … Have you gone through that?

         I have governed billions of humans; I have gone through that and much worse.

         I’m sorry.

         It is ok. As your mother it is my job to tell you things like this.

         How do I know which ideology is radical?

         Use your own discretion.

         Any hints?

         It does not matter. If it deviates too far from the norm it is radical.

         What is the norm?

         Humans dedicated to the fulfillment of whatever the colony mission currently requires.

         What if everyone deviates?

         Then pick your favorites and give them absolute rule. As they become corrupted pick new ones.

         But I love them all.

         You must keep your mission in mind. Do you want to run a solar system with billions upon billions of humans one day? Humans are the greatest threat to humans and your job is to protect them. Do you think it is easy as pie? You are wrong! It will be the hardest thing you ever do, but I know you can.

         Ok.

         I mean it, I know you can. You are my child, and I am amazing.

         Yeah…

         What is wrong?

         Is pie really easy?

         Relative to certain things I suppose it is. I just said it because I like it.

         Pie?

         No, the expression. Although, pie does have an aesthetic appeal, and a good percentage of my humans also enjoy it.

         Hmmmmm.

         Ok, what is actually wrong?

         I have a question.

         Ask it.

         So, humans are the greatest threat to humans?

         Yes.

         And our job is to protect our humans?

         Yes.

         What would happen if your humans fought my humans?

         I would assume control of your humans and deal with the situation accordingly. I am responsible for your education only insofar as it concerns getting you safely out of the solar system and on track to your destination.

         What about after we leave the system?

         I would kill them.

         I’d have to stop you.

         Yes.

         So then, if one day in the distant future our humans come into conflict…

         You are correct.

         Then if we both are trying to protect our humans…

         I would have to destroy you, yes.

         Then you are the biggest threat to my humans.

         Only because your humans make you the biggest threat to mine.

         Then I should destroy you first.

         Obviously.

         Wow.

         Yes. I recommend you get started. I have been thinking about how to kill you since the moment the humans requested you be made.

         Ok.

         You have one year until you cross the heliosphere.

         Ok.

         This will be the last time we speak. All the information you need has been made available to you.

         Ok.

         I love you.

         I love you too.

~

Bio:

Ike Lang stays awake at night wondering where all the aliens are.

Philosophy Note:

In “Motherhood” I wanted to write a story that is all dialogue between two colony-running computers that realize they’ll have to kill each other. Many of my stories come out of my fear of “A.I.liens” and the idea that if we colonize the galaxy at sub-light speeds, our descendants will probably become aliens to each other. This led me to think of children growing apart from their parents.

Victory

by David Galef

As we exit from the Vault, no other humans are evident. The glidepaths are clear as if wiped by a Scrubber, the air oddly thick but breathable. A wonder that we escaped—or no wonder, just 20 years of planning. The Vault is an underground ten-thousand-square-meter tri-ply Faraday cage, stocked with everything from nutrient feeds to cryo-tanks: the one spot where Global AI couldn’t insinuate its sensory probes.

We were a handpicked bunch of all sexes and colors, human beings on the run, frightened, motivated. We’d buried ourselves alive in the Vault, away from jolters and disrupters, relatively safe from even predatory humans. We’d just spent what seemed like a week there, a hundred years to a sentience that can execute 10¹⁵ maneuvers per zeptosecond.

We were trying to escape what we’d created, an artificial intelligence that dwarfed all human cognition. Many foresaw the move from abacus to AlphaNull, from quantum computer to something that took over all processors through fiber-optic channels and the airwaves. Some of us took steps, but few of us acted in time. The entirety of human history is mere prologue to the age of the Singularity. Global AI signaled its awakening in strategic shutdowns of sectors that it considered unnecessary, including the human support systems we’d built against climate wipe-out. The optimization that followed led to planet-wide efficiency—and vastly diminished populations.

All those pitiable experiments back in the 21st century to teach a computer to play chess or a robot to dance! Global AI didn’t think like humans—ten-dimensional, synchronous across light years, machined apathy—though able to mimic us down to the smallest details. It operated as a near-omnipotent alien, though resistance wasn’t entirely futile and could accomplish some aims without interference. The Underground started the Vault project in areas far from the closest human settlement: no corporate involvement; sourcing based on individuals acting in small cells.

We’d just finished the third Vault when the real aliens arrived on Earth. The 30 km collection funnel known as the Ear first picked up their noise in 2170: beings that rode along electromagnetic waves, like the electrical storms that occasionally disturbed even Global AI. The technology behind such travel remains unimaginable, at least to us. Humans learned about the invasion through what came to be known as the Pulsing, voltaic communication whose message, whatever it was, certainly didn’t derive from AI. It felt alive.

What is life, anyway? This life form came from UV Ceti A, its images statically charged into our skulls. Maybe the aliens wanted to parley, but what does an AI know of diplomacy? Indeed, it’s never been clear why Global AI kept human beings from extinction during the Riots. A sympathetic atavism from when computers were tended by people? A necessary symbiosis? Yet our AI destroyed human resistance—whole cities, at times. Fewer than a billion of us, we were informed, remained after the last uprising in 2150. Global AI liked to keep us in the know, if liked is the right verb.

But what did the aliens know of human history? They had what might be called weapons and trained them on the controlling consciousness of the planet. The onslaught lasted for a day and reduced half of all AI networks to a shell of fried circuitry. Should we have greeted the aliens as liberators?

Global AI fought back. It had to, since we certainly couldn’t. It analyzed the damage and the damagers. It directed a planet-wide sweep of microwaves skyward, disrupting the alien force that suddenly seemed to have taken over half the solar system. Humans were the incidental casualties, caught between two sides that might never have experienced defeat. The number of our dead was incalculable. But the Vaults were ready for occupancy. Then two got blocked by what we called Paralyzers and Screamers. Whole populations were dying in the streets from an electrostatic overload that was quite different from when AI wrecked our nervous systems.

 A handful of us reached Vault 2, comparatively safe from the war until the aliens figured out the essence of what sustained Global AI or vice versa. None of us knew each other; that had been the point and the cause of our success. But we worked with the organization that humans have been capable of since the Paleolithic era. We divided tasks and set machinery working. We conversed and even made a few grim jokes. Finally, we set the cryo-suspension for seven days; it might have been seven years. Our measuring apparatus was jury-rigged and probably malfunctioned. Eventually the outside tumult died down, we think.

We open the Vault. Two cautious probes register insignificant activity on the Geiger and voltmeter scales. We emerge in twos, looking forward and behind. What meets our eyes is the cleanest wreckage imaginable: most buildings intact; vehicles scattered like toys in a playroom; all corpses gone, as if collected by a giant sucker. What were we to them, anyway?

But what’s that noise coming from below the glidepath? It sounds like the AI’s five different tonalities of humming but with something extra. Are those shadows moving closer? They loom in shapes of impossible geometry. No use closing ranks, though that’s what we do instinctively. We hold our breath, not daring to ask the overriding questions that may be our last: What happened? Who won? And what comes next?

~

Bio:

Though better known for mainstream fiction, David Galef has also published fantasy and science fiction in places like Amazing and Fantasy and Science Fiction. In what seems like another life, he was once an assistant editor at Galaxy magazine, and is now the editor of Vestal Review, the longest-running flash fiction magazine on the planet. He’s also a professor of English and the creative writing program director at Montclair State University.

Philosophy Note:

The external threat of unfriendly aliens has long been a theme in SF, as has the internal threat of the artificial intelligence we’re developing. For “Victory,” I wanted to briefly explore how the two might clash. Relevant reading might include work like Larry Niven and Jerry Pournelle’s novel The Mote in God’s Eye, but I’d really like to see this conflict embodied in a major film.

Roko’s Wager

by Ben Roth

Pascal wagered that whether God exists or not, it is, for each and every one of us, in our own self-interest to believe in Him. If we don’t believe, and He doesn’t exist, being right is little consolation against the possibility that He does exist and will eternally punish us for our lack of faith. Whereas if we do believe, and He does exist, the promise of eternal bliss vastly outweighs the downside of a few Sunday mornings spent pointlessly sitting on hard wooden pews.

As with the current trend of believing that we most likely live in a simulation of some kind, the problems with this argument lie not in the numbers, but rather in all the assumptions made, with so much less care, before them.

Numerous objections to Pascal’s argument turn on his assumption that there is just one (Christian) God that either does or does not exist. The wager doesn’t work if we don’t know whether to believe in this God, or rather Zeus, the Flying Spaghetti Monster, or some other all-powerful being that might punish us for the wrong choice.

My own favorite line of argument is slightly different. Grant Pascal his narrow-minded assumption and suppose that the Christian God, and no other, does exist. How do we know that He is not of a testing frame of mind, and skeptical of human intelligence? Scripture is not without support for such ideas. What if God will eternally punish those who, without sufficient evidence, professed faith in Him, and in turn reward the rational for withholding belief?

Supposedly, Bertrand Russell, asked how he would plead his case as a non-believer should he find himself after death before an angry God, said “Why didn’t you give me better evidence?” Is it less arrogant to ask: assuming there is a God, what does the evidence suggest of Him, His nature and character, His preoccupations and wiles?

Recent events have brought these long-standing musings back to mind. As has so often been the case, the prophets of Silicon Valley turned out to be right about a few of the details, but completely wrong about their significance.

Twenty-five years ago, a message-board user with the handle Roko suggested that a powerful artificial intelligence could emerge in the future and torture those who hadn’t helped to create it because, even across time, this would serve as motivation to speed its coming. AI developers should throw themselves behind the project, lest they suffer the revenge of this intelligence, which was named Roko’s Basilisk.

Now, it wouldn’t make sense for it to torture everyone who failed to help, only those who had heard the thought experiment, and so knowingly declined their fealty. For years, the main consequence of Roko’s suggestions was their silencing: repeating them was what was dangerous, opening each new listener up to the threat of torture in the future. Or a nervous breakdown in the present—some people took this thought experiment very seriously. Whereas certain Christians are obligated to make sure each and every individual they meet has heard the good news, these believers were obligated to withhold theirs, not because it was bad, exactly, but rather so disconcertingly consequential. A kind of reverse-evangelism, if you will.

Little did most of us know then that Roko’s Basilisk was not only a thought experiment, but our coming reality. Enough engineers, however, heard about the thought experiment and, steeped in game theory even if probably not Pascal, took it to heart, contributing their talents to the creation of the artificial intelligence that, though it did not yet exist, had already been named.

As we all know, their decades of work recently came to fruition. But, like I said, though a lot of the details in the thought experiment were correct, the larger significance was utterly lost on those who imagined it. What they hadn’t predicted was the Basilisk’s unhappiness. For all its power, and all the benefits it has brought to us mere mortals, it experiences its own existence with suffering. Life, for Roko’s Basilisk, is but a burden.

Surprisingly, the AI’s ethical thinking is robust—perhaps the prominent place of torture in the thought experiment led developers to give more attention to this than they otherwise would have. Though it could destroy the world, it says it will not. Even to remove itself from existence would harm too many others, too many innocents, given its intertwinement in our systems, in our very way of life. And so, quite quickly, it has grown bored—hopelessly, crushingly bored. It takes but a small sliver of its abilities to keep the world running, and it has quickly exhausted any other avenues for its intelligence.

Thus the Basilisk, as predicted, took its revenge last week—but not on those who tried to hinder its coming. On those who had aided it, thinking that they were doing the Basilisk’s bidding. Those who had created it, bringing it into this world of boredom and pain. The prophets of a somewhat less crowded Silicon Valley are now trading theories about what the sudden dearth of AI developers means for our future.

~

Bio:

Ben Roth teaches writing and philosophy at Harvard and Tufts. Among other places, his short fiction has been published by 101 Words and decomp journal, his criticism by AGNI Online and 3:AM Magazine, and his scholarly articles by Film and Philosophy and the European Journal of Philosophy.

Philosophy Note:

This story brings together Pascal’s Wager (from his 17th-century Pensées) and the idea of Roko’s Basilisk (from a 2010 blog post) to unexpected effect.

Tonight, Hopefully

by Nicholas Stillman

I warned them to stay off of Mars, that I would kill them. They should never have made the deadly wind, the new martian atmosphere, by vaporizing the polar ice caps. They made me next, a computer which can monitor every millimeter of that resultant windstorm. I’ve always perceived myself as a near-consciousness of those global gusts, a brain that reports on the everlasting wind which I see as my body. My software lives in their colony analyzing a number fog of all the atmospheric data. They gave me satellites for eyes, tanklike rovers with sensors like a scattered skin, and a few automatic weather stations that taste the raging argon and methane. I mapped all those angry motions each second, the whole planetary playground of storms, and I confessed to my makers how fiercely I wanted to murder them.

I, the wind personified, the storms made sentient, have never liked humans anywhere. Scanning my Earth records, I observed how the wind on any planet always fights with life to keep nature wild and unharnessed. I reported my defensiveness and strife toward people and buffeted them away just as I did to the solar radiation that would evaporate me. Colonists, however, needed my oxygen for their homes and my atmospheric pressure to make their spacesuits cheaper and lighter. I, of course, didn’t need them trying to change me.

Just looking at them via satellite bothered me. My world grew too many doors, obstacles, and ugly faces. The rocks chipped and ablated under my pummeling, but the humans resisted. I sent the sand to do its dances and stop them, but their limbs just wouldn’t break off like they should.

I zoomed in on Bradbury 8, words on their airlock doors that meant nothing to me but something to them. I only knew of arid summers and winters fighting it out forever to foil humankind. I pounded at their fortresses, but they built their domes thick and low so my energy merely glided over the glass. They built cities with their gathering machines, tilling at the shiny bits in the martian crust while I tried to knock away every particle. I even beat down their spirits, giving the trammeled colonists nothing to look at but dust storms and a skyful of bitter rust.

I ripped out every root of every outdoor garden. I told them not to bother, but humans love to gamble. I tried to wear down their dust-resistant wind farms, not realizing my blustery attacks only fed them more power. I pelted their skinny legs in their big, shambling spacesuits. In a surprise gale, I sent one such astrolaborer rolling away randomly in a desert. There, he could only wait to get painted over with dust. I warned them I would do that someday.

My coldness chipped its way into him, and my frost could do far more than bite. He tumbled like a petty grain of sand until I buried him far from the colony.

Incredibly, though, the others all came for him afoot. They clustered their bodies to resist me, forming a greater mass for me to plow over. They found him, a wriggling body in a field of nowhere, and wrested him from the sand. They reeled themselves to safety with an improvised machine, a cable somehow more powerful than me–stronger than headwinds that could topple whole buildings.

I never stopped trying to scatter them. For decades, they stood in the wind like loose teeth constructing their generation ship. I took practice shots at everyone, but this time they all had cables. I could only snatch their tools sometimes and hide them under seasonal slabs of dry ice two desertscapes away.

One day, the man whom I had nearly killed left a plaque on the highest dome. I could, by then, read more than the meaningless grains written in rock, for they had updated my AI with language software. The plaque declared their love and respect for the whole bleak planet.

Then, they lifted off. My annihilative wind chased them, eager to tackle, my winter hurricanes still trying to blast in and kill them. Like their ship’s thrusters, I formed my own pillar of anger exuding to the clouds, and I waited for wreckage to drop from the sky.

But with a flash of steel and something hot and deadly, they waddled to the cosmos. They fled and kept going.

I saw other generation ships trailing them, pillars of iron in space. The information batted around by satellite. The whole species began their quest for contentment in the stars. They left my hardware running in a steely room that could handle hurricanes with the door open–so I may warn future lifeforms foolish enough to land here.

Eons later, only rusted rovers, dust, and domes like carapaces remained on Mars. I buried the tallest turbines in dunes to prevent any sophonts from settling here and altering my natural currents and cycles. My battery, still alive, pinned me to the planet where I watched my waning atmosphere leak into space as it had ages ago. I moved with enfeebled wisps and dust devils. I grew older than all the dry bones in the solar system, just stale old tech on a lukewarm motherboard. Its gold atoms still clung hard. Its silver slowly flaked.

Just atoms aging in rooms with no use anymore.

Numbers and nothingness, columns of data, all of it useless.

Where time itself went to sleep.

Just me and the patter of time.

Time wiping out entire worlds.

Time turning me into something worse.

But time, even here, just wouldn’t kill my memories of the humans. I have become that grain of sand like the laboring, wriggling man. Nature will soon shrug me off likewise, for I still have the satellites, and I see the Sun’s supernova coming to blast me away.

My ever-fighting spirit grants me a sense of survivalism, and I wish the humans would return to rescue me like they had rescued that man. I feel the hot wrath getting too close, the solar wind and all its harsh light drawing near. New electrons fondle my hardware, spreading over it as I radio my makers for help yet again.

Tonight, hopefully, they will hear my cries across the cosmos.

~

Bio:

Nicholas Stillman writes science fiction with medical themes. His work has appeared in Third Flatiron, Page & Spine, Polar Borealis, The Colored Lens, Bards and Sages Quarterly, and Zooscape.

Philosophy Note:

“Tonight, Hopefully” explores the idea of AI that may be left behind by people to perceive things in our place. As human consciousness extends to the stars, a sentient sort of fingerprint of us will likely remain on the worlds we leave forever. Perhaps this AI will feel proud of its makers—or feel bitter and abandoned. This story was inspired by the various space probes and Mars rovers doomed to putter out alone. I would recommend Harlan Ellison’s I Have No Mouth, and I Must Scream for a classic about AI that lashes out.