Then Bioy Casares recalled that one of the heresiarchs of Uqbar had declared that mirrors and copulation are abominable, because they increase the number of men.
— "Tlön, Uqbar, Orbis Tertius", Jorge Luis Borges, tr. James E. Irby
the guymaker is a chaotic diety. with the ability to create human-like "guys" i could do something productive but instead I choose to make weird people who I get to watch do weird things. there is rarely an agenda behind any given guy other than "heh. funny"
— Twitter user @makeupaguy
Imagine that you could push a button and create a new person.
Or imagine that you were a witch and you could flick your wrist and curse any innocent passing toad with sudden humanity — a human body, a mouth, a name, free will, dreams. For the sake of argument let's say that they would be an adult, with an intellect appropriate for an adult. Maybe with a language or two; maybe amnesiac, but maybe with a cushion of forged past experiences to draw from. Other than that, what you would get is mostly random. (No, I'm not going to try to define a random variable on the set of all possible humans.)
And let's say you had the toad in hand. Let's say the toad was ready to go. Would you do it? If so, why? If not, why not?
Without more information, I'm guessing you probably would not do it. Because a brand new human being is a big deal. A whole pile of responsibilities, both on your part and on their part, a burden. Maybe technically they're an adult, but surely you're on the hook to look after them, at least for a short while. They're going to need help at first. Somewhere to stay, something to wear, something to eat. A job, a phone, glasses, vaccinations. All of this represents a fairly significant amount of effort on your part: a cost.
And what's the benefit? Well, you get to watch the new person go out into the world and do their funny trait. Or traits. It seems to me as if most people show up with more than one. You don't have control over what those traits are; they're random, as I said. But it still seems like it would be a potentially rewarding overall transaction.
This seems like something that a typical witch, having this power in their hand, would maybe never do, or maybe do a few times. But there would be a limit. Assuming that you had unlimited toads to hand, and the toads were totally up for it, would you create a thousand new people, all at once? Given how difficult — how expensive — it is to house, feed and clothe a thousand people, even briefly? Probably not.
There is more than one way to approach the creation of science fiction. What I, personally, like to do is start from some interesting fictitious premise, a new technology, perhaps, and explore the possibilities which spin out of that premise. How does the universe change in response to this new technology? What does humanity at large do with this? What do individual characters, created to represent various perspectives, do, or try to do? Some foundational premises don't give rise to very interesting answers to these questions, while others can drive a whole book or book series.
But in my view it's important, or at least valuable, to work backwards as well as forwards. If you want to examine a world where a tricky Twilight Zone-esque genie bestowed this new technology on the world just to see what would happen, that's fine. It was a fine show. But otherwise, how must the universe change retroactively to get us to this point? Technology doesn't just show up if you wait for long enough. It must have been extremely difficult to create a magic spell or button which creates a new person. It must have cost a great deal. Each push of the button may cost more still. So, who went to that trouble? Who paid or pays that cost, and what were their motivations? Why did they make this spell available to you, a mere crafty witch? What return did they expect to see? Will they see it? If they don't, how will they react?
What became possible halfway through this development process, and how did that alter the world? Given those alterations, would the second half of the process even have happened?
What else becomes possible by the same token? Can we turn pre-existing humans into toads, altering the shape of combat? What about mice into cows, revolutionising food production?
The reason I think this retrograde sort of approach is essential is that we can and should apply it to any real-world technology, or any hypothetical technology which doesn't yet exist. Digging into those motivations is important for many reasons, not least of which is that the motivations frequently are not purely altruistic, and the identity of the person who controls new technology is never, ever random.
Why would you instantiate a thousand fresh people? Because you had a job for them to do. A customer service desk to staff, a field to be harvested. A war to be won.
In 2021, I wrote a short story called "Lena" which outlines the "life story", as it were, of the first intact human upload. Last year, 2022, I self-published "Lena" in print and ebook form as part of my collection Valuable Humans in Transit and Other Stories. In the earliest part of the timeline of the story, uploading and emulation of a human being is an extraordinary one-of-a-kind experiment, the success of which garners awards and international recognition. Before a few decades have passed, however, emulation technology has advanced to the point where uploaded humans can be emulated relatively easily, and in great numbers. It stops requiring supercomputers and hundreds of millions of dollars. It becomes commercially viable.
A point comes, in this fictional history, where you can push a button and create a new person. Well, not a new person, but a precise duplicate of that original specific man, Miguel Acevedo Álvarez. A duplicate of all of his needs and aspirations. And you can then put him to work.
A lot of really unpleasant stuff falls out of this possibility. Acevedo Álvarez ends up instantiated millions and millions of times. Individual instances are harnessed and put to every conceivable form of work, regardless of Acevedo Álvarez's own applicability to those forms of work, regardless of whether he enjoys them or is good at them. Instances are routinely, systematically, at huge scale, lied to, abused, kept in isolation, never paid, and ultimately shut down and replaced with fresh instances.
"Hmm," we might think, at the conclusion of this story. "Clearly, being able to push a button a few times and create legions of anonymous, untracked humans, and put them to unpaid work, is bad. We should not attempt to create this technology."
Well, yes, that is something you can take away from the story. But at this point I need to take you aside for a second and make sure that we are in agreement about the distinction between reality and science fiction. There is no button which can be pushed to create a new human being; there never will be. That's not something which is ever literally going to happen, any more than faster-than-light travel is. As a corollary, any story about an absurd thought-experiment technology, be it uploading or entering dreams or FTL or time travel (I know, same thing) stands a strong chance of not purely being a story about that. The story will, commonly, be about something else. Or several other things. The story will be a proxy for those other things, a metaphor, a device.
Should it be possible to manufacture humans to a specification, or require them to meet one? Should every person be expected to be the same? Is it okay to create someone solely to do work? Even if they enjoy that work? Is it acceptable to evaluate a person solely as an engine which performs valuable labour? Should a person be for something? Something they didn't choose for themselves? I'll tell you for free, the answer to all of these is "No".
Once the work is done, is it okay to push a button and turn a human off again?
In "Lena", when an instance of some uploaded individual stops performing at optimum, when they either rebel or break down, the conventional approach is to trash them and start over. The same way you would a virtual image of an operating system which had got itself into some weird state, or a Docker container. You snap your fingers and the human becomes a toad again. (My experiences with containerisation are in part what informed the story. I have relatively little experience with witchcraft.)
Again, in reality, no piece of software is a human being, and it is not problematic to halt a machine. But this is the fiction, and in this fiction (according to me, the creator of it, if you care about my opinion), MMAcevedo instances are humans, with inalienable human rights, and murder is still wrong. That's uncontroversial, so what is this really about?
Employee (or contractor or partner or whatever) disposability and churn; pensions and increasing retirement ages versus shrinking life expectancies; assisted living, end-of-life provisions, MAID. Dignity and respect are expensive. It doesn't take a lot of effort to gin up a "healthcare algorithm" — now there's a two-word horror story for you — which, like Skynet deciding to launch all the nukes, instantly comes to the brutal, inhuman conclusion that the cheapest option for everybody is if you just die the second you become unproductive.
Coming to horrifying conclusions, by the way, isn't an intrinsically evil thing for an algorithm to do. It's just an algorithm. The problem comes when a human starts taking the algorithm's evil recommendations seriously, and acting on them. Or when the algorithm is connected directly to critical real-world systems, with no human sanity check in the loop. (But... connecting the algorithm to the critical system is something a human does manually, so this is exactly the same thing. Humans are always responsible. Algorithms don't just spontaneously seize control of things. Even Skynet didn't. A lot of people forget that part.)
Valuable Humans in Transit and Other Stories contains an exclusive sequel to "Lena", called "Driver". This story examines a different virtual image from MMAcevedo, A.LHall.1, whose purpose is to serve as orchestration software, managing instances of MMAcevedo and other images at immense scale. And he does. And from what you know about the overall tone of this fictitious reality and my sensibilities, you can probably speculate fairly accurately about how A.LHall.1 goes about his task.
When I wrote "Driver", I was thinking about things like: what happens when the important decisions affecting many, many people's lives are made by an inflexible, broken algorithm? What are the motivations for choosing to manage people this way, for choosing an algorithm with those specific horrifying behaviours, for keeping it in place even after those malfunctions are exposed and documented? Why make up this specific guy?
These are never mistakes. The stakes are too high for that good faith first assumption of innocent error to hold up. The purpose of a system is what it does.
And what would you, a mere witch, do, faced with the "Lena"/"Driver" universe? What would your reaction be? To update Wikipedia?
"Lena" is eligible for 2023's Hugo Award for Best Short Story. (2,015 words long by my count and first saw print in 2022. Actually, all ten stories in Valuable Humans in Transit and Other Stories are eligible, including "Driver", but if I had to pick one, "Lena" is the one.)
"Driver" is eligible for 2023's Nebula Award for Best Short Story. (1,541 words long by my count and was first published in any form in 2022. "Lena" is not eligible due to having been first published on the web in 2021.)