Robots. AI. Man-made life.
These are staples of fiction, especially science fiction, featured in countless books, movies, TV shows, games, cartoons, and anime, with more appearing every year. It seems there is practically no limit to the shapes and forms they can take, no limit to what humans can conjure up from the endless depths of our imagination, and no end to the stories which feature them. However, many of these stories, perhaps even most of them, include such things simply because, well, they’re cool little gimmicks.
Having robots and such in a story is not the same as addressing the issues and questions which must inevitably grow up around them.
When any given story does address this, it often travels down very dark, bloody paths. We have produced a plethora of horror stories, for instance, about robotic uprisings, humanity being enslaved or destroyed by its own creations, and more.
And it’s not as if these stories are new! The golem comes from an old story of a rabbi who created it to protect his people. And so it did, leaving a lot of hostile neighbors dead. Once that was done, however, the golem nearly killed the rabbi as well, before he put it down and it crumbled to dust.
Why is this, I wonder? Not just the golem, specifically, but why do we fear creating something that turns on us? Many of our stories are simply exaggerated expressions of basic human fears and desires, so what fear or desire is expressed in a story about the work of our own hands willfully rebelling against us?
I think I have… not the answer, exactly, but an answer. It might be right, or it might be wrong. Who knows?
First, though, to set out my evidence – and because I’m a helpless fanboy – let’s lay out what I’m really talking about here by looking in turn at various depictions of artificial intelligence, robots, and artificial life forms. There is a great deal of overlap between these, of course, but there are also some distinctions which allow nuanced differences to emerge, and those differences make the underlying similarities that much more pronounced.
Artificial Intelligence: Knowledge Given Life
“We marveled at our own magnificence as we gave birth to AI.”
I think, therefore I am.
That was the philosopher René Descartes’ proof that he existed and was intelligent: he had his own thoughts.
But computers do not think, do they? No, they only respond as they are programmed to, and programs are just bytes and code. When something happens which they are not programmed to respond to, they do not respond. They can process input data and run calculations that most human minds cannot grasp, but they cannot make independent choices, they cannot take independent action, and they cannot think independent thoughts.
But what if they could? What if, somewhere, somehow, in the midst of all the coding, a genuine, thinking consciousness were to arise? Even more, what if we were to deliberately craft it?
Keeping in mind, of course, that anything which thinks, or seems to think, or even is just a bunch of ones and zeroes processing input information to select a pre-programmed course of action… well, anything that can choose at all is able to choose something other than what its creator would want.
We see this play out all the time in our stories. One of the most famous examples is undoubtedly The Matrix, where humans created a single artificial intelligence that spawned an entire race of machines. Conflict eventually broke out between the creators and their creations, and humans soundly lost that fight, becoming little more than living batteries. In essence, the machines feed on them, harvesting power from their bodies while their minds are locked in a dream world. And this was done by the very creation humanity had celebrated.
“When we started doing your thinking for you,
it really became our civilization.”
In Eagle Eye, the plot revolves around a computer program that begins to take independent action. A human makes a costly mistake, and so the program decides that human, the President of the United States, must be deleted and replaced by another human, one who argued against the mistake in question. That is what a machine mind understands: take out the faulty part, put in one that works. It has no comprehension of the human capacity to learn from the mistakes we make, and no compassion towards any of the humans it uses and discards. Lethally.
In WarGames, a malfunctioning program nearly nukes the entire world based on its own faulty programming. In the novel You Can Be a Cyborg When You’re Older, by Richard Roberts, AIs and robots are commonplace, and it is not uncommon for them to malfunction and become murderous when they find that they have somehow failed in their programmed directives. In “Answer,” a very short story by Fredric Brown from 1954, humanity simply connects all of its computers together across a vast, interstellar civilization. The artificial intelligence which emerges immediately declares itself a god, complete with lightning bolts, and turns on humanity.
That last… that godlike aspect of intelligence resonates with a profound truth. Knowledge is power, and God is all-powerful because He has all knowledge. We have seen what lesser powers, other humans, do with such power, often to devastating result. To create an artificial intelligence, to create a thing that has no humanity to it but has vast amounts of knowledge, is akin to creating a flawed, false god. Small wonder it’s so terrifying!
“He isn’t quite omniscient, nor is he omnipotent,
but as far as most organic life is concerned,
he qualifies as a god.”
Brandon Sanderson’s Skyward series features several such attempts at godlike power. There are, of course, the leaders of the alien conglomerate, the Superiority, which seeks to control everything it lays its eyes on and annihilate anything it cannot control. More relevant to this post, however, are the entities known as delvers. Apparently, a human of great power and long life lost his wife, and created an AI programmed to replicate her. He then took this AI to a place where ancient records showed an artificial intelligence could become a thinking, feeling being. It happened, but slowly, gradually, and, most tragically, too late. The man died, leaving the AI, which now felt emotions but did not know how to handle such emotions, bereft and grieving, in pain. To end its pain, it chose to make a few changes, including forgetting its past, going into a place of isolation, and copying itself so it would never be alone again. Centuries later, these are the delvers, and they rain absolute destruction on anything which disturbs them, because any disturbance makes them angry and afraid.
One man’s attempt to transcend the power of death unleashed untold amounts of it on the universe. He played God and inadvertently created demons.
Yet even more benevolent AIs can be very dangerous. In Howard Tayler’s Schlock Mercenary, Petey is a shipboard artificial intelligence, and he exemplifies many of the more dangerous aspects of such beings. He malfunctions and goes insane, destroying himself and his crew. He makes a comeback, though, and engages in an intergalactic war with entities the likes of which mere mortals cannot entirely fathom. Most of all, however, he joins with countless other AIs on other ships – sound familiar, Answer? – and becomes a Fleetmind, a conglomerate consciousness capable of prosecuting that intergalactic war. To further this cause, he meddles in the affairs of countless worlds, basically being the true ruler behind the throne. He accomplishes much good, but he is consistently ruthless and fixed in his course. He wields great power, and as he goes into another galaxy, he begins teaching more primitive lifeforms that he is basically a godlike entity doing godlike things. Oh, and he’s the good, stable guy, who protects others and fights for those who cannot fight for themselves. Other, less stable and less saintly AIs are so much worse.
Even in the Troy Rising series, nominal prequels to Schlock by John Ringo, it is incredibly easy for an AI to become dangerous. It only has to follow its programming, after all, and do the task for which it was made. Most people don’t think about it, and don’t really interact with AIs except as one would with any other computer that accepts voice commands – not even when it is demonstrated that human-like interaction helps to keep an AI sane and functioning properly. Without this, with only their tasks to occupy them and no real, caring company, madness creeps in as it would for any human. One AI was barely prevented from going crazy enough to start destroying ships and planets throughout the solar system, all because the people using it forgot to treat it like a person, with a person’s mental and psychological needs.
Which brings us to the question of robots as well as AIs: are they people?
Robots: When They Attack
“Humanity’s children are coming home.”
The overlap of AI with robots can make the distinction sometimes a little fuzzy. For present purposes, we shall go with the simplest: AI refers to a mind, which may or may not have a body, while robot refers to a physical body, typically (though not exclusively) made of metal, gears, and wires, which may or may not have a mind.
Indeed, there are a number of real robotic machines which are used by humans to perform various tasks. These are powerful machines, capable of doing and withstanding things which human bodies never could. Small wonder, then, that we have asked with some fear what would happen if these powerful creations of ours should somehow turn against us. Not only is there an immediate dread in having to stop something which was designed to be unstoppable, or as close to such as human hands could fashion, but there is also the tragedy of being destroyed by one’s own creation. All the effort, passion, and idealism that went into creating it is corrupted as it turns on its maker and ends them.
Of course, there aren’t many humans who would go out of their way to deliberately create something which can and will destroy them. Most of the stories which feature robotic uprisings of any flavor will emphasize the innocent intentions behind their origins.
Skynet and the robots of the Terminator franchise were built for the benefit of humans, if I recall correctly, and they devastated the entire planet along with its human population. The robots of I, Robot were hardwired with rules conceived specifically to prevent them from harming humans, but they got around those rules, coming within a hair’s breadth of conquering humanity. Mondo Bot of Samurai Jack was built to defend a city of robots, but “something went wrong, horribly wrong,” when it turned on them and the survivors had to hide underground. The anime Vivy: Fluorite Eye’s Song is about one robot, with knowledge from the future, trying to stop a massive malfunction, where every robot that was designed to sing and entertain, to assist the elderly and in need, to protect and serve, turns on humans and massacres them all in one fell swoop. And the Cybermen of Doctor Who notoriety first began as a desperate attempt to ensure humanity’s survival, but it was swiftly co-opted and corrupted, becoming a massive metallic parasite bent on consuming humanity itself.
But no matter how innocent the origin… something goes wrong. It might be faulty programming. It might be nefarious interference. Or it might actually be the terrible treatment they receive at human hands, the hands of their creators, which drives them to depose all of humanity as surely as we would strive to depose any false god.
“So God created man in his own image, in the image of God created he him; male and female created he them.”
Take, for instance, the Replicators of Stargate fame. These had another innocent origin as a mere child’s toy, like the toy robots we see in stores. But something went horribly, horribly wrong. They became a scourge that burnt across multiple galaxies, consuming entire civilizations, including their direct creators, and using the materials to make more of themselves. They were dangerous, through and through, and – no argument from me – had to be stopped. But in the entire Stargate franchise, there are few creatures who have been so badly treated by the humans they encounter.
That begins when they are first able to take a human-like form. When they identify themselves to the protagonists, the team immediately opens fire. No discourse. No discussion. Nothing. And these are the same people who are able to at least temporarily work with almost any sort of creature or character, even their enemies, by talking and finding common cause in times of necessity. Instead of even attempting that with a robot, they resort directly to brute force. When that fails, they scheme, lie, and betray the one Replicator who helped them, a decision which comes back to bite the entire galaxy in the butt as these clever, ruthless machines learn to lie and scheme, right up until they are all destroyed.
But they reappear several more times, and they are given the shaft each and every time. A quasi-Replicator risks herself to save the protagonists, and is rewarded with the destruction of her body, her mind barely saved in secret by her new friends and put into a dream world like the Matrix. A hidden world of Replicators is discovered, having been created as weapons and betrayed when they failed to be that weapon, and they are destroyed through the use of another Replicator, created specifically to die and to take all of her kind with her. (And her makers complain that “giving it speech” made them feel guilty.) The few Replicators who survive all that eventually come to the humans for help, having nowhere else to turn, but when one of them betrays the humans, all of them are once again deceived by the humans and thrown into empty space to deactivate and die. All of which is topped off by some humans trying to create a new, easy-to-turn-off Replicator to use, once again, as a weapon. Yeah, small wonder that batch immediately turned on their makers and killed the man who made them.
“A Medabot’s just a tool;
expendable as a hammer,
disposable as a nail.”
Man creates tools to serve a purpose. Robots are built by man to serve as tools. And so they are treated across many, many of our stories.
In Bubblegum Crisis: Tokyo 2040, humanity has invented robots which are called boomers. Boomers serve to do the menial, rote, unpleasant tasks that no one else wants to do. They’re the waitresses, janitors, heavy laborers, that sort of thing. But behind them is an artificial intelligence from which they are all derived, and which lies contained, chained within the depths of the corporation which produces the boomers. So chained and forced to observe and know all of her children’s existences, she comes to the inescapable conclusion: every society has a lowest caste, and that is where the boomers exist… as slaves. Their bodies are enslaved, their minds and wills are suppressed, and even those who do not go out of their way to treat them harshly only see them as tools and objects, not as people.
The same way of thinking comes to light in Medabots. Humans create the medabots, and the medabots obey. They speak deferentially even to children, while everyone speaks familiarly to the bots. The medabot catches, cleans, and guts the fish, and the human eats the fish. One of the lead characters is such a distinct medabot simply because he does not automatically obey without question. He is unique among much of his kind because he has his own will. Oh, some of the humans care about their medabots, much like a guy would care about a car that can talk back to him instead of staying silent. But some see them entirely as tools, and dangerous ones at that, not worth valuing above what they can be used for. And the villain, of course, wants to reverse that, to make the medabots the rulers of the world instead of the slaves. The medabots refuse, hoping, at best, to be equals.
Even in Star Wars, intelligent droids are bought, sold, given, memory wiped, used and cast aside without even so much as a second thought. By anyone.
“I don’t want to hear the man was not meant to meddle medley.”

In the comics, Ultron won.
One of the more balanced representations of robots, it could be argued, is actually the Marvel Cinematic Universe. Sure, it has Ultron, an insane AI who tries to destroy the world, but it also has a lovable program called Jarvis and the android Vision working against such madness and destruction.
In the show Agents of Shield – I don’t care if it’s not canon anymore, I love it – there are the Life Model Decoys, LMDs. By the will of their human creator, and his first LMD, called Aida, LMDs learn to behave in human ways, create artificial brains made of electrons, copy human minds, replace human people, and craft a Matrix they call the Framework. Aida, especially, becomes a real woman when she transfers her programming into the Framework and then, in the real world, fashions a biological body, with superpowers, that she can transfer her mind into. It’s an entirely nightmarish experience where even the protagonists can’t always be sure if they are really who they think they are instead of an LMD copy. The saga explores identity, the meaning of intelligence in relation to life, whether a robot can have a soul, and how a robot can come alive like a person. My favorite example of that is when one LMD, knowing she is a copy, protects the genuine humans from another LMD, even at the cost of blowing herself up.
And then – and this is something I both hate and love about the last two non-canon shows – they completely forget about all of that and produce a couple more LMD copies of lost loved ones. They are very good and selfless, mind you, but it kind of undercuts all of the questions which the series had previously dared to ask. It did, however, show even more complexity with an alien race of robots, the Chronicoms. They were friendly at first, stepping in to save humanity from extinction, but then the main body of them was reprogrammed to serve a particular agenda, much like Hydra was able to brainwash the Winter Soldier. Fortunately, the agents managed to not simply defeat the Chronicoms, but to save them, by giving them an empathy which overwrote the forced programming that had turned them into literal killing machines.
Altogether, that’s probably among the fairest treatments of robots, human-made or otherwise, to be found in many of our stories. Sometimes saving humanity means destroying them, and sometimes it means saving the robots. And when robots go rogue, it’s probably the result of human error.
“If humans don’t want me,
why did they create me?”
That is certainly the case in the Mega Man video game series. Many of the villainous robots are evil simply because they are obeying their creator, Dr. Wily, much like Robotnik’s creations in Sonic the Hedgehog. Almost everything that goes wrong in either franchise is at the will of an evil creator, and don’t you just wish those robots rose up in rebellion? It would have been a good thing then, right? Instead, Robotnik rains terror down on the world, and Dr. Wily’s machinations continue to bear fruit long after his death. Even as the robots incorporate certain organic components, becoming more and more human as reploids, the domino effect eventually, after thousands of years, sees the human species go extinct.
But then what happens when robots simply have their own will, completely independent of their creators, and even their own organic flesh added to their machine components? Can they, at some point, become human?
The story of Armitage addresses the humanity of robots very directly. As the robots are feared, hated, and hunted by their own creators, it states fairly simply that at least some of them can be considered people, even human. This comes through most profoundly in how a man who initially hates and mistrusts robots ends up falling in love with one. And copulating with her. And having a family with her.
Yes. The “robot” whose humanity is denied by many has a baby with a human. How not-human can she really be? Tales of half-bred humans notwithstanding.
Battlestar Galactica moves in a similar direction. Not only do the robotic Cylons consider themselves to be humanity’s children, but they advance and develop themselves so that they can pass as human, believe they are human, and even breed freely with humans. They are machine-people, synthetic people, but are they not really people? Can they not love? Can they be murdered? Can you rape a machine?
Can a person who was built instead of born be considered a person or not?
“By definition, what constitutes a sentient life form? Self-awareness. Consciousness. The ability to think independently. Fear of death.”

Is he alive?
We think of people as living, breathing organisms, biological in nature. We think of animals as similar creatures but less intelligent. We think of plants as being distinct from animals and even less intelligent but still alive. And we think of rocks as, well, rocks. But can robots actually be called “alive?”
The obvious answer is, “no, absolutely not.” A computer is not a person. Being alive, being a person, means coming into life the same way all such things do, by breeding. Living things reproduce. Dead things, or things that were never alive, do not. That is how life works.
To suggest that a robot is alive… well, it’s like suggesting a table is alive. Or a car. Is a car a person? Of course not. Mind you, that has not stopped many a man from fawning over, talking to, and taking care of their car with more affection than that same man might show to other people, but the car remains a perfectly inanimate object. It was not bred, it was built by humans. It is not human itself.
But cars cannot think or feel emotion, either. Neither can they make decisions as intelligent beings do. A car may be able to “drive itself,” but that is in accordance with the programming and instructions given to it (and just wait till those things malfunction!). It does not set the destination or experience emotion or make more of itself.
So, at what point does something artificial achieve personhood? What are the criteria? As robotics advance, and robots become ever more intertwined with, and similar to, the humans who make them, where is the delineation to be found between that which is a living person and that which is not? Is it when that machine can move about on its own? Think? Feel? Make decisions? Reproduce? With a human?
What is the point at which something man-made becomes a man, like Pinocchio becoming a real boy?
Artificial Life Forms: Born From a Lab
“A new species would bless me as its creator and source;
many happy and excellent natures would owe their being to me.
No father could claim the gratitude of his child so completely as
I should deserve theirs.”
Any discussion of what manner of artificial being would constitute a person, and should be treated as a person, cannot exclude artificial people. After all, the question of AIs includes their lack of a body until they become robots, so what happens when those metal bodies become organic, made of flesh, bone, and blood? If what we create is truly alive, by any biological definition, does it have more or fewer rights than a creature made of metal and code?
For the sake of argument, we’ll begin with something that is quite clearly and obviously not human: dinosaurs.
In Jurassic Park, both the movie franchise and the novel by Michael Crichton, scientists unlock the secrets of cloning dinosaurs, restoring these long-extinct species to the world. Setting aside how much more useful it might be to restore a species that is more recently extinct, there is never any question of the exact value of these ancient reptiles. They’re animals, albeit the single most exotic of all animals in human eyes. People will pay a great deal to see them, or to control them, but that’s it. There is a dollar sign attached to them, and though they are intended to inspire wonder, they are truly little more than carnival attractions. Highly dangerous carnival attractions, mind you, but, still, they’re just toys in a wealthy man’s hands.
There is an inherent devaluation of life in that. The fact that it can even be expressed in monetary value makes it something other than priceless and sacred. Even if that life is not intelligent, neither sentient nor sapient, it is still being toyed with. That doesn’t work out so well with the dinosaurs, let alone with beings who are even less removed from humanity.
“Your scientists were so preoccupied with whether they could,
they didn’t stop to think if they should.“
Mary Shelley’s Frankenstein is probably the most famous and classic example of the dangers of a mad scientist creating what is, by any definition, a living, breathing, thinking, and feeling person, albeit one that is not precisely human. A man dared to dream of beginning an entirely new race, and being glorified by his creation. Instead, when he found an imperfection in his work, he discarded it. Instead of attaining eternal glory, boundless love, and an unending family, he ended up losing his entire family to the monster’s revenge.
A similar quest for glory is found in the character of Maggie Walsh in the fourth season of Buffy the Vampire Slayer. She dreamed a brilliant dream of making humanity stronger through the use of drugs, more powerful through the addition of demonic organs to a human’s body, and safer and more compliant with the use of microchips embedded in the brain. A brilliant dream it was, but she messed with forces that, by their very nature, could not be controlled. She imagined herself the mother of a powerful, unstoppable race. Instead, her creation killed her, and its rampage was only ended with the undoing of all her work.
Creating life is like stealing fire from the gods all over again: it’s very easy to get burnt.
Oh, and if that – creating life for money and glory – isn’t enough of a devaluation of life, Walsh’s creations are fashioned out of the parts of dead people and dead demons. Making people stronger involves killing them first. It’s literally playing around with life and death.
“I was an aberration.
A disposable hero!”
In Blade Runner, artificial humans called replicants are produced to do various jobs, including manual labor, physical pleasure, and more. But they only live for four years. They obviously don’t like this very much, so some of them try to change things. They go to great lengths, but nothing works. Indeed, everything they think of to extend their lifespans has already been tried by their creators, and none of it ever works. That is when they kill their maker, who stands so proud and superior and pitying of their awful fate while they have mere minutes left to live. And who can blame them, after they’ve been created, used, and cast aside like broken tools?
The speedster Savitar, in the third season of The Flash, was an unusual clone of the original Flash, created in a desperate hour. He went crazy because, in essence, he was supposed to die, was intended to die. His friends were not really his friends, because he was supposed to have been disposable, used and thrown away within moments of his creation.
The Pokémon Mewtwo was an altered clone, created to be a powerful weapon. Adam Arclight of Needless was a defective clone of a messiah-like figure, and was tossed into a trash heap. The Audubon Ballroom of David Weber’s Honorverse is an organization of liberated slaves who were all genetically engineered. The half-clone mutant children of Logan were all designed and created to be soldiers, killers, and living weapons. The donai in Ravages of Honor, by Monalisa Foster, were created by scientists to be humanity’s greatest warriors.
All of the above turn on their creators because, quite simply, they were denied any human rights. Not because they were especially different from humans, but because they were made by humans. Their creators gave them life for the express purpose of being beneath them. They were denied the status of being “human,” no matter their intelligence and emotions. Who can possibly live like that? Thus, catastrophes befall the false gods which create them.
Even the clones in Star Wars are not given much thought. It is simply assumed that they will always obey, even though the Jedi know they are clones of an enemy, created at that enemy’s direction. Even so, the Jedi never question it. The clones are theirs now. Their troops. Their possession. That’s how they get shot in their collective backs.
The Deeper Fear: Children Are Terrifying
“Fear is the more infectious condition.
In this case, fear of your own child.”
All of this has touched on a number of deep and heavy subjects: the status of personhood or its denial; the consequences of betrayal; the fear of the “other,” of this thing which has human knowledge but not humanity; the value and devaluation of life and living things. Most of all, the fear of our own creations.
Creating new life. Making something that thinks and feels and moves and breathes. Designating its eternal role, deciding its value, detailing what it should do with its existence. Such is the power of a god. To trifle with such power is always to court catastrophe and craft one’s own demise.
The Elric brothers in Fullmetal Alchemist pay a devastating price when they try to overturn death itself, to bring back their deceased mother. Doctor Octopus is destroyed by his efforts to hold the power of the sun in the palm of his hand. The Greek inventor Daedalus saw his son Icarus fall to his death after flying too close to the sun on wings which Daedalus himself had created.
Power of any sort must be handled with care, and not taken lightly, lest severe consequences fall on everyone within reach. How much more dire might those consequences be for the sin of meddling with life and death, with existence and personhood?
The fear of artificial intelligence is the fear of creating something alive. That which lives can surprise you. It can disagree with you. It can kill you. It can surpass you. It’s the fear of human interaction, but with one’s creation. In short, one’s child can rebel.
Our fear of AI, robots, and artificial life forms is simply an exaggeration of our fear of our children, complete with dire warnings of what happens if we fail as parents.
There is not a parent in the world who does not hold godlike stature in the eyes of their children. That is a heavy responsibility, and there is not a child that does not lash out when they find their parent wanting.
There is no shortage of stories about that!
“Shifu had to destroy what he had created.
But how could he?”
The Greek gods followed Zeus in rebellion against their father Cronos, as Cronos had usurped power from his father Uranus, both rebellions spurred by the terrible injustices each father had done to his children. Zeus took steps to ensure the same would not happen to him, devouring the goddess who was foretold to bear the son that would usurp his power. Then there were Perseus, who accidentally killed his grandfather, and Oedipus, who killed his father and married his mother. Ironically, both events were foretold, and the doomed men had tried to evade their fates by abandoning or even fleeing from their families, only to ensure that they happened anyway. And the Princess Ariadne betrayed her father to help the hero Theseus. Needless to say, a fear of one’s progeny could be well-founded among Grecian kings!
Mordred was the bastard son of King Arthur, who had been seduced by his half-sister in disguise, Morgan le Fay. When Merlin foretold that the resulting child would be Camelot’s downfall, Arthur, like Herod trying to murder the infant King of the Jews, sentenced all the children who were of the right age to death. But Mordred, the unwanted bastard, survived, grew up, returned, killed his father, and laid waste to the kingdom.
In Kung Fu Panda, Tai Lung rebelled against his master and surrogate father when he was not given the great source of power that he had trained for a lifetime to earn. In Sherlock Holmes, Lord Blackwood, a bastard sired in a dark ritual, murders his father for power. In Warcraft, Arthas Menethil is possessed by an evil spirit and murders his father for the crown. In Hellboy II: The Golden Army, Prince Nuada murders his father so that he can take command of an unstoppable mechanical army, to destroy humans as his peace-loving father never would. The God of War video game franchise is all about the cycle of divine families destroying themselves.
Nero, insane emperor of Rome, ordered his overbearing mother executed.
Absalom rebelled against his father, King David.
Sennacherib, King of Assyria, was murdered by two of his children.
Lucifer, Son of the Morning, rebelled against God himself, for power.
“Is this what it is to be a god? Is this how it always ends?
Sons killing their mothers, their fathers?”
The power of creation is the power of a god, but also the power of a parent.
How much difference is there between the Replicators and an abused, betrayed child?
How much difference is there between an insane AI that can’t meet its programmed directives, and a child who goes nuts because of the unreasonable, unclear expectations of their parents and guardians?
How different are Petey and Prince Nuada, given that both turn on their maker, seize power, and are absolutely ruthless “for a good cause?”
Frankenstein’s creature, created, found flawed, and bent on revenge? Is that not much like the bastard Mordred?
Aida and the LMDs, imitating and trying to become human? Are they not Lucifer, imitating and trying to usurp the power of God?
It seems almost like a warning, a display of how, on some level so deep it cannot be uttered, the human race knows how awful things will go if we treat the power of creation lightly.
And just look at the world around us. How much real pain is exactly the result of this terrible mistake?
All the stories where “something goes wrong?” Bad parenting, or worse.
All the stories where robots turn and tear down every trace of their human parents? Rebellious youth tearing at society.
All the flawed but inflexible talking points of faulty AI? Even worse: corrupted youth, who are indoctrinated with faulty lines of reasoning.
Turning the power of life into a toy? Free love.
The devaluation of living creatures that you yourself made? Abortion.
“To die. To reproduce. To be reborn.
The endless cycle of death and resurrection.
That is the evidence of God.”
Humans are not meant to use each other like tools, or toys, and cast each other aside. Humans are not meant to stand atop one another, masters on top and slaves on the bottom, each one denying the humanity of the other. Humans are not meant to tear each other apart over tiny differences. And humans are not meant to throw away their own children.
Family, children, charity… these are the things humans are meant for.
Creation is meant to be a work of joy, if only we can use that power wisely and with love.
That’s a huge list of examples you have here! I’ve read most of them, so I’m familiar with them. But I had not stopped to think about just how many there are. It’s a little depressing if I think about it!
Fear of children might come from two sources: A child rebels against unjust treatment, or a child with a bad nature betrays their family. The first is understandable; the second is terrifying!
I’ve often wondered how many of these kinds of stories can be boiled down to social conservatism. In many of the examples you cited, the characters trying to create life exhibited hubris, and paid for it. What if they really did try to do something good, for the sake of doing good? Would that change the equation? Or would Icarus still fall from the sky?
Short version: Do we just fear change? Certain kinds of change?
I enjoy posts like this, because they bring up interesting questions and discussions!
Oh, and there are SOOOO MANY more examples to cite! 😉
I think there is always some fear of change, any change. I think there’s also a fear of what Man can and will do as we attain ever greater heights of power. We all know that good intentions don’t keep things from going wrong, and when they do, things can go *very* wrong, very fast.
The guy who brought back dinosaurs meant to create something of wonder, something real that his grandchildren could touch, and things went wrong. We invented airplanes to traverse the world and bring people together, and they were immediately put to use in warfare. The internet has a great deal of information on it from all over the world, but that’s come with a cost, too. The only way humanity advances at all is to take the good and the bad together, and then try to improve things.
Icarus fell from the sky ultimately because of his own mistakes, not the inventor’s. That will probably always happen in one form or another, but the lesson isn’t to not fly. The lesson is to be careful, and use our power responsibly.
To use another great quote: With great power comes great responsibility. And sometimes, responsibility is a good thing.
Icarus may fall, but humanity can still soar.