This was intended to be a movie review of “The Creator,” but I have been thinking a lot about robots, so this post will be more about robots in general than about that movie. Therefore, I will say up front: “The Creator” is interesting but flawed, and I would recommend waiting to see it until it lands on a service you can watch for free. Then you can view it and argue about it, as I’ve been doing all week. It doesn’t work, because the plot and the characters’ actions don’t make sense, but it may leave you thoughtful.
But what does it mean that we keep making movies about robots that are sought after and then destroyed? I spent the week watching other movies on the same theme: “Blade Runner 2049” and “A.I.” in particular. And, of course, there’s always “Terminator.”
The problem in these futuristic visions is that machines are created to perform complex tasks with some level of artificial intelligence. The scientists who create the machines make them humanoid, with some degree of human likeness, and may or may not give them the ability to learn and change. Other humans enjoy them but also continue to treat them as disposable machines, and therein lies the conflict.
Why Does a Robot Want Candy?
“The Creator” takes place in 2070, after America has banned A.I. “robots,” though they still exist in Southeast Asia. (Allegedly, A.I. was banned because it attacked Los Angeles with a nuke.) America has built a giant space/sky weapon, which is not an A.I. robot but just a giant piece of murderous technology. So somehow murderous technology is good, but A.I. is bad? At least to the Americans? The U.S. military, with this weapon and many nasty soldiers, is hunting down A.I. and shooting any robots (or humans) that are near the robots. There is clearly a very anti-American sentiment in the movie; I thought perhaps it was intended to appeal more to overseas audiences than to U.S. ones. In the movie, it is rumored that a powerful new A.I. weapon being built in New Asia will let the A.I.s defend themselves against the Americans, who are flying and driving around Southeast Asia attacking whomever they want, with no international repercussions. (You start to see where I had a problem.) That weapon, and you could figure this out from the trailer, is a child.
Our hero (John David Washington), who fell in love while trying to infiltrate New Asia to find out about the weapon, is sent back to retrieve this child. There is action, lots of chases and shooting, and surprises about who is still alive (or maybe not). Allison Janney (whom I adore) plays a vicious American colonel with a scar who orders lots of things blown up. All of that was kind of ludicrous. Yes, like “Rogue One,” except “Rogue One” had a point, because it’s part of a larger story arc.
I had many questions about this robot A.I. child. She always wants to watch cartoons and, like a child, stubbornly refuses to let the adult turn them off. She wants candy and ice cream. And she can send out electronic pulses that destroy machines (and apparently humans). She was based on scans of human children, and one character says that she will get more powerful if she grows. But another says that her machine parts are among the most sophisticated he has ever seen, which suggests she was created, not “grown.”
So was she a human child who was turned into a machine and can still grow? Or was she a machine grafted onto an existing child? What is her programming? Did someone program her to like candy and cartoons? This is my core problem with this idea of the A.I., bot, sim, replicant, whatchamacallit. Is this a very sophisticated machine with humanoid features, which “wants” to be free because it is intelligent enough to know what freedom is? Or is this a growing human who has been enhanced with machine-strength capabilities? It seems to be the former. But then she was programmed. Why was she programmed to want candy? To blend in with the humans?
It’s a world that has designed machines so sophisticated that they learn. This world puts human faces on them and gives them human pleasure receptors so that they want candy, ice cream, and freedom. And then it wants to be able to turn them off at will. Therein lies the dramatic tension.
Retiring Replicants
In “Blade Runner 2049” (and the original “Blade Runner”), scientists create replicants, bio-engineered humans with superhuman capabilities. These replicants are in demand offworld to do the dirty jobs that make colonies habitable for humans, who are moving there because they made Earth a trash heap. (Fundamental question: if they can make offworld colonies habitable, why not clean up Earth? Never mind. I love this movie, and I don’t want to overthink it. It’s fascinating and visually stunning, but let’s just talk about the robots.)
In the original “Blade Runner,” the replicants are kind of pissed at being enslaved, and they return to Earth looking for a way around their limited lifespan. They want to live longer; what could be more human? In “2049,” scientists have adjusted replicant programming to make them obedient, so they don’t just start killing everyone like in the original. Blade runner K (Ryan Gosling) demonstrates the programming, to chilling effect. He is both replicant and replicant hunter (blade runners are police who kill replicants that deviate from their programming). The plot turns on a child born to a replicant, whom everybody wants to find. The scientist wants to be able to reproduce docile replicants in complete uniformity without having to engineer them, because that would be cheaper. The replicant freedom movement wants to reproduce in order to go to war against humans. And who wouldn’t, at this point, after humans trashed the planet and created a slave race?
Humans in this world treat replicants as disposable, certainly as less than human, calling them “skinjobs.” The creepy scientist in charge sees fit to create them and kill them at will. Other than the mysterious child, they are created as adults and aren’t able to grow or age. But in other respects they are human: they need to eat, and they have memories (even if implanted).
The drama here is that the dividing line between human and not-human is blurry. The chief replicant who works for our creepy scientist is both killer and empathizer; she cries when her boss stabs a replicant. She is the enforcer, but she feels both anger and sadness over the treatment of other replicants. Then there is Joi (Ana de Armas), a holographic A.I. that K owns. He buys her a Google/Amazon-style enhancer that allows the hologram to detach from its machine. In turn, she hires a replicant prostitute to have sex with K. Critics did not like the objectification of women in the sex scene, but I think that misses the point. Two replicants engage in sex aided by a hologram. Is that degrading? Is it degrading for the actors, or because we’re supposed to think of them as human beings? Is it gratuitous to watch mechanical beings have sex? Does it matter that it’s a movie, where by definition they’re all just pixels, human or not?
The fundamental tension is that bio-engineered humans were created to perform functions for their owners, but they have enough free will to act as humans who want to be free. Certainly, Ryan Gosling’s K is hoping to find that he was born and grew up, not bio-engineered. My take is that we might not want to bio-engineer humans with superhuman functions. It might be better to design machines to do those functions and leave them looking like machines, without too much ability to think for themselves. We are certainly going to need machines and technology when we go into space; maybe we need to be very thoughtful about how much they should resemble humans.
Making Real Boys
“A.I.: Artificial Intelligence” is the third movie that can help fill in the very messy gaps in this idea of how we might treat future robots. As an aside, I had several chances to watch this movie and avoided it for twenty years, and it turns out that it is pretty bad, other than as a curiosity and as fodder for a blog post about robots. The combination of late-career Stanley Kubrick and Steven Spielberg is both creepy and bizarre: fancy special effects that didn’t age well, combined with syrupy scenes, strange behavior, and a score by John Williams that sounds like he never watched or heard anything about the movie. Moving on: what’s this got to do with robots?
In “A.I.,” the premise is that rising sea levels have devastated the human population, so the survivors create artificial humans for companionship. Mecha humanoid robots are programmed but without emotion, until scientist William Hurt creates a little-boy mecha prototype who is designed to love.
Of course, two problems are immediately obvious. First, the kid only loves; he doesn’t have complex emotions. His love is not particularly believable; it’s the love of something that has been programmed. Second, he isn’t programmed or designed to grow. Would you want a seven-year-old child hanging around who never ages and never says much more than “I love you so-o-o much, Mommy!”? If this is what his designers had in mind, they don’t really understand what love is, and if it was supposed to be ironic, it doesn’t come across that way. The problem with robots here (and it’s made clear that Haley Joel Osment is a robot who can’t even eat food without ruining his metal innards) is that if they are designed for human companionship, they are no more sophisticated than the animated teddy bear. Who, actually, is pretty creepy.
To recap: 1) it’s a problem to create robots that look like humans, give them an understanding of free will, and then expect to keep them as slaves or turn them off at will; 2) it’s a problem to bio-engineer humans with superhero strength and try to limit them to being slaves, using psychological conditioning to keep them from seeking freedom; and 3) it’s a problem to design human-appearing robots, treat them as disposable, and let them loose to suffer or try to become real. The key failure seems to be making them look human and then enslaving them or treating them as disposable. My modest proposal: how about we don’t try to make them look or seem human?
The Real Problem with Robots
Right now, fortunately for those of us trying to avoid the terrible things that happen in all of these movies (or the A.I. singularity from “Terminator”), the big problem isn’t robots seeking sentience. The big problem is robots that don’t work. Cruise, the driverless taxi service in SF, was making strides until one of its cars was involved in an accident and didn’t stop until it had run over an injured person. It didn’t cause the accident, but it then didn’t keep the humans nearby safe. This was compounded when the company withheld footage of the accident from the DMV. So now they’ve suspended operations ’cause they’re in big trouble.
It is a problem when students use ChatGPT to cheat, and when ChatGPT is trained on written content in violation of copyright. But an even bigger problem is that ChatGPT gets information wrong, so it is essentially untrustworthy as a reference tool. It’s kind of useful as a way to make students verify information, if that’s something the teacher wants to teach.
Just yesterday, the NYT had a story about the use of technology in hotels up in Detroit, with “smart” bartenders and apps telling housekeepers which room to clean. The hotel employees went on strike, mostly because the machines don’t work. The bar machines sometimes spray customers or run out of the ingredients they are supposed to stock. Result: no tip. The room-cleaning apps send staff into rooms where customers are still sleeping. Result: customer complaints about the staff, not the app. In other words, the problem is that the technology doesn’t work consistently.
Right now, we don’t need to worry about enslaving humanoid robots, though we might want to write down for future edification, Note To Self: Do not create and enslave humanoid robots. In the short run, the bigger problem is getting the robots to work.