More than twenty years ago, Steven Spielberg completed Stanley Kubrick’s final passion project, a film about artificial intelligence and machines that is equal parts prophetic from a modern perspective, uneven as a movie, and unfulfilled as a work of art.
Today, everyone is talking about Artificial Intelligence. The emergence of ChatGPT and subsequent competitors capable of interacting with people using natural language has officially brought the technology into the mainstream, turning everyone into an armchair expert. ChatGPT and others are, of course, the pinnacle of decades' worth of work behind the scenes, and most people have been exposed to some form of machine learning over the past ten years. Alexa, Siri, Google Assistant, and more are all earlier versions of this technology, introducing us to the idea that you can talk to a computer in the first place.

The same was not true back in 2001, when legendary director Steven Spielberg completed equally legendary director Stanley Kubrick's passion project, A.I. Artificial Intelligence, after Kubrick's untimely passing in 1999. Kubrick, of course, was crucial to introducing people to the idea that machines were capable of thought in the first place. Thirty-three years earlier, 2001: A Space Odyssey featured a computer, HAL, as a main antagonist. HAL ultimately concludes that his mission is too important for humans, never known for their reliability or rationality, and decides to execute the crew of the Discovery One until Dr. David Bowman shuts him down in one of film's most dramatic and memorable sequences. The implications of machine intelligence must have lingered with Kubrick long after, because he began work on what would become A.I. Artificial Intelligence ten years later, selecting Brian Aldiss' short story "Super-Toys Last All Summer Long" as source material. Unlike HAL, the computers in this story would be able to interact with the real world through lifelike robot bodies, and unlike 2001 in general, the story would center on humanity's relationship with machines, questioning whether or not we could truly love them, rather than a grander arc about evolution and enlightenment.
Kubrick worked on the project on and off for two decades, across three screenwriters, including Mr. Aldiss himself. In 1994, the film finally entered pre-production, but the technology was not available at the time to realize his vision. "We tried to construct a little boy with a movable rubber face to see whether we could make it look appealing," longtime Kubrick producer Jan Harlan reflected. "But it was a total failure, it looked awful." The same was said to be true of attempts at using early computer graphics, which ultimately led Kubrick to direct Eyes Wide Shut instead, a film that in its own way is Kubrick's analysis of human relationships, including love and lust, ideas he tackled once before in Lolita. At points, he is said to have toyed with the idea of handing the project over to Steven Spielberg, given the focus on children and relationships, but that did not officially happen until after his death. Mr. Spielberg then adapted the 90-page treatment Kubrick had been working on for decades, writing the screenplay himself for the first time since Close Encounters of the Third Kind. Spielberg being Spielberg, the film was produced in less than two years and appeared in theaters in 2001, fittingly, to rather mixed reviews. The review aggregation website Rotten Tomatoes describes it as "a curious, not always seamless, amalgamation of Kubrick's chilly bleakness and Spielberg's warm-hearted optimism. A.I. is, in a word, fascinating." There is a lot of truth to this, beginning with an opening sequence that is all the more striking considering the developments in Artificial Intelligence over the past two decades.
Professor Allen Hobby is the head of Cybertronics, a company that makes robots, known as mecha, that do everything from performing household chores to satisfying sexual desires, but these mecha are missing something that keeps them from being truly intelligent. They can be made to look like people, they can do many of the things people can do, including carrying on a conversation, and yet they remain clearly machines. They respond to commands. They perform the functions programmed into them. These functions can be advanced, like a damage detection system that simulates pain, but they are merely functions. Turn one off, and it will allow itself to be destroyed. Professor Hobby believes technology has advanced to the point where mecha can reach another level of awareness, and he proposes to build a robot that can love and be loved. Here, Mr. Spielberg presents the current state of Artificial Intelligence quite clearly and succinctly; the language the characters use to describe how these machines work aside, the overall approach is prophetic considering how technology has evolved over the past two decades. Robotics aside, ChatGPT and other technologies demonstrate that we can create the output, but something remains missing on the inside. Professor Hobby originally defines it as self-directed action, that is, the ability to identify your own goals and implement your own plans, which was once considered a prerequisite for true Artificial Intelligence, known as Artificial General Intelligence. Later in the film, this concept is revisited and refined in an even more clarifying manner: humans have the ability to believe what is impossible and strive for it. This strikes at the center of a long-running philosophical debate. If I say "the sky is blue," we can easily evaluate whether that is true or false, but if I say "I believe the sky is red," the situation is much murkier. I am clearly stating a falsehood, but I can just as clearly be a madman and believe it.
The statement is therefore true, even though the belief is false. How can humans be wired in such a way that we can believe easily falsifiable things, sometimes with an intense passion?
The vehicle for this exploration of human and Artificial Intelligence is David, a mecha boy programmed to love his parents. Henry and Monica Swinton's natural child suffers from an unnamed disease and has been placed in suspended animation for an indefinite period until a cure can be found. Henry is an employee of Cybertronics, and Professor Hobby arranges to have David placed in their home for testing. Monica has been devastated by the loss of her son, making her an ideal candidate for a robot replacement that can truly love, but there is a twist: the process of imprinting David is irreversible. He cannot be reprogrammed to love another. If Henry or Monica ever tired of their robot child, he would have to be destroyed. This is an intriguing premise. One of the key, though often unrecognized, aspects of human consciousness is its all-consuming persistence. We cannot be turned off, we cannot be forced to forget, we cannot alter our memories. There is no reset button. Consciousness, as in everything we are at any moment in time, is irrevocable short of sleep or death, and even when we sleep it bubbles up in dreams. We cannot take a serial killer and remove their darker impulses, though Kubrick himself explored this idea in the masterful A Clockwork Orange. We cannot get over the death of a loved one simply by suppressing the feeling of loss. Computers, at least so far, do not work that way. Even if we cannot fully understand how or why ChatGPT responds to a given query as it does, because of the underlying complexity, the development team that supports it can alter everything that comprises the software whenever and however they would like, changing either the parameters or the data set. They could also reset ChatGPT to its original state.
Mr. Spielberg, whether aware of it or not, poses an interesting question: Is persistence a requirement of consciousness? What would happen if we created a ChatGPT that was sealed off and could not be altered except as a result of its own feedback loops, and its internal circuitry were free to evolve on its own? I'm reminded of an independent role-playing game from a few years ago, Darkest Dungeon. The game featured an intriguing premise: there was no saving or undo. You could pause the game and resume where you left off, but like life itself, there was no going back. If one of your characters died, they were gone forever. Characters could also suffer serious injuries, mental illness, and other debilitating diseases, all of which were largely irrevocable. If your character went mad, they were mad. You could not replay the events leading up to it and try to find a better outcome. You could only live with the consequences. Of course, Darkest Dungeon was not designed to be an experiment in Artificial Intelligence, and the characters did not do anything unless directed by a human, but the proposition that things in the game were permanent, as they are in real life, radically changed the experience. The characters themselves seemed a lot more human, and moments when they were in peril were significantly more charged, suggesting at a minimum that persistence of consciousness would change the way we react to machines, if not the machines themselves.
Sadly, this aspect of David's nature functions primarily as a plot device, making it more difficult for Henry and Monica to let go knowing he would be destroyed, but not really impacting the story in any meaningful way. This is doubly true given that David seems a strange robot to begin with. Generally speaking, human fears concerning Artificial Intelligence originate from the reality that machines are rapidly getting better than us at just about everything, from chess to writing a resume, threatening to displace us entirely. The sphere of action where humans are superior is rapidly shrinking. David, on the other hand, does not seem to be particularly good at anything. He is not the ideal child one would expect, far from it. Nor is the vision of what it means for a computer to love truly realized beyond David's protestations to that effect and his desire to be with his mother, as if love were merely a matter of how much you say it and whine about it. When Henry and Monica's son, Martin, is cured and returns to the household, the film sets up something of a rivalry between the real boy and the fake one. Martin is understandably jealous of this machine interloper, but David doesn't really do anything to earn that jealousy, nor does it appear to be reciprocated, as one would expect if David were truly capable of love.
Instead, David acts in an increasingly bizarre manner that doesn't make much sense either in the context of the film or for an Artificial Intelligence. First, he is prodded by Martin to eat green vegetables and stick out his tongue as children do, even knowing that ingesting food could permanently disable him. Sure enough, he ends up in the robot equivalent of surgery after his face starts falling off. Second, he is prodded by Martin again to sneak into their parents' bedroom and cut off a lock of his mother's hair. I probably don't need to tell you how that turns out. Third, he is bullied by Martin's friends, one of whom cuts him with a knife to see how David responds to pain. Apparently, a robot responds to pain by flipping out. He starts screaming over and over again for Martin to protect him, then goes into some kind of catatonic state and falls into a pool with his arms locked around Martin, almost drowning him. Whether or not he is capable of love or other emotions, David is simply not a very high-performing robot, which inexplicably misses the entire point of Artificial Intelligence and its dangerous allure. We use these machines because they are better than us, and we did so even before the advent of computers. The automobile did not replace the horse and buggy because it was slower, harder to maintain, less efficient, and all-around worse. It replaced the horse and buggy because it was better in (almost) every meaningful way. Machines might one day replace humans as loved ones and lovers, but they will do so because they offer a significant advantage, not because we claim they love us and they protest about it a lot. Overall, David behaves far worse and far more bizarrely than a child his age, offering no advantage or any real reason why parents would truly love him. It would have been far more interesting – and I suspect Kubrick would have gone in this direction – if David were superior to Martin and Henry and Monica could not help loving him more for it.
In addition, the decision to make David a poorly mannered robot and a danger to their child gives Henry and Monica no choice except to return him to Cybertronics, reducing the moral complexity of the decision and of the film as a whole, and then serves as a plot device to set David off on his version of the hero's journey, where he will meet other robots for the first time. Unfortunately, here too, the movie cannot seem to grasp the implications of its underlying assumptions. Gigolo Joe is a pleasure robot who accompanies David through most of the second half. He is not supposed to be self-aware, but he certainly doesn't act it. He too is on the run, after being framed for a murder, and he exhibits the same desire to survive, the same drive to find a location where he is safe, and the same overall persistence of consciousness as David. Nor do any of the other mecha they encounter appear to be any different. They might be more blasé about their fate after they are rounded up for destruction in a "Flesh Fair," where humans who hate robots vent their frustrations in something of a violent circus, but they all seem to carry their memories with them the way David or an ordinary person would, and none of them seem capable of simply shutting themselves off, as one would expect an ordinary computer to do. Instead, they march to their own doom, unhappy about it but resigned to it. Gigolo Joe also seems capable of far more than mere pleasure work. He is a creative problem solver who manages to hitch them a ride to their next destination, and he appears to know and understand far more about the world than one would expect of a robot programmed for a singular function. Why he would be turned on at all when not serving a customer is odd considering the setup, but why he would be able to observe, in philosophical terms, that humans are always looking for their creator remains completely inexplicable.
The only difference between Gigolo Joe and David appears to be that David has an inspiration.
After being read Pinocchio, the classic story of a wooden boy who becomes real, he convinces himself that he can find the Blue Fairy, be transformed into a real boy, and be reunited with his mother. Gigolo Joe is just along for the ride, and sees no greater meaning in existence than surviving from day to day. David's journey will ultimately lead him back to Professor Hobby, this time in a flooded Manhattan. Professor Hobby informs David that he is special as the first of his kind, but not unique, and that he can never be made real. Here, the Professor explains that David, unlike any machine before him, is capable of self-directed action and of believing the impossible. He should not be ashamed that he cannot be made real. It is human to seek to achieve things and believe in things that are not real, which the film presents as the hallmark of our intelligence; this remains its most subtle, salient point. Inexplicably, Professor Hobby leaves David to wander alone, and David encounters himself as a product: dozens of Davids, and Darlenes too, ready for shipment. David is distraught and attempts suicide by throwing himself into the flooded city. He is ultimately saved by Gigolo Joe, but not before he discovers an underwater amusement park and what he believes to be the Blue Fairy. After Gigolo Joe is captured, David takes a ship underwater and seeks her out, discovering only an old statue, but he believes it is real. He waits there for 2,000 years before being discovered by advanced computers, who inform him that he is now special as the only being on Earth that knew humans directly, in what amounts to yet another plot device. Machine memory is permanent. If humans died out and computers survived, those computers would carry with them every piece of data humans ever recorded. David would be no more special in either era, except as an antique in the future. Regardless, they inform David that they can reunite him with his mother for a single day, but no more.
He makes that choice, and they fall asleep together, apparently never to awake again.
The audience is left uplifted and yet vaguely unsettled. Even aside from an over-reliance on the Pinocchio trope in the second half of the film, one which is said to originate with Kubrick himself, the entire movie seems to hinge on a sleight of hand, a trick. David is not real at the end. He is told he can never be real, but somehow a single day with his mother treating him as if he were real makes it so, and he can do what he has never done in the past. The idea that humans, and presumably Artificial Intelligences, would have their view of the world shaped by how they are treated might be interesting in another context, but here, as in so much of the film, it seems to be nothing more than another plot device in a film littered with them. Mr. Spielberg wanted a happy ending where there was none to be found, and so, as the creator of the film, he simply waved a magic wand and made it so. This, of course, essentially undercuts the entire premise. David does not behave strictly the way his creators intend. He has agency and desires, even when they are misplaced. The movie, of course, is whatever Mr. Spielberg wants it to be, a different kind of creation entirely. Confusing these two distinct types of creations, one that has free will and the other that is confined entirely by another's will, is a fatal flaw, at least in my opinion, and one Kubrick never would have made. If David were truly like humans in our ability to love and desire that which we cannot attain, he could never be truly fulfilled, no matter what may occur or how long he might persist, which is the real heart of the human condition and the foundation for much of our striving and angst. Perhaps Bruce Springsteen said it best: "Poor man wanna be rich, rich man wanna be king, and a king ain't satisfied till he rules everything." This, of course, is the second part of our fear of Artificial Intelligence in general.
If machines are better than us at everything and decide we are no longer needed, what chance do we have? Mr. Spielberg refuses to answer that question, and instead ends up saying nothing at all, offering questions, some of which have turned out to be prophetic, but none of which he seems to have an answer for. We will never know for sure, but I find it impossible to believe that Kubrick would have been so noncommittal about the whole thing.