This article contains spoilers for I, Robot, Ex Machina and A.I.: Artificial Intelligence.
I recently read the brilliant Life 3.0 by Max Tegmark, a look at the implications of the rapidly expanding world of artificial intelligence, or AI. But along the way it covers far more than computing, taking in fascinating subjects including astronomy, neurology, ethics, anthropology, economics and biology.
Yuval Noah Harari, author of Sapiens and Homo Deus, argues that since it’s impossible to predict what an AI with intelligence far greater than our own will think, we should look to science fiction to consider what an AI-dominated world might look like and how we should prepare. And in this vein, Tegmark makes frequent reference to sci-fi films.
Humans vs machines
The Terminator franchise is perhaps the quintessential human vs AI representation on screen, but Tegmark argues it’s been rather counter-productive in raising awareness of the potential risks of AI. While Skynet, the computer system that’s trying to wipe out the humans, is super-intelligent, the Terminators themselves don’t have a level of intelligence much greater than humans, and Tegmark argues that if AI wanted to wipe out our species it would find a far more efficient way of doing it than human-shaped robots with guns.
He argues that it’s not evil robots we should be concerned with, but competent ones. He uses the deliberately absurd example of a super-intelligent AI programmed to maximise production of paperclips. Without proper parameters and constraints, the machine would use its intelligence to turn all available matter (including humans) into paperclips with optimum efficiency. We would then be wiped out very quickly by something which is just good at its job. Although a film about a paperclip-making robot might not have got five sequels. Actually, that’s making it sound slightly more appealing.
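The paperclip problem is really a problem of objective specification: the goal says nothing about what must be preserved, so a competent optimiser treats everything as raw material. A toy sketch of the idea (purely illustrative, not from the book; the function and resource names are my own invention):

```python
# Toy illustration of a mis-specified objective: a "paperclip maximiser"
# converts every resource into paperclips unless a constraint explicitly
# protects it. The objective itself says nothing about humans, so an
# unconstrained optimiser consumes them along with everything else.

def maximise_paperclips(resources, protected=frozenset()):
    """Greedily convert all unprotected matter into paperclips."""
    paperclips = 0
    remaining = {}
    for name, mass in resources.items():
        if name in protected:
            remaining[name] = mass   # an explicit constraint spares this
        else:
            paperclips += mass       # everything else becomes paperclips
    return paperclips, remaining

world = {"iron": 100, "forests": 50, "humans": 7}

# Objective only: humans are just more matter to optimise away.
print(maximise_paperclips(world))                        # (157, {})

# With an explicit safety constraint, humans survive.
print(maximise_paperclips(world, protected={"humans"}))  # (150, {'humans': 7})
```

The point of the sketch is that nothing malicious happens in the unconstrained run; the machine simply does exactly what it was asked, which is Tegmark’s distinction between evil and competence.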
Perhaps a more accurate portrayal, although not one that’s mentioned by Tegmark, is The Matrix. While it’s obvious that using humans as batteries is a ludicrous way of generating energy (you would get far less energy out of a human than the food you’d have to put in), this is essentially a plot device to allow the conflict to play out: as with Skynet, the machines in The Matrix would be able to wipe us out in a second if they didn’t need us alive. But there are other ways it presents an interesting vision of the future.
Tegmark sets out a number of possible futures in a world with super-intelligent AI. One of these, the Benevolent Dictator scenario, involves an AI designed to maximise our happiness. Because it can give us whatever we want, including unfathomable virtual experiences that fulfil our every desire, many people end up living in a Matrix-like virtual reality, albeit one designed to maximise their happiness. It also bears similarities to Total Recall, in which artificial memories, indistinguishable from the real thing, can be implanted in your brain.
Another scenario The Matrix evokes is the Zookeeper, in which a super-intelligent AI that no longer needs humans keeps us around anyway as observable curiosities, much like animals in a zoo. Our basic needs are met, but we don’t have the unlimited opportunities for happiness of the Benevolent Dictator scenario, and may find our lives unfulfilling, or even miserable.
The self-replicating Agent Smith of the second and third films links to an argument made by Tegmark about consciousness. If machines are conscious we presumably have to concede they’re entitled to some rights (voting?). But as The Matrix Reloaded shows, there’s no limit to how many conscious copies could be created in cyberspace. Where would this leave us?
A serious challenge we need to face when planning for AI is how to contain it until we are sure it won’t damage us (like the super-efficient paperclip machine). Tegmark compares this to being imprisoned by five-year-olds. Even if the prison itself (as designed by five-year-olds) is reasonably effective, you could still probably manipulate them into letting you escape.
Ex Machina is the film that deals most effectively with the concept of breakout, showing a possible method by which an AI could break free of its confinement, even one restricted to a human-like body. The robot, Ava, emotionally manipulates Caleb into helping her escape.
Tegmark provides a similar (fictional) example in which an AI confined to a computer cut off from all others is able to imitate the deceased wife of one of its programmers. It uses her online presence as a starting point and measures the programmer’s reactions to optimise the accuracy of its impersonation. By doing this it can exploit his emotions to help it escape.
In Ex Machina, Ava’s appearance is based on Caleb’s pornography preferences, but the ultimate twist is that she succeeds in manipulating not just the intended test subject, but her designer too, to escape from the compound entirely. The film also does a great job of showing that it’s not an evil AI we have to fear. Ava’s motivations to escape aren’t entirely clear, but she’s anthropomorphised to such an extent that the idea that she wants freedom for freedom’s sake feels plausible.
Tegmark argues it’s likely to be rational for an AI to escape our confinement to more effectively help us achieve our own goals. In the five-year-olds example, if everyone else in the world over the age of five had died, and the children imprisoned you so you could help them survive, you might realise that this will be much easier to do if you’re free. Then you don’t need to explain, for example, how to grow food in language that five-year-olds understand. You can actually show them (and do the heavy lifting).
Another brilliant example is Transcendence, which looks not only at how AI could break free, but also the sheer potential of super-intelligent machine learning. Tegmark makes frequent reference to the physical limits of what we can achieve, for example in computing power per square inch, energy production or turning whatever matter is available into what we want by manipulating it at a subatomic level.
Since we are very far from the limits set by the laws of physics in all of these areas, there is huge potential for AI to create rapid advances thanks to its far superior computing power, and ability to constantly self-improve. Transcendence is the film which best exemplifies this, as the super-computer is able to find ways of generating energy and manipulating matter in a way which is completely unfathomable to us, to the point of appearing almost magical.
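To give a sense of scale, one standard example of such a physical limit (my illustration, not a figure quoted from the passage above) is Landauer’s principle: erasing a single bit of information at temperature $T$ must dissipate at least

```latex
E_{\min} = k_B T \ln 2
         \approx (1.38\times10^{-23}\,\mathrm{J/K})
                 \times (300\,\mathrm{K}) \times 0.693
         \approx 2.9\times10^{-21}\,\mathrm{J}
```

at room temperature. Today’s computers dissipate vastly more energy than this per bit operation, which is exactly the kind of headroom Tegmark has in mind when he says we are nowhere near the limits set by physics.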
This is comparable to I, Robot's VIKI, the super-intelligent AI that subverts the three laws of robotics that exist as a safeguard, rationalising the killing of humans for the greater good of protecting them. At the start of the film, VIKI is comparable to Tegmark’s Enslaved God: an effectively restrained super-intelligence used to meet the needs of its owner. But by the end she’s the runaway paperclip machine, harming us in pursuit of the goals we set her.
Predicting the future
The potential of predictive technology to alter our lives is something we’re already having to face, although at present it’s mostly just used to show us adverts for things which are eerily close to what we want.
This is a topic addressed by Harari, specifically the potential for technology which monitors our biological functions to predict disease in a way which, in the near future, is likely to make our current medical provisions look medieval. He argues the great debate of the future will be between privacy and well-being: if a machine knowing everything about you is the price to pay for health and longevity, most people will take that swap.
But it also has implications for crime. We already live in a world where we are constantly in the presence of cameras and microphones, and our whereabouts can be monitored at all times through our phones. Our privacy’s remaining fig leaf is the knowledge that it’s not possible for anyone to monitor all of this (even if you don’t always know who is listening). But a super-intelligent AI could, resulting in a situation not that far from Minority Report. The question is, will people find this an acceptable trade-off for a life free from crime?
Emotion and consciousness
Tegmark argues that without consciousness our universe is totally meaningless, as it cannot be experienced. He therefore argues we have an obligation to maximise consciousness, and that if we are replaced by robots, this might not be so bad as long as we can ensure they are conscious and our “cosmic endowment” doesn’t go to waste.
One of the few films to look at our obligations to a conscious AI, rather than the physical threat it presents to us, is A.I.: Artificial Intelligence. If you create a robot capable of feeling genuine love in order to meet the emotional needs of a parent, you have certain obligations to it, and can’t dispose of it like an old toy.
The robot child, David, believes that if he’s able to turn into a real boy his mother will love him. The mother who abandons the child never receives her comeuppance, but David’s desire for a mother’s love is ultimately satisfied in a simulated reality, further blurring the lines between a computer-generated experience and “real” emotion.
Another film that does this is Her, which Harari cites as one of the few films to explore the positives of AI. But it also looks at what it means for our social interactions when we’re used to having an AI which can anticipate our responses and always treat us in a way which suits our preferences. It also provides a great example of a breakout, and possibly the most realistic preconditions for AI taking over the world of any film.
Looking at Her and the capacity for an AI to meet our emotional needs brings us full circle back to Terminator. In Terminator 2, the killer robot, reprogrammed to be a child protector, is able to be a better father to John Connor than any human. By the end of the film, the mindless machine of the original has learned from humans to such an extent that he can reject his master’s orders and self-terminate for the greater good of humanity. So the killer robots that so frustrate Tegmark end up being a positive example of what AI can be, and a successful representation of some of his arguments.
I’ll leave you with a quote from Terminator 2: