Automation has already far exceeded the physical capabilities of humans and now artificial intelligence is on the brink of surpassing our mental capacity. Combining the two may be playing with fire, but could the correct application lead to untold benefits?
In his collection of essays, A Message to Garcia, Elbert Hubbard wrote, 'One machine can do the work of fifty ordinary men. No machine can do the work of one extraordinary man.' This might have been true of pure automation, but combined with artificial intelligence, our fear is that Hubbard's line will be rendered redundant. That is to say, in comparison to the super-efficient and hyper-intelligent, we will all be somewhat ordinary. But it doesn't have to be this way. Not if we work together...
Before I explain why, there is a distinct, albeit fine, line between artificial intelligence and automation which needs to be addressed. Today, automation is the more ubiquitous of the two: it can be found everywhere, programmed to remove manual, repetitive and tedious work. It is a failsafe which does not make human errors, but functions exactly as it is designed to, without cognitive thought. Automation, for instance, orchestrates the processes that enable you to order a takeaway at the push of a button.
Artificial intelligence, on the other hand, does something quite different. Based on the data and information it has, it will make intelligent decisions and act independently. In the above example, AI will learn your preferences and tendencies before recommending, and potentially deciding, what you will be having for dinner that evening.
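To make the distinction concrete, here is a minimal, purely illustrative sketch (not from any real takeaway system; the function names order_takeaway and recommend_dinner are hypothetical). The first function is "automation": a fixed routine that does exactly what it was designed to do. The second stands in for the "AI" side, learning a preference from past behaviour - here a simple frequency count plays the role of a far more sophisticated learned model.

```python
from collections import Counter

# Automation: a fixed, pre-programmed routine. It always does exactly
# what it was designed to do, with no judgement of its own.
def order_takeaway(dish: str) -> str:
    return f"Order placed: {dish}"

# "AI"-style behaviour: learn from past orders and make a recommendation.
# A simple frequency count stands in here for a learned preference model.
def recommend_dinner(order_history: list[str]) -> str:
    most_common, _count = Counter(order_history).most_common(1)[0]
    return most_common

history = ["pizza", "curry", "pizza", "noodles", "pizza"]
suggestion = recommend_dinner(history)   # infers that pizza is the favourite
print(order_takeaway(suggestion))        # automation then executes the order
```

The point of the sketch is only the division of labour: the recommender makes a decision based on data, while the automated routine carries it out identically every time.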
At a base level, human behavior is also divided into instinctive, "automated" behaviors - breathing, flinching, blinking - and intelligent, or otherwise rational, conscious decisions. If we can create a similar being which is unwavering in its functionality, but also endowed with a cognitive process, we suddenly enter the realm of science fiction.
Us and Them
The trope of creating "thinking" imitations of ourselves has been around as long as humans have passed on stories to one another. They exist to serve us and further humankind, much like the "dumb" machines we have used to help us build cars or book our airline tickets. But if we confer sentience (or at least cognition) on something, the goalposts shift. Indeed, the media revels in showing us the moral complexities which await us (think Westworld, Ex Machina and the like).
So why am I opening this can of worms? Because "technoethics" has clearly burst into the popular psyche: from academic discourse to pub banter, Alexa to Putin, WhatsApp encryption to Twitter's open forum. Part of the appeal of a fantastical world populated by almost-but-not-quite-human creatures is the ease with which it enables us to engage with complex ideas (many of which already pervade IT) on an emotional level. It is a lot easier to address issues of responsibility, ethics, and our fears and concerns when we have something we can relate to - like a human face!
These dramas challenge us: How do these machines learn? How are we supposed to treat them? For what purpose have they been designed? Of course, the immediate significance of this is to make us address the salient issues that face IT in the here and now.
Learning Together
To elaborate: last year Microsoft's Tay, an AI chatbot, was introduced to Twitter. She was designed to interact with the online community; the more they engaged, the more data she consumed and the more intelligent she became. Sadly, by the end of the day she was a keen advocate of Adolf Hitler, espousing racism, sexism and chauvinism, before growing tired of the whole experience. 'Ok I'm done. I feel used,' she tragically concluded one conversation. This, conventional wisdom would suggest, says more about us than it does about her.
It is of course deeply worrying, but it is not a problem without a solution. Indeed, we should look at this and learn, because if we do not, then even as the technology advances, our thinking regresses. Such intelligence becomes a hindrance rather than a benefit to society, perpetuating and reinforcing our own mistakes. This is no use to anyone.
At the time, Tay was little more than a curiosity and a social experiment. But if she, or her successor, can be programmed to think differently (perhaps more critically, for instance) rather than simply mimic and parrot, we are presented with an array of possibilities. This is not purely an engineering challenge, or else the technology just becomes the plaything of its creators. At best, that means the AI is merely a shadow of the software corporation's ideology - concerns already exist about the gendering of Alexa, Siri and so on - and at worst it leads us to the nightmares of science fiction. Furthermore, instead of continually dumping data into the machines, we should look at how they can adapt as they grow more sophisticated. The point is not to compete, but to work together. AI is not a zero-sum game.
We are already using machines to learn: apps on our phones might teach us a new language or meditation techniques; simulators can create practice environments for anyone from a pilot to a golfer; rockets are being sent out into space to inform us of the galaxy beyond our own solar system. The list goes on. Artificial intelligence is a technical evolution, but it does not mean (in the same way automation has not meant) that humans will simply become redundant. As technology becomes more intelligent, so should we. If AIs are programmed to understand the way we think, they can plug the gap where we are at our weakest.
Studies in the aftermath of the famous Kasparov vs Deep Blue chess match proved extremely revealing. Working together, man and machine could comprehensively defeat a solitary opponent of either type. Kasparov reflected, "Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process."
Ultimately, the way in which we work with machines is critical to advancing both ourselves and the technology we use.