Nobody doubts the power of artificial intelligence to make our world a better, safer place. A recent study estimated that autonomous vehicles will prevent up to 350,000 accident-related injuries by the year 2025. But how many jobs will be lost by then, and can we even begin to predict the impact such change will have on our culture? In a CES 2017 SuperSession, a panel of experts discusses the future of humanity in a world increasingly shaped by AI.
Vivienne Ming, co-founder of Socos, admits to "drinking the Kool-Aid" on AI years ago; some of her recent work in data intelligence includes anticipating manic episodes in bipolar patients, in real time, with the help of wearables. "The world doesn't get better just because," she says. "AI might be a powerful technology, but things won't get better simply by adding AI."
"You can't just collect data, you have to build intelligence on it," says Chris O'Connor, GM of IBM Watson IoT. He believes that the terms 'AI' and 'IoT' will eventually become synonymous. "There are maybe only 10 players globally who are currently able to aggregate this information and build value."
Jeroen Tas, CEO of Connected Health at Philips, says that AI was an inevitability in the field of healthcare: it is the only reliable means of studying the "diversity and complexity" of medical data, of recognising patterns too subtle for human detection, and of tracking digital pathology. As Ming puts it: "Who wants to get a worse diagnosis of their cancer, just to keep a human doctor in the job?"
Are the robots here to take your job?
"There is no element of the work hierarchy where you shouldn't expect to see displacement," says Ming, citing recent McKinsey research which suggests that as much as 40 per cent of a CEO's job can be automated. "I don't think anyone currently owns that problem," she adds.
This is happening right now; earlier this month, a Japanese insurance firm replaced 34 of its employees with IBM Watson. So how do we avoid our own obsolescence? By making AI a complementary technology, Ming suggests, rather than a displacing one: "How do we augment people? How do we create technology that's complementary to what we do?"
Ming also points out the need to acknowledge that while, in many sectors, automation is going to free people up to do different and better kinds of work, "that simply isn't true for the global population... there are people doing these jobs who won't be able to move up the training scale."
"We need mechanisms to deal with the short-term displacement that is going to be a fact," says Accenture CTO Paul Daugherty, pointing to tools such as VR which can be used to augment human capability. "Technology can help solve this problem," he says, adding: "Public-private sector collaboration is going to be crucial to this."
Daugherty is the author of a recent Techonomy article outlining four key challenges that humanity needs to work on to coexist productively with AI. They are:
1. Prepare the next generation: Re-evaluate the knowledge, skills, education and training that will be needed in the future.
2. Advocate for and develop a code of ethics for AI: Daugherty believes "tangible standards and best practices" will be integral to the development and use of intelligent machines.
3. Encourage AI-powered regulation: Using AI itself to update old laws and create new, self-improving regulations will help close the gap between the respective paces of technological change and regulatory response.
4. Work to integrate human intelligence with machine intelligence: Human beings and machines offer very different but equally valid strengths to business processes; while AIs are capable of analysing vast volumes of data, it will be up to humans to be the adaptive, creative problem-solvers.
"Ethical systems need to be built in from the operating system up," says O'Connor. "But do we even understand what human values are, at a macro level?" Tas agrees that we must be "explicit" in laying down an ethical framework for AI. "There are a lot of ethical decisions to be made in healthcare, especially at end of life," he says, although he maintains that human beings will remain at the centre of this, and "the idea that the machines are going to take over and make these decisions is just wrong."
Society needs to get moving
There are huge, critical gaps in our knowledge which need to be filled at a societal level, especially around notions of data rights and ownership, which remain highly nebulous. Ming recalls hacking her son's Fitbit and insulin monitor, only to be informed that she had violated several federal laws in doing so. To whom does that data belong, if not to the user? The device manufacturer? The FBI?
The way Ming sees it, public officials are way behind when it comes to their own AI education, and this has led to a lag in policy and planning. "There are neural networks that can build whole apps from scratch, so why are we teaching high school kids to code?" she asks. "Where are we going to be ten years from now, when these technologies are deeply integrated?"
It is true that some human job descriptions will inevitably become defunct, but Ming sees potential for the creation of new ones, such as the "adaptable creative problem solver": somebody who uses the sheer automated scale of AI to realise their vision, whether in the arts or in medicine. "We're all going to be doing that in our own realms," she says.
And while it's a stretch to say that every major company will have a Chief AI Officer in five years' time, it isn't inconceivable to see a real need for an AI Relations Officer: somebody who helps monitor equality and efficacy.
"We want systems that are accountable, we want transparency, and we want fairness and non-discriminatory data," says Daugherty. "Sometimes we think this is all going to happen accidentally -- but the ability to design and deploy this lies in our hands."
This article originally appeared at Ogilvydo.