Decisions that affect us are increasingly being made by artificial intelligence (AI). From automated trading algorithms to precision manufacturing, AI is responsible for more and more of the things we thought only humans could do – and this is having a profound effect on our world.
Our awareness of these effects tends to focus on early adopters in developed countries. However, most people live a long way from such technological utopias. For instance, in 84 countries around the world, including China and India, less than 50% of people have access to the internet, while in 36 countries less than 50% have access to electricity.
Yet AI has the potential to bring significant benefits even to the world’s poorest. It could help overcome implicit, institutional and persistent biases working against them, and remove the educational and informational barriers preventing them from accessing economic, social, legal and cultural institutions. AI could reduce infrastructural hurdles to development, because an AI system hosted in a technologically developed area can easily be accessed, via mobile technology, around the world. Finally, AI could even help solve pernicious global challenges, such as how to allocate and manage resources, prevent conflict and respond to human rights violations.
On the other hand, AI also has the potential to work against the interests of the worst off: by deskilling and automating industrial and service jobs, it can remove opportunities associated with human and economic development, and it can assist in their surveillance and exploitation. Indeed, simply by increasing the financial returns to capital relative to labour, AI is likely to increase economic and social inequality.
There is, therefore, nothing inevitable about AI’s effects on inequality, and this should hardly surprise us. Previous industrial revolutions were equally responsible for creating highly unequal societies, like China and the USA, and more egalitarian ones, like Sweden and Japan. So why should anything be different about this ‘fourth industrial revolution’ being driven by the development of AI?
For some, this is merely an extension of centuries-old debates about the distribution of wealth. AI, they argue, will inevitably serve the interests of its designers and owners. The only way to produce a fair outcome will be collective ownership of technology, or at least levels of taxation sufficient to ensure a universal basic income for all. While clearly a logical solution to the problem, history suggests this brings problems of its own. Is there really no better way to build a fairer world with AI?
Perhaps in the far future, when AI has reached its full potential and we are living alongside superintelligent entities, there won’t be. However, that future is still a long way off, and how our economy develops in the meantime is no less important. What, then, can we do right now to build a fairer world with AI?
Firstly, we can learn lessons from other technologies, such as the pharmaceutical industry, where research and development may be contributing to global inequalities. It is increasingly apparent that, whatever the intentions of doctors and scientists, pharmaceutical companies respond to economic incentives that produce unfair outcomes. Put simply, the drugs that make the most money are seldom those that do the most good. The same can increasingly be said of other technology companies, with investors pouring money only into whichever start-ups show the greatest potential for short-term financial return. Yet trying to solve this problem by tax and regulation alone could easily backfire by hampering important innovation. What is needed instead is support for those whose work is likely to be most beneficial, whether through direct investment from people looking to do the most good or through industry or multi-stakeholder agreements to work together in the common interest (https://www.partnershiponai.org).
Secondly, we can consider what lessons AI itself is learning about fairness and inequality. Until recently, the only way to develop problem-solving AI was to train it on historical data, which often reflects, or even amplifies, past wrongs. For AI to produce more equitable outcomes, developers need to find better alternatives, either by involving a much broader range of perspectives in training AIs or, as Google DeepMind have recently started doing, by finding ways to avoid the use of historical data for training them altogether.
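To see how easily this happens, consider a minimal sketch (in Python, using scikit-learn and entirely synthetic data invented for illustration): when past decisions were biased against a group, a model trained on those records learns to reproduce the same disparity, even though the underlying ability is identical across groups.

```python
# Illustrative only: synthetic "historical" hiring records in which one group
# was systematically under-selected despite identical skill distributions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)       # 0 = majority, 1 = marginalised group
skill = rng.normal(0.0, 1.0, n)     # skill is identically distributed in both

# Historical decisions carried a built-in penalty for group 1.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted hire rate = {pred[group == g].mean():.2f}")
# The gap between the two rates (a 'demographic parity difference') shows the
# model faithfully reproducing the historical bias rather than correcting it.
```

In this toy example, simply dropping the group column would help, but in real data other features typically act as proxies for group membership; that is why the alternatives above, involving broader perspectives or avoiding historical data entirely, matter.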
Thirdly, we need to tackle the lack of diversity amongst those developing AI in the first place. Finding solutions to the problems of the worst off is very likely to depend on encouraging more home-grown technological development in marginalised communities and developing countries. Developers must also incorporate insights from all those who will be affected by these technologies, and cannot make the mistake, common in previous technological developments, of assuming that the designer always knows best.
Finally, even if we can develop beneficial, unbiased and inclusive AIs, they will only benefit everyone if people are willing to engage with them, and that requires trust. How can we trust AI systems when even those who develop them often struggle to understand how they work? The trustworthiness of machines is commonly discussed in terms of their ‘transparency’ or ‘explainability’, but given the complexity of the underlying technologies these simple words can mean many different things. Furthermore, for trustworthiness in any meaningful sense, it is not enough for a system simply to explain how decisions are made (offering evidence-based justification and natural-language explanations). Developers must also ensure that people can actually find out what they want to know about these systems and how they operate, and can challenge an AI system’s decisions when they feel it has got things wrong.
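What might such an explanation look like in practice? As a purely illustrative sketch (in Python, with invented feature names and synthetic data), a developer might translate a model’s per-feature contributions to a single decision into plain language that a person can inspect and contest:

```python
# Illustrative only: a minimal plain-language 'explanation' of one decision
# from a linear model. Feature names and data are invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X @ np.array([2.0, -1.0, 0.2]) + rng.normal(0.0, 0.5, 500)) > 0
feature_names = ["income", "debt", "account_age"]   # invented labels

model = LogisticRegression().fit(X, y)

def explain(x):
    # Per-feature contribution to the decision score: coefficient * value.
    contributions = model.coef_[0] * x
    verdict = "approved" if model.predict([x])[0] else "declined"
    ranked = sorted(zip(feature_names, contributions),
                    key=lambda item: -abs(item[1]))
    reasons = ", ".join(f"{name} ({c:+.2f})" for name, c in ranked)
    return f"Decision: {verdict}. Main factors: {reasons}."

print(explain(X[0]))
```

Even a sketch this simple shows why ‘transparency’ is not one thing: the same system must offer a statistical summary for an auditor, a reason for the individual affected, and a route to appeal when the reasons are wrong.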
The potential to develop artificial intelligence that can help to build a fairer world is within our grasp. However, it will require solving problems that are both complicated and complex. More importantly, it requires doing so whether or not solving these problems is in the interest of those who are most willing and able to invest in the future of AI. This may turn out to be the single hardest problem for the development of beneficial artificial intelligence.