The Institute for Public Policy Research’s new report suggests that around 10% of the NHS’s annual operational expenditure (approximately £12.5bn) could be saved through artificial intelligence (AI) and automation technologies. The report also claims that automating the repetitive decision making and admin tasks that make up around 30% of social care activity could save a further £6bn.
There is now a growing number of examples of AI and automated decision making being used clinically (e.g. to identify patients at risk of advanced kidney disease), administratively (e.g. to digitally verify patients’ insurance information) and in aftercare and rehabilitation (e.g. to support tailored physiotherapy programmes). So much so that Accenture’s Digital Health Technology Vision 2018 report found that 85% of health executives believe every human will be directly impacted, on a daily basis, by an AI-based decision within the next three years.
Despite the exuberance for AI in the healthcare sector, it was the Royal Society of Arts (RSA) and YouGov’s research that really caught my eye, as it paints a very different picture of the British public’s general lack of trust in automated decision making. Only 32% of people surveyed were aware of automated decision-making systems in general, 74% were unfamiliar with the use of these systems to make decisions in healthcare specifically, and 48% said they were actively opposed to the use of the technology in healthcare.
My interpretation is that much of this opposition stems from the perception of automated decision platforms as black-box systems that make it difficult, or impossible, to understand exactly how a decision was reached. In fact, the House of Lords Select Committee on AI has already expressed its view that “…it is unacceptable to deploy any AI system that could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take.”
In healthcare, AI technology’s ability to explain the process used to arrive at decisions will be critical to trust, safety and compliance. This is corroborated by the RSA and YouGov research, which noted a 36% increase in support for these systems if users were granted the right to request an explanation of the organisational steps or processes undertaken to reach a decision with an AI system.
In addition to earning the trust of the public (patients), I think it’s just as important that doctors, nurses and NHS leadership trust automated decision making if the technology is to be fully adopted. When clinicians use AI to make decisions, they need to believe the technology is trustworthy and dependable. They need an audit trail of how each decision was reached and the level of certainty underpinning it.
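To make that concrete, here is a minimal sketch (in Python) of the kind of decision record an audit trail might be built from. The field names, model identifier and values are all invented for illustration; a real deployment would be shaped by the NHS’s own data standards and governance requirements.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    """One auditable entry: what was decided, by which model, and how certain it was."""
    patient_ref: str    # pseudonymised patient identifier (hypothetical)
    model_version: str  # the exact model version that produced the decision
    inputs: dict        # the features the model actually saw
    decision: str       # the recommendation shown to the clinician
    confidence: float   # the model's stated certainty, 0.0 to 1.0
    explanation: str    # human-readable reasoning behind the decision
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example record a clinician (or auditor) could later inspect.
record = DecisionRecord(
    patient_ref="P-0042",
    model_version="kidney-risk-model-1.3.0",
    inputs={"creatinine_umol_l": 182, "egfr": 41, "age": 67},
    decision="flag: elevated risk of advanced kidney disease",
    confidence=0.87,
    explanation="creatinine and eGFR both outside reference range for age",
)
print(record)
```

The point is simply that the decision, the model version, the inputs and the stated confidence are captured together, so a clinician or auditor can later reconstruct why the system said what it said.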
I believe that what we need to see is more technologies clearly modelling automated decision making on human expertise rather than on black-box data. AI is designed for collaboration with people, so building human expertise into the development of these platforms, and enabling AI systems to provide clear explanations for their decisions in a format those same human experts can understand and confirm, will be critical to the future of AI in healthcare and to trust in automated decision-making platforms.
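As a simple illustration of what a clear explanation in a clinician-readable format could look like, here is a sketch built on a transparent linear model, where each feature’s contribution to the risk score can be read off directly. The coefficients, feature names and values are hypothetical, and this shows one common approach rather than any specific product’s method.

```python
import math

# Hypothetical fitted logistic-regression coefficients for a kidney-risk model.
# Linear models are one route to "glass box" decisions: every feature's
# contribution to the score is directly inspectable.
COEFFICIENTS = {"creatinine_umol_l": 0.015, "egfr": -0.04, "age": 0.02}
INTERCEPT = -1.2

def explain_decision(features: dict) -> None:
    """Print the predicted risk plus a ranked, human-readable contribution list."""
    contributions = {
        name: COEFFICIENTS[name] * value for name, value in features.items()
    }
    score = INTERCEPT + sum(contributions.values())
    probability = 1 / (1 + math.exp(-score))
    print(f"Predicted risk: {probability:.0%}")
    # Rank features by how strongly they push the score up or down, so a
    # clinician can check the reasoning against their own judgement.
    for name, contrib in sorted(
        contributions.items(), key=lambda kv: abs(kv[1]), reverse=True
    ):
        direction = "raises" if contrib > 0 else "lowers"
        print(f"  {name} = {features[name]} {direction} risk ({contrib:+.2f})")

explain_decision({"creatinine_umol_l": 182, "egfr": 41, "age": 67})
```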
Be warned: if society’s lack of trust in healthcare AI isn’t dealt with properly, any technological progress and enhancements in patient care risk being rendered null and void.