AI and Global Health

What can be done to harness the increasing interest in the application of AI and machine learning in healthcare? 

That is the question that Divya Srivastava seeks to answer in a recent post for LSE. 

Srivastava argues that digital health technologies can serve patients, healthcare professionals, health system managers and data services through tools such as apps, programmes and software, whether deployed in public health interventions or for specific procedures and therapeutic purposes.

Whilst acknowledging that, as recently as a few years ago, many countries had no strategy for digital health and no explicit regulation around market access, safety and quality, Srivastava points out that growing interest in the intersection between AI and healthcare has shifted the decision-making landscape. Many countries are now taking not only a proactive approach to AI but also a government-wide approach that brings key institutions and stakeholders together, something highlighted by the growing desire to work out how to regulate AI solutions within healthcare.

Of course, more needs to be done, and Srivastava highlights several areas for discussion.

Firstly, Srivastava argues, policymakers should understand the risks and functionality of AI solutions: they need to know, for instance, whether a solution is low risk because it performs simple monitoring, or high risk because it supports diagnosis and clinical decision-making.

Secondly, they need evidence standards for the AI solutions they put in place. This can be achieved through health economic evaluation, which enables countries to assess the costs and benefits of medicines and medical technologies. One example is the National Institute for Health and Care Excellence in the UK, which updated its Evidence Standards Framework to include evidence requirements for AI solutions.

Thirdly, and perhaps most obviously, Srivastava calls for more robust studies of AI. Studies that establish standards for the economic evaluation of AI would be a step in the right direction, improving the calibre of, and benchmarks for, AI-related research outputs. Linked to this point, AI in healthcare is an active area of learning, so there need to be multiple ways to test solutions and their applications.

Such testing will require collaboration, which in and of itself could bring many benefits, including listening and engaging with the public about their concerns. This could then branch into public reporting and monitoring of AI performance, setting out rules about data control, incentivising and overseeing adherence to responsible AI principles, and monitoring solutions and applications once they are on the market.

Finally, to make sure that none of these efforts are in vain, international collaboration is a must. This could focus on operationalising policies and codes of conduct that remove unnecessary and unhelpful barriers to responsible AI, whilst ensuring that appropriate risk frameworks, mitigation measures and oversight are in place, something the World Health Organisation is already beginning to contemplate.
