Evidence-Based Medicine, AI and Equity
Artificial intelligence (AI) refers to the ability of computer systems to perform tasks that typically require human intelligence, such as problem-solving, learning and decision-making. A subset of AI – machine learning (ML) – focuses on developing algorithms that learn from data and apply this acquired knowledge to a specific problem or task.
In recent years, the use of these methods in healthcare has grown explosively. These technologies have the potential to revolutionise healthcare, but they also come with challenges that should be addressed before AI tools are widely deployed in clinical settings.
This blog looks at the potential benefits and risks of AI in healthcare, with a focus on its relation to equity. Additionally, it explores how to make AI models more understandable for both providers and healthcare consumers, to ensure that the decisions based on these models are acceptable, unbiased and trustworthy.
Benefits of AI in Healthcare
Healthcare is currently facing numerous challenges: the rising burden of disease; global population ageing, with its associated multimorbidity and disability; and increased demand for healthcare services, among others. All of these place greater demands on health systems; AI can alleviate this burden in various ways.
- 1. Speeding Up Diagnosis: AI-powered methods can efficiently analyse vast amounts of data, such as ECG signals or medical imaging scans, helping healthcare workers to make quicker and more accurate diagnoses.
- 2. Allocating Resources: AI-driven prediction of diseases can help identify populations at higher risk of specific diseases. This enables targeted prevention programs in high-risk areas or among vulnerable populations. Predicting the outcome of medical interventions can also help identify patients most likely to benefit from an intervention or flag patients at risk of adverse outcomes.
- 3. Creating Tailored Treatment Plans: By analysing data from individual patients, AI can help create personalised treatment plans for each patient or schedule follow-up appointments to monitor patient progress.
- 4. Addressing Geographic Inequity: In regions with limited access to specialized healthcare, AI can play a vital role in addressing geographic inequity. For example, AI-powered analysis can identify patients needing specialised care. Telemedicine platforms with AI-driven diagnostics can connect patients in remote areas with healthcare providers, enabling them to receive expert advice and care without the need for travel.
In summary, AI offers solutions by expediting diagnosis, optimising resource allocation, personalising treatment plans and addressing geographic inequity in healthcare access. These uses of AI have the potential to improve healthcare outcomes, enhance patient care and make healthcare more efficient and equitable.
Inbuilt Bias in AI and Explainability
Differences in healthcare outcomes across populations are well documented. These differences can also manifest in AI methods, creating what is often referred to as ‘algorithmic bias’. Machine learning models have been found not only to reflect societal inequities but also to exacerbate them. This bias can arise from factors such as socioeconomic status, ethnicity, race, gender, disability and more. For example, if a model is trained primarily on data from patients from one population, it may not perform as well for patients from other demographic groups. Disadvantaged populations are at greater risk of being negatively affected by this bias: AI methods may produce predictions that are less accurate, or that underestimate the need for medical intervention, in these populations.
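As a rough illustration of how such bias can be surfaced, a minimal audit sketch is shown below. It compares a model's sensitivity (the fraction of patients who truly need intervention that the model flags) across demographic groups. The data and group labels here are entirely hypothetical, invented for illustration only.

```python
from collections import defaultdict

def recall_by_group(y_true, y_pred, groups):
    """Per-group sensitivity: of the patients in each group who truly
    need intervention (label 1), what fraction does the model flag?"""
    tp = defaultdict(int)   # true positives per group
    pos = defaultdict(int)  # actual positives per group
    for yt, yp, g in zip(y_true, y_pred, groups):
        if yt == 1:
            pos[g] += 1
            if yp == 1:
                tp[g] += 1
    return {g: tp[g] / pos[g] for g in pos}

# Hypothetical audit data: every patient truly needs intervention,
# but the model misses most of them in group B.
y_true = [1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(recall_by_group(y_true, y_pred, groups))
# → {'A': 0.75, 'B': 0.25}
```

A large gap between groups, as in this toy example, would indicate that the model systematically under-detects need in one population and warrants investigation before clinical use.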
It is imperative to identify and address these biases during model development, and to keep the limitations of AI for disadvantaged populations in mind when using AI models in clinical settings. One critical step in reducing bias in ML models is to ensure that they are explainable, rather than providing only a prediction or a diagnosis as an outcome. Explainability means showing how a decision was reached, including which factors contributed most to it. This, in turn, helps biases to be identified and reduced.
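To make "which factors contributed most" concrete, the sketch below shows the simplest form of explanation: for a linear risk score, each feature's contribution is just its weight multiplied by its value, so contributions can be ranked directly. The model, weights and patient values are hypothetical; real clinical models are more complex and typically need dedicated explanation methods.

```python
def explain_linear(weights, feature_values, feature_names):
    """For a linear risk score, each feature contributes weight * value;
    ranking contributions by magnitude shows which factors drove the
    decision for this particular patient."""
    contributions = {
        name: w * x
        for name, w, x in zip(feature_names, weights, feature_values)
    }
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]),
                  reverse=True)

# Hypothetical risk model and patient
names = ["age", "blood_pressure", "smoker"]
weights = [0.03, 0.02, 0.8]
patient = [70, 140, 1]
for name, contribution in explain_linear(weights, patient, names):
    print(f"{name}: {contribution:+.2f}")
```

An explanation of this kind lets a clinician check whether the factors driving a prediction are clinically plausible, or whether the model is leaning on a variable that proxies for demographic group.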
Explainability is closely intertwined with understandability, ensuring that AI models are transparent and the outcomes they produce can be comprehended not only by healthcare providers but also by consumers. Adequate explanations of the background of AI models and their outcomes are essential to build trust in these technologies.
In summary, as in all areas of medical research, equity must be considered from the research design stage onwards. It is also important to maintain a patient-centred approach when creating AI-powered methods, incorporating outcomes that matter to healthcare consumers. Hence, involving healthcare consumers in research is essential.
Data Privacy and Data Sharing
Another significant concern in AI-driven healthcare is data sharing and privacy. Data sharing allows data to be collected from diverse populations, enabling more comprehensive research that can address the algorithmic bias arising from the under-representation of certain demographic groups. Yet because AI methods often rely on large datasets to produce precise results, data privacy comes into question. This raises concerns about data security, especially given the sensitive nature of healthcare data, and healthcare consumers may worry about their privacy when sharing such personal information.
Addressing this concern involves creating ethical standards for sharing data for the purpose of training ML models. It is also important to adhere to high standards of data privacy and security, and of informed consent, so that patients are adequately informed about the purposes for which their data will be used.
Conclusion
As AI becomes increasingly prevalent in the medical field, it is essential to address the challenges that arise before it is implemented in clinical practice. Despite the promise of AI models to improve individual patient care and transform the field of medicine, it is crucial to first tackle the issues that accompany these methods, ensuring that vulnerable populations are not harmed and global equity is enhanced.
Providing accessible explanations of AI methods for healthcare providers and consumers is pivotal to the future implementation of AI. This will ensure that these methods and their outcomes are comprehensible, trustworthy and equitable, while ethical data-sharing standards are maintained.
References
Panch T, Mattie H, Atun R. Artificial intelligence and algorithmic bias: implications for health systems. J Glob Health 2019;9(2):010318.
Mittermaier M, Raza MM, Kvedar JC. Bias in AI-based models for medical applications: challenges and mitigation strategies. NPJ Digit Med 2023;6:113.
Shad R, Cunningham JP, Ashley EA, Langlotz CP, Hiesinger W. Designing clinically translatable artificial intelligence systems for high-dimensional medical imaging. Nat Mach Intell 2021;3:929–35.
Siontis KC, Noseworthy PA, Attia ZI, Friedman PA. Artificial intelligence-enhanced electrocardiography in cardiovascular disease management. Nat Rev Cardiol 2021;18:465–78.
Tamajka M. Three benefits of explainable artificial intelligence [internet]. KInIT, 2022. Available from: https://kinit.sk/three-benefits-of-explainable-artificial-intelligence/.
Vokinger KN, Feuerriegel S, Kesselheim AS. Mitigating bias in machine learning for medicine. Commun Med 2021;1:25.
Disclaimer
The views expressed in this World EBHC Day Blog, as well as any errors or omissions, are the sole responsibility of the author and do not represent the views of the World EBHC Day Steering Committee, Official Partners or Sponsors; nor does it imply endorsement by the aforementioned parties.