This briefing note examines the current and potential applications of AI in healthcare, the limits of these technologies, and the ethical issues arising from their use.
AI is being used or trialled for a range of healthcare and research purposes, including detection of disease, management of chronic conditions, delivery of health services, and drug discovery.
AI has the potential to help address important health challenges, but its usefulness may be limited by the quality of available health data and by the inability of AI systems to display some human characteristics.
The use of AI raises ethical issues, including:

- the potential for AI to make erroneous decisions;
- the question of who is responsible when AI is used to support decision-making;
- difficulties in validating the outputs of AI systems;
- inherent biases in the data used to train AI systems;
- ensuring the protection of potentially sensitive data;
- securing public trust in the development and use of AI;
- effects on people’s sense of dignity and social isolation in care situations;
- effects on the roles and skill requirements of healthcare professionals; and
- the potential for AI to be used for malicious purposes.
A key challenge will be ensuring that AI is developed and used in a way that is transparent and compatible with the public interest, while still stimulating innovation in the sector.