“AI can help us to automate the simple tasks, allowing our clinicians and professionals to work at the top of their license. It will free up time by giving our doctors and nurses everything they need to know in one view,” said Rachel Dunscombe during the “Man vs. Machine” debate.
Healthcare is undergoing a technological revolution. The development of AI for medical purposes seemingly holds the key to unlocking massive untapped potential for increased cost-effectiveness, efficiency and efficacy. So much so that the AI health market is expected to reach a staggering $6.6 billion by 2021, up from just $600 million in 2014.
The rapid rise of AI technology in the healthcare sector – worryingly for some – makes it seem that the days of reliance on a human doctor’s advice are numbered. This year’s European Health Forum Gastein (EHFG) hosted an Oxford Union-style debate on AI – “Man vs. Machine”. Rachel Dunscombe and Brian O’Connor spoke in favour of the motion. How do they argue for the acceptance of AI?
Rachel Dunscombe, CIO, Salford Royal NHS Foundation Trust
AI is already being used in healthcare to manage some problems which are starting to emerge. We can’t recruit enough clinicians, we need to make the best use of our staff’s time and we need to use our resources well. We also need to speed up essential decision making, and we have a duty of care to use the data we generate to assist our citizens and patients with their health and wellness. The examples I give are digital pathology, where all the data could not be processed by a human, and disease risk scoring in emergency care, where clinicians could not rapidly calculate the score from many data sources by hand. In the case of our stroke pathway, we are saving one more life in ten over 100 days by using patient scoring and routing in the ambulances. In our emergency department we have reduced the prescribing of low-molecular-weight heparin by 79% and improved outcomes on the pathway.
What is important is that we do not allow AI to become a black box – the workings of which we do not understand. We must own the AI logic and ensure that we have appropriate governance and measurement around AI systems. Clinical safety sign-off must be based on appropriate evidence. In my case this has been based on the IHI Quality Improvement methodology, with baselines and post-implementation metrics.
“WE CAN’T ALLOW AI TO BECOME A BLACK BOX – THE WORKINGS OF WHICH WE DO NOT UNDERSTAND”
We also need to take cybersecurity very seriously, as with AI we are creating mission-critical infrastructure, like water or electricity. This is another reason why we need to be the “intelligent client” of our AI and understand how it works. In my case I have a team of data scientists and clinical content managers who own the AI we create or buy – they can explain how it works and can assist in assuring clinical safety.
Overall, AI can help us to automate the simple tasks, allowing our clinicians and professionals to work at the top of their license. It will free up time by giving our doctors and nurses everything they need to know in one view. This allows the clinical workforce to spend more time with the patient – listening, communicating and working with empathy. We use AI to route our nurses so they spend less time driving and more time in the patient’s home. This improves care and experience. My view is that we must proceed with appropriate governance, but it is our duty of care to use AI, as it saves lives, improves lives and makes healthcare more efficient.
Brian O’Connor, Chair at European Connected Health Alliance
Every new technology is greeted with concerns and worries. That is understandable, and those concerns should be listened to and addressed. However, consumers make choices every day about risks and rewards: crossing a road, flying, driving a car, using a smartphone and so on. We do our best to understand the risks and weigh them against the benefits, and overall we are pretty good at it. When I hear people pointing out the “problems” with AI, I always ask: what do you suggest?
Ban the use of it? Too late.
Surround it with regulation and good practice? Of course, but don’t stop something that is already doing much good.
Leaving aside the technology, it is said AI will never replace clinicians. I agree, but what it will do is take over routine tasks, about 80% of doctors’ work, and allow them more time to spend with their patients. It is always easier to recite the possible problems than to implement solutions.
Article Source: https://www.ictandhealth.com/news/ai-in-healthcare-do-the-rewards-outweigh-the-risks/