News & Press: Digital Health Observatory

Health is Something that Happens Every Day

09 July 2018   (0 Comments)

Joan Cornet Prat, Non-Executive Director, ECHAlliance

The future of healthcare is a combination of wearable tech and mobile health apps, powered by Artificial Intelligence, giving the user bespoke support and behavioural nudges.


Integrating healthcare apps with wearable technology provides minute-by-minute monitoring of people with chronic conditions, a task that is physically impossible for human doctors and nurses. It is vital nonetheless, because it allows clinicians to respond quickly to deterioration or changes in a patient's condition.
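A minimal sketch of what such monitoring could look like in code, assuming, purely for illustration, a stream of minute-by-minute heart-rate readings and an invented alert threshold:

```python
from statistics import mean

def check_vitals(heart_rates, resting_baseline=65, window=10, threshold=1.25):
    """Flag possible deterioration when the recent average heart rate
    drifts well above the patient's resting baseline.

    heart_rates: minute-by-minute readings in beats per minute.
    Returns True when the last `window` readings average more than
    `threshold` times the baseline.
    """
    if len(heart_rates) < window:
        return False  # not enough data yet
    recent = mean(heart_rates[-window:])
    return recent > threshold * resting_baseline

# Example: a stable patient versus one whose heart rate is climbing.
stable = [64, 66, 65, 67, 63, 66, 64, 65, 66, 64]
rising = [70, 74, 78, 82, 85, 88, 90, 92, 94, 96]
print(check_vitals(stable))  # False
print(check_vitals(rising))  # True
```

A real system would of course use clinically validated thresholds and per-patient baselines; the point is only that a machine can run this check every minute for every patient, which no care team could.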


This implies a major transformation in how health services are delivered. Instead of a check-up every month, or sometimes every six months, it is now possible to have up-to-the-minute data from patients' daily lives.

And these new tools and techniques are not just for those with chronic conditions; they also work for those of us who simply want to live a healthier lifestyle. The data that wearables collect helps us track patterns in both vital signs and behavioural habits.

All this allows better monitoring of disease and adherence to therapeutic guidelines, as well as healthy habits that can give us a higher quality of life. None of it would be possible without digital technologies and the data processing that Artificial Intelligence provides.


Defining Artificial Intelligence

“Software technologies that make a computer or robot perform equal to or better than normal human computational ability in accuracy, capacity, and speed. Two very different approaches, ‘rule-based systems’ (see expert system) and ‘neural networks’ have produced increasingly powerful applications that can make complex decisions, evaluate investment opportunities, and help in developing new products. Other uses include robotics, human-language understanding, and computer vision.”
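The two approaches the definition names can be contrasted with a toy example: a hand-written rule versus a single artificial neuron (the building block of a neural network) deciding the same question. All data and thresholds below are invented for illustration only.

```python
# 1. Rule-based system: the knowledge is written down explicitly as a rule.
def rule_based_fever(temp_c):
    return "fever" if temp_c >= 38.0 else "no fever"

# 2. A single artificial neuron: the decision boundary is not hand-written
# but learned from labelled examples via the classic perceptron update.
def train_neuron(samples, epochs=50, lr=0.01):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for temp, label in samples:          # label: 1 = fever, 0 = no fever
            x = temp - 37.0                  # centre the input for stable learning
            pred = 1 if w * x + b > 0 else 0
            error = label - pred
            w += lr * error * x
            b += lr * error
    return w, b

samples = [(36.5, 0), (37.0, 0), (37.4, 0), (38.2, 1), (39.0, 1), (40.1, 1)]
w, b = train_neuron(samples)

def learned_fever(temp_c):
    return "fever" if w * (temp_c - 37.0) + b > 0 else "no fever"

print(rule_based_fever(39.0), learned_fever(39.0))  # fever fever
```

Both reach the same answer here, but by different routes: the rule encodes expert knowledge directly, while the neuron recovers a similar boundary purely from the examples it is shown.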


So far so good…

Artificial describes something that is not real: it is made, or simulated, rather than natural.


Intelligence is a very complex term. It can be defined in many ways: logic, understanding, self-awareness, learning, emotional knowledge, planning, creativity and, of course, problem solving.

We call ourselves, humans, intelligent because we do all of these things: we perceive our environment, learn from it and act on what we discover.

Alan Turing was born on 23 June 1912 in London. He is widely known for breaking the Enigma code, which Nazi Germany used to encrypt its communications. Turing's work also led to his theory of computation, which addresses how efficiently problems can be solved. He presented his ideas in the model of the Turing machine, which is still a standard term in Computer Science today.

In 1920 the Czech writer Karel Čapek published a science fiction play named Rossumovi Univerzální Roboti (Rossum’s Universal Robots), better known as R.U.R. The play introduced the word ‘robot’. R.U.R. is about a factory that creates artificial people, and these artificial people are called robots. They differ from what we call robots today: in R.U.R. the robots are living creatures, closer to what we now call ‘clones’. They initially work for the humans, but a robot rebellion leads to the extinction of the human race.

In 1956 what was probably the first workshop on Artificial Intelligence took place, and with it the field of AI research was born. Researchers from Carnegie Mellon University (CMU) and the Massachusetts Institute of Technology (MIT), together with employees of IBM, met and founded AI research. In the following years they made huge progress, and nearly everybody was very optimistic.

In fact, the scientists’ optimism of 1956 was misplaced: the main predictions made then are still not reality today. Instead, it is new data-processing techniques and cheaper, larger data storage that are changing the world.


The amount of available digital data is growing at a mind-blowing speed, doubling every two years. In 2013 it encompassed 4.4 zettabytes; by 2020 the digital universe – the data we create and copy annually – will reach 44 zettabytes, or 44 trillion gigabytes. (1)
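As a rough check on the arithmetic behind those figures, a doubling every two years starting from 4.4 zettabytes in 2013 does indeed land in the region of the 44-zettabyte forecast for 2020:

```python
# Project the size of the digital universe, assuming the cited figures:
# 4.4 zettabytes in 2013, doubling every two years.
start_year, start_zb = 2013, 4.4
for year in range(2013, 2021):
    projected_zb = start_zb * 2 ** ((year - start_year) / 2)
    print(year, round(projected_zb, 1))
# 2020 comes out at about 49.8 ZB, the same order of magnitude as the
# report's forecast of 44 ZB (real growth is not perfectly exponential).
```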


Personally, I think the term ‘Artificial Intelligence’ is excellent for computer science, but it creates confusion in wider society.

It's as if, at the popular level, instead of saying “I need an aspirin”, a patient said “I really need to take some acetylsalicylic acid” (or “I have a headache, I really need some ASA (C9H8O4)”). People would ask: “Why should I take acid?” or “How can my body absorb C9H8O4? Won't it be dangerous for me?”

Some individuals and organisations already understand this. IBM, for example, no longer talks about AI in healthcare. Instead it talks about the ‘Deep Dive’, which feels like a much friendlier AI.

Still, we cannot forget that we are faced with technologies of which we have little experience. They require a set of guidelines and ethical norms, as well as scientific evidence of their impact on the health of patients and of citizens looking after their own health.


To avoid the pitfalls of using AI, we need the following preparations:

1. Creation of ethical standards which are applicable to and obligatory for the whole healthcare sector

2. The gradual development of AI, to allow time for mapping its possible downsides

3. For medical professionals: acquiring basic knowledge of how AI works in a medical setting, in order to understand how such solutions might help them in their everyday job

4. For patients: gaining experience of artificial intelligence and discovering its benefits for themselves – for example with the help of Cognitoys, which support the cognitive development of small children through AI in a fun and gentle way, or with services such as Siri

5. For companies developing AI solutions (such as IBM): more (and more effective) communication with the public about the potential advantages and risks of using AI in medicine

6. For decision-makers at healthcare institutions: taking all the necessary steps to be able to measure the success and effectiveness of the system. It is also important to push companies towards offering affordable AI solutions. This is the only way to bring the promise of science fiction into reality and turn AI into the stethoscope of the 21st century.


In 10 years we’ll look back at 2018, astonished at the progress of AI, just as we now look back with astonishment that there was a time in history when the world existed without the Internet.

So we have a long and winding road ahead. There will be a lot of challenges, but there are also plenty of opportunities for better healthcare for patients and better quality of life for our healthy citizens.


References: 

1. The Digital Universe of Opportunities: Rich Data and the Increasing Value of the Internet of Things.

2. Artificial Intelligence Will Redesign Healthcare.

3. Artificial intelligence in healthcare: past, present and future.