How AI will enable patient-centric healthcare, and how to get it right
Today’s consumers – including those of healthcare – expect greater choice and personalisation.
Thanks to growing volumes of research and patient data, patient-centric approaches – from targeted drugs to more convenient diagnostics to remote care designed around patient needs – are becoming more achievable.
This represents one of the most fundamental shifts in health and life sciences for decades. AI in particular presents huge opportunities, but also huge risks. We as an industry must learn to use AI correctly if these benefits are to be realised.
How AI is already disrupting healthcare
Patient centricity is already driving change across all areas of healthcare, with many AI and data science projects at an advanced stage. Broadly speaking, they fit into three categories:
1) Disruptive discovery
AI is aiding the development of more personalised drugs.
AI excels at finding links in data. It can be told what pharma research is trying to achieve, then analyse molecule libraries to identify likely candidates, without being explicitly trained on what to look for. In some cases, this can lead to approaches that no human would see. For example, researchers at Massachusetts Institute of Technology (MIT) recently claimed to have used AI to find a new antibiotic.
AI also has the potential to enable personalised disease management and improve clinical trials. For example, in many chronic diseases symptoms vary from day to day, so checkups with physicians will not provide a full picture of disease progression or the effect of interventions. The pharmaceutical company GlaxoSmithKline (GSK) is exploring the use of sensors to measure the movement of arthritis patients. AI could then be applied to the sensor data to spot subtle changes in how patients move over time that would likely be missed by human observation.
2) Decision support and diagnostics
AI has huge power in diagnostics, since it learns from patterns in data.
Image and sound analysis offer huge potential. SkinVision checks for signs of skin cancer using a phone camera. ResApp diagnoses respiratory diseases from the sound of a cough. Both are trained on large databases of labelled images or sounds, and learn to spot subtle physiological correlations, which would not be possible without AI.
Data from new genetic sequencing techniques also present new opportunities. One recent trial gathered patients’ genetic data and used AI to compare it to vast bodies of existing information about rare genetic diseases. In one case this saved the life of a participant: it uncovered a rare genetic disease in a newborn baby that had confounded doctors, but was easily treated once understood.
3) Smarter healthcare
AI can harness health and lifestyle data (from wearable technologies for example) to optimise healthcare regimes or promote healthy living.
The French project M4P is establishing a database where diabetes sufferers and healthcare professionals can directly upload information on their wellbeing. Across such huge amounts of data, AI can identify when specific combinations of factors lead to specific outcomes – one example being how environment, lifestyle and drug combinations affect the progression of diabetes. Health professionals can use this data to tailor care regimens. For humans, it would be too hard to isolate all the factors across such complex datasets and conclude that a particular cause had a particular effect. This is where deep learning shines.
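To make this concrete – as a minimal, hypothetical sketch rather than M4P’s actual data or model – the pattern-finding described above can be thought of as fitting a model to patient factors and an outcome, then inspecting the learned weights. Here a simple linear fit on synthetic data recovers which factors actually drive a disease-progression score; real projects would use far richer models and validated clinical data.

```python
import numpy as np

# Hypothetical illustration (not M4P's real data): each row is a patient
# record of lifestyle/treatment factors; the target is a progression score.
rng = np.random.default_rng(0)
n = 500
exercise = rng.normal(size=n)  # hours of exercise per week (standardised)
diet = rng.normal(size=n)      # diet quality score (standardised)
dose = rng.normal(size=n)      # drug dose (standardised)

# Synthetic ground truth: exercise and dose slow progression; diet has no effect.
progression = 2.0 - 0.8 * exercise - 0.5 * dose + rng.normal(scale=0.1, size=n)

# Fit a linear model and read off each factor's estimated effect.
X = np.column_stack([np.ones(n), exercise, diet, dose])
coef, *_ = np.linalg.lstsq(X, progression, rcond=None)

for name, c in zip(["intercept", "exercise", "diet", "dose"], coef):
    print(f"{name}: {c:+.2f}")
```

The fitted weights correctly attribute the effect to exercise and dose and assign diet a weight near zero – the kind of factor isolation that becomes intractable for humans as datasets grow to hundreds of interacting variables.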
Other services are helping people take wellbeing into their own hands via apps. Riva Digital deduces blood pressure from slight variations in the colour of blood flowing through your fingertip. Lumen digitises your breath to understand individual metabolism, then provides personalised weight loss advice.
How to harness AI successfully
Despite the promising projects above, applying AI in healthcare still presents many challenges, and there is high risk of failure. Success stories excite the industry to do more. Yet costly failures could set AI in healthcare back years. It is important we get it right.
AI is rarely ‘plug and play’. It learns its own rules from patterns in data, rather than having explicit pre-programming, which creates room for error. It can only work with the right data, models, training, and deployment.
What matters most for AI is building trust. If we understand what AI is – and isn’t – telling us, we can use it to make better decisions. If we expect AI to automate high stakes decisions without proper checks, we are setting ourselves up for failure.
Here are the three stages that we believe make a healthcare AI project successful and trustworthy.
1. Planning for success
Successful AI programmes start by identifying what decision the AI should take, and what insight is needed for it to take that decision. Only then should they start to gather data, aligned to what is needed.
Data projects also need the right expertise. This is not just AI and data experts. Teams must also include experts on what data means in the real world (e.g. biochemical interactions or human biology) and on patient quality of life, as well as translators who can bridge the language gap between different groups.
2. Building and training AI
AI needs to be fed good data. But health data is complex, varied, and often poorly structured. Handwritten doctors’ notes can be hard to digitise accurately. Data from consumer smartphones introduces risks from uncontrolled variables. Combining all this needs considerable expertise to accurately capture, clean, and label data so as to remove sources of bias that would cause the AI to reach incorrect conclusions.
Many models fail because they rely on correlations, without adequately validating that they are causally connected using real world data. For example, a sensor may infer that a behaviour indicates patient stress. To be certain, you need trials to check that the measurements consistently correlate with other reliable stress indicators. If you don’t verify that your data reflects a real effect, the model may give you wrong results in the real world, and quickly lose trust.
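The validation step above can be sketched very simply. In this hypothetical example (the stress sensor, reference measure, and 0.7 threshold are all assumptions for illustration), a sensor-derived score is checked against a trusted reference indicator before it is allowed to feed a model:

```python
import numpy as np

# Hypothetical check: does a sensor-derived stress score track a trusted
# reference measure (e.g. a validated questionnaire) across a trial cohort?
rng = np.random.default_rng(1)
reference = rng.normal(size=200)                            # trusted indicator
sensor = 0.9 * reference + rng.normal(scale=0.3, size=200)  # sensor-derived score

# Pearson correlation between the two measurements.
r = np.corrcoef(sensor, reference)[0, 1]
print(f"Pearson r = {r:.2f}")

# The acceptance threshold is an assumption; in practice it would be agreed
# with clinicians before the trial, not chosen after seeing the data.
if r < 0.7:
    raise ValueError("Sensor signal does not track the reference; do not deploy.")
```

Correlation alone still does not prove causation, of course; it is only the first gate, confirming the measurement means what the model assumes it means.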
Most healthcare AI will also need to be explainable, with tools which describe in clear language how the model reached its decision. If users do not understand what is driving the AI decision, they will struggle to trust the results.
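For simple models, explainability can be as direct as showing each input’s contribution to a decision. The sketch below uses entirely hypothetical features and weights (not a real clinical model) to show how a linear risk score can be broken down into a plain-language explanation; more complex models need dedicated explanation tools, but the goal is the same:

```python
import numpy as np

# Hypothetical linear risk model: features, weights, and patient values are
# invented for illustration only.
features = ["age (standardised)", "blood pressure (standardised)", "BMI (standardised)"]
weights = np.array([0.4, 0.7, 0.2])   # assumed learned coefficients
patient = np.array([1.2, -0.5, 0.8])  # one patient's standardised measurements

# Each feature's contribution is its weight times the patient's value.
contributions = weights * patient
score = contributions.sum()

print(f"risk score: {score:+.2f}")
for name, c in sorted(zip(features, contributions), key=lambda x: -abs(x[1])):
    print(f"  {name} pushed the score by {c:+.2f}")
```

A clinician reading this output can see not just the score, but which measurements drove it and in which direction – exactly the visibility needed to decide whether to trust a given recommendation.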
3. Deploying AI
We can’t expect people to engage with a second-rate user experience, and this is doubly true where the end user is a consumer – whether it be a platform for self-reporting on drug trials or long-term condition management. We must ensure AI is easy to use, or people won’t use it.
AI needs to be rolled out gradually, with checks, until it is proven. For example, a doctor may start by making her own diagnosis, then run a diagnostics AI to validate it. Over time, she may make diagnoses in parallel with the AI. Eventually, the AI may become the first port of call, with the doctor only brought in for serious or edge cases. Expectations need to be managed as AI use grows. Overpromising can lead to long term suspicion, as many early AI innovators have found.
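The staged rollout above implies a simple feedback loop: log doctor and AI conclusions in parallel, and only widen the AI’s role once agreement stays high. A minimal sketch, with invented diagnoses and an assumed promotion threshold:

```python
# Hypothetical parallel-running log: (doctor_diagnosis, ai_diagnosis) pairs.
# The data and the 95% threshold are illustrative assumptions.
records = [
    ("pneumonia", "pneumonia"),
    ("bronchitis", "bronchitis"),
    ("pneumonia", "bronchitis"),
    ("asthma", "asthma"),
]

agreed = sum(1 for doctor, ai in records if doctor == ai)
agreement_rate = agreed / len(records)
print(f"agreement: {agreement_rate:.0%}")

# Only promote the AI to first port of call once agreement clears the bar;
# real deployments would also review every disagreement case individually.
PROMOTE_THRESHOLD = 0.95
if agreement_rate >= PROMOTE_THRESHOLD:
    stage = "AI-first with doctor review"
else:
    stage = "doctor-led, AI in parallel"
print(f"next stage: {stage}")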
It’s time to take time and build trust in AI
AI can untangle and interpret correlations that humans cannot, and process vast amounts of data far quicker than humans can. This can give researchers and healthcare professionals access to expertise and approaches outside their thinking and abilities, and speed critical decisions.
But AI has limitations which can quickly undermine trust if not addressed.
In the short term at least, AI should be a collaboration partner which informs human decision making. For high stakes decisions, humans need to be able to look under the hood to understand why the AI reached its decision, and how certain it is. They need to see it working and become familiar with it so they can learn to trust it, but also have the training to know what to do if something doesn’t seem right.
If we get it right, AI could help us design more personalised drugs, make diagnoses outside a doctor’s experience, and recommend health and wellbeing regimes on a personal level. The benefit to both business and society is enormous. It is worth taking the time to get it right.
This article was co-written by Dr James Hinchliffe and was inspired by Tessella’s new whitepaper, Patient Centric Healthcare – The Role of AI and Data Science.