Q: How do you define AI on the technology spectrum? (And how does this work in the health care system?)
John Edwards, Vice President, Healthcare Solutions Consulting, SoftServe: When we talk about artificial intelligence (AI), the idea of a robot that can think and feel and replace what the human brain does is still a pipe dream. What AI actually aims to do is automate some of the activities a person would normally perform, using computers and rules to make those activities more efficient. It developed as a set of techniques with roots in statistics.
I actually have a master's degree in statistics from Ohio State [University], where we studied the underlying methods for building these kinds of learning capabilities and drawing insights from them. It's different from the statistics you'd study in a regular college course because it's more theoretical. As big data becomes more real, as we can process and store more data, research capabilities are evolving alongside different AI technologies. We often talk about the underlying learning models that are part of AI, whether they're supervised or unsupervised. Sometimes people get frustrated with AI because these learning models find patterns and connections in the data that are hard to explain.
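The supervised/unsupervised distinction mentioned above can be sketched in a few lines of plain Python. This is an illustrative toy, not any system Edwards describes: the "biomarker" values and labels are invented, and real learning models are far richer.

```python
# Toy contrast between supervised and unsupervised learning.
# All data values and labels below are hypothetical.

def nearest_centroid_train(points, labels):
    """Supervised: we are GIVEN labels and learn one centroid per label."""
    centroids = {}
    for label in set(labels):
        group = [p for p, l in zip(points, labels) if l == label]
        centroids[label] = sum(group) / len(group)
    return centroids

def nearest_centroid_predict(centroids, x):
    """Predict the label whose centroid is closest to x."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

def two_means_cluster(points, iters=10):
    """Unsupervised: no labels at all; the algorithm finds two groups."""
    a, b = min(points), max(points)  # initial cluster centers
    for _ in range(iters):
        group_a = [p for p in points if abs(p - a) <= abs(p - b)]
        group_b = [p for p in points if abs(p - a) > abs(p - b)]
        a = sum(group_a) / len(group_a)
        b = sum(group_b) / len(group_b)
    return a, b

# Supervised: labeled examples (e.g., a biomarker value -> a diagnosis label).
values = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8]
labels = ["healthy", "healthy", "healthy", "at-risk", "at-risk", "at-risk"]
model = nearest_centroid_train(values, labels)
print(nearest_centroid_predict(model, 4.9))   # -> at-risk

# Unsupervised: the SAME values with no labels; two groups emerge anyway.
print(sorted(two_means_cluster(values)))
```

The point of the contrast: the supervised model needed someone to tell it what "healthy" and "at-risk" look like, while the clustering step recovered the same two groups from the raw values alone, which is also why its output can be harder to explain.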
Traditional regression models always involve people making decisions about which variables to include and trying to predict with some kind of linear model, a progression along a line. AI goes beyond that simple approach: it can account for many diverse things happening at the same time, using these newer statistical methods to draw new connections in the data that turn out to have predictive power. That power is then measured with scores like the alpha and reliability scores you would see in a statistics course. They go by different names, but they are all used to test how reliable the model's predictions are.
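The "traditional" baseline Edwards describes, a linear fit plus a reliability score, can be sketched as follows. The data points are made up for illustration, and R-squared stands in here for the family of reliability metrics he alludes to.

```python
# Minimal sketch: fit a straight line by least squares, then score
# how well the line explains the data with R^2 (coefficient of
# determination). The (x, y) points below are invented.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def r_squared(xs, ys, slope, intercept):
    """1 minus (residual variation / total variation); 1.0 is a perfect fit."""
    mean_y = sum(ys) / len(ys)
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]   # roughly y = 2x with noise
slope, intercept = fit_line(xs, ys)
print(round(slope, 2))                                  # -> 1.99
print(round(r_squared(xs, ys, slope, intercept), 3))    # -> 0.997
```

A high R-squared says the line fits the data it was fit on; as the interview goes on to note, the harder question is whether a model keeps predicting well on people it has never seen.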
So this is a whole new area of science that has affected every industry, and healthcare in particular. It has been the focus of many researchers; many strong papers have been published around very specific examples of applying this technology in the healthcare sector, and they have proven again and again that it can work and produce predictable results.
Over the past few years, we've seen a real explosion in the use of this information: to drive drug discovery, but also elsewhere in the drug companies' lifecycle, from better recruitment to better understanding of patients for clinical trials, to what happens to your drug once it's released into the real world. All of it uses these new underlying technologies we call AI.
I recently came across an estimate that put AI in healthcare at nearly a $30 billion industry last year, so it is not a small amount of work or activity. But as with every new technology, it is still early and hasn't reached the full potential of what it can do. It is certainly a growing trend in healthcare, where pharma is a bit ahead of providers and payers in how it uses this data. But providers are quickly thinking about how to use these techniques to create better services.
Q: Can you explain predictive modeling in relation to healthcare?
John Edwards: Many studies have been published about the value of using AI to predict how often a woman should get a mammogram. Today there are across-the-board standards for preventive medicine: if you are of a certain age, you should get one at a certain frequency; if you have a family history, it may be a different frequency; if you have certain conditions that predispose you to cancer, there is a different approach again. But that is still not personalized medicine. It doesn't take everything we know about the woman into account when advising her how often she should go through the mammogram procedure and expose herself to the radiation the technique uses. We can do better.
If we build a model that uses more than just a woman's age to predict whether she should receive breast cancer screening, by taking in other available data about women who have received screenings [or] gone through the different stages that end with a cancer diagnosis, we can create a more complete model. It takes the available data about the whole person and produces a screening protocol that makes sense for that person, moving from "everyone should get a mammogram at a certain age" to a frequency tailored to her.
And because of her estimated risk factors, based on the data and the growing evidence we have from looking at lots of people in the real world and what happened when we applied this analytical approach to this real problem, you can identify people who should be getting screened more or less often than the standard protocols would give us.
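The risk-stratified screening idea above can be sketched as a two-step pipeline: combine a few patient factors into a risk score, then map the score to a screening interval. Everything here is hypothetical: the coefficients, thresholds, and intervals are invented for illustration only, not clinical guidance, and a real model would learn its weights from outcome data.

```python
import math

# Hypothetical risk-stratified screening sketch.
# Coefficients and thresholds are INVENTED for illustration.

def risk_score(age, family_history, predisposing_condition):
    """Logistic combination of factors into a score in (0, 1).
    family_history and predisposing_condition are 0/1 flags."""
    z = -6.0 + 0.08 * age + 1.2 * family_history + 1.5 * predisposing_condition
    return 1 / (1 + math.exp(-z))

def screening_interval_years(score):
    """Map estimated risk to a (made-up) recommended screening interval."""
    if score >= 0.5:
        return 1   # highest estimated risk: screen annually
    if score >= 0.2:
        return 2
    return 3       # lowest estimated risk: screen less often

# Three hypothetical patients: (age, family history, predisposing condition)
for age, fam, cond in [(40, 0, 0), (55, 1, 0), (62, 1, 1)]:
    s = risk_score(age, fam, cond)
    print(age, round(s, 2), screening_interval_years(s))
```

The design point matches the interview: instead of one age-based rule for everyone, the protocol becomes a function of the whole person's data, and people whose score diverges from the standard protocol surface naturally.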
Predictive models have to be trained on a data set, and if that data is narrow, it's hard to know whether the prediction from the AI model makes sense. Only about 3% of the population participates in clinical trials, and by their very nature clinical trials test a drug on a small group of people. Yet when we bring drugs to market, everyone out there takes them; the whole world starts using the drug, even though only that 3% met the criteria to be part of a clinical trial.
The real-world evidence that comes from looking at this data and building data-driven AI models will let us use more data from a wider range of people, perhaps more representative of all of us, and give better advice about diagnostic procedures and treatment protocols: whether the treatments and diagnostics we're using are actually working. It can also be used to develop better expectations of how policies will bring about outcomes that are meaningful to people.