Why Health Systems Should Build Their Own AI Models
With so many commercial algorithms on the market, health systems face an important decision: Should they buy an artificial intelligence (AI) model or build their own? A health system that builds its own model must invest time and staff in it, but the benefits could be tremendous.

The case for building a customized AI model is simple: Instead of the algorithm learning on national data, it is learning on the health system's own data, said Pamela Peele, Ph.D., chief analytics officer at UPMC's insurance division and UPMC Enterprises. She spoke during a World Health Care Congress keynote called "More than Buzz: Realize the Potential of AI and Machine Learning." If a health system builds its own model and trains it on its own patient data, the AI can help the organization best serve its particular patient population.
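To make that logic concrete, here is a minimal sketch, with entirely synthetic data and scikit-learn as illustrative assumptions (none of this reflects UPMC's actual models): a model fit on a pooled "national" population can underperform, on a health system's own patients, the same model class fit on local data alone.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

def cohort(weights, n=2000):
    """Synthetic patient cohort whose outcome depends on four features."""
    X = rng.normal(size=(n, 4))
    y = X @ weights + rng.normal(scale=1.0, size=n)
    return X, y

# Hypothetical: the local population responds differently than the national mix.
w_national = np.array([1.0, 0.5, -0.3, 0.0])
w_local = np.array([0.4, 1.2, -0.8, 0.6])

X_nat, y_nat = cohort(w_national, n=20000)   # large national training set
X_loc, y_loc = cohort(w_local)               # the health system's own data
X_test, y_test = cohort(w_local)             # held-out local patients

national_model = LinearRegression().fit(X_nat, y_nat)
local_model = LinearRegression().fit(X_loc, y_loc)

# .score() reports R-squared on the held-out local patients.
print(f"national model: R^2 = {national_model.score(X_test, y_test):.2f}")
print(f"local model:    R^2 = {local_model.score(X_test, y_test):.2f}")
```

The gap in this toy example comes entirely from the local population behaving differently than the national average, which is the scenario Peele describes.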
What to Know Before Buying Healthcare AI
When purchasing healthcare AI, health systems need to budget for more than the price of the vendor relationship. Whatever product they choose, the algorithm still needs to feed on data, and extracting that data from a health system's records to train the algorithm can be costly.

"We never talk about the amount of money and disruption that we have to go through to be able to get the data fit for consumption by the algorithm, one," Peele said. "And two, we never talk about the disruption and the money and sometimes the failure of taking the knowledge and stuffing it into our work processes."

All of that investment does a health system no good, she added, if the model's output cannot be worked into the care delivery process efficiently. The return on investment might not justify buying the model in the first place.

There is another catch: when an organization buys a deep learning model, it is really buying a "deep-learned" model, because the model stops learning the second the U.S. Food and Drug Administration (FDA) approves the technology, said Daniel Durand, M.D., chief innovation officer and chair of radiology at LifeBridge Health. The algorithm is effectively frozen in time; the FDA does not want an approved model to keep learning and potentially get worse down the road. But if it is not training, the algorithm cannot get any smarter either.
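Durand's "deep-learned" point can be illustrated with a minimal sketch, again assuming synthetic data and scikit-learn rather than any real regulated product: one model is locked after its initial fit, the way an approved model is frozen, while the other is refit each month as the underlying population drifts.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
BASE_W = np.array([1.0, -0.5, 0.3, 0.0, 0.2])

def monthly_batch(month, n=500):
    """Synthetic monthly data; the true coefficients drift over time,
    standing in for a patient population that keeps changing."""
    drift = 0.08 * month * np.array([0.0, 1.0, -1.0, 1.0, 0.0])
    X = rng.normal(size=(n, 5))
    y = X @ (BASE_W + drift) + rng.normal(scale=1.0, size=n)
    return X, y

# "Deep-learned": fit once on historical data, then locked at approval.
X0, y0 = monthly_batch(month=0)
frozen = Ridge().fit(X0, y0)

retrained = Ridge().fit(X0, y0)  # same starting point, but refit every month
for month in range(1, 13):
    X, y = monthly_batch(month)
    print(f"month {month:2d}  frozen R^2 = {frozen.score(X, y):5.2f}   "
          f"retrained R^2 = {retrained.score(X, y):5.2f}")
    retrained = Ridge().fit(X, y)  # the retrained model keeps learning
```

In this sketch the frozen model's fit decays month after month while the retrained model stays roughly a month behind the population and holds its accuracy.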
AI Can Fall Short of Healthcare Expectations
UPMC once bought a popular predictive analytics product that had performed well on the market. It was considered "best of breed," and it had been trained on national data before consuming UPMC's data. The system worked well at first, but it was static: it was not continually learning, and it was not trained on new, real-time data. As UPMC evolved and expanded into other regions, the model could not keep up with the changes. Its predictions, which had an R-squared value between 0.26 and 0.28 before UPMC's major changes, fell to 0.11. The model failed to grow alongside UPMC.

The organization ultimately decided to build its own model, which retrains itself every month. It now runs at an R-squared of 0.35.

Peele said that if a business is going to remain static, a tried-and-true off-the-shelf AI model can serve it well. But if the business is going to grow and change, buying a standard model is unlikely to help.
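For readers unfamiliar with the metric Peele cites, R-squared is the fraction of variance in the outcome that a model's predictions explain: 0.11 means the purchased model was capturing roughly a tenth of the variation in UPMC's data. A quick sketch of the calculation, using invented numbers that exaggerate the contrast:

```python
import numpy as np

def r_squared(y_true, y_pred):
    """R^2 = 1 - (residual sum of squares / total sum of squares)."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Invented per-member cost figures (in $1,000s) and two sets of predictions.
actual = np.array([4.1, 2.3, 7.8, 5.0, 3.6, 6.2])
stale = np.array([5.0, 5.0, 5.0, 5.2, 4.8, 5.1])  # barely tracks the outcomes
fresh = np.array([4.5, 2.9, 7.0, 5.3, 3.2, 6.0])  # follows them closely

print(f"stale model R^2: {r_squared(actual, stale):.2f}")   # ~0.03
print(f"fresh model R^2: {r_squared(actual, fresh):.2f}")   # ~0.92
```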