Understanding the Two Primary Types of AI in Healthcare
The typical healthcare organization is exploring two types of AI deployment, Gough says. In the more established field of classic machine learning, organizations lean on data scientists and other experts to determine the features that are built into the algorithm.
Risk stratification is one of the most common use cases for classic machine learning in healthcare, Gough says. Common examples include figuring out which recently discharged patients are at risk of readmission within 30 days and should be assigned a case manager, or which patients who have appointments scheduled next week are at risk of not coming and should receive a personalized appointment reminder.
“This helps healthcare organizations best apply their high-touch resources,” he says.
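The classic-ML pattern Gough describes, in which experts choose the features and their influence up front, can be illustrated with a minimal sketch. The feature names, weights, and threshold below are invented for illustration and are not a clinically validated readmission model:

```python
import math

# Hypothetical expert-selected features and weights -- illustrative only,
# not a validated clinical model.
WEIGHTS = {
    "prior_admissions_12mo": 0.45,
    "num_chronic_conditions": 0.30,
    "lives_alone": 0.45,
    "age_over_75": 0.35,
}
BIAS = -2.0

def readmission_risk(patient: dict) -> float:
    """Logistic score in (0, 1): probability-like risk of 30-day readmission."""
    z = BIAS + sum(WEIGHTS[f] * patient.get(f, 0) for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def needs_case_manager(patient: dict, threshold: float = 0.5) -> bool:
    """Flag patients whose score clears the (hypothetical) intervention threshold."""
    return readmission_risk(patient) >= threshold

high_risk = {"prior_admissions_12mo": 3, "num_chronic_conditions": 4,
             "lives_alone": 1, "age_over_75": 1}
low_risk = {"prior_admissions_12mo": 0, "num_chronic_conditions": 1}
```

The same shape works for the no-show example: score every patient with an appointment next week, then send personalized reminders only to those above the threshold.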
Deep learning, on the other hand, uses frameworks such as TensorFlow to train algorithms to classify and analyze data. This doesn’t require a human expert to select an algorithm’s features, but it does require additional computing power — which is where the ability to boost server performance with tools such as Intel Deep Learning Boost Technology offers a clear advantage.
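The key difference is that the model discovers its own weighting of the inputs from data rather than receiving expert-assigned weights. A framework like TensorFlow does this at scale; the toy gradient-descent loop below shows the same idea in plain Python on invented data (the dataset, learning rate, and epoch count are all illustrative):

```python
import math
import random

random.seed(0)

# Toy labeled data: two numeric inputs per example. No expert assigns
# weights; training discovers them from the examples.
data = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.15, 0.05), 0),
        ((0.8, 0.9), 1), ((0.9, 0.7), 1), ((0.85, 0.95), 1)]

w = [random.uniform(-0.5, 0.5) for _ in range(2)]
b = 0.0
lr = 1.0

def predict(x):
    """Sigmoid of a weighted sum -- a single 'neuron'."""
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1 / (1 + math.exp(-z))

def loss():
    """Mean cross-entropy over the training set."""
    eps = 1e-9
    return -sum(y * math.log(predict(x) + eps) +
                (1 - y) * math.log(1 - predict(x) + eps)
                for x, y in data) / len(data)

initial_loss = loss()
for _ in range(2000):              # gradient-descent training loop
    gw, gb = [0.0, 0.0], 0.0
    for x, y in data:
        err = predict(x) - y       # dL/dz for sigmoid + cross-entropy
        for i in range(2):
            gw[i] += err * x[i]
        gb += err
    for i in range(2):
        w[i] -= lr * gw[i] / len(data)
    b -= lr * gb / len(data)
final_loss = loss()
```

Real deep-learning workloads run this loop over millions of parameters and examples, which is exactly where the extra computing power Gough mentions comes in.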
The physical location where AI models are deployed is also important. For example, training an algorithm to analyze brain scans can be done using cloud computing, Gough says, especially if an organization is leveraging data sets that aren’t stored on-premises.
“But if you’re trying to provide decision support at the point of care,” he says, “you want the model running as close to you as possible. You would spend a long time sending the entire imaging study to the cloud.”
5 Current and Future Use Cases for Deep-Learning AI
When hardware optimization is combined with software development toolkits and application programming interfaces that are likewise optimized for AI, such as Intel oneAPI Toolkits, healthcare organizations can optimize their deep-learning models for their existing Intel hardware, Gough says. This allows hospitals and health systems to explore several emerging use cases for deep-learning AI.
- Natural language processing: As much as 80 percent of healthcare data is unstructured, whether it’s physician notes within the electronic health record or transcripts from the call center. NLP enables computer systems to process natural language text and turn it into structured data. Converted to a machine-readable format, this data can be more easily searched and analyzed by analytics and business intelligence tools.
- Imaging analytics: Computer vision is a type of AI that can interpret visual data, which is useful for analyzing medical images. In one example, Intel and GE Healthcare have collaborated to run algorithms at the point of care on the same Intel processors that power the X-ray system. Now, Gough says, clinical staff can determine in seconds whether a patient has a collapsed lung, an analysis that would otherwise take hours.
- Robotics and medical devices: Embedded cameras and visual processing units within robots enable the use of AI to complete a range of tasks, from assisting with surgery to distributing medicine and prepping patient rooms. These use cases have taken on added importance as hospitals look to limit the exposure of clinical personnel to COVID-19 and other pathogens, Gough says.
- Multiparty analytics: Through its work with the Confidential Computing Consortium, Intel has developed Intel Software Guard Extensions (Intel SGX), which partition memory into trusted execution environments so that only an approved application and algorithm can use that area of memory, and only the analytical output is shared with participating stakeholders. This would enable multiple organizations to collaborate on research with fewer (and less complex) legal agreements while keeping both the underlying data sets and the algorithms analyzing them in a secure environment. “This approach has the dual benefit of better privacy and protection for the intellectual property of the algorithm itself,” Gough says.
- Audio and video stream analysis: By and large, telemedicine adoption since 2020 has focused on taking the analog (an in-person visit with a doctor) and making it digital (a videoconference). The next step, Gough says, is developing algorithms that can detect the presence of medical conditions by analyzing the audio and video streams that each virtual visit creates. “Is it possible to gauge something based on the inflections in their voice?” Gough asked. “Soon we may be able to find out.”
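The NLP use case above comes down to a simple contract: free text in, structured fields out. The sketch below fakes that contract with hand-written rules on an invented clinician note; a production pipeline would use a trained clinical language model rather than regular expressions, but the input and output shapes are the same:

```python
import re

# Invented, de-identified example of an unstructured clinician note.
note = ("Pt is a 67 y/o male with hx of CHF and T2DM. "
        "BP 142/88, HR 76. Started lisinopril 10 mg daily.")

def extract(note: str) -> dict:
    """Rule-based stand-in for an NLP pipeline: note text -> structured record."""
    age = re.search(r"(\d+)\s*y/o", note)
    bp = re.search(r"BP\s*(\d+)/(\d+)", note)
    med = re.search(r"Started\s+(\w+)\s+(\d+\s*mg)", note)
    return {
        "age": int(age.group(1)) if age else None,
        "bp_systolic": int(bp.group(1)) if bp else None,
        "bp_diastolic": int(bp.group(2)) if bp else None,
        "medication": med.group(1) if med else None,
        "dose": med.group(2) if med else None,
    }

record = extract(note)
```

Once notes are reduced to records like this, they can be loaded into the analytics and business intelligence tools the article describes and queried like any other structured data.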