
Jun 11 2025

Cloud-Based HPC Is Helping Researchers Move Healthcare Forward

At the AWS Summit in Washington, D.C., an NIH researcher shared how the agency is using high-performance computing to make strides in cardiovascular disease treatment.

In the past, high-performance computing was available only to researchers and research institutions with direct access to large HPC clusters. Those without such access had to write proposals and wait their turn to use the clusters. With resources scarce and many people in line ahead of them, researchers could wait a long time for their turn, and if they made a mistake or their code didn’t run well in the large cluster environment, they might have to restart the process from the beginning.

In recent years, the cloud has transformed the data processing space, making HPC clusters accessible to more researchers without the long wait times. At AWS Summit Washington, DC 2025, Jianjun Xu, principal solution architect for AWS Higher Education Research, explained how Amazon Web Services has shortened the research lifecycle to enable quick results that drive healthcare forward. Joseph Marcotrigiano, chief of the structural virology section at the National Institutes of Health, shared how the agency is using AWS tools to better understand cardiovascular disease and uncover new treatment options.


AWS Provides On-Demand HPC Resources for Healthcare Researchers

While there are benefits to on-premises clusters, setting up a traditional HPC cluster can take seven or eight months. By the time an organization procures the needed hardware and sets it up, the technology might already be out of date, and the necessary graphics processing units (GPUs) can be difficult to procure in the first place. Using HPC services through AWS gives the organization immediate access to the newest hardware, according to Xu.

AWS offers healthcare organizations several options when it comes to HPC. AWS Parallel Computing Service is a fully managed service for running Simple Linux Utility for Resource Management (SLURM) clusters. A researcher can create a SLURM cluster that meets their specifications, such as processor types and latency needs, within 20 minutes. Users control the compute nodes and build the node groups themselves, and they can run native or containerized apps on AWS with the SLURM scheduler.
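Submitting work to a cluster like this looks much like submitting work to any SLURM scheduler. Below is a minimal sketch, assuming the researcher is logged in to the cluster and the standard Slurm command-line tools are on the path; the partition name, resource requests and application command are placeholders, not details from the presentation.

    # Minimal sketch: compose a SLURM batch script and submit it with sbatch.
    # Assumes the Slurm CLI (sbatch, srun) is available on the cluster's login
    # node; the partition name and application command are placeholders.
    import subprocess
    import tempfile

    job_script = """#!/bin/bash
    #SBATCH --job-name=hpc-demo           # hypothetical job name
    #SBATCH --partition=compute           # placeholder partition/queue name
    #SBATCH --nodes=2                     # request two compute nodes
    #SBATCH --ntasks-per-node=8           # eight tasks per node
    #SBATCH --time=02:00:00               # two-hour wall-clock limit
    #SBATCH --output=hpc-demo-%j.out      # capture output per job ID

    srun ./my_hpc_app --input data/       # placeholder application command
    """

    # Write the script to a temporary file and hand it to the scheduler.
    with tempfile.NamedTemporaryFile("w", suffix=".sbatch", delete=False) as f:
        f.write(job_script)
        script_path = f.name

    # sbatch prints the assigned job ID on success.
    subprocess.run(["sbatch", script_path], check=True)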

“You can create a compute environment that can run up to 100,000 CPUs, but if you ask for only two CPUs, that’s all you’ll be charged for,” said Xu. “It’s on demand. You pay for what you use.”

AWS ParallelCluster is an alternative service for researchers who want full control of the SLURM scheduler and its plug-ins. It’s an open-source solution that allows the user to create a fully customized HPC cluster in the cloud that they manage themselves.
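A self-managed ParallelCluster deployment is driven by a YAML configuration and the open-source pcluster command-line tool. The sketch below writes a minimal configuration and hands it to the CLI; the subnet, key pair and instance types are placeholders, and the field names follow the ParallelCluster 3 configuration schema as documented, so they should be checked against the current docs before use.

    # Rough sketch of driving AWS ParallelCluster from Python.
    # Assumes the open-source pcluster CLI (ParallelCluster 3.x) is installed
    # and AWS credentials are configured; subnet IDs, the key pair and the
    # instance types below are placeholders.
    import subprocess

    cluster_config = """\
    Region: us-east-1
    Image:
      Os: alinux2
    HeadNode:
      InstanceType: c5.xlarge
      Networking:
        SubnetId: subnet-0123456789abcdef0   # placeholder subnet
      Ssh:
        KeyName: my-keypair                  # placeholder key pair
    Scheduling:
      Scheduler: slurm
      SlurmQueues:
        - Name: compute
          ComputeResources:
            - Name: hpc-nodes
              InstanceType: hpc7a.96xlarge   # placeholder instance type
              MinCount: 0
              MaxCount: 16
          Networking:
            SubnetIds:
              - subnet-0123456789abcdef0     # placeholder subnet
    """

    with open("cluster-config.yaml", "w") as f:
        f.write(cluster_config)

    # Create the cluster; pcluster provisions the head node, queues and scaling.
    subprocess.run(
        ["pcluster", "create-cluster",
         "--cluster-name", "research-hpc",
         "--cluster-configuration", "cluster-config.yaml"],
        check=True,
    )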

Researchers can choose from more than 800 types of HPC instances, and services such as Amazon FSx for Lustre and Amazon File Cache are also available to support HPC workloads.
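As a rough illustration of how that storage might be provisioned programmatically, the following sketch uses boto3 to request an FSx for Lustre file system; the subnet, capacity and deployment options are placeholders to be tuned to the actual data set and throughput needs.

    # Rough sketch: provision an Amazon FSx for Lustre file system with boto3.
    # Assumes AWS credentials are configured; the subnet ID and sizing values
    # are placeholders, not figures from the presentation.
    import boto3

    fsx = boto3.client("fsx", region_name="us-east-1")

    response = fsx.create_file_system(
        FileSystemType="LUSTRE",
        StorageCapacity=12000,                      # capacity in GiB
        SubnetIds=["subnet-0123456789abcdef0"],     # placeholder subnet
        LustreConfiguration={
            "DeploymentType": "PERSISTENT_2",
            "PerUnitStorageThroughput": 125,        # MB/s per TiB of storage
        },
    )

    print(response["FileSystem"]["FileSystemId"])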

“We don’t want you to waste any resources, so you only pay for what you use,” said Xu.


NIH Uses HPC to Better Understand Cardiovascular Disease Proteins

Cardiovascular disease is the No. 1 cause of human mortality globally. In 2019, 18.6 million people died of the disease worldwide. Having a high amount of low-density lipoprotein (LDL) in the blood increases the risk of cardiovascular disease. The LDL particles can build up in the blood, deposit on the walls of arteries and form plaques, which could lead to a heart attack or stroke.

In the U.S., 30% to 40% of the population over the age of 50 takes statins to treat high cholesterol, according to Marcotrigiano. Statins work by targeting the LDL receptor, not the particle itself. To learn more about the particle, scientists at the NIH recently used HPC and cryo-electron microscopy to model LDL particles, a feat once considered impossible, Marcotrigiano said.

Modeling the particles themselves required huge amounts of data. One data set included 35,000 movies and about 17.5 terabytes of data. The movies then had to be compressed into high-resolution images, and researchers aligned the particles based on similarities and differences, classifying particles from a sample using both 2D and 3D systems.
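For a sense of scale, a quick back-of-the-envelope calculation using the figures quoted above shows roughly what each raw movie contributes on average:

    # Back-of-the-envelope scale check using the figures quoted above.
    num_movies = 35_000
    total_bytes = 17.5e12                      # roughly 17.5 TB in the data set

    per_movie_gb = total_bytes / num_movies / 1e9
    print(f"~{per_movie_gb:.1f} GB per raw movie on average")   # about 0.5 GB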

As a result, researchers have a better understanding of how the particle binds to the receptors, which will be helpful in developing new therapies that target the particle itself rather than just the receptors.

“The only place we could do this was in the cloud,” said Marcotrigiano, adding that NIH used Amazon FSx for Lustre and several GPUs to process and store the data for this project.
