New cancer-fighting drugs are sorely needed, but getting effective drugs to market takes years of clinical trials. Researchers at the Georgia Institute of Technology hope to change that by speeding up the process with a machine learning algorithm that has successfully used raw genetic data to predict when cancer drugs will be effective.
Moreover, the machine learning algorithm will be open source and available to the health research community at large in order to encourage collaboration and further medical advances.
“By making our algorithm ‘open source,’ we hope to facilitate its testing in a variety of cancer types and contexts leading to community-driven improvements and refinements in subsequent applications,” the Georgia Tech researchers wrote in a study of the algorithm released in October.
The university isn't the only organization looking to speed the time to market for cancer therapies through tech. A new partnership known as Accelerating Therapeutics for Opportunities in Medicine (ATOM) brings together pharmaceutical giant GlaxoSmithKline (GSK), the University of California, San Francisco, and two national labs, Frederick National Laboratory for Cancer Research and Lawrence Livermore National Laboratory. ATOM has tapped Big Data, artificial intelligence and supercomputing with the aim of cutting the average cancer drug time to market from six years to one.
Growing Data to Further Cancer Drug Research
The Georgia Tech machine learning algorithm is already showing promise: it proved up to 87.5 percent accurate in predicting effective therapies for nine cancers in more than 270 patients, according to the study.
“Nine drugs are in the published study, but we’ve actually run about 120 drugs through the program all total,” Fredrik Vannberg, an assistant professor in Georgia Tech’s School of Biological Sciences and a collaborator on the study, told Georgia Tech’s news site.
The program calls on machine learning algorithms to sift through massive amounts of data from a vast array of sources, as well as remove human biases about the possible outcome.
“It’s much more effective to put in loads of raw data and let the algorithm sort it out,” John McDonald, the director of Georgia Tech’s Integrated Cancer Research Center and a collaborator on the study, told the site. “It’s looking for correlations, not causes, so it’s not good to preselect data for what you suspect are most relevant.”
One major bias the researchers avoided when designing the software was restricting the analysis to the gene expression of particular cancer types as it relates to particular treatments. Instead, they decided it was best to let the software look at the bigger picture of cancer data.
“On a molecular level, some breast cancers, for example, are going to be more similar to some ovarian cancers than to other breast cancers,” McDonald said. “We just let the algorithm work with about everything we had, and we got high accuracy.”
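The idea McDonald describes, pooling raw expression profiles across cancer types and letting correlations rather than preselected genes drive the prediction, can be sketched in miniature. This is an illustrative toy, not the team's actual algorithm: the correlation-based nearest-neighbor rule, the function names, and all of the data below are assumptions made purely for demonstration.

```python
# Toy sketch (not the Georgia Tech pipeline): predict a tumor's drug
# response from its raw expression profile by finding the most
# correlated profile in a pooled training set, with no gene
# preselection and no attention to cancer type.
import math

def pearson(a, b):
    """Pearson correlation between two equal-length expression profiles."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

def predict_response(new_profile, training):
    """Label the new tumor with the drug response of its most
    correlated training profile, regardless of cancer type."""
    best = max(training, key=lambda rec: pearson(new_profile, rec["expr"]))
    return best["response"]

# Hypothetical training set: raw expression vectors pooled across
# cancer types, each labeled with an observed drug response.
training = [
    {"type": "breast",  "expr": [5.1, 0.2, 8.7, 1.1], "response": "sensitive"},
    {"type": "ovarian", "expr": [5.0, 0.3, 8.5, 1.0], "response": "sensitive"},
    {"type": "breast",  "expr": [1.2, 7.9, 0.4, 6.6], "response": "resistant"},
]

new_tumor = [4.8, 0.4, 8.9, 1.2]
print(predict_response(new_tumor, training))
```

Note how the toy echoes McDonald's point: the new breast tumor here is matched to whichever profile it correlates with best, which may well be an ovarian sample rather than another breast sample.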
By making the algorithm open source, McDonald is also hoping that the software will pool larger amounts of success and failure data, making it even smarter as it gains traction in coming years. To that end, while a corporation could potentially profit from the software, Vannberg stressed that tackling the larger issue of cancer treatment and saving lives is the main priority.
“With our project, we’re advertising that sharing should be what everybody does,” Vannberg said. “This can be a win for everybody, but really it’s a win for the cancer patients.”