Servers and storage are a primary focus for one hospital’s support upgrades.
In August 2013, the virtualized environment at St. John’s Riverside Hospital was a maxed-out mess.
Performance issues and a lack of resources created daily challenges for physicians, nurses and techs caring for patients at the Yonkers, N.Y.-based organization. Storage and memory were at capacity, says Nelson Carreira, director of servers and desktops, information services and technology at the hospital.
By the following month, after moving St. John’s to a hyperconverged infrastructure, complaints from staff dissipated. Carreira achieved a massive performance boost by implementing five SimpliVity OmniCube CN-3000 hyperconverged appliances.
“I knew we had a 50 percent performance increase, but hearing it from my users made it real,” he says. “Going to a hyperconverged infrastructure gave us more storage, more compute, data deduplication, compression, backup and disaster recovery strategies all built into one. All that made hyperconverged very, very appealing for us.”
Last fall, St. John’s added two more OmniCube CN-3000 appliances and two OmniCube CN-3400 devices, making it one of a growing number of healthcare providers turning to converged and hyperconverged infrastructure to solve growth and performance problems.
There are several reasons healthcare organizations make the decision to eschew traditional technology in favor of bringing four core data center elements — compute, networking, storage and server virtualization — together in a single box, says Charles King, a principal analyst at Pund-IT.
First and foremost, he says, is when a hospital determines that its existing IT infrastructure fails to effectively support core processes and applications, or is inadequate for pursuing new use cases, such as enabling doctors and others to use smartphones and other devices.
Other factors contribute as well, King says.
“A group may decide to adopt a new electronic medical records platform, like Epic, and then realize that its current infrastructure can’t support it,” he says. “Or a hospital might find that MRIs and other digital radiology images are growing beyond its data storage capacity.”
Wake Forest Baptist Medical Center falls into the former category. In March 2014, the Winston-Salem, N.C.-based teaching system, which comprises three hospitals and nearly 300 clinics, faced an aging IT infrastructure; 63 percent of the environment was more than seven years old, while 12 percent was more than 11 years old.
Such a setup could not support new applications and services that typically are hallmarks of a top research hospital, says Executive Vice President of Corporate Services and CFO Chad Eckes. The organization — which boasts 1.25 million clinical visits, 45,000 to 50,000 surgeries and 120,000 emergency room visits each year — had invested heavily in application implementations that were “spectacular failures.” (Eckes previously served as CIO and still maintains ownership of IT.)
“When I was brought in, the company had underinvested in infrastructure,” he says, causing problems for medical school students and researchers trying to crunch massive amounts of data for genomic sequencing and high-performance algorithms.
The solution? Eckes tapped Dell EMC’s VCE Vblock Systems, which allowed Wake Forest to shut down 1,400 servers and move 8 petabytes of data to the new system. That, in turn, decreased server utilization from 70 percent to 35 percent and reduced the CPU load from 75 percent to 15 percent. It also gave the hospital system a 30 percent performance improvement that end users felt directly.
“One of the things we monitored right away was the speed that our EMR performed on the converged infrastructure,” he says. “We immediately picked up multiple minutes on certain transactions. That processing time is saving physicians up to 45 minutes per day.”
Whether opting for converged or hyperconverged infrastructure, considerable planning is required, because systems often are designed and configured in the factory to support specific applications and workloads.
“That can streamline deployment processes and save considerable time, but IT organizations still need to determine and identify key assets that will be affected during the process, and minimize disruption accordingly,” Pund-IT’s King says. “Planning ahead, establishing a consensus on proper steps, proceeding cautiously and methodically, and making use of weekends and other downtimes can help ensure success.”
Planning ahead was a big part of installing Wake Forest’s new infrastructure. Before the IT team shut down the health system’s existing servers and storage, Eckes took a snapshot of each environment so he could roll back and implement a system restore in the event of a problem. He also put a policy in place that he says errs on the side of caution.
“Any changes we make to the production system require two mock conversions before we go live,” Eckes says. “We do the entire upgrade in a test environment, go through the entire script and make sure there are no problems before allowing that upgrade to go through into production. I’ve watched too many times when people tested it once, thought it was fine and then had big problems going live.”
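Eckes’s two-pass policy amounts to a simple gate: no change reaches production until the full upgrade has run cleanly twice in a test environment, and any failure restarts the count. A minimal sketch of that gate is below; the `run_mock_conversion` function and its behavior are hypothetical stand-ins, not part of any vendor’s tooling.

```shell
#!/bin/sh
# Hypothetical change-control gate: require two consecutive clean mock
# conversions in a test environment before a change may go to production.

REQUIRED_PASSES=2
passes=0
attempt=1

run_mock_conversion() {
    # Stand-in for running the entire upgrade script against a test copy.
    # A real run would exercise every step end to end; this one just succeeds.
    echo "mock conversion attempt $1: OK"
    return 0
}

while [ "$passes" -lt "$REQUIRED_PASSES" ]; do
    if run_mock_conversion "$attempt"; then
        passes=$((passes + 1))
    else
        # One failure resets the streak: the fix must be re-proven twice.
        echo "mock conversion attempt $attempt failed; count restarts" >&2
        passes=0
    fi
    attempt=$((attempt + 1))
done

echo "two clean mock conversions recorded; change is cleared for production"
```

The reset-on-failure step encodes Eckes’s caution: a fix applied after a failed rehearsal is itself an untested change, so the two-pass count starts over.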
St. John’s Riverside’s Carreira made sure that every one of his software vendors supported a hyperconverged infrastructure before transitioning to his new environment. He also built in redundancy so he could take the old systems offline without impacting end users.
Both Carreira and Eckes say that those moves and the new infrastructures continue to help keep patients safer and healthier, while also keeping costs down.
“It only takes one problem or glitch, and then you can’t take care of the 5,000 people we see every day in our clinics,” Eckes says. “At no other time has IT ever been as critical. We’re talking about patients’ lives.”