Poor Cybersecurity in Health Devices a Life-threatening Problem


Security researcher and Type 1 diabetic Jerome Radcliffe demonstrated a potentially lethal vulnerability in wireless medical devices by hacking his own insulin pump and glucose meter—showing a live audience that a malicious hacker could exploit security gaps in the devices to end someone’s life.

Radcliffe’s segment at the 2011 Black Hat Technical Security Conference in Las Vegas resonated widely, raising concerns about a frightening side of cybersecurity. For some security researchers, however, this came as no surprise; rather, it brought attention to an issue they have warned about for years.

One such organization is the Medical Device Innovation, Safety and Security Consortium, which was founded to help protect the growing number of medical devices linked to computer networks from the likes of malware and hackers out to cause harm.

The consortium could not be reached for comment, but a document from the organization notes that the problem goes much deeper than a mere lack of oversight. The vulnerabilities stem from problems found across the board.

According to the document, “Broad capability is lacking among security, IT, biomedical engineering, and medical professionals in protecting medical devices,” the standard of quality “in protecting networked medical devices and associated systems” is low, and there are gaps in collaboration among all the stakeholders involved, including technology companies, manufacturers, and device providers.

It adds that “Current regulations do not sufficiently address” the vulnerabilities.

The issue sounds a bit sensational—hackers can breach your insulin pump and kill you—but Radcliffe gave a pretty clear explanation of how it works.

The problem these devices have is the same one faced by industrial systems, the “critical infrastructure” we hear so much about, such as the power grid: the devices all use supervisory control and data acquisition (SCADA) systems.

In the past, hackers have been able to take control of SCADA systems, speed up the equipment they manage, and physically destroy it. The Stuxnet worm, for example, targeted SCADA systems at Iran’s nuclear facilities and physically destroyed uranium-enrichment centrifuges. Researchers demonstrated the same effect in the 2007 Aurora Generator Test, using a cyberattack to physically destroy a 27-ton generator.

In the outline of his talk, posted on the Black Hat conference website, Radcliffe said the wireless health devices he uses have turned him into a “Human SCADA system.” He added, “in fact, much of the hardware used in these devices are also used in Industrial SCADA equipment.”

He began researching the vulnerabilities out of curiosity, wanting to find out whether a hacker could reverse engineer the system, take control of an insulin pump, and inject a victim with a lethal dose.

Wireless health devices also raise concerns beyond life-threatening vulnerabilities—including the privacy of data about a person’s health.

David Kotz, associate dean of faculty for the sciences at Dartmouth College, is working with a team to develop better security for wireless health devices by keeping data safe—and unaltered—from people wanting to cause harm.

“You increasingly see people carrying smartphones, and I’m increasingly seeing products and apps—either software apps or hardware devices—that you could wear or carry with you, and it will measure and monitor something about your health,” said Kotz in a phone interview.

Because these devices send and receive data about a user’s health wirelessly, that data could be intercepted. Someone could even take the data, alter it, and re-inject it into the stream. “You could be releasing your health information to hackers or passersby, or just about anyone,” he said.
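The tampering scenario Kotz describes has a standard textbook defense: authenticating each reading with a message authentication code so the receiver can reject altered data. The sketch below is illustrative only—the key, the reading format, and the function names are invented for this example, and a real device would also need secure key provisioning and replay protection—but it shows the basic idea using Python’s standard `hmac` module.

```python
import hashlib
import hmac

# Hypothetical secret shared between a health device and its receiver.
SECRET_KEY = b"device-shared-secret"

def sign(reading: bytes) -> bytes:
    """Compute an HMAC tag to transmit alongside the reading."""
    return hmac.new(SECRET_KEY, reading, hashlib.sha256).digest()

def verify(reading: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(reading), tag)

# The device transmits a reading with its tag...
reading = b"glucose=112mg/dL"
tag = sign(reading)

# ...and an attacker alters the value in transit.
tampered = b"glucose=40mg/dL"

print(verify(reading, tag))    # authentic reading passes
print(verify(tampered, tag))   # altered reading is rejected
```

Without the shared key, an interceptor cannot produce a valid tag for a modified reading, so silent alteration of the data stream is detected at the receiver.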

Some of the scenarios are more down-to-earth. A woman wearing a fertility monitor in hopes of getting pregnant, for example, might worry that a potential employer who found out wouldn’t hire her—“which is illegal, but it happens,” Kotz said.

Employers could set up equipment to detect these kinds of monitors. Even if they couldn’t read the data itself, simply knowing the device was there could reveal enough.

“We’re trying to protect against those kinds of risks,” Kotz said.