If online attackers can control your implanted heart device, they can blackmail you, injure you, even kill you.
Based on a couple of sessions at the SOURCE Boston conference this week, the bad news is that things have not measurably improved in the years since former Vice President Dick Cheney famously had the wireless functionality of his implanted defibrillator disabled, over fears that it could be hacked and used to assassinate him.
The somewhat better news is that it is possible to improve the security of those devices, and there are organized efforts to do that without compromising their access and value to practitioners and patients.
Chris Schmidt, chief guidance officer at Codiscope, was the bearer of most of the bad news. In a talk titled “The Bad Guys Have Your Pacemaker: How to Stop Attacks on Your IoT Devices,” he said that in a recent investigation of “smart” pacemakers, “we discovered a lot of scary things,” adding that application security still “gets the least amount of attention.”
Schmidt said he could spend three weeks talking about successful intrusions into IoT devices – baby monitors, thermostats, refrigerators, printers, cars and more.
“And those are just the ones we know about,” he said. “There are probably at least 10 times that amount.”
There are other systemic factors that make medical device security difficult, according to Penny Chase and Steve Christey Coley, of the MITRE Corporation.
In a talk titled “Toward Consistent, Usable Security Risk Assessment of Medical Devices,” they pointed to several characteristics of those devices that complicate security.
Besides all that, they noted, the interests of multiple stakeholders – researchers, manufacturers, providers, patients and regulators like the Food and Drug Administration (FDA) – are involved.
Amid all that bad news, Schmidt insisted that “all hope is not lost,” because the problems with those devices are mostly the same and, at least from a technical standpoint, “they’re not hard to fix.”
The solution, he said, is to “build security in” – something his former boss, Gary McGraw, CTO of Cigital, has been preaching for more than a decade; McGraw wrote a book with that title 10 years ago.
That, he said, means making security part of the process from the beginning – the design phase. He noted that secure software development practices are readily available, and that sites like GitHub have made peer review simple.
“Be transparent while you’re building, not after you’re on the market,” he said, noting that “more eyes spot more bugs.”
Finally, he urged developers to “hack yourself. Try to break the stuff you created” before putting it on the market.
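One way to act on that advice, even before a formal penetration test, is a simple fuzzing harness aimed at your own input-handling code. The sketch below is a minimal illustration of the idea, not anything Schmidt presented: parse_telemetry and its packet layout are hypothetical stand-ins for whatever parsing code a device actually ships.

```python
import random
import struct

def parse_telemetry(packet: bytes) -> dict:
    """Hypothetical parser for a telemetry packet.

    Layout assumed here: 2-byte magic, 1-byte message type, 1-byte
    payload length, then the payload. Real device protocols will differ.
    """
    if len(packet) < 4:
        raise ValueError("packet too short")
    magic, msg_type, length = struct.unpack(">HBB", packet[:4])
    if magic != 0xCAFE:
        raise ValueError("bad magic")
    payload = packet[4:4 + length]
    if len(payload) != length:
        raise ValueError("truncated payload")
    return {"type": msg_type, "payload": payload}

def fuzz(iterations: int = 10_000) -> None:
    """Throw random bytes at the parser; anything other than a clean
    ValueError rejection is a bug worth fixing before shipping."""
    random.seed(0)  # reproducible runs make failures easier to triage
    for i in range(iterations):
        packet = bytes(random.randrange(256)
                       for _ in range(random.randrange(64)))
        try:
            parse_telemetry(packet)
        except ValueError:
            pass  # expected rejection of malformed input
        except Exception as exc:  # unexpected crash: record the input
            print(f"iteration {i}: {type(exc).__name__} on {packet.hex()}")

if __name__ == "__main__":
    fuzz()
```

Dedicated fuzzers such as AFL or libFuzzer do this far more effectively, but even a loop like this one can surface crashes in your own code before an attacker does.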
The message from Chase and Coley is that management of the risks of such devices requires “a delicate balance of security, safety and privacy – they overlap.”
“Each can interfere with the other,” Chase said. “You don’t want the AV (antivirus) firing during surgery.”
And, for some patients, the availability of a device can trump the small risk that it could be compromised.
That, they said, has led to efforts to adapt the Common Vulnerability Scoring System (CVSS) to healthcare by focusing on a vulnerability’s actual impact on patient safety, weighed against the device’s value to providers and patients.
The so-called “base score” can exaggerate the risk, they said, while understating a device’s value to patients. They cited the example of a medical staff member being required to confirm the settings of an infusion pump before it is used, which presumably would catch an attempt by an attacker to change them remotely.
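To make that contrast concrete, here is a rough sketch of the kind of contextual re-scoring Chase and Coley were describing. The weighting function and its numbers are illustrative assumptions, not part of CVSS or of MITRE’s actual proposal; the point is only that a compensating control and clinical benefit can pull an alarming base score down to a more realistic figure.

```python
def contextual_score(base_score: float,
                     compensating_control: bool,
                     clinical_benefit: float) -> float:
    """Illustrative (non-standard) re-scoring of a device vulnerability.

    base_score: a CVSS-style base score, 0.0-10.0.
    compensating_control: True if a procedure (e.g. staff confirming
        pump settings before use) would catch an attack in practice.
    clinical_benefit: 0.0-1.0 weight for the device's value to the
        patient; higher benefit tempers the headline risk number.
    """
    score = base_score
    if compensating_control:
        score *= 0.5  # assumed discount: the attack is likely caught
    score *= 1.0 - 0.3 * clinical_benefit  # assumed benefit adjustment
    return round(min(max(score, 0.0), 10.0), 1)

# A hypothetical infusion-pump flaw: an 8.1 base score, but staff confirm
# settings before every use and the therapy is life-sustaining.
print(contextual_score(8.1, compensating_control=True,
                       clinical_benefit=0.9))  # prints 3.0, not 8.1
```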
They said one effort to bring context to the ranking of risks is to use other frameworks like the Common Weakness Scoring System (CWSS) and the related Common Weakness Risk Assessment Framework (CWRAF).
“The goal is to take the environment into consideration along with the base score,” Coley said. “We don’t want FUD (fear, uncertainty and doubt) to make patients fearful of life-saving therapy.”