Stephanie Domas, lead medical device security engineer at Battelle, is also a certified ethical hacker. Domas recently answered some of Medical Design and Outsourcing’s burning questions about cybersecurity in the medtech sphere. The following is a shortened transcript, edited for clarity, of our chat with Domas.
MDO: On a scale of one to 10, how is the medical device industry doing in terms of being prepared to address cybersecurity threats?
Stephanie Domas: Cybersecurity is so new to medical devices that a lot of education is required, but it’s more than that. There’s a big culture change that needs to happen. Medical device manufacturers have always focused on managing a device’s potential to cause patient harm, and it’s something they’re really good at. But what you’re starting to see now is that concept of harm expanding past just physical patient harm.
Now you have these broader areas of harm: patient data loss, data ransoming, or your device being used as a pivot point into the network. That expansion of risk management is a culture shift. It really changes how these devices are developed and how resources go into making them secure. It’s a culture change beyond just the education.
So, I can’t lump everyone into one group, but I’m going to divide them into two. There’s the group of manufacturers that understand the threat – they’re incorporating security best practices and they’re hiring or contracting with the right talent. I would give that group a nine. It’s not to say they’ve figured everything out. They’re still learning – the industry as a whole is still learning. But I’m really optimistic about this group; they’re taking it very seriously. But I would say that that’s a small subset of the companies out there.
More of what you see is on the other side of the fence. These are the companies that produce connected medical devices but haven’t yet accepted that security threats extend past the traditional view of patient harm to include things like data theft, data ransoming, and network infiltration. They haven’t accepted that their device needs cybersecurity built into it, even if an exploit can’t immediately cause patient harm. Unfortunately, we’re still seeing a lot of those, and I would, sadly, give that group a one.
MDO: There’s been an incredible increase in networked medical devices, which creates a huge attack surface. How real is that threat?
SD: The threat is very real, and it stems from several places. An attacker has two routes. First, there are the threats against the medical devices themselves, in which an attacker leverages a technical exploit against the target device.
And then you have the threats that come through non-adversarial or accidental sources. This is usually the human factor: people simply doing something they shouldn’t on a network. Exploiting that human factor is the easier approach for a malicious hacker.
We see this very commonly in the field, not just in medical. Any type of enterprise network out there is routinely struggling with this non-adversarial or accidental threat. These threats include people clicking on malicious links or inserting USB drives they found in the parking lot.
And it’s not all malicious. Some threats are simply mistakes. One example I heard recently was of a medical staff member whose cellphone was low on battery, and they saw that an anesthesia machine had a USB port on the front. They plugged their phone in to try to charge it.
That anesthesia machine was not programmed to handle that type of device, and it shut down. That’s an unintended threat. The person was just trying to charge their cellphone. They didn’t mean any harm by it, but the device didn’t have enough defensive programming to handle something it wasn’t expecting being plugged into it. We can compensate for that unintended threat by designing defensively and simply assuming that the outside world is trying to attack.
MDO: So how can medical device companies protect themselves?
SD: Defensive programming, or defensive design, is key. Traditionally, companies have designed devices around intended-use cases, with the assumption that this provides a little bit of protection against misuse – but that assumes the user is going to use the device in its intended manner.
Defensive programming takes a step back from that and says, “All right, I can’t trust the environment I’m in.” Instead of blindly accepting things that come from the outside world, the device asks, “Is there a way I can validate that data before I use it?” This means really locking down features so that there’s resilience against unintended or malicious actions.
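To make that validate-before-use idea concrete, here is a minimal sketch in Python of what defensive input handling can look like. It is an illustration only, not something Domas described: the commands, message format, and limits are hypothetical stand-ins for whatever a real device would enforce in its own firmware.

```python
# A minimal sketch of defensive input handling. Hypothetical example,
# not from the interview: the commands, message format, and limits
# are invented for illustration.

MAX_RATE = 200                      # hypothetical hard limit for this device
ALLOWED_COMMANDS = {"start", "stop", "set_rate"}

def handle_message(raw: bytes) -> str:
    """Validate an external message before acting on it.

    Defensive design: assume the outside world is hostile and reject
    anything that is not exactly what the device expects.
    """
    # 1. Never trust the length or encoding of external input.
    if not raw or len(raw) > 64:
        return "rejected: bad length"
    try:
        text = raw.decode("ascii")
    except UnicodeDecodeError:
        return "rejected: bad encoding"

    # 2. Whitelist known commands instead of blacklisting bad ones.
    parts = text.strip().split()
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        return "rejected: unknown command"

    # 3. Range-check every parameter before it is used.
    if parts[0] == "set_rate":
        if len(parts) != 2 or not parts[1].isdigit():
            return "rejected: malformed parameter"
        rate = int(parts[1])
        if not 0 < rate <= MAX_RATE:
            return "rejected: rate out of range"
        return f"ok: rate set to {rate}"

    return f"ok: {parts[0]}"

if __name__ == "__main__":
    print(handle_message(b"set_rate 120"))   # ok: rate set to 120
    print(handle_message(b"set_rate 9999"))  # rejected: rate out of range
    print(handle_message(b"\xff\xfe"))       # rejected: bad encoding
```

The key design choice is the whitelist: anything not explicitly expected is rejected, rather than trying to enumerate everything that could go wrong.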
MDO: Are there documents that can help with defensive design?
SD: Some industries have been dealing with cybersecurity a lot longer than medical has. Industrial control systems, aerospace, and finance have all been addressing it for a long time, and those industries have produced a lot of really great guidance documents that can be leveraged in medical.
But the documents are only as good as the way companies treat them. Firms can choose to treat them as checkboxes – doing the easiest thing that satisfies the requirement but doesn’t really raise the security of the device. If those guidance documents are thoroughly addressed, though, they can be used to make good, informed security decisions. A lot depends on whether the company is really taking security seriously or just checking a box.
Another factor I see is hesitation. People might be afraid to commit to a particular approach because there are still a lot of unknowns in the space. The FDA’s draft post-market guidance was just released in January, and it’s still in draft. In general, there are still a lot of unknowns about what exactly the FDA wants to see about cybersecurity in a 510(k).
Manufacturers don’t want to commit or publicize the way they’re doing something until they really understand what it is that the FDA wants to see. They’re afraid to dive in and adopt a bunch of guidance that ends up not being what the FDA wants to see, even if the guidance is good and raises the security of the devices. It’s understandable, but they might end up back in that “one” category.
MDO: Can companies completely outsource a cybersecurity protocol or plan, leaving it out of their day-to-day operations?
SD: Absolutely. I like to use this example: you wouldn’t ask your friend the rocket scientist to perform an appendectomy on you if you were sick. You’d want a certified doctor. They’re both incredibly smart people, but with very different areas of knowledge.
Why would companies tackle security on their own? Use security experts. Simply telling your development team to start adhering to security guidance is setting yourself up for failure – no matter how talented your team is, security is just not their domain. Some companies have the bandwidth and enough workload to justify an internal security team but, honestly, for most companies it’s just not realistic.
The cyber landscape is changing every day, so it’s a lot more effective to use third-party companies that really live and breathe cyber. They’re always staying up to date on the latest tools and techniques, and it’s hard for smaller internal teams that aren’t dedicated to cyber to keep up with all of that.
MDO: When medical device companies ask what they should do in terms of security practices, what advice do you give?
SD: First, I tell them to learn from what other industries have done before, so we’re not reinventing the wheel. Look at aerospace and industrial control systems. It’s not one-size-fits-all, so adapt those practices to specific medical needs.
Second, use the right talent. I talked about this earlier, but make sure you’re really using security professionals. Take security seriously, both pre- and post-market. The post-market emphasis on cybersecurity is very new, and you really have to follow through on maintaining these devices.
Finally, implement a responsible disclosure policy. Make use of the security researchers out there, and make use of your users. Make it easy for them, so that if they find a vulnerability in your product, fixing it becomes a collaborative effort through that responsible disclosure policy. You have to state that you’re willing to accept information about vulnerabilities discovered in your device and that you’re not going to retaliate legally against anyone.
At this point, security research done on somebody else’s device technically violates the copyright on that device. There’s actually an exemption to the Digital Millennium Copyright Act, set to go into effect next year, that would create an exception for security research. Until it takes effect, security research technically violates the copyright on those devices, so a responsible disclosure policy is your way of saying, as a company, that you’re not going to legally retaliate. It’s the best way to let users know that you actually want to hear about vulnerabilities and that you’re going to work collaboratively with them.