The use of mobile devices continues to put healthcare data at risk. A federal government standard could offer more protection.
Mason Berhenke, Texas Instruments
Healthcare is undergoing a technological renaissance, with untethered medical devices, more data gathering and ever-improving accessibility. But as more devices go mobile, there are more opportunities to hijack equipment. The more data gathered, the more data is exposed. And as accessibility improves, so does the potential for attack.
To address the increasing cybersecurity risks that these trends bring, vendors implement whatever best practices they see fit. But are these best practices good enough? Given the number of recent data breaches, the answer is no. The U.S. Department of Health and Human Services (HHS) listed 27 information breaches involving IT incidents at healthcare providers nationwide in April 2019 alone — the highest number of reported incidents since HHS began tracking them. Nor was April an isolated month: every month of 2019 so far has seen more incidents than the corresponding month in 2018.
Making data safer
Medtech vendors could implement stronger crypto algorithms or software that securely transfers data from devices to the cloud, but how can they be certain that their implementations are actually secure? This uncertainty represents a security gap, and the best way to fill that gap is to use government-approved security.
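To make that gap concrete, consider what "software that securely transfers data" might look like in device firmware. The sketch below is a minimal illustration, not a TI product example or anything prescribed in this article: it assumes the device links against the open-source Mbed TLS library and uses AES-GCM to encrypt and authenticate a sensor reading before transmission. Key provisioning and nonce management are deliberately out of scope.

```c
/* Minimal sketch: authenticated encryption of a sensor reading before it
 * leaves the device. Assumes Mbed TLS is available in the firmware build;
 * how the key and IV are provisioned is out of scope here. */
#include <stddef.h>
#include "mbedtls/gcm.h"

int encrypt_reading(const unsigned char key[16],    /* provisioned AES-128 key */
                    const unsigned char iv[12],     /* must be unique per message */
                    const unsigned char *reading, size_t len,
                    unsigned char *ciphertext,      /* len bytes of output */
                    unsigned char tag[16])          /* authentication tag */
{
    mbedtls_gcm_context gcm;
    mbedtls_gcm_init(&gcm);

    int ret = mbedtls_gcm_setkey(&gcm, MBEDTLS_CIPHER_ID_AES, key, 128);
    if (ret == 0) {
        /* AES-GCM provides both confidentiality and integrity in one pass. */
        ret = mbedtls_gcm_crypt_and_tag(&gcm, MBEDTLS_GCM_ENCRYPT, len,
                                        iv, 12, NULL, 0, /* no additional data */
                                        reading, ciphertext, 16, tag);
    }
    mbedtls_gcm_free(&gcm);
    return ret; /* 0 on success */
}
```

Even a textbook-correct call like this, though, says nothing about whether the underlying AES implementation behaves exactly as the standard specifies, which is precisely the uncertainty described above.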
From a medical perspective, the term "government-approved security" seems a bit vague. Regulators such as the FDA and HHS have rules related to cybersecurity, but those rules don't specify any particular security features. The FDA's 2016 postmarket guidance addresses how to manage "cybersecurity vulnerabilities for marketed and distributed medical devices," including how to assess, contain and prevent threat sources in existing products. In October 2018, the agency issued an updated draft premarket guidance that includes some postmarket information. It also held a public workshop in January 2019 to get feedback on that guidance and worked on a joint security plan with industry.
HHS's Health Insurance Portability and Accountability Act (HIPAA) Security Rule defines those to whom it applies as "health plans, health care clearinghouses and any health care provider who transmits health information in electronic form." When it comes to technical safeguards, however, it requires no specific security features; the rule states only that encryption should be implemented "whenever deemed appropriate."
Seeking clarity elsewhere
Without clear guidance from healthcare regulators, de-facto guidance falls to the National Institute of Standards and Technology (NIST), the top authority for security standards in the U.S. NIST is a non-regulatory agency that operates several programs to validate aspects of security, from the algorithm level all the way up to cloud computing. Federal Information Processing Standard (FIPS) 140-2 is one such standard, used by one of these programs to certify the cryptographic security of electronic hardware.
FIPS 140-2 compliance is a requirement at federal agencies whose systems handle sensitive data. If FIPS is trusted by government agencies for protecting that data, medical technology vendors should see it as a design requirement for their devices, all the way down to the wireless microcontrollers (MCUs).
With core hardware like wireless MCUs, FIPS pertains to the cryptographic algorithms used to store and transmit data. NIST-approved algorithms such as the secure hash algorithms (SHA) and the Advanced Encryption Standard (AES) each have publications (FIPS 180-4 and FIPS 197, respectively) dictating how to implement the algorithm correctly. Any silicon manufacturer can take those standards and attempt to implement them in its chips.
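Calling such an algorithm from firmware is typically a one-line affair once a library provides it. The sketch below is illustrative only; it assumes Mbed TLS 3.x, which this article does not name, and computes the FIPS 180-4 SHA-256 digest of a sample record.

```c
/* Illustrative sketch (assumes Mbed TLS 3.x): computing a FIPS 180-4
 * SHA-256 digest of a sample record before storage or transmission. */
#include <stdio.h>
#include <string.h>
#include "mbedtls/sha256.h"

int main(void)
{
    const unsigned char record[] = "patient-telemetry-sample";
    unsigned char digest[32];

    /* Final argument 0 selects SHA-256 (1 would select SHA-224). */
    if (mbedtls_sha256(record, strlen((const char *)record), digest, 0) != 0)
        return 1;

    for (size_t i = 0; i < sizeof digest; i++)
        printf("%02x", digest[i]);
    printf("\n");
    return 0;
}
```

The ease of the call is exactly the point: the standard fixes what the output must be, so the hard question is not how to invoke the algorithm but whether the implementation behind it is faithful to the publication.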
NIST provides validation procedures
There is still the problem of certainty in the implementation of these algorithms. Healthcare professionals and patients may be unwilling to use wireless medical devices without knowing their data is secure. This is where NIST comes in, with procedures for third-party validation of the FIPS algorithms.
"Third-party" is an important distinction. Silicon manufacturers surely perform some sort of validation themselves, but that validation could be flawed. For example, a manufacturer could implement AES in a way that technically works and passes its own tests but subtly deviates from the standard, weakening the encryption. If that flaw ends up in a medical device, it leaves room for attackers to decrypt transmitted data or even take over devices.
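Known-answer tests are the simplest illustration of what independent validation checks for. As a hedged sketch (again assuming Mbed TLS, and using the AES-128 example vector published in FIPS 197 Appendix C.1), the test below encrypts the standard's plaintext and compares the result against the standard's published ciphertext; an implementation that deviates from FIPS 197 fails immediately.

```c
/* Sketch of a known-answer test: encrypt the FIPS 197 Appendix C.1
 * example block and compare against the published ciphertext. */
#include <stdio.h>
#include <string.h>
#include "mbedtls/aes.h"

int main(void)
{
    const unsigned char key[16] = {
        0x00,0x01,0x02,0x03,0x04,0x05,0x06,0x07,
        0x08,0x09,0x0a,0x0b,0x0c,0x0d,0x0e,0x0f };
    const unsigned char plaintext[16] = {
        0x00,0x11,0x22,0x33,0x44,0x55,0x66,0x77,
        0x88,0x99,0xaa,0xbb,0xcc,0xdd,0xee,0xff };
    /* Expected ciphertext as published in FIPS 197 Appendix C.1. */
    const unsigned char expected[16] = {
        0x69,0xc4,0xe0,0xd8,0x6a,0x7b,0x04,0x30,
        0xd8,0xcd,0xb7,0x80,0x70,0xb4,0xc5,0x5a };
    unsigned char out[16];

    mbedtls_aes_context aes;
    mbedtls_aes_init(&aes);
    if (mbedtls_aes_setkey_enc(&aes, key, 128) != 0)
        return 1;
    mbedtls_aes_crypt_ecb(&aes, MBEDTLS_AES_ENCRYPT, plaintext, out);
    mbedtls_aes_free(&aes);

    puts(memcmp(out, expected, 16) == 0 ? "PASS" : "FAIL");
    return 0;
}
```

NIST's accredited labs run far larger vector suites than this single block, but the principle is the same: the implementation's output is checked against answers derived from the standard itself, not from the manufacturer's own code.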
To avoid these potential flaws, NIST requires silicon manufacturers to submit their designs to accredited third-party testing labs in order to earn the moniker “FIPS Validated.” This label truly means government-approved security. “FIPS Validated” is the standard that medical technology vendors should be thinking of from concept to production of their devices.
Making government-approved, "FIPS Validated" security a mainstay in medical devices is a worthwhile investment. With it, healthcare professionals and patients can be assured that their data is handled to the highest standard of security, and the trend toward untethered devices can continue instead of being stifled by fear.
The consequences without validation are obvious: the medical technology sector is handling lives, and any security flaw could mean life or death. Today’s technological trends in healthcare are improving the industry, and it is worthwhile for vendors to observe a high level of security to keep these improvements moving forward.
Mason Berhenke received his bachelor’s degree in computer engineering from Iowa State. He joined Texas Instruments in 2016, where he currently works as a product marketing engineer for the SimpleLink connected MCU team.
The opinions expressed in this blog post are the author’s only and do not necessarily reflect those of Medical Design and Outsourcing or its employees.