The writer is a fellow at Stanford University’s Institute for Human-Centered Artificial Intelligence and the Cyber Policy Center. She is the author of ‘The Tech Coup’
Imagine discovering that your most intimate medical data, such as test results for STDs, cervical cancer screenings or reproductive health treatments, has been exposed on the dark web. Last month, a million Dutch citizens faced this nightmare when a July ransomware attack on Clinical Diagnostics was reported.
Healthcare is a lucrative target for criminals, with ransomware attacks posing a “direct and systemic risk to global public health”, according to the World Health Organization. While hundreds of millions of patients around the world are affected by data breaches every year, transparency and accountability are failing.
When Change Healthcare was attacked in the US last year, hackers compromised almost 200mn patient records. UnitedHealth paid $22mn in ransom. The Nova criminal group also got Clinical Diagnostics to pay up, although the amount has not been disclosed.
The real harm is likely to be more significant than any dollar value. The loss of trust in the proper handling of cancer screenings, for example, may make people turn away from testing. That hurts public health as well as individuals, who may learn they have cancer only when it is too late.
Getting clarity on what happened in an attack is not easy and the incentives for transparency are lacking. There are some good reasons to pace the release of information, for example, to not tip off the perpetrators of new attacks. But too often crucial information never becomes public.
Laws may require reporting of data breaches to the competent authorities, such as the data protection agency, and there are obligations to inform victims. But even then, we lack transparency on the role tech suppliers played. We don’t know whether the victim organisation or the software vendor followed proper protocols. The underlying technology vulnerabilities thus remain largely unaddressed, and hospitals unknowingly keep using tools with a record of failure.
US Senator Ron Wyden has been particularly vocal about vulnerabilities of certain Microsoft products in relation to breaches such as the hack of US healthcare provider Ascension. Wyden is critical of “dangerously insecure default settings” and has urged the Federal Trade Commission to investigate.
Given the significant market share of Big Tech players such as Microsoft, Oracle, AWS and Google in critical services such as healthcare, government and defence, any unaddressed security flaw or negligence in following protocols can create a single point of serious failure.
Wyden’s office learnt the details of the Ascension ransomware attack directly from Ascension itself, but contact of this sort is the exception and offers no structural transparency.
Other examples illustrate this accountability gap too. In October 2024, the Securities and Exchange Commission fined four companies a total of $7mn for misleading disclosures about the 2019 SolarWinds breach. Meanwhile, SolarWinds itself saw most charges dismissed or settled by the SEC. Clearly accountability is skewed when vendors face minimal consequences for security failures while their customers absorb financial and reputational damage.
Current regulatory approaches focus primarily on breach notification requirements for hacked organisations rather than on prevention or on holding technology suppliers to account. The emphasis on disclosure after the fact does little to address the underlying causes that enable these breaches.
When the wrong people pay the price for cyber incidents, change will not come quickly or effectively enough, and tech companies will get away with continuing to sell software that has been proved to be vulnerable. The resilience of healthcare facilities also needs to improve. Solutions would include mandatory security standards, liability frameworks and penalties for negligence, as well as public funding to help resource-constrained hospitals upgrade outdated systems.
The status quo is untenable. To restore trust and protect public health, regulators must hold technology vendors accountable for security failures, not just their customers. Without such reforms, patients will continue to pay the price by forgoing life-saving screenings and living with the fear that their most private data is just one hack away from the dark web.