You say data breach, I say cybersecurity incident
You say privacy breach, I say an individual sending out emails
Potato, potahto, tomato, tomahto
Let’s call the whole thing off
(With apologies to Ella Fitzgerald)
What is a data breach? Is it the same as a privacy breach? Or a cybersecurity incident? And how should we describe their causes?
A flurry of debate over where to lay the blame for data breaches was whipped up with the release earlier this year of the OAIC’s report into the first 12 months of the notifiable data breach (NDB) scheme in Australia.
A total of 964 eligible data breaches were notified to the OAIC. Of these, 35% were categorised by the OAIC as “attributed to human error”, such as unauthorised disclosure of personal information or loss of a portable storage device. The remainder were either “malicious or criminal attacks” (60%) or “system faults” (5%).
Speaking at an official Privacy Awareness Week event to launch the report, panel member Richard Buckland, who is Professor of lots of things starting with ‘Cyber’ at UNSW, suggested that really, when it comes down to it, all data breaches should be considered caused by human error.
His point, as I understood it, was that humans program computers, and humans design systems, and humans make decisions about how much energy, budget and time to put into training and supporting other humans who operate or use those systems (the poorly named ‘users’), and how best to protect systems and users from malicious attacks from other humans. He was speaking about the need to improve the data literacy and cyber awareness of all humans, but especially the humans in his student body who will be programming the computers and designing the systems the rest of us humans will use.
Of course, that subtlety was lost when I tweeted simply that the professor had suggested that “all data breaches should be considered caused by human error”.
To which fellow privacy expert Tim de Sousa responded with horror:
“Really, really, no. To err is human. Humans err. We know this. It’s risk management 101 (and a legal obligation under APP 11) to design your systems to mitigate known risks. Breaches ‘caused’ by human error are *systems design flaws*.”
And then various other people who were most likely not in the audience and had not read the OAIC report jumped into the debate about who causes data breaches and how to categorise them, because that’s Twitter sometimes.
So who is right, Richard or Tim? They’re both right of course. The debate only arises because the OAIC must draw crude distinctions for reporting purposes when categorising how breaches occurred, whereas the reality of root cause analysis can be far more complex than simply attributing blame along ‘system vs human’ lines.
As the OAIC’s own report notes, “most data breaches—including those resulting from a cyber incident—involved a human element, such as an employee sending information to the wrong person or clicking on a link that resulted in the compromise of user credentials.”
Then there’s the confusion about what is a ‘data breach’ in the first place. There is the legal definition under the NDB scheme, the short version of which is: any loss of, unauthorised access to, or unauthorised disclosure of, certain types of data including personal information. Such a data breach becomes notifiable if it is assessed as ‘likely to result in serious harm’ to one or more individuals.
(I draw a distinction between a ‘data breach’ as per the NDB scheme, and a ‘privacy breach’ which I would define as any conduct in breach of one or more privacy principles. A failure to take reasonable steps to protect data security, which leads to an unauthorised disclosure of personal information, would be both a data breach and a privacy breach.)
Note that the legal definition of a data breach is deliberately tech-neutral. Not every data breach is a cybersecurity incident; leaving a manila folder of paper client files on the bus is a data breach. And conversely, not every cybersecurity incident is a data breach; a Denial of Service attack may significantly impact your business operations without risking the data you hold. What matters under the NDB scheme is whether personal information was put at risk of misuse.
So that’s the legal definition of a data breach. Which, judging by what we see or hear in the media, is apparently a very different thing from the politician’s definition. Or the PR playbook.
When arguments were raging over the shift to an opt-out model for MyHealthRecord, the government was keen to spruik the benefits and downplay the risks. But along with others I called out the then PM and Health Minister for misleading the public about the extent of data breaches that had already happened. The agency responsible, ADHA, had already reported publicly on 11 data breaches it notified to the OAIC over the previous year, as required under the MyHealthRecord legislation (which contains its own reporting scheme that pre-dates the NDB scheme and has a slightly more expansive definition).
Yet the then PM Malcolm Turnbull stated “There are six million records — six million My Health records. There have been no privacy breaches.” And the then Health Minister Greg Hunt said “When you look at six million people, six years, though, on the latest advice today, no data breaches.”
How did they get away with saying this? Part of the problem is that people tend to talk at cross-purposes. If you think ‘data breach’ or ‘privacy breach’ only refers to external bad actors getting through your cybersecurity defences, then maybe the politicians’ semantics start to make sense. But that’s certainly not what the law states, and it is irresponsible at best, or actively misleading at worst, for politicians to say there have been no data breaches when the law says there have been, the agency responsible says there have been, and the OAIC says there have been. 11 of them, to be precise. That ‘human error’ was to blame rather than human hackers does not necessarily lessen the privacy risks posed to the individuals affected.
The “nothing to see here, no cyber problems, just human error” line seems to be a favourite of organisations responding to a data breach, including NAB. LandMark White, a valuation firm, managed to devalue itself to the tune of $7M after suffering a data breach which it described as a “cybersecurity incident”. Perhaps having learned how badly its business fared after admitting to information security vulnerabilities, the next time it suffered a data breach its chairman was quoted as saying “it was not a data breach but an individual who was sending out individual emails with separate attachments”. Right, because knowing you have staff who are deliberately leaking data is somehow less of a problem?
There have been other lame semantics, blame-shifting and weasel words recently, such as when a contractor to the Department of Home Affairs accidentally emailed the medical details of hundreds of visa applicants to the wrong person. The official line was: “The document contained bio-data details of visa applicants. No actual personal client medical records were disclosed as part of this incident.”
I can picture now the media management guru advising “instead of calling it health information or medical records let’s call it ‘bio-data’ so no-one knows what we’re talking about and it sounds less serious”.
Then there’s Neoclinical, which accidentally exposed 37,000 Australians’ particularly intimate health information by placing the records on an insecure cloud server. It quickly attempted to shift the blame to the security firm which identified the insecure records and eventually went public about it, as a ‘marketing’ exercise.
With so much debate – and some deliberate obfuscation – about data breaches, it’s no wonder people are confused about where to look, who to blame, or how to prioritise their limited data loss prevention and privacy risk management resources.
But consider this. The single most common cause of a data breach in the first year of the NDB scheme was personal information sent to the wrong recipient by email (28%). Failure to use BCC when sending an email accounted for another 8% of human error breaches. Thus more than a third of all data breaches caused by human error involve the simplest of tasks: sending emails.
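To make the point concrete, the “failure to use BCC” class of breach is exactly the kind of thing a simple, automated guard rail can catch before the send button is pressed. The sketch below is purely illustrative, not a description of any real mail client or DLP product; the function name `check_recipients` and the domain `example.com.au` are invented for the example.

```python
# Illustrative sketch of a pre-send recipient check: flag drafts where
# multiple external addresses sit in To/Cc (visible to every recipient)
# rather than Bcc. All names here are hypothetical, not a real API.

INTERNAL_DOMAIN = "example.com.au"  # assumed internal domain for the example


def check_recipients(to, cc, bcc, max_external_visible=1):
    """Return a list of warning strings for a draft email's recipient lists."""
    warnings = []
    visible_external = [
        addr for addr in list(to) + list(cc)
        if not addr.lower().endswith("@" + INTERNAL_DOMAIN)
    ]
    if len(visible_external) > max_external_visible:
        warnings.append(
            f"{len(visible_external)} external addresses are visible in To/Cc; "
            "consider moving them to Bcc so recipients' addresses are not "
            "disclosed to each other."
        )
    return warnings


# Example: a bulk email to clients with everyone in the To field.
for warning in check_recipients(
    to=["client1@gmail.com", "client2@outlook.com"], cc=[], bcc=[]
):
    print(warning)
```

A check like this would not stop a determined leaker, but it targets precisely the routine, low-tech slip that the OAIC’s first-year figures show to be so common.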
Data breaches can be low-tech or high-tech, deliberate or accidental, featuring trusted insiders or external bad actors. Staff training and awareness, tech tools to help with data loss prevention (like email and document classification, and encryption) and accountability (like collecting and monitoring audit logs), vendor and supplier contract management, data breach response plans and phishing simulations – the privacy practitioner needs to cover them all.
Because data protection is not just about your cyber defences, but requires active management of your entire data ecosystem.
If you need assistance with data protection, consider our September webinar on outsourcing and managing contractors, our October CIPM certification training, or our template Data Breach Response Plan which comes included in most of our Compliance Kits.
Photograph © Anna Johnston