We’ve written before about the common causes of data breaches, but what about all the other types of privacy risks your organisation might face?
This month we have helpfully compiled for you a list of Ten Things To Do or Not to Do or Privacy Risks to Avoid and Other Things to Worry About Generally. Which is too long for a blog title, sadly, so let’s just call them Things to Lose Sleep Over.
#1: Not understanding the value of your data
The public release of fitness app Strava’s data was a classic demonstration of an organisation not even realising the value to be found in the richness of its own records – and therefore not protecting them appropriately. Geolocation data can not only make individual customers targets for harm, but can also create risks for groups of people – or even nation states – somehow related to your customers.
#2: Not understanding the identifiability of your data
The devil is in the detail: sometimes, despite purportedly being de-identified, data can reveal the identity of an individual, or at least lead them to be disambiguated from the crowd. This might be because of poor de-identification techniques, like the MBS/PBS dataset. Other times it is the richness of the data which creates new privacy risks, such as the taxi trip data which revealed details about celebrity passengers, or could have allowed individuals to be targeted on the basis that they had visited a particular site – for example, a mosque or an abortion clinic.
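One way to think about the disambiguation risk described above is k-anonymity: how many records share the same combination of quasi-identifiers (postcode, birth year, sex and so on). The sketch below is purely illustrative – the function name, field names and sample data are all hypothetical – but it shows how a dataset with no names at all can still single out an individual.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the smallest group size when records are grouped by their
    quasi-identifier values. A low k means someone can be disambiguated
    from the crowd despite the data being 'de-identified'."""
    groups = Counter(
        tuple(record[qi] for qi in quasi_identifiers)
        for record in records
    )
    return min(groups.values())

# Hypothetical 'de-identified' records: no names, but the combination of
# postcode, birth year and sex may still be unique to one person.
records = [
    {"postcode": "2000", "birth_year": 1985, "sex": "F"},
    {"postcode": "2000", "birth_year": 1985, "sex": "F"},
    {"postcode": "2650", "birth_year": 1942, "sex": "M"},  # unique: k = 1
]

print(k_anonymity(records, ["postcode", "birth_year", "sex"]))  # prints 1
```

A k of 1 means at least one person in the dataset is identifiable by their quasi-identifiers alone – exactly the failure mode behind the MBS/PBS and taxi trip releases.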
#3: Thinking ‘notice and consent’ authorises data flows
Despite how the American model of privacy law works, in the rest of the world you can’t just legalese your way out of privacy obligations, burying expansive or permissive powers in mandatory T&Cs and then claiming your customers ‘consented’ to your practices. (I mean, sure, some companies try, but the law is not on their side.)
There are several problems with relying on consent to authorise the collection, use or disclosure of personal information. The first is that to be valid, the consent must be genuinely free, without a penalty attached to saying ‘no’. So threatening an employee with dismissal if they refuse the collection of their biometric data does not allow the employer to claim that any such collection was conducted on the basis of consent. Further, to be voluntary, consent must be indicated with a proactive ‘yes’ from the individual. A failure to opt out is not consent.
The second problem is that to be valid, the consent must also be informed and specific, which means that the organisation seeking consent must be precise about all the potential uses and disclosures that might occur, and the potential harms that might arise, if the person says ‘yes’. But in the world of open data, predictive analytics, machine learning, algorithms and artificial intelligence, that’s not always possible to predict. When it comes to AI in particular, consent is almost certainly useless as a mechanism to authorise your collection or use of personal information.
And finally, even if you manage to obtain a consent that is voluntary, informed and specific (plus current and given by a person with capacity), consent does not absolve you of compliance with all privacy principles. Privacy law creates obligations covering the entire life cycle of handling personal information, and at many of those points in the life cycle consent is utterly irrelevant. You still have obligations to only collect personal information that is reasonably necessary for a lawful purpose, to ensure that the data is fit for purpose, that you take all reasonable steps to protect data security, and so on.
#4: Thinking authorising data flows is all you need to worry about
When we conduct Privacy Impact Assessments, we are often asked whether or not a proposal to “let X access our data for Y purpose” will be compliant with the privacy principles. But this is not only a question of whether, but also how. Sure, first you need to determine whether the purpose for which you are proposing to disclose data to a third party is lawful – for example, whether there is another law authorising it, or if there is a public interest exception which applies, such as medical research or law enforcement.
But you still need to think about how the data will be disclosed or accessed, because there is a spectrum of design options to choose from, and neither the lawyers nor the solution architects are going to come up with the most privacy-protective option without some prompting.
An example is the case of identity verification services run by marketing/data aggregation firms which are reported to now have ‘access’ to the electoral roll, following some law changes. What was not explained in the media reporting is whether the firms are simply given copies of the entire electoral roll, whether they can access or extract bulk records, whether they can freely search on any name, how much data they can see about each person, or if they can only ‘ping’ the roll on a case-by-case basis by presenting a suite of already-known data about the individual to return a limited yes/no verification response. Not all data ‘access’ is equal or carries the same privacy risks. Notwithstanding your legal position, your social licence may depend on you choosing the most privacy-protective of all possible design options.
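That spectrum of design options can be made concrete. The sketch below contrasts the two extremes for an identity check against a sensitive dataset – everything in it (the dataset, function names, fields) is hypothetical, and real electoral-roll access is governed by law, not a Python dictionary – but it shows why a case-by-case yes/no 'ping' leaks far less than bulk access.

```python
# Hypothetical dataset standing in for a sensitive register.
ELECTORAL_ROLL = {
    ("Jane", "Citizen", "1990-04-01"): "12 Example St, Sydney",
}

def bulk_export():
    """Most privacy-invasive design: hand over the whole dataset,
    letting the recipient search, retain and repurpose it freely."""
    return dict(ELECTORAL_ROLL)

def verify_identity(given_name, surname, date_of_birth, claimed_address):
    """Most privacy-protective design: the caller must already hold the
    person's details and receives only a yes/no match, never new data."""
    key = (given_name, surname, date_of_birth)
    return ELECTORAL_ROLL.get(key) == claimed_address

print(verify_identity("Jane", "Citizen", "1990-04-01", "12 Example St, Sydney"))
print(verify_identity("Jane", "Citizen", "1990-04-01", "99 Wrong Rd"))
```

Both designs might satisfy the same authorising law, but only the second limits what the third party learns to a single bit per query.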
#5: Doing stuff your customers don’t expect
Other than your friends throwing you a lovely surprise birthday party, you probably don’t much like surprises. Your customers don’t either. Some examples of what not to do: collecting mobile numbers you told your customers were to enable two-factor authentication, but then using them for spam; offering soccer fans an app which sneakily accesses their location data and microphone to listen out for illegally streamed matches; or asking people to make a submission to a parliamentary inquiry about tax matters but then using their details for political fundraising and disclosing them to an asset management firm for its own marketing purposes.
#6: Not doing stuff your customers do expect
This should be a no-brainer, but if you are going to promise your customers a particular privacy control, make sure you follow through. Don’t be like the NHS in the UK, which asked people to opt out of having their patient records used for secondary purposes beyond their direct care, but then failed to properly record and respect the wishes of 150,000 patients, whose records were shared despite their opting out.
#7: Not implementing ‘need to know’
It’s not just external bad actors you need to worry about; there are also significant risks posed by trusted insiders.
A NSW auditor-general’s report in 2017 found that a third of NSW government agencies were failing to properly safeguard their data, by not limiting access to personal information to only those staff with a ‘need to know’. The first fine issued under the GDPR in Portugal was €400,000 for a hospital which failed to follow the ‘need to know’ principle, when it allowed indiscriminate access to clinical data about all patients to both clinical and non-clinical staff.
And just the other week, the Commonwealth Bank had to enter into an enforceable undertaking with the OAIC, after it was revealed that there had not been appropriate user access controls to stop staff in the banking arm from seeing customer data related to the separate insurance arm.
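The failures above all come down to the same missing control: access granted by default rather than by duty. A deny-by-default, role-based check is one common way to implement 'need to know' – the roles, record categories and mapping below are hypothetical examples, not a prescribed scheme.

```python
# Hypothetical mapping from roles to the categories of personal
# information their duties actually require. Anything not listed
# is denied by default.
NEED_TO_KNOW = {
    "clinician":     {"clinical_notes", "contact_details"},
    "billing_clerk": {"billing", "contact_details"},
    "receptionist":  {"contact_details"},
}

def can_access(role, record_category):
    """Deny by default: access is granted only if the role's duties
    require that category of personal information."""
    return record_category in NEED_TO_KNOW.get(role, set())

assert can_access("clinician", "clinical_notes")
assert not can_access("billing_clerk", "clinical_notes")  # no need to know
assert not can_access("unknown_role", "billing")          # deny by default
```

The hospital fined under the GDPR failed precisely this test: non-clinical staff could reach clinical data because nothing mapped access to duty.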
#8: The rogue employee
Speaking of trusted insiders, watch out: a survey of healthcare workers in the USA and Canada suggests that one in five employees would sell confidential data, for as little as $500 to $1,000. Then there are the employees who access or disclose customer records for their own benefit or to assist a mate, examples of which have affected several banks, NSW Police and Queensland Police; or to sabotage a company’s reputation, which is the claimed reason for a deliberate leak of data from valuation firm Landmark White.
#9: The helpful employee
It’s not just the bad apples you need to watch out for; it’s also the keen beans.
A staff member of a Bunnings store took it upon himself to create a database of customer records (with the aim of notifying customers about activities and events at their local store), as well as an employee performance monitoring database. Unfortunately, he did so contrary to organisational procedure and on his insecure home computer, causing more than 1,000 customer records and staff performance reviews to be publicly exposed on the internet. And a NSW government agency was found in breach of the Disclosure principle when an employee responded to a solicitor’s request for information with more details than he was asked to provide, because he was “trying to be helpful”.
#10: The bored employee
Beware boredom. A police officer who looked up the records of 92 women he saw on dating sites claimed he carried out his crimes “partially due to curiosity but also boredom with his job and during slow periods at work”. And a review of 1,368 data breach incidents in the healthcare sector across 27 countries found that the majority of data breaches were caused by trusted insiders, with 31% of them involving staff looking up the records of celebrities or family members “for fun or curiosity”, such as the dozens of hospital staff who were fired after accessing the medical record of an actor in the news after an alleged attack.
The list goes on
This top ten list doesn’t even scratch the surface of the complexities involved in implementing privacy by design, but it does touch on issues that almost every type of organisation needs to deal with.
Managing privacy risks is not just about getting on top of data security. You need to appreciate the value of your data, map how that data is being used throughout the organisation, ensure all those data flows are authorised, limit access on the basis of ‘need to know’, understand where and why staff might be tempted to use or disclose personal information for novel purposes, and follow through on the privacy promises made to customers. And then train, re-train and train again all staff, so they know what they can and cannot do.
If you need pragmatic tools to help with your privacy risk management, check out Salinger Privacy’s Compliance Kits which include resources such as a Privacy Impact Assessment Framework, Privacy Risk Assessment Questionnaire, and Data Governance Protocol. We also have online Privacy Awareness Training in multiple modes, from ready-to-roll to customised options, as well as professional development training for privacy officers, in either a one-day Privacy Management in Practice workshop, or a two-day IAPP privacy professional certification program.
Now that should help you sleep a little easier.
Photograph (c) Shutterstock