Welcome to the new normal.
This month we offer an overview of the privacy issues to think about as you navigate the new normal, with pointers to the best guidance we have found.
Privacy is not just about data
Is there a better illustration of the importance of freedom of association to our sense of autonomy than the whiplash we’re all feeling now, at being told how many people we can have at a wedding, or how many metres apart mourners must stand at a funeral?
I’m not arguing against the importance of these physical distancing rules, just pointing out that their sudden imposition highlights a truth about physical privacy: like other types of privacy, what we value is self-determination, the ability to make our own decisions.
So it may seem counter-intuitive, but privacy is not only about the right to be left alone. Privacy includes the freedom to be in a crowd. Or not. When we feel like it.
When it is about data, the law is your guide
Your organisation might be on the receiving end of calls for increased data sharing. Keep in mind that privacy law already anticipates data sharing without consent in a number of ways.
Most privacy laws will have a number of exemptions, which allow personal information to be used or disclosed for secondary purposes, without the consent of the subject individual. Those exemptions will typically include:
- ‘any other law’ which authorises or requires the use or disclosure
For example, we already have public health legislation which deals with reporting of notifiable diseases and biosecurity hazards.
- law enforcement purposes
Law enforcement exemptions are not open slather, so when police come calling, ask questions about what criminal offence they are investigating, and ensure you can meet the threshold test in the privacy law that applies to your organisation. For example, APP 6.2(e) requires that you (not the police asking for the information) reasonably believe that the use or disclosure of the information is reasonably necessary for one or more enforcement related activities conducted by, or on behalf of, an enforcement body.
- serious and imminent threat
Most privacy laws have some kind of ‘emergency’ exception, but be careful: the differences in wording are subtle, but significant.
For example, the NSW test for the disclosure of non-health, non-sensitive personal information (IPP 11) is narrow: the agency holding the personal information must “(believe) on reasonable grounds that the disclosure is necessary to prevent or lessen a serious and imminent threat to the life or health of the individual concerned or another person”. Case law suggests this is for extraordinary circumstances such as an active threat of homicide (as assessed by someone such as a patient’s long-term psychiatrist), and even then only if the precise disclosure is necessary to contain that threat.
By contrast, the NSW rule for health information (HPP 11) also allows disclosure “to lessen or prevent … a serious threat to public health or public safety”. A test involving “a serious threat to public health” is much easier to satisfy than one requiring “a serious and imminent threat to the life or health” of an individual.
So taking the two NSW privacy laws together, even in the midst of a pandemic which arguably poses “a serious threat to public health”, only health information could be disclosed under this test. Non-health information such as location data could not be disclosed, even if it might be useful for contact tracing. But see again the first point: there may be other public health laws which override the privacy laws in certain circumstances.
(If you need help navigating quickly through all the potential exemptions under which personal information, sensitive information and/or health information can be used or disclosed under the privacy principles, see our Untangle guides: Untangling the APPs, or Untangling Privacy: the NSW rules.)
See specific guidance from your privacy regulator
Check your privacy regulator’s website for their guidance. Here are some quick links:
- OAIC – guidance for employers around staff privacy issues
- OVIC – tips for working remotely, safely
- NSW IPC – preserving privacy rights
- OPC NZ – FAQs on managing COVID-19 and privacy, including for specific groups such as employers, landlords, the hospitality sector and healthcare workers
- Europe – the EDPB has guidance from the European regulators on compliance with the GDPR and the ePrivacy Directive
- Everywhere else – see the IAPP’s global roundup of regulatory guidance
The foundational principles hold firm
In the midst of a crisis, it is tempting to run around like headless chooks (keeping at least 2m away from all the other headless chooks, of course), blindly grasping for a solution. But there’s no need to make decisions in the dark, because privacy laws already shine a light on what to do.
Privacy laws around the world are built on core principles such as legitimacy, necessity, proportionality, fairness and transparency.
This means that at all times – whether you are building an app, or considering a request to disclose personal information about your customers, or wondering if you can collect health information from your staff – you must ask yourself:
- are our objectives legitimate?
- what is reasonable and necessary to achieve our objectives?
- will our actions actually achieve our objectives?
- is achieving our objectives worth the resulting impact on privacy?
- can we design a more privacy-protective (or less privacy-invasive) way of achieving our objectives?
Two examples of headless-chook thinking spring to mind.
The first is the call for more information to be published about people with confirmed cases of COVID-19, such as case numbers down to the suburb level. This is usually framed as ‘so that we can protect ourselves’.
But we should not forget the lesson learned in the early days of AIDS: both public health and privacy interests are best served by a policy of universal infection control, rather than by stigmatising individual patients.
We have surely now reached the point of universal infection control (i.e. everyone should take the same precautions and assume anyone else is infectious, and anyone showing symptoms should stop work and isolate themselves even further), such that knowing how many people in your suburb are infectious is largely irrelevant.
In fact, publishing that data invites citizens to make their own risk judgments about whether or not to follow universal infection control, which undermines the public health objectives of both physical distancing and universal infection control. It will also invite discrimination. And let’s face it, the data will be out of date so quickly as to be useless anyway.
The second example is using mobile phone metadata for contact tracing. Which leads me to my next point:
Don’t design in a panic
Contact tracing refers to trying to find all the people who were in recent physical proximity to a person subsequently diagnosed with COVID-19.
There are a few different ways to go about contact tracing. As a government, you could broadcast the details of infected persons’ movements, as South Korea did. Or you could – as some are calling for – suck up all location data (both past and ongoing) from everyone’s mobile phone metadata, stick it all in a centralised, searchable database, and use it to meticulously trace who goes near whom, in case it becomes useful later. Yikes! Not even national security agencies have dared ask for such a dystopian degree of surveillance infrastructure.
Or you could aim for some more privacy-protective approaches.
Indeed a national committee of all the Australian privacy regulators has been formed to respond to COVID-related proposals with national implications (such as development of a contact tracing tool), and they have reiterated the value of conducting short-form Privacy Impact Assessments on proposed solutions to public health and economic problems, to make sure privacy is considered in the design process.
One option is a downloadable app through which users voluntarily hand over their location data to a centralised government agency, which then notifies them if they happened to be in the vicinity of another app user who subsequently tested positive. This is the model apparently being considered by the UK’s NHS and Ireland’s health service (though by the time I finish writing this paragraph my claim may already be out of date). It is better than the ‘make the telcos hand over everyone’s location data’ model, but not by much.
Genuinely better options start from a position of Privacy by Design.
One is also a voluntary app, but in the case of Israel’s Hamagen app, the location data never leaves the phone. The user’s location data is matched, on their own device, against published details of the “tracked locations of Covid19 patients”. While the details of diagnosed patients are purportedly de-identified before publication, we also know that location data alone is highly individuating, if not necessarily identifying.
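To make the on-device matching idea concrete, here is a minimal sketch, assuming the health authority publishes a list of (latitude, longitude, time window) points for diagnosed patients. The names, data shapes and the 20-metre threshold are my own illustrative choices, not Hamagen’s actual code.

```python
# Illustrative sketch only, not Hamagen's actual code. Assumes the health
# authority publishes (lat, lon, time window) points for diagnosed patients,
# and that the user's own location history never leaves the device.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt


@dataclass
class Visit:
    lat: float
    lon: float
    start: float  # unix timestamps
    end: float


def distance_m(a: Visit, b: Visit) -> float:
    """Great-circle (haversine) distance between two points, in metres."""
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(h))


def overlaps(a: Visit, b: Visit) -> bool:
    """True if the two visits overlap in time."""
    return a.start <= b.end and b.start <= a.end


def check_exposure(my_history: list, published: list, radius_m: float = 20.0) -> bool:
    """Runs entirely on the user's device; nothing is uploaded anywhere."""
    return any(
        overlaps(mine, theirs) and distance_m(mine, theirs) <= radius_m
        for mine in my_history
        for theirs in published
    )
```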
One of the best examples I have seen comes from Jaap-Henk Hoepman, whose work on privacy engineering I follow closely. His recent blog outlines how, like Hansel and Gretel, we could allow our mobile phones to leave their trail of breadcrumbs, but in a privacy-protective way. Location data never needs to leave the individual’s device: when one device comes into physical proximity with another, each device records that fact locally (an identifier for the other device, a location and a timestamp), and that record is purged a couple of weeks later. The records are all encrypted, so that the humans passing by remain blissfully anonymous to each other (or not, as the case may be), and their phones cannot be interrogated to give up meaningful data. But if, within that two-week period, a user is diagnosed with COVID-19, the app could be activated (but only using the public key of the proper health authority), triggering notifications from the diagnosed person’s phone to the other devices for which a record was created.
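A minimal sketch of that breadcrumb store, assuming a 14-day retention window. The class and method names are my own; a real implementation would detect proximity over Bluetooth and verify the health authority’s authorisation with proper public-key cryptography, not the stand-in check below.

```python
# A toy breadcrumb store in the spirit of Hoepman's proposal; names and the
# authorisation check are illustrative stand-ins, not his actual design.
import time

RETENTION_SECONDS = 14 * 24 * 3600  # purge records after two weeks


def authorised_by_health_authority(token: str) -> bool:
    """Placeholder: a real app would verify a signature against the
    health authority's public key."""
    return token == "signed-by-health-authority"  # stand-in only


class BreadcrumbStore:
    def __init__(self):
        self._records = []  # encrypted at rest on a real device

    def record_encounter(self, other_device_id: str, location, when=None):
        """Called when another device comes into physical proximity."""
        self._records.append({
            "other": other_device_id,  # an opaque identifier, not a name
            "location": location,
            "time": when if when is not None else time.time(),
        })

    def purge_old(self):
        """Drop anything older than the two-week retention window."""
        cutoff = time.time() - RETENTION_SECONDS
        self._records = [r for r in self._records if r["time"] >= cutoff]

    def contacts_to_notify(self, authority_token: str) -> list:
        """On a confirmed diagnosis, and only with the health authority's
        authorisation, list the device ids to notify."""
        if not authorised_by_health_authority(authority_token):
            raise PermissionError("not authorised by the health authority")
        self.purge_old()
        return [r["other"] for r in self._records]
```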
And finally, Australian cryptographer and re-identification guru Vanessa Teague has proposed two tweaks to Singapore’s TraceTogether app, to make it even more privacy-protective. Like Hoepman, Teague focuses on encrypted device-to-device links (the digital equivalent of elbow-bumping, if you like), and on using distributed processing, instead of a centralised government database, to manage the post-diagnosis notification step.
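Here is a sketch of the distributed-matching idea, in my own simplified form rather than Teague’s actual protocol: instead of contact logs being uploaded to a central database, a diagnosed user’s recently broadcast tokens are published, and every phone checks for a match locally, so the server never learns who was near whom.

```python
# A simplified illustration of distributed post-diagnosis matching; this is
# my own toy version, not TraceTogether or Teague's exact design.
import secrets


class Phone:
    def __init__(self):
        self.my_tokens = []   # random tokens this phone has broadcast
        self.heard = set()    # tokens heard from nearby phones

    def next_token(self) -> str:
        """Broadcast a fresh random token; rotating often stops the tokens
        being linked to one person over time."""
        token = secrets.token_hex(16)
        self.my_tokens.append(token)
        return token

    def hear(self, token: str):
        """Record a token broadcast by a phone in physical proximity."""
        self.heard.add(token)

    def check_exposure(self, published_tokens) -> bool:
        """Runs on-device against the published list of a diagnosed user's
        tokens; no central contact database is ever built."""
        return any(t in self.heard for t in published_tokens)


# Usage: alice and bob pass close by; bob is later diagnosed, and only
# bob's own tokens are published for everyone else to check locally.
alice, bob = Phone(), Phone()
alice.hear(bob.next_token())
assert alice.check_exposure(published_tokens=bob.my_tokens)
```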
So there are some elegant technical solutions possible which do not result in a panopticon nightmare. Could we really have our public health and our privacy too? Let’s hope so.
But meanwhile, for those of us not in the business of designing national pandemic response apps…
Don’t drop your guard on data security
There are four key risk areas to be aware of when thinking about data security right now.
- Data security risks when employees shift to working from home
OVIC has some succinct tips for your staff when they need to work remotely; and Danish ThinkDo Tank Data Ethics has a round-up of tech solutions to use or avoid.
(And if you need to roll out online privacy awareness training to help your workforce remember the privacy and data security basics even when they are working from home, check out our online privacy training options.)
- Data security risks when switching rapidly to new modes of service delivery
The US government moved swiftly to change the rules so as to allow medical professionals to conduct telehealth appointments, but in doing so it threw the health sector’s data security obligations out the window. Imagine the implications for a psychiatric patient once Facebook knows about their condition, because their doctor chose the Messenger platform for their video consult. Even if you have to make decisions about deploying new tech quickly, take a few minutes to research the data security pros and cons of each option.
- Ensure your messaging doesn’t accidentally encourage risky user practices
It’s a bit embarrassing that the text message sent unexpectedly to all mobile phones on 25 March on behalf of the Australian Government didn’t say much beyond ‘wash your hands’, except to offer a link to a website. That undermines the standard government cybersecurity advice not to click on links in messages from unknown numbers, and, more disturbingly, the specific Australian Government advice published the week before about coronavirus-related scams containing malicious links purporting to come from government agencies or telcos. Left hand, right hand, anyone?
- Customer authentication processes when face-to-face service is not available
Just because your customers can’t visit you in person is no reason to throw caution to the wind. Data security and integrity should remain front of mind, even as you try to adapt to the new normal.
Centrelink has created a new set-up for the large volumes of people suddenly seeking unemployment benefits, who cannot get through on the clogged phone lines, and understandably don’t fancy standing in long queues to access Centrelink offices during a highly contagious health crisis. The new set-up is that people are supposed to lodge an ‘intent to claim’ online but then wait for a call back … eventually … from an unknown number, which the person is supposed to answer with their identity documents ready. You can hear Virginia Trioli simultaneously gasp and sigh when the Centrelink spokesperson explains this on her ABC Radio program (about 10 minutes into the segment, if you’re keen).
Telling people to take calls at an unknown time from an unknown number and then hand out their identity details undermines sensible and official government security advice about how to avoid scammers, and also poses risks for victims of family violence or stalkers trying to avoid particular callers. As Trioli called it on the spot, this design is a “massive scammers problem waiting to happen”.
If the caller is supposed to authenticate themselves to the customer by demonstrating that they must be from Centrelink because they already know X and Y details about the customer, how does Centrelink first know the (alleged) customer is legitimate, and didn’t lodge the online claim in an ex-partner’s name but with their own phone number, in order to find out new details about their ex?
Or if the customer is supposed to authenticate themselves to a caller from an unknown number, how can the customer first know the caller is legitimate, before handing over their details? The spokesperson said in the radio interview that a concerned customer can ask for a number on which to call the (alleged) Centrelink staffer back. Either that number goes direct to the (alleged) staffer, in which case the customer still has no idea whether they are talking to a legitimate Centrelink officer or a scam artist, or it goes via the main switch number, in which case they are stuck back on the clogged phone lines and … oh dear, they have fallen into the seven circles of Centrelink hell.
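To make the circularity concrete, here is a toy model (entirely hypothetical, not Centrelink’s actual process): a caller who recites the customer’s personal details only proves that they know those details, not who they are, so a scammer, or an abusive ex who lodged the online claim, passes exactly the same test.

```python
# A toy model of the authentication gap described above; hypothetical names
# and data, purely to make the logic concrete.

def caller_seems_legitimate(details_recited: dict, my_details: dict) -> bool:
    """The only 'proof' an unknown caller can offer: knowledge of details."""
    return details_recited == my_details


my_details = {"name": "Jo Citizen", "dob": "1980-01-01"}

real_officer = dict(my_details)  # a genuine Centrelink staffer
scammer = dict(my_details)       # anyone who harvested the same details

# Both pass: knowledge of personal details cannot distinguish them.
assert caller_seems_legitimate(real_officer, my_details)
assert caller_seems_legitimate(scammer, my_details)
```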
When it all ends
I know that right now we all feel like kids in the backseat of the car, whining ‘Are we there yet?’ But this too shall pass.
However, if we accept widespread surveillance of our movements in the name of public health, how hard will it be to wind that back again once the pandemic is over?
September 11 was used to create a false dichotomy between security and privacy, and that thinking ultimately led us to surveillance capitalism. It has taken almost 20 years for critiques of surveillance capitalism to pierce public consciousness. Where will this pandemic take us?
I recommend two insightful pieces of writing which focus on ‘what next’, not only in relation to privacy:
- a global view from historian, philosopher and author Yuval Noah Harari, and
- predictions for Australian politics, the economy and society – a thoughtful piece from Michael Bachelard, Walkley Award-winning journalist at The Age.
Stay safe, dear readers. And stay vigilant to protect privacy.