I tend to focus on privacy disasters in this blog (see: oh, pretty much every other post I’ve ever written), but sometimes it is nice to pause and reflect on the privacy successes too. I’ve had particular reason to do so recently.
Firstly, as Privacy Commissioner Timothy Pilgrim reminded us at the iappANZ Summit last week, in just a few short months it will be 2018, and thus the 30th anniversary of the Australian Privacy Act 1988. Secondly, a few weeks ago I helped to celebrate the 30th anniversary of the founding of the Australian Privacy Foundation. Both these events – the creation of the APF as a civil society organisation that exists to this day, run as it always has been entirely on volunteer effort, and the passage of our first national piece of privacy legislation – had their genesis in the Australia Card debates of 1985-87, when Australians were galvanised about their privacy rights in a way not seen before or since.
July also marked the 10th anniversary of the successful ‘NoID’ campaign against the Access Card, which was particularly close to my heart. From 2006 to 2007 I co-ordinated the campaign on behalf of the APF, along with a number of other NGOs. As I look back now, I realise what a very female experience it was: Robin Banks headed up the Public Interest Advocacy Centre, which was our primary campaign partner, and the three most influential politicians we worked with to oppose the proposal were Senator Natasha Stott Despoja (Democrats), Senator Kerry Nettle (Greens), and the then Shadow Minister for Human Services, Tanya Plibersek (Labor). Each worked tirelessly to hold the government of the day to account for Joe Hockey’s national ID card thought bubble.
Mind you, we never had the chance to celebrate our campaign victory, as within a few days of the Howard Government finally dropping the Access Card proposal I was on maternity leave (yes, the latter stages of the campaign had involved me waddling around Canberra in an increasingly pregnant state), while the others were soon thrown into a federal election. There was no victory party or bottles of champagne, let alone time to either reflect or pat each other on the back. So Robin, Natasha, Kerry and Tanya: please consider this blog a very, very belated ‘thank you’. Brava.
And yet, the battle to protect our privacy rights is never won. At the party to mark the APF’s anniversary, I was asked to speak about the ways in which Australian law still doesn’t properly protect privacy.
I spoke briefly about the obvious, well-documented failings, like the exemptions in the federal Privacy Act for political parties, the media and employment records. I noted the loopholes in State privacy laws, like the ‘rogue employee’ exemption in NSW, as well as the near-blanket exemption for NSW Police even when personal information is handled corruptly. (And let’s not forget that SA and WA still don’t have privacy laws for state and local government agencies at all.)
I had my traditional whinge about governments of all stripes paying lip service to privacy, by leaving Privacy Commissioners under-funded, and ignoring consistent, well-reasoned, multi-partisan and multi-stakeholder recommendations to introduce a statutory tort of privacy. And of course, I could not resist taking a swipe at recent shockers like the use of Census records for data-matching without specific legislative authority, and the unauthorised disclosure of a Centrelink client’s details by the Minister.
But since this was a party after all, I also wanted to talk about the good news from the last 30 years, as well as what might be around the corner.
I reflected on the changes I’ve noticed since 2000, when I first started working in privacy law. Though there is still a long way to go, I see greater awareness of the privacy risks of sharing personal information. Technological evangelists are met with greater scepticism. The US model of ‘notice and consent’ is dying a slow death, and while I’m not holding my breath for the US to catch up with the rest of the world and finally adopt omnibus privacy principles, the enthusiasm with which the big US tech companies are now talking about ethical frameworks for making decisions to limit – yes, limit – their collection or use of personal data is encouraging.
I am similarly heartened to see many of my clients thinking deeply about how they can best protect privacy, above and beyond what the law minimally requires of them. They get it: they need community acceptance, or ‘social licence’, even more than they need legal compliance. Privacy management as a profession is shifting from tick-box legal compliance to the more nuanced task of finding the appropriate point of intersection between law, technology and ethics.
And then we started talking about the future: what will the next 30 years bring for privacy?
Of course, I do not have a crystal ball. And I’m not even sure that thinking about what the hot topics are right now can help predict what is just around the corner. (When I first joined the NSW Privacy Commissioner’s Office, the three issues which exercised us were bag searches in supermarkets, speed cameras, and RFID tags, which were surely an indicator of the End of Days. Oh, how naïve we were! In quick succession, along came September 11 and all the related justifications for intrusions into civil liberties, and then the explosion of social media. Yikes!)
But what we can predict is the massive disruption across multiple industries likely to result from new technologies, such as drones offering automated delivery of everything from food to weapons, as well as serving as highly effective surveillance tools.
AI and machine learning will see the automation of everything: from self-driving vehicles to intelligent machines taking the place of not only blue-collar workers but also white-collar professionals, as decisions are increasingly made by algorithms instead of human judgment.
And we also know that the immediate privacy challenges include the coming of what UQ legal academic Dr Mark Burdon describes as the ‘sensor society’. This is the effect of the collision of Big Data processing power with the Internet of Things, in which everything from your fridge to your car to a public rubbish bin is collecting data about you, and then somewhere, someone (or, more likely, an intelligent machine) is collating that data, and using it to draw inferences about you, and – finally – to make decisions about you. The risks include profiling, discrimination, pricing inequality, and pre-destination.
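For readers who like to see the plumbing, here is a deliberately crude sketch of what that collation might look like; the devices, data and decision rule are all invented for illustration, but the pattern – unrelated sensor feeds joined on a shared identifier, an inference drawn, a decision made with no human in sight – is the point.

```python
# Hypothetical illustration of the 'sensor society': three unrelated sensor
# feeds, none of which knows your name, are collated on a shared identifier
# and turned into an inference, and then a decision, about the household.
from collections import defaultdict

# Invented sample data -- each feed carries only a device/household ID.
fridge_events = [("hh-042", "door_opened", "02:10"), ("hh-042", "door_opened", "02:45")]
car_telemetry = [("hh-042", "parked_at_home", "22:00-08:00")]
bin_sensor    = [("hh-042", "alcohol_containers", 14)]  # items per week

profiles = defaultdict(dict)
for hh, _event, when in fridge_events:
    profiles[hh].setdefault("late_night_fridge_visits", 0)
    if when.startswith("02"):
        profiles[hh]["late_night_fridge_visits"] += 1
for hh, status, _period in car_telemetry:
    profiles[hh]["car_home_overnight"] = (status == "parked_at_home")
for hh, _item, count in bin_sensor:
    profiles[hh]["weekly_alcohol_containers"] = count

def premium_loading(profile):
    """A crude, invented decision rule: infer a 'risky lifestyle' and price accordingly."""
    risky = (profile.get("late_night_fridge_visits", 0) >= 2
             and profile.get("weekly_alcohol_containers", 0) > 10)
    return 1.25 if risky else 1.0  # 25% loading, with no human ever in the loop

for hh, profile in profiles.items():
    print(hh, profile, "-> premium multiplier:", premium_loading(profile))
```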
If you had asked me in 2000, I would have said that genetic testing was going to be the driver of these types of privacy harms, but now we know that predictions and decisions can be made based on pieces of data collected from our digital breadcrumbs, instead of from drops of blood collected from our bodies.
Increasingly, the privacy challenge for individuals will be trying to beat the algorithm – trying to disprove the computer which says ‘there is no point in you enrolling at uni because you are likely to fail’, or ‘you should not be granted parole because you are likely to re-offend’. The Centrelink ‘robodebt’ scandal has shown the human suffering that can be caused by poor algorithmic design, accompanied by human indifference to the outcomes.
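For the technically curious, a stylised sketch of the kind of income-averaging logic reported to be at the heart of robodebt shows how a single design choice can manufacture ‘debts’ out of honest reporting; the figures and threshold below are invented for illustration, not taken from the actual system.

```python
# Stylised illustration (invented figures) of why averaging an annual income
# evenly across fortnights can manufacture a 'debt' for someone whose real
# earnings were lumpy -- e.g. a student who only worked over the summer.
FORTNIGHTS_PER_YEAR = 26

# What the person actually earned, and truthfully reported, each fortnight:
actual_fortnightly_income = [2000] * 6 + [0] * 20   # worked 6 fortnights, then studied

# What an annual tax record shows, smeared evenly across the year:
annual_income = sum(actual_fortnightly_income)              # 12,000
averaged_fortnightly = annual_income / FORTNIGHTS_PER_YEAR  # ~461.54

income_free_area = 437  # hypothetical cut-off above which benefits reduce

# A naive compliance check compares the *average* against each fortnight's report:
flagged = [f for f, reported in enumerate(actual_fortnightly_income)
           if reported < averaged_fortnightly and averaged_fortnightly > income_free_area]

print(f"Averaged fortnightly income: ${averaged_fortnightly:.2f}")
print(f"Fortnights flagged as under-reported: {len(flagged)} of {FORTNIGHTS_PER_YEAR}")
# The person reported honestly every fortnight, yet the averaging logic
# 'finds' 20 discrepancies and shifts the burden onto them to disprove it.
```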
(On a lighter note, my new favourite joke illustrates the risks perfectly: Why did the computer cross the road? Because it was programmed by a chicken.)
So those are the challenges we know are coming, sooner rather than later, because of current technological developments. Which leads us to the perpetual challenge: ensuring the law keeps up with technology.
For the most part, I reject claims that privacy law does not keep up with technology. Principles-based privacy laws are designed to be technology-neutral. Let’s face it – they are based on common sense and good manners, like ‘only use personal information for the purpose for which it was collected, otherwise get consent’. So those principles drafted by the OECD in 1980 still work today for Big Data – it’s just that they tend not to be followed. Big Data is big business, and big business will push the envelope as far as it can. This is why so many people think that the law is outdated. It’s not outdated. It’s just not applied widely or deeply enough.
But there is one area in which I think our privacy laws are sadly out-of-date, and that is their reliance on the identification of an individual as the trigger point for protecting that person’s privacy. Our laws only protect ‘personal information’, and that definition relies on the individual being reasonably identifiable. If you can claim that the person is not identifiable, then all bets are off. But that doesn’t mean a person can’t suffer privacy harm.
In my view it is individuation, rather than identification, which can trigger privacy harms.
In other words, you can hurt someone without ever knowing who they are.
Individuation means you can disambiguate the person in the crowd. This is the technique used in online behavioural advertising; advertisers don’t know who you are, but they know that the user of a certain device has a certain collection of attributes, and they can target or address their message to the user of that device accordingly.
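A rough sketch of how that can look in practice (the device identifiers, attributes and targeting rule are all invented for illustration): no name, no email address, just a pseudonymous device and a bundle of inferred attributes, which is all an advertiser needs to single someone out and address them.

```python
# Invented illustration of individuation in behavioural advertising: the
# advertiser never learns who the person is, only that *this device* carries
# a particular bundle of inferred attributes -- enough to single them out.
device_profiles = {
    "device-7f3a": {"inferred_age_band": "18-24",
                    "recent_searches": ["payday loans"],
                    "late_night_usage": True},
    "device-91c2": {"inferred_age_band": "45-54",
                    "recent_searches": ["family holidays"],
                    "late_night_usage": False},
}

def select_ad(profile):
    """Invented targeting rule: address high-cost credit ads to the profile
    that looks financially stressed, whoever that person actually is."""
    if "payday loans" in profile.get("recent_searches", []) and profile.get("late_night_usage"):
        return "HIGH_INTEREST_CREDIT_OFFER"
    return "GENERIC_BRAND_AD"

for device_id, profile in device_profiles.items():
    # The decision attaches to the device, not to an identified individual.
    print(device_id, "->", select_ad(profile))
```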
Once we move beyond straight-up advertising, the impact on individual autonomy becomes more acute. Individuation can lead to price discrimination, like surge pricing based on Uber knowing how much phone battery life you have left. Or market discrimination, like Woolworths only offering car insurance to customers it has decided are low risk, based on an assessment of the groceries they buy. Geolocation data likewise offers high rates of individuation, even without identification. For example, privacy harms could arise from using geolocation data to figure out the likely home address of people who have visited a strip club or an abortion clinic. Individuals could be targeted for harm or harassment, without the perpetrator ever knowing their name.
All these activities hold the potential to impact on individuals’ autonomy, by narrowing or altering their market or life choices, regardless of whether the individual is identifiable.
So perhaps, if our objective is to protect people’s privacy, our laws need to grapple with a broader view of the types of practices which can harm privacy – regardless of whether ‘personal information’ is at stake. I would argue that it is time to re-think the scope of our privacy laws, to encompass individuation and autonomy as well as identification.
So if it’s not too much to ask … I look forward to seeing the Australian Privacy Foundation achieve that goal! Happy Anniversary APF, and here’s to the next 30 years.
PS: As a result of my talk at the APF anniversary function, I was invited by fellow panellist Antony Funnell to appear on his Radio National program, Future Tense, as was our third panellist, Dr Jake Goldenfein. You can listen to the podcast here.
Photograph (c) Shutterstock