We don’t live in a bubble, so why do privacy laws act like we do?
It may seem counter-intuitive, but privacy is very much a public good.
If we think of managing privacy as a private exercise, governed by our own actions and choices, then the regulatory model of notice and consent, choice and control makes sense. But that’s not actually the world we live in. As individuals, we have no more control over the data ecosystem we find ourselves in than we have over the quality of the air we breathe or the water we drink.
Further, privacy is a shared experience, where the actions of one can negatively affect the whole. That is why privacy protection cannot be left to the choices made or controls exercised by individual consumers or citizens. It must be treated, managed and regulated as a public good, because privacy harms are increasingly collective harms.
This notion of privacy as a public good, or a public commons, has been raised by a number of privacy regulators and privacy thinkers recently. Many offer analogies with our physical environment, and parallels are drawn with other global challenges such as tackling climate change.
At the IAPP ANZ Summit in late 2019, Australian Privacy Commissioner Angelene Falk compared managing privacy protection to dealing with oil spills, while Victorian Information Commissioner Sven Bluemmel ran with a comparison to the harms of passive smoking.
Privacy and open government advocate Martin Tisné describes the nature of the collective harms which arise from privacy intrusions at scale as “invisible to the average person”, which is why forcing regulatory action and co-operation from governments is so difficult – much like action on CO2 pollution.
Shoshana Zuboff, author of The Age of Surveillance Capitalism, calls the belief that privacy is private “the most treacherous hallucination of them all”. In reality, writes Tisné, we are “prisoners of other people’s consent”.
Because none of us live in a bubble affected only by our own choices. The consequences are shared. One person’s privacy can be negatively affected by a different person making choices about their own personal information.
The most obvious example of privacy as a shared concern is genetic privacy. If one family member shares their DNA with a site like 23andMe, it affects all of their genetic relatives, whether those relatives agreed, vehemently objected, or were entirely ignorant of the situation.
A second example is the impact of one person’s decisions on the people around them. Military personnel stationed in remote or secret locations should not have their privacy and safety compromised by the choice made by a colleague to use a fitness app when they jog around the base. (And this is not to criticise the joggers themselves. Even Strava users who carefully calibrated their privacy settings to not share their Strava data with others were nonetheless exposed by Strava’s release of ‘aggregated’ data.)
Similarly, even if you have never used Facebook yourself, you will be subject to online behavioural advertising based on information collected about you, and inferred about you, from Facebook users who do know you, because Facebook generates ‘shadow profiles’ on non-users. Academic Zeynep Tufekci has noted that the power of Big Data means companies such as Facebook can now infer, even about individuals who have deliberately tried to protect their privacy online, “a wide range of things about you that you may have never disclosed, including your moods, your political beliefs, your sexual orientation and your health”. As individuals we have no choice about this. There is no way to opt out of being swept up in this vast dragnet of data.
A third example of one person’s privacy choices affecting other people’s lives concerns classes of people with shared characteristics. Let’s say an algorithm has been built on data collected from people who consented to share their data in a research project. That algorithm then makes predictions about people who share certain characteristics. For example: that indigenous students are more likely to fail first year Uni than non-indigenous students. Or that people who buy lots of pasta are at higher risk of car accidents.
When that algorithm is operationalised, it is going to result in decision-making affecting everyone with those characteristics, never mind that they were not part of the original group who ‘consented’ to the use of their data for the research project. The result is, as philosopher and mathematician Rainer Mühlhoff puts it, that “data protection is no longer a private matter at everyone’s own discretion… Rather, data protection in the age of predictive analytics is a collective concern”.
The cumulative impact of years of situation-specific choices, made by millions of individuals, about what they agree to share and with whom, is terrifying. Algorithms are based on predictive analytics, built by machines learning from datasets collected from unwitting individuals who just wanted to connect with their friends on social media, or stream movies, or earn points on their grocery shopping. Those algorithms are increasingly used to determine who gets access to housing, finance or employment opportunities, and who gets targeted for intrusive surveillance, government interventions or policing.
Digital rights activist Lizzie O’Shea says the result is that privacy is a class issue: “Our digital experiences are crafted for us on the basis of our membership in a class—an intersecting set of collective traits, rather than an individual with agency or dignity. To suggest a person alone can challenge these practices by taking individual responsibility for their privacy fundamentally misunderstands our social and political environment.”
Fiddling with your privacy settings on Facebook, Spotify or Strava won’t fix a thing. Waleed Aly warns us against being ‘duped’ by promises to improve controls set at the individual level. Former Australian Senator Scott Ludlam has argued that the Facebook / Cambridge Analytica scandal should instead be the catalyst we need to “draw a line under surveillance capitalism itself, and start taking back a measure of control”.
We would do well to heed the “unequivocal call for more regulation” made by New Zealand Privacy Commissioner John Edwards in his blistering speech addressing the harms caused by Big Tech, at the IAPP ANZ Summit last year. Because privacy laws, as they exist today, are not enough. Privacy laws focus only on the effect of conduct on an individual, who must be identifiable in order to trigger the law’s protection.
Instead, we need recognition of the social and collective impacts of the data economy, and political action to protect us from the invisible threats posed not only to our individual autonomy, but also to our democracy, civil rights and social cohesion.