Anorexia. Violent extremism. Holocaust denial. Anti-vaccine conspiracy theories. Gambling addiction. Hate speech. False claims about stolen elections. Genocide.
You might not think of these as privacy harms, but they have one thing in common: they have all been promoted or fuelled by the manipulation and abuse of our personal information.
We are currently witnessing a profound fracturing of societies and communities, driven by the hyper-personalisation of content consumed in the digital environment. This is squarely a privacy concern, for two reasons.
First, because it is the sucking up of our data in privacy-invasive ways which creates digital platforms’ power.
Second, because the power of those platforms is then used to target us individually: to segment us, manipulate us, and shape our experience of the world through filter bubbles built by algorithms fed on our data.
The end result of all the filter bubbles and echo chambers and dark patterns and ‘emotional contagion’ and misinformation and disinformation and manipulation of news feeds is that instead of being enriched, our world actually becomes smaller, our choices more limited. The products we are offered, the prices we pay, the job ads we see, the news stories we read, the ‘truth’ we are told: all of this becomes decided for us by machines built not to serve us, but to deliver profits to Big Tech’s owners. And the more divisive and outrageous the content, the higher the ‘engagement’, and the more astronomical the profits.
That algorithmic narrowing and manipulation of our choices ultimately affects our autonomy, and our dignity. Yet that is what privacy law is supposed to protect: our autonomy, which is our ability to make choices for ourselves.
Much has been said in recent years about the role of Big Tech in political polarisation, the spread of misinformation, the lessening of trust in both experts and traditional institutions and the consequent weakening of democratic governments. But not many mainstream commentators identify the root cause of these problems as invasions of privacy. (In the documentary The Social Dilemma, privacy doesn’t even rate a mention until near the end.)
Sure, privacy advocates, regulators and academics have been saying it. As NZ Privacy Commissioner, John Edwards passionately warned of the need for regulation to address the power asymmetry of Big Tech. And as Chair of the consumer protection and competition regulator, the ACCC, Rod Sims called out how the privacy issues raised by Google and Facebook can’t be divorced from issues of market power. But privacy law has not stopped them.
With the benefit of largely untrammelled intrusive data collection and profiling practices, online behavioural advertising has become online behavioural engineering: manipulation and behaviour modification on steroids.
Social media and digital platforms have become addictive and toxic because of the data that is harvested from us. Our personal information has not just been monetised, it has been weaponised, against us. ‘Personalised experiences’ have become echo chambers and filter bubbles, in which political divides become entrenched, hatred builds, and misinformation and disinformation about everything from vaccines to elections thrive. Waleed Aly has compared the power of Google with the power of a nation state like China, writing: “Imagine a foreign nation with the power to manipulate our individual psychology. Imagine us handing them such power over the critical infrastructure of our democracy. To be fair, we didn’t knowingly hand it to the tech giants either. They seized it when we weren’t looking, algorithm by algorithm.”
The result is a roll-call of human misery.
Pharmaceutical companies side-stepping consumer protection laws to bombard users with ads for addictive opioids based on their Google search terms.
Instagram damaging teenage girls’ health, with an algorithm which “led children from very innocuous topics like healthy recipes … all the way to anorexia-promoting content over a very short period of time”.
Betting companies grooming suicidal gambling addicts.
Facebook allowing advertisers to target – and exclude – people on the basis of their ‘racial affinity’, amongst other social, demographic and religious characteristics.
Facebook facilitating targeted crypto scams.
YouTube allowing misinformation about COVID-19, disinformation about elections, and the amplification of hate speech.
Facebook promoting to advertisers their ability to target psychologically vulnerable teenagers.
Facebook knowingly radicalising users by recommending groups like QAnon.
Inciting the 6 January riot at the US Capitol in Washington DC.
Fomenting ethnic violence in Ethiopia.
Inciting genocide in Myanmar.
Yet from the digital platforms to the advertisers and companies which benefit, organisations engaging in intrusive online tracking, profiling and targeting have largely been able to side-step privacy regulation, often by claiming that the data they are using is not identifiable, thus not ‘personal information’ regulated by privacy laws. This ignores the underlying objective of privacy laws which is to prevent privacy harms, in favour of semantic arguments about what is ‘identifiable’.
Some of those companies might say that they are protecting your privacy because they do something fancy like hash (scramble) your email address before sharing and matching up your data, but let’s call that what it is: bullshit.
So maybe your ‘real’ email address is never shared out in the open. But if data about your online habits is being tracked, shared between unrelated companies on the basis of your email address, and then used to profile you and treat you differently (for example, by showing you different products or prices), or to reach you with micro-targeted ads or personalised content or messaging – then your personal information is being shared without your consent, and your privacy is being invaded.
Let’s look at Facebook, for example. Advertisers provide details about their customers to Facebook, using a hashed version of their customers’ email addresses. Facebook can then target ads to precisely those people, having matched the hashed email addresses from the advertiser to the hashed email addresses it already holds for its own users. But because neither company is sharing ‘identifiable’ data (i.e. ‘raw’ or unhashed email addresses), the chief privacy officer at Facebook claims that they can serve ads “based on your identity… but that doesn’t mean you’re ‘identifiable’”.
In other words: data which Facebook and other industry players describe as not identifiable, and thus not regulated by privacy laws, is being used to match up customer records from different devices and different apps, and share user attributes between different companies, without your consent.
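To make the mechanics concrete, here is a minimal sketch in Python, using made-up email addresses and assuming SHA-256 hashing (a common choice for this kind of matching), of how two companies can match up the same person without ever exchanging a ‘raw’ email address:

```python
import hashlib

def hashed_key(email: str) -> str:
    """Normalise an email address and hash it; the same input always gives the same key."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Made-up customer lists held by two unrelated companies.
advertiser_customers = ["jane@example.com", "sam@example.com"]
platform_users = ["jane@example.com", "alex@example.com"]

# Each company hashes its own list; only the hashes are ever exchanged.
advertiser_hashes = {hashed_key(e) for e in advertiser_customers}
platform_index = {hashed_key(e): e for e in platform_users}

# The platform can still work out exactly which of its users are the advertiser's customers.
matched = [platform_index[h] for h in advertiser_hashes if h in platform_index]
print(matched)  # ['jane@example.com']
```

The point of the sketch is that the hash is deterministic: anyone who already holds your email address can compute the same key and link their records straight back to you.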
Another example can be found in the data broking industry. Data broker LiveRamp, formerly known as Acxiom, says they draw data from multiple publishers, sites, devices and platforms (aka “gain second-party segments or third-party data”), build customer profiles and then target ads to around 7 million Australians online. Their website states that “LiveRamp removes directly identifiable personal information and replaces it with pseudonymised record keys during our matching process. This means you can use data with confidence”. (A previous version of their website I saw described this as ‘anonymization’, but it has since been revised to label this as ‘pseudonymisation’.)
But as Justin Sherman wrote recently in Wired, the carefully deployed language around de-identification is a furphy: “that data brokers claim that their ‘anonymized’ data is risk-free is absurd: Their entire business model and marketing pitch rests on the premise that they can intimately and highly selectively track, understand, and microtarget individual people”.
This semantic misdirection about data not being ‘directly identifiable’ is happening not only in the United States, where the narrower phrase ‘PII’ is used instead of ‘personal information’, but here in Australia too. Australian industry analysts have written about how entirely unrelated companies are now matching their own sets of customer data in order to target individual consumers – such as personally targeted ads for Menulog shown on smart TVs during breaks in Channel 7 content, using hashed email addresses via data broker LiveRamp.
So while individual identities are hidden during the matching process, the end result is still that Company A can find out new information about their customers, and/or individually target people who are not their customers but who have ‘lookalike’ characteristics, using data collected about those individuals by Companies B, C and D. Using various methods, the data collected about you while you are using your banking app can now be matched with the data collected about you when you look up flight prices while logged into your frequent flyer account, and then matched to the data collected about you when you watch streaming TV, including whether or not you instantly respond to the fast food ad you saw on TV. Did you consent to all of that?
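As an illustration of what that matching makes possible, here is a hypothetical sketch (invented data, with a single shared record key standing in for a hashed email) of how attributes held by three unrelated companies can be stitched into one profile of the same person, without a name appearing anywhere:

```python
from collections import defaultdict

# Invented data sets held by three unrelated companies, each keyed by the same
# pseudonymous record key (standing in for a hashed email). No name appears anywhere.
bank_data    = {"a1b2c3": {"income_band": "high", "big_spend_category": "travel"}}
airline_data = {"a1b2c3": {"searched_route": "SYD-LAX", "loyalty_tier": "gold"}}
tv_data      = {"a1b2c3": {"streamed": "cooking shows", "acted_on_fast_food_ad": True}}

# A broker-style join: merge everything known about each record key into one profile.
profiles = defaultdict(dict)
for source in (bank_data, airline_data, tv_data):
    for key, attributes in source.items():
        profiles[key].update(attributes)

print(profiles["a1b2c3"])
# One combined profile of one person, assembled from banking, travel and TV data.
```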
This is precisely the kind of unfair and invasive online tracking, profiling and microtargeting of individuals across the web, for the purpose of differential treatment, that the community expects to fall within the scope of the Privacy Act.
Yet marketeers describe this as ‘privacy compliant’, because they use pseudonyms instead of real names or email addresses to facilitate their data matching and build out their customer profiles before they target you. What a joke.
The question is, what is the government going to do, to stop this Big Tech racket? Because clearly the market incentive is to keep exploiting our personal information until the law stops it.
We need law reform, to ensure that these data practices are firmly within scope of privacy regulation. No more ‘we don’t share your PII’ semantic trickery.
We need to start by updating the current law’s flawed and outdated premise that privacy harms can only be done to ‘identified’ individuals, and that therefore only ‘identifiable’ data needs the protection of the law.
To ensure that the Australian Privacy Act is capable of protecting against digital harms, which is both what the community expects and the stated objective of the Australian Government’s current legislative review, the definition of personal information must be reformed to indisputably cover the individuation, as well as the identification, of individuals.
Individuation means you can disambiguate a person in the crowd. In the digital world, this means the ability to discern or recognise an individual as distinct from others, in order to profile, contact, or target them and subject them to differential treatment – without needing to know their identity. This might take the form of a decision to show someone a certain ad or exclude them from seeing a particular offer, display a different price, make a different offer, or show them different information. The result might be as benign as the act of showing a profiled customer an ad for sneakers instead of yoga gear, but it could also be a decision to target vulnerable individuals with ads for harmful products, misinformation, or extremist content.
As Sven Bluemmel, the Victorian Information Commissioner, put it recently: “I can exploit you if I know your fears, your likely political leanings, your cohort. I don’t need to know exactly who you are; I just need to know that you have a group of attributes that is particularly receptive to whatever I’m selling or whatever outrage I want to foment amongst people. I don’t need to know your name. … I just need a series of attributes that allows me to exploit you”.
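A small, hypothetical sketch of that idea: given profiles keyed only by made-up pseudonymous IDs, a simple attribute filter is all it takes to single out exactly the people to target (or exclude), with no names involved:

```python
# Invented profiles keyed only by pseudonymous IDs; no names or email addresses.
profiles = {
    "u-071": {"age_band": "13-17", "interests": ["dieting", "fitness"], "mood_signal": "low"},
    "u-342": {"age_band": "35-44", "interests": ["gardening"], "mood_signal": "neutral"},
    "u-918": {"age_band": "13-17", "interests": ["dieting"], "mood_signal": "low"},
}

def in_target_cohort(profile: dict) -> bool:
    # The decision to target (or exclude) is made purely on attributes, never on identity.
    return (profile["age_band"] == "13-17"
            and "dieting" in profile["interests"]
            and profile["mood_signal"] == "low")

targets = [uid for uid, profile in profiles.items() if in_target_cohort(profile)]
print(targets)  # ['u-071', 'u-918'] -- singled out, yet never 'identified'
```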
Privacy schemes elsewhere in the world are already broadening the notion of ‘identifiability’ as the threshold element of their definitions, or abandoning it altogether, such as the California Consumer Privacy Act (CCPA) 2018 and the 2019 international standard for Privacy Information Management, ISO 27701. Each has either explicitly expanded on the meaning of identifiability, or introduced alternatives to it.
For example the CCPA includes, within its definition of personal information, data which is “capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household”, without first needing to pass an identifiability test. This theme is further fleshed out within the definition of ‘unique identifier’, which means “a persistent identifier that can be used to recognize a consumer, a family, or a device that is linked to a consumer or family, over time and across different services, including, but not limited to, a device identifier”.
Last year the US Uniform Law Commission voted to approve the Uniform Personal Data Protection Act, a model bill designed to provide a template for uniform state privacy legislation. The model bill defines personal data to include data “without a direct identifier that can be reasonably linked to a data subject’s identity or is maintained to allow individualized communication with, or treatment of, the data subject”.
Another example is the New York State Privacy Bill 2021, which clearly intends to include, within the scope of what is ‘identifiable’ for the purposes of its definition of personal data, both tracked online behaviour (such as browser searches and an individual’s “interaction with an internet website, mobile application, or advertisement”), as well as geolocation data, and any inferences drawn from that information.
Plus of course the GDPR’s definition of ‘personal data’ includes online identifiers and the notion of ‘singling out’. The Austrian DPA recently ruled that IP addresses (as collected via Google Analytics) constitute personal data, because they allow an entity to ‘single out’ a data subject within the meaning of recital 26 of the GDPR. Further, the DPA found that an actual identification is not necessary, and that there is no requirement that all the information enabling the identification of the data subject must be in the hands of one entity.
Are we on the cusp of change here in Australia too?
The Australian Government has proposed, in the Discussion Paper on the review of the Privacy Act, that the definition of ‘personal information’ should be reformed to include “a non-exhaustive list of the types of information capable of being covered by the definition of personal information”. The examples given include location data, online identifiers and “one or more factors specific to the physical, physiological, genetic, mental, behavioural (including predictions of behaviours or preferences), economic, cultural or social identity or characteristics of that person”.
Importantly, the Discussion Paper says that the definition would therefore cover “circumstances in which an individual is distinguished from others or has a profile associated with a pseudonym or identifier, despite not being named”.
Now some, including the Big Tech and marketing industry players, will argue that they don’t want the Privacy Act reformed, lest it become the ‘law of everything’. But I believe we should take an expansive view of privacy, and a root-cause look at privacy-related harms.
As a threshold definition, ‘personal information’ simply creates the boundaries of the playing field. Other parts of the law – the privacy principles – do the heavy lifting when it comes time to set the rules of play, deciding which data practices are fair, and which should be prohibited. But if much of the data which fuels the digital economy isn’t even considered to be part of the game, how can we ever agree on the rules?
We need the Privacy Act to explicitly include, within its scope for regulation, information which can be used to individuate and potentially harm people, even if they cannot be identified from the information in a traditional sense.
In my view privacy law must become ‘the law of everything’, because in the digital economy, data about people is everything.