How do you solve a problem like Facebook?
How do you catch a cloud and pin it down?*
By now we all know the story: Facebook allowed apps on its platform that enabled a shady outfit called Cambridge Analytica to scrape the profiles of 87 million users, in order to serve up targeted ads to benefit the Trump election campaign in 2016. More than 300,000 Australian users of Facebook were caught up in that particular example of data harvesting, despite only 53 Australians using the app, because the app harvested data not only from its users but from all their Facebook friends as well.
Sitting here in Australia, you might be thinking: So what? I never saw a Trump ad, or if I had I would have ignored it because I’m not in America. Or even if the same thing happened here, it’s just ads anyway, I can make up my own mind.
But that’s not the whole story. The Facebook scandal is about so much more than serving up ads in a foreign election campaign. Facebook, and other companies involved in data mining and analytics, are invading our privacy and harming us economically and socially, in ways that are only just starting to become clear.
It’s not just the data you choose to share
Until recently, Facebook was largely successful in steering most discussions about privacy towards either information security or the controls users have over what they post and who is allowed to see those posts. CEO Mark Zuckerberg likes to say that users have choices, and that users stay in control of what they choose to share.
In one sense that’s true: you get to choose which photos you post on Facebook, and which inspirational quotes or worthy news stories you share with your friends.
But in another sense it’s not true at all. Because the information you post is not the whole story. It’s only the tip of the iceberg of data Facebook has collected about you.
Every time you go online, you leave a trail of digital breadcrumbs. Facebook has been busily sweeping up those breadcrumbs, and using them to categorise and profile you. Facebook obviously knows when you click on a Facebook ‘like’ button; but it also knows every time you simply look at a website that has a Facebook ‘like’ button somewhere on it, unless the web developer has gone out of their way to find tools to block that tracking. Because the button is loaded from Facebook’s servers, your browser sends Facebook your cookie and the address of the page you are reading, whether or not you ever click.
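For the technically inclined, here is a toy sketch in Python of why merely loading a page with an embedded ‘like’ button is enough to build a browsing history. The names are invented and this is not Facebook’s actual code; it simply illustrates the mechanism.

    # Toy sketch (not Facebook's actual code) of embedded-widget tracking.
    # A page embedding a 'like' button triggers a request to the tracker's
    # servers, carrying the tracker's cookie plus the address of the page.
    from collections import defaultdict

    browsing_history = defaultdict(list)  # cookie_id -> pages viewed

    def log_widget_request(cookie_id, referring_page):
        # Runs every time a page embedding the widget is merely viewed;
        # no click is needed.
        browsing_history[cookie_id].append(referring_page)

    log_widget_request("user-123", "https://example.com/managing-anxiety")
    log_widget_request("user-123", "https://example.com/dating-profiles")
    # The tracker now holds a browsing profile the user never chose to share.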
So if you only post or ‘like’ stories about inspirational mountain climbers and funny cat videos, but also do things online that you don’t share with your family, friends or work colleagues (like looking at stories about abortion or dealing with infidelity, googling how to manage anxiety or erectile dysfunction, whingeing about your employer in a chatroom, or spending hours reviewing dating profiles, gambling, playing Candy Crush or shopping obsessively for shoes) – Facebook has you pegged anyway.
Plus, Facebook obtains data from other sources which know about your offline purchases, to build an even richer picture of who you really are. And of course, Facebook may have access to your address book, your location history, the contents of your private messages, and depending on your brand of phone, possibly even a history of your phone calls and text messages.
(And even if, like me, you have never had a Facebook account, Facebook still monitors non-users and creates ‘shadow profiles’ on us, based on information scraped from other people, profiles over which we non-users have no visibility or control. Facebook has been conducting what my colleague Steve Wilson and I have described as unlawful indirect collection of personal information, including photographs for facial recognition purposes, for many years now. Regulators are only just starting to have some success in pushing back: a Belgian court has found, in favour of the Belgian DPA, that Facebook’s collection of data on non-users is illegal.)
All that information is used to draw inferences about your preferences, and to predict your likely behaviour. The results are then used to categorise and profile you, and ultimately to target you, in a process usually described as ‘online behavioural advertising’.
It’s not ‘just ads’
The objective of online behavioural advertising is to predict your purchasing interests, and drive a purchase decision. So far, the same as any other advertising. But online, the implications for us as individuals are much greater.
In the hard copy world, advertisers will choose what ads to place in which newspaper or magazine, based on the target audience for that publication, and what they know about the demographics – in aggregate – of the readership. You might place an ad for a luxury sedan in the Australian Financial Review, an ad for a family SUV in the Australian Women’s Weekly, and an ad for a ute in Fishing World. Anyone can walk into a newsagent or library, and buy or flick through a newspaper or magazine. Everyone looking at that newspaper or magazine will see exactly the same ads as everyone else.
But in the digital world, advertisers might want to find busy middle class mums – if that’s their target market for a family SUV – no matter what they read online. Ad space is sold according to precisely whom the advertiser wants to reach. Enter micro-targeting. Facebook’s promise to advertisers is that it can find exactly who you want, and show them your ad – and exclude everybody else. So two people reading the same newspaper story, or looking at the same website at the same time, will see two different ads.
However, by allowing exclusion, the platform also allows discrimination. Facebook has been caught allowing advertisers to target – and exclude – people on the basis of their ‘racial affinity’, amongst other social, demographic, racial and religious characteristics. So a landlord with an ad for rental housing could prevent people profiled as ‘single mothers’ from ever seeing their ad. An employer could prevent people identifying as Jewish from seeing a job ad. A bank could prevent people categorised as ‘liking African American content’ from seeing an ad for a home loan.
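To see how little machinery this takes, here is a toy Python filter; the field names are hypothetical and this is not Facebook’s real targeting system, but the logic is the same: the excluded person is simply never shown the ad, and never learns it existed.

    # Toy illustration of audience exclusion; all tag names are invented.
    def should_show_ad(profile_tags, include, exclude):
        # Show the ad only to the targeted group, never to the excluded one.
        return bool(profile_tags & include) and not (profile_tags & exclude)

    include = {"renter"}
    exclude = {"single_mother"}  # the kind of exclusion that becomes discrimination

    applicant = {"renter", "single_mother"}
    print(should_show_ad(applicant, include, exclude))
    # False: the housing listing is invisible to her.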
The opaque nature of online behavioural advertising also allows fake ads and fake news to proliferate. Further, the content we see is so filtered that we each live in an individually tailored echo chamber which serves only to reinforce stereotypes or push people towards extremism. (Consider for example the Facebook ads used for a biopic about the hip-hop group N.W.A: the ad for ‘white’ audiences highlighted gang culture, guns and police chases, while the ad for ‘black’ audiences suggested their music was art and a form of non-violent protest.)
Existing patterns of social exclusion, economic inequality, prejudice and discrimination are further entrenched by micro-targeted advertising, which is hidden from public view and regulatory scrutiny.
Predictive analytics can narrow or alter your life choices
Once we move beyond straight-up advertising and into predictive analytics, the impact on individual autonomy becomes more acute. Big Data feeds machine learning, which finds patterns in the data; from those patterns, new decision rules (algorithms) are built. Algorithms predict how a person will behave, and suggest how they should be treated.
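As a minimal sketch of that pipeline, assuming the widely used scikit-learn library and some entirely invented features, the whole process fits in a few lines:

    # Minimal sketch of the Big Data -> model -> prediction pipeline,
    # using scikit-learn; the features and labels are invented.
    from sklearn.linear_model import LogisticRegression

    # Historical records: [pages_viewed_per_day, late_payments] -> defaulted? (1/0)
    X = [[5, 0], [40, 3], [12, 1], [60, 4], [8, 0], [55, 5]]
    y = [0, 1, 0, 1, 0, 1]

    model = LogisticRegression().fit(X, y)  # the 'rule' is learned, not written

    # The rule is then applied to new people, who never get to see or contest it:
    print(model.predict_proba([[45, 2]])[0][1])  # estimated probability of default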
Algorithms can lead to price discrimination: Uber, for example, knows how much battery life your phone has left, and knows that customers with a dying battery are more willing to accept surge pricing. Or market exclusion, like Woolworths only offering car insurance to customers it has decided are low risk, based on an assessment of the groceries they buy.
(So when you read about companies ‘tailoring their offers’ for you, it’s not just discounts they could be offering you. It can mean the price you see is higher than another customer; or you might not see the product or service exists at all.)
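A toy pricing function makes the point; the signals and multipliers here are invented for illustration, not drawn from any company’s actual system.

    # Toy 'tailored offer' function; signals and numbers invented for illustration.
    from typing import Optional

    def quoted_price(base, profile) -> Optional[float]:
        if profile.get("assessed_risk") == "high":
            return None                    # market exclusion: no offer shown at all
        price = base
        if profile.get("battery_low"):     # a proxy for urgency and willingness to pay
            price *= 1.5
        return price

    print(quoted_price(20.0, {"battery_low": True}))      # 30.0: same ride, higher price
    print(quoted_price(20.0, {"assessed_risk": "high"}))  # None: the product 'does not exist'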
Banks have been predicting the risk of a borrower defaulting on a loan for decades, but now algorithms are also used to determine who to hire, predict when a customer is pregnant, and deliver targeted search results to influence how you vote.
Algorithms are also being used to predict which students are at risk of failure and which prisoners are at risk of re-offending, and then to launch interventions accordingly. Based on some deeply unethical psychological experiments it conducted on almost 700,000 unsuspecting users some years ago, when it altered news feeds to manipulate users’ emotional states, Facebook now believes it can predict people at risk of suicide, and offers intervention strategies to help. Even leaving aside the accuracy of that claim, interventions are not all well-intentioned. It was revealed last year that Australian Facebook executives were touting to advertisers their ability to target psychologically vulnerable teenagers.
Instead of asking or assessing us directly, businesses and governments increasingly make decisions about us according to algorithms designed on the basis of correlations found through Big Data processing.
Automated decision-making diminishes our autonomy, by narrowing or altering our market and life choices, in ways that are not clear to us. People already in a position of economic or social disadvantage face the additional challenge of trying to disprove or beat an invisible algorithm.
In a predictive and pre-emptive world, empathy, forgiveness, rehabilitation, redemption, individual dignity, autonomy and free will are programmed out of our society.
Privacy is a collective right
Waleed Aly has written about how privacy (or more precisely the lack of it) is no longer an individual’s problem – it has become society’s problem: “In the networked world of Facebook, your lack of privacy is everyone else’s problem. You could dump Facebook altogether and you’d still be living in a country whose democracy is vulnerable to corruption in new ways”.
Fiddling with users’ privacy settings on Facebook won’t fix a thing. Aly warns us against being ‘duped’ by promises to improve controls set at the individual level. Instead, we need collective, political action. Scott Ludlam has similarly argued that this latest Facebook scandal should be the catalyst we need to “draw a line under surveillance capitalism itself, and start taking back a measure of control”. We need to remember that above all we are citizens first, consumers and users second.
If we want our lives to be ruled by human values and individual dignity, instead of by machines fed on questionable data, we need robust, enforced and globally effective privacy laws. Specifically, what we need is for the American legislature to pass effective privacy laws which rein in Facebook and the data brokerage industry, imposing limits on what personal information they are allowed to collect, and the purposes for which it can be used. The self-regulatory model of privacy protection favoured in America (but rejected by most of the rest of the developed world) has failed us all.
The GDPR commences this week. Among its obligations, businesses and governments must provide understandable explanations of the logic behind their automated decisions, and allow people to seek human review of wholly automated decision-making. This is a step in the right direction, which Australia, the US and the rest of the world should follow.
* with apologies to Rodgers and Hammerstein
Photograph (c) Shutterstock