Happy Halloween dear readers! As you carve your pumpkins, decorate your house with plastic spiders and work on your scary costumes, it seems an apposite time to reflect on … creepiness.
Privacy practitioners are often called upon to determine whether or how a particular initiative can comply with privacy law. But often a more compelling and relevant question is: will this project fly, or will it crash and burn?
The answer to both questions is often “it depends”. Privacy principles themselves are fuzzy law, meaning they offer plenty of blank space around “reasonable expectations” and “take reasonable steps” that the practitioner has to fill in. And then even if you do comply with the law, a backlash from your customers or the wider public can bring your project undone faster than you can say “Australia Card”.
So how are we supposed to figure out in advance what will be considered unduly privacy invasive, and what won’t?
A couple of my favourite privacy academics, Omer Tene and Jules Polonetsky, have proposed a Theory of Creepy to help us figure it out. They suggest that creepiness arises when new technology rubs up against social norms. Sometimes social norms shift to accommodate the new technology – but sometimes they don’t, and a consumer backlash ensues.
Tene & Polonetsky have provided examples of the kinds of activities that might be seen as cool – or creepy – depending on the context:
- Ambient social apps: These take publicly available location data about people and present it to other users, which might end up being regarded as cool (Foursquare) or creepy (Girls Around Me, which showed the social media profiles of women physically near the user at any given time; or the Obama app used in the 2012 US election campaign, which plotted voters’ names, ages, genders and political leanings on maps of residential areas).
- Social listening: This involves monitoring customers via social media, and anticipating their needs or responding to their concerns. This kind of intensive surveillance of individual customers, and unsolicited approaches to them, can be regarded as brilliant best-practice marketing (KLM’s Surprise initiative), or as a disastrous crossover into stalking behaviour (British Airways’ Know Me initiative).
- Data-driven direct marketing: Using a customer’s purchase history to suggest what they might like to buy now, which can be seen as expected business practice (Amazon’s book recommendations), or as so surprising that it becomes the case study in what not to do (Target’s marketing to a teenager it figured out was pregnant before her father did).
- New products: The failure of Google Glass wearers to anticipate and adhere to social norms led to the creation of a new epithet – Glassholes – and sent the product back to the drawing board.
Their conclusion is that any new project requires a social or ethical value judgment to be made – and that this judgment should not be left to the engineers, marketers or lawyers. As Tene & Polonetsky say: “Companies will not avoid privacy backlash simply by following the law. Privacy law is merely a means to an end. Social values are far more nuanced and fickle”.
In my view, that fickleness is the core of the problem. Humans are not rational or consistent in their responses. Why was British Airways’ social listening deemed creepy, but KLM’s deemed cool? Both involved unexpected online identification and analysis of customers waiting for flights, and then real-world interactions with those customers. Why do we accept CCTV manned by unseen agents, but not a guy with a camcorder? I don’t have the answer.
So, how can we avoid creepiness, if we can’t predict where a new initiative will fall on the cool-to-creepy continuum?
Like Stephen, and unlike Tene & Polonetsky, I still find privacy law to be our best starting point. Yes, it is fuzzy law; yes, it has ridiculous loopholes. But at their core, privacy principles represent both common sense and good manners: Only collect what you need. Ask the subject for the information directly. Tell them what you’re going to do with it. Don’t go using data in new and surprising ways. Don’t expose the data unnecessarily. Etcetera.
As I have written before, there are pragmatic ways to develop customer trust and protect privacy but maintain your business objectives, and in many ways, the privacy principles themselves point the way.
The value of proposing a ‘creepiness’ test as part of project management is that it might be a useful way to start a conversation with your marketing department, your engineers or even your CEO, if talking about the law tends to send them to sleep. Anticipating and reflecting community expectations is, of course, also critical to fleshing out your analysis of potential privacy pitfalls. But ultimately, our principles-based privacy law is the best place to start.
Other than on Halloween, better to be cool than creepy.
Photograph © Shutterstock