Brrr, winter is here! Time to crack open a red to enjoy with a lovely rich home-cooked lasagna. Except hang on – your pasta-buying habits have you marked down as a poor car insurance risk. You’d better hope you have a nice strong handshake to compensate.
The lure of Big Data analytics is that with enough data, organisations can find insightful correlations, which they can then use to make business predictions or decisions. Woolworths’ insurance arm allegedly only makes car insurance offers to those considered at lower risk of car accidents – which apparently has something to do with consuming red meat rather than pasta – and it knows what you eat because you shop at its supermarkets.
Of course, correlation is not the same as causation. There is a correlation between ownership of super-yachts and very high incomes, but going out and buying a super-yacht will not deliver you a pay rise. And in any case the correlation may be resting on some shaky assumptions. (What would Woolworths make of my vegetarian friend who drives to the supermarket to buy red meat for her husband and kids, but doesn’t eat it herself?)
Nonetheless, the age of the algorithm is here. From deciding who to hire to predicting when a customer is pregnant, and from delivering targeted search results to identifying the students at risk of failure or the prisoners at risk of re-offending, business and government decisions are being made according to the correlations found through Big Data processing.
At a recent function, UQ legal academic Dr Mark Burdon suggested that in a future ruled by Big Data, our lives will become “trying to beat the algorithm”. In a predictive and pre-emptive world, empathy, forgiveness, rehabilitation, redemption, serendipity, autonomy and free will all become so much more difficult.
So at what point do these predictions become too intrusive? When can an organisation even lawfully collect and use personal information for Big Data projects? How does an organisation tread the fine line between being innovative and smart about its decision-making, and being downright creepy?
Following a set of statutory privacy principles is obviously a good place to start, but sometimes mere compliance is not enough. Organisations need to manage customer expectations and accurately measure the shifting sands of public opinion, if they are to avoid a customer backlash.
In fact, getting privacy ‘right’ can be a business enabler. Harvard Business Review recently published case studies of businesses which improved their privacy practices by offering greater customer control and choice, and were rewarded with more useful data from their customers, rather than less.
Customer trust is the key.
There are pragmatic ways to develop customer trust and protect privacy while still meeting your business objectives in a Big Data project: strategically quarantining particular data types, offering personalisation, prompting a gatekeeper review between the analytics and operationalisation phases, using role- and needs-based access controls in the presentation of data, finding the right de-identification technique (one example is sketched below), and establishing effective data governance.
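To make the de-identification point concrete, here is a minimal sketch, in Python, of one common technique: pseudonymisation, where a direct identifier is replaced with a keyed hash before the data reaches the analytics environment. The pseudonymise function and the placeholder key are illustrative assumptions, not a prescription for any particular project.

```python
import hashlib
import hmac

# Hypothetical secret key, held separately from the analytics dataset
# (for example, by a data custodian performing the gatekeeper role).
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(customer_id: str) -> str:
    """Replace a direct identifier with a keyed hash (a pseudonym).

    The same customer_id always maps to the same token, so analysts can
    still link records belonging to one customer, but they cannot recover
    the underlying identity without access to the key.
    """
    return hmac.new(SECRET_KEY, customer_id.encode("utf-8"), hashlib.sha256).hexdigest()

# Two records for the same customer stay linkable, but the raw
# identifier never enters the analytics environment.
print(pseudonymise("customer-12345"))
print(pseudonymise("customer-12345"))  # identical token
print(pseudonymise("customer-67890"))  # different token
```

A caveat worth remembering: pseudonymised data can still be re-identifiable when combined with other datasets, so it may remain ‘personal information’ for the purposes of privacy law. The technique reduces risk; it does not eliminate it.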
We have drawn together global research into the factors that influence customer trust, and our own experience guiding clients through advanced analytics and business intelligence projects, to develop a framework to balance business objectives with legal and ethical concerns about Big Data.
The resulting eBook is designed to guide organisations through how to engender customer trust by building privacy protection into Big Data projects, from the analytics stage through to operationalising insights into decision-making. It’s available now from our Publications page – you can download it while that lasagna’s in the oven.