If you’re asking if your customers trust you, you’re asking the wrong question.
Privacy risk management is not just about legal compliance, but about ensuring that you can meet your customers’ expectations. (In the context of public services, your ‘customers’ are citizens or residents, but the point remains valid.)
Part of meeting expectations will be ensuring that customers trust what you are proposing to do with their data. According to research into trust in emerging technologies, trust matters because it shapes customers’ intentions and motivations, as well as the degree of ‘buy-in’ from both customers and staff. High levels of trust can support change management, project implementation success, and process efficiencies. Meanwhile, pre-existing low levels of trust will frame customers’ experiences and interactions, and distrust will lead to active avoidance behaviours. In the private sector, then, ‘trust’ can offer a competitive advantage.
In a public sector context, both the Productivity Commission and the OAIC have noted that community trust and acceptance – aka having a ‘social licence to operate’ – is vital for projects involving greater data sharing and release. The UK’s privacy regulator has likewise noted that “trust and public engagement is a prerequisite for government systems to work. Greater trust leads to more rapid and complete take up of services across the population being served”.
So what do the stats tell us about trust – who has it, and how to get it?
First, the level of trust Australians place in an organisation to handle their personal information does depend (in part – but I will get to that) on the type of organisation itself. The OAIC’s regular surveys into community attitudes towards privacy reveal that the most trusted organisations are health service providers and financial institutions – although every sector suffered a significant decline in trust between 2013 and 2020. The least trusted organisations are social media companies.
But other data from the OAIC also shows us that the two sectors with the worst record for notifiable data breaches are… health service providers and financial institutions!
So what’s going on here – the organisations with ostensibly the worst data security outcomes are also the most trusted? And if the companies suffering the lowest level of customer trust – hello, Facebook – are miraculously still in business, why are we bothering to care about trust at all?
Clearly, simply asking whether a sector is trusted is not giving us the full picture.
First, trust in a sector as a whole doesn’t necessarily translate into use of that sector as a whole. It’s not like you or I can really choose not to engage at all with the banking sector, or the healthcare sector, let alone with government.
Second, it turns out that gaining a social licence to use data is far more nuanced than simply checking whether your organisation or brand enjoys an underlying level of trust.
Instead, you need to look at a multiplicity of factors which impact on whether any particular project will have a social licence to operate.
A multi-year, eight-nation research project by the World Economic Forum and Microsoft sought to measure the impact of context on individuals’ attitudes towards privacy and the use of their personal information. Their research made two critical findings.
First, there are four types of factors which influence an individual’s degree of trust in any given proposal to use their personal information:
- the situational context – i.e. the nature of the proposal itself
- demographics – research has shown that an individual’s gender, age, ethnicity and country of origin can each influence the value they place on privacy
- culture – local cultural norms also play a part, and
- perceptions – about the strength of legal protections available, as well as about the individual’s own level of confidence navigating technology.
From an organisational point of view, you will only have control over the first of those four factors: the situational context.
Second, in terms of the situational context, there are seven variables that individuals consider when determining whether they would accept any given scenario involving the use of their personal information. Interestingly, the single most important variable affecting the ‘acceptability’ of a scenario was not the type of data at issue, the way it was proposed to be used, the type of organisation or institution seeking to use it, or even the pre-existing level of trust enjoyed by the particular organisation proposing the project – but the method by which the personal information was originally collected.
In terms of the method of collection, any given set of personal information may be broadly categorised as having been directly provided by the subject, indirectly provided via another party, observed, generated or inferred. An individual’s ability to control how his or her personal information may be used depends on both an awareness of the collection, and control over that collection. As awareness and control over the point of collection lessen, so too does trust in the subsequent use of that data. Understanding how personal information is collected therefore becomes critical to understanding the likely community expectations around the use of that data.
And the WEF research found that the type of entity proposing the project – i.e. the sector the organisation is in, such as healthcare, finance or government – turned out to be the least important of all the variables.
So trust in data-related projects is specific to the use case and the design of each project, as well as the type of customers to be affected, far more than it is about underlying levels of trust in particular organisations or sectors.
Here at Salinger Privacy we have a number of clients doing fascinating and valuable work in data analytics, in public interest areas like medical research, or informing public policy on how best to protect children from harm, or how to better educate students or support vulnerable populations. Being able to achieve those objectives depends so much on public trust and gaining a social licence, so getting the privacy settings right in the design of those projects is a critical issue.
Plus, sometimes the law itself will only allow our clients to use or disclose personal information if doing so will be ‘within reasonable expectations’.
(This ‘fuzzy’ nature of privacy law is actually one of the things I love about it – you do need to use your judgment, and think about what your customers would expect, and what you can do to avoid causing them any harm. The interpretation of what is ‘reasonable’ is shifting all the time, and that’s a good thing. It’s how privacy law manages to stay relevant to both new technologies and shifts in community expectations. If the law was more prescriptive it would quickly become out of date. Instead, privacy principles expect organisations, regulators and courts alike to take the pulse of society, and adapt accordingly.)
So – how can an organisation gain its social licence to use personal information? How do you build trust in your project? How do you know if you will be operating ‘within reasonable expectations’?
In addition to addressing the variables highlighted in the WEF research, you should think about transparency. Qualitative research conducted in New Zealand on behalf of the Data Futures Partnership found that being transparent about how data is proposed to be used is a crucial step towards community acceptance, and that in particular, customers and citizens expect clear answers to eight key questions:
- What will my data be used for?
- What are the benefits and who will benefit?
- Who will be using my data?
- Is my data secure?
- Will my data be anonymous?
- Can I see and correct data about me?
- Will I be asked for consent?
- Could my data be sold?
The answers to those questions will be different for every project, and have almost nothing to do with the pre-existing level of trust enjoyed by any particular entity or brand.
So my takeaway message for you is this. Ask not whether your customers trust you; ask whether you have designed each of your data projects to incorporate the elements needed to make those projects trustworthy.
If you would like to know more about the factors which influence trust in data use projects, join us for a free webinar on 5 May 2021 to celebrate Privacy Awareness Week!
Invite your colleagues who work in privacy or with data to our Masterclass in Data, Privacy and Ethics. We will draw together global research into the factors that influence customer trust, and our own experience guiding clients through data analytics, business intelligence and research projects, to offer a framework for balancing business objectives with legal and ethical concerns about the use of personal information. See the Webinar Overview to register.
If you enjoyed this blog, subscribe to our newsletter to receive more privacy insights and news every month – sign up below.