There is something magical about the number seven. The seven deadly sins, the seven dwarfs, the seven-year itch, those plucky child detectives who formed the Secret Seven, and the barn-raising dance number from Seven Brides for Seven Brothers. Plus, of course, the seven habits of highly effective people.
Here’s our own set of seven. They might not be magical, but hopefully they are practical. In addition to the PIA tools we have available via our Compliance Kits, these are our seven tips on how to make sure that a Privacy Impact Assessment is effective.
Do more than a legal compliance check
Despite the definition of PIAs in the Privacy Act making clear that they are about measuring and mitigating “the impact that the activity or function might have on the privacy of individuals”, many PIAs are conducted as if they were simply a compliance check against statutory privacy principles. They test that the organisation commissioning or conducting the activity will comply with the law, without ever asking what impact the activity will have on individuals.
Body scanning technology offers an example of how looking for privacy impacts is broader than simply reviewing compliance with data privacy laws. When first trialled at airports in the wake of the 11 September 2001 terrorist attacks, full body scanners offered screening officials a real-time image of what a passenger looked like naked. Even though the image was not visible to anyone else, the image was not recorded, and no other ‘personal information’ was collected by the technology (so the technology posed no difficulty complying with the Privacy Act), the visceral public reaction against the invasion of privacy was immediate. As a result, the technology was re-configured to instead show screening officers a generic outline of a human body, with heat maps indicating where on any given passenger’s body the security staff should pat down or examine for items of concern.
Review the ecosystem, rather than elements in isolation
PIAs which focus on one element of a project or program, rather than the whole ecosystem, will often miss the point. A PIA should examine not just a piece of tech in isolation, but the design of the entire ecosystem in which the tech is supposed to work, including legal protections, transparency and messaging, which together shape how well users understand how the technology works. That understanding makes a difference to users’ level of trust, because it allows them to make more informed decisions for themselves.
An example is the PIA of the COVIDSafe app, which did not examine compliance by, or risks posed by, the State and Territory health departments which would actually be accessing and using the identifiable data collected by the app. Each of those health departments was covered by a different part of the patchwork of privacy laws in Australia (and in the case of SA and WA, by no privacy laws at all). The scope of the PIA was limited to the federal Department of Health’s compliance with the federal Privacy Act. The PIA Report’s authors called out this limitation in their report, along with the lack of time available to consult with State and Territory privacy regulators, civil society representatives or other experts. Despite this, the PIA was reported in the media as giving ‘the privacy tick’ to the app.
Test for necessity, legitimacy and proportionality
A PIA should not only be about assessing one potential vector for privacy harm such as the compromise of personal information.
The OAIC has made clear that a PIA should assess:
- whether the objective of an activity is a legitimate objective,
- whether or not the proposal (in terms of how it will handle personal information) is necessary to achieve that objective, and
- whether or not any negative impacts on individuals are proportionate to the benefits or achievement of the objective.
In particular, a PIA should identify “potential alternatives for achieving the goals of the project”, which could be less privacy-invasive.
The OAIC’s determination against 7-Eleven offers a good example. The OAIC found that while the company’s objective of “understanding customers’ in-store experience” was legitimate, the covert collection of biometric information to achieve that objective was neither necessary nor proportionate to the benefits. (The store had implemented facial recognition technology, without notice or consent, to test who was answering its in-store customer satisfaction surveys.)
In the Clearview AI case, the OAIC further established that the tests of “necessity, legitimacy and proportionality” are to be determined with reference to “any public interest benefits” of the technology; the commercial interests of the entity are irrelevant.
Test the tech
Again, the PIA of the COVIDSafe app is a prime example. This PIA turned out not to be a review of the app at all. The reviewers could not test the app’s functionality, let alone test whether assertions made about the data flows were correct. The terms of reference for the PIA were simply whether the Department of Health could lawfully participate in the proposed data flows.
This is related to the failure to test for proportionality. A proper assessment of privacy impacts on individuals should involve balancing benefits against risks. If a PIA cannot test whether the benefits will actually or even likely be achieved, no judgment can be made about whether or not the privacy risks will be outweighed by the benefits. Had the PIA reviewers been able to test the functionality of the app, and had they therefore been able to determine – as later became apparent – that the app did not work on iPhones and had other technical problems as well, then a judgment could have been made much sooner that the benefits did not outweigh the risks to privacy (let alone the financial costs of the project) at all.
Consider customer expectations and the role of social licence in gaining trust
Public trust is not as simple as asking: “Do you trust this organisation / brand?” It’s about asking: “Do you trust this particular way your data is going to be used for this particular purpose, can you see that it will deliver benefits (whether those benefits are personally for you or for others), and are you comfortable that those benefits outweigh the risks for you?”
When you realise that this more complex set of questions is the thinking behind consumer sentiment, you can see how important it is to assess each different data use proposal on a case-by-case basis, because the nature of the proposal, and the context it is in, will make each value proposition unique. That means the balancing act between benefits and risks from a privacy point of view needs to be done fresh for every different project.
Utilise multiple mitigation levers
Levers to address privacy risks can include:
- technology design
- technology configuration (i.e. choosing which settings to use when implementing off-the-shelf tech)
- legislation
- policy (including procedures, protocols, standards, rules etc)
- governance
- public communications
- user guidance, and
- staff training.
Comparing two different COVID-related apps offers a good example of how different levers may be pulled to mitigate similar privacy risks. The development of the federal government’s COVIDSafe app was rightly lauded for including strong, bespoke legal privacy protections (such as preventing use for law enforcement purposes) developed very early on, yet the app itself had design flaws which could leak data to bad actors. By contrast, the NSW government’s ‘Covid Safe Check-in’ app did not have specific legal protections until months after its launch, but it had more protections baked into the app’s design: it put the user in complete control of when the app was used, compared with the COVIDSafe ‘always on’ design.
Follow the recommendations
This should go without saying, but simply conducting a PIA is not enough. Unless findings and recommendations to mitigate privacy risks are followed, a PIA will be nothing more than a smokescreen, offering a veneer of respectability to a project.
In particular, a PIA may result in a recommendation to significantly alter the course of a project. Project teams need to be prepared for this possibility. Make sure your project teams allow enough time to absorb recommendations from a PIA, and even to pivot, pause or scrap the project if necessary.
So there you have it: our seven tips for making your PIAs effective. It’s not magic, just logic.
With easy-to-use templates + Salinger Privacy know-how via checklists and more, we can help you steer your PIA in the right direction. Take a look at our complete range of Compliance Kits to see which suits you best.