Recent caselaw demonstrates that privacy laws reach further than some organisations might expect.
Introduction: the identifiability test
Most information privacy and data protection laws around the world have as their starting point some notion of identifiability. Legal obligations will typically only apply to data that relates to an ‘identifiable’ person.
For example, Australian privacy laws create privacy principles, which apply only to data which meets the definition of “personal information”. The Australian Privacy Act defines this as: “information or an opinion about an identified individual, or an individual who is reasonably identifiable”.
The point of this legal definition is that if no individual is identifiable from a set of data, then the privacy principles – the backbone of an organisation’s legal obligations – simply won’t apply. If no individual can be identified from a dataset, then the dataset can be safely released as open data; matched with, shared with or sold to other organisations; or used for a new purpose such as data analytics, without breaching privacy law.
Or so the theory goes.
In reality, determining whether or not an individual might be considered in law to be ‘identifiable’ is not straightforward. The scope of what is included within the notion of identifiability may surprise many organisations.
Recent cases have tested the limits
The Office of the Australian Information Commissioner (OAIC) has made a series of determinations which have shed light on the extent to which privacy laws cover data which – at face value – may not appear to identify any individual.
All three of the cases discussed below involved the use of facial recognition technology, but the issues they raise about the scope of privacy laws apply to many other types of data and data use practices, including online behavioural advertising, customer profiling and targeted marketing.
The 7-Eleven case
In June 2020, the 7-Eleven chain of convenience stores began using a new customer feedback survey system in 700 stores across Australia. Each store had a tablet device which enabled customers to complete a voluntary survey about their experience in the store. Each tablet had a built-in camera that took images of the customer’s face as they completed the survey.
Those facial images were stored on the tablet for around 20 seconds, before being uploaded to a server in the cloud. A third party service provider converted each facial image to a ‘faceprint’, which is an encrypted algorithmic representation of the face. The faceprint was used to detect whether the same person was leaving multiple survey responses within a 20-hour period on the same tablet; if multiple responses were detected, they were excluded from the survey results.
In other words, 7-Eleven was using facial recognition technology on its customers, to prevent its employees from gaming a customer satisfaction survey by leaving multiple positive survey responses about their own performance. At least 1.6 million survey responses were completed.
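The deduplication logic described above can be sketched in simplified form. To be clear, this is purely illustrative: the vendor’s actual faceprint format, similarity metric and matching threshold are not public, so the short float vectors, the Euclidean-distance comparison and the cutoff value below are all assumptions made for the sake of the example.

```python
import math

# Illustrative sketch only: the real faceprint representation and
# matching threshold used in the 7-Eleven system are not public.
# Here a "faceprint" is modelled as a short list of floats, and two
# faceprints are treated as the same person when their Euclidean
# distance falls below an assumed cutoff.

MATCH_THRESHOLD = 0.5          # assumed similarity cutoff
WINDOW_SECONDS = 20 * 60 * 60  # the 20-hour window described above

def same_person(fp_a, fp_b, threshold=MATCH_THRESHOLD):
    """Treat two faceprints as matching if they are close enough."""
    return math.dist(fp_a, fp_b) < threshold

def is_duplicate(new_fp, new_time, prior_responses):
    """Check a new survey response against earlier ones on the same tablet.

    prior_responses is a list of (faceprint, timestamp) pairs. A response
    is flagged as a duplicate if a matching faceprint was seen within the
    20-hour window. Note that no name or identity is involved at any step.
    """
    return any(
        same_person(new_fp, fp) and (new_time - t) < WINDOW_SECONDS
        for fp, t in prior_responses
    )

# Example with synthetic faceprints: a near-identical faceprint
# submitted an hour later is flagged; a different face is not.
history = [([0.1, 0.9, 0.3], 0.0)]
print(is_duplicate([0.12, 0.88, 0.31], 3600.0, history))  # True
print(is_duplicate([0.9, 0.1, 0.7], 3600.0, history))     # False
```

The point the sketch makes is the one at the heart of the case: the system distinguishes one survey respondent from another without ever knowing who anyone is.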
The OAIC found that 7-Eleven had breached Australian Privacy Principle (‘APP’) 3.3 by collecting ‘sensitive information’ (namely, biometric templates) unnecessarily and without consent, and APP 5 by failing to provide proper notice about that collection.
One of the arguments raised by 7-Eleven was that the information at issue did not constitute ‘personal information’ for the purposes of the Privacy Act.
The Clearview AI case
Clearview AI provides a facial recognition search tool which allows registered users to upload a digital image of an individual’s face and then run a search against the company’s database of more than 3 billion images. The database of images was created by Clearview collecting images of individuals’ faces from web pages including social media sites. The search tool then displays likely matches and provides the associated source information to the user. The user can then click on the links to the source material, to potentially enable identification of the individual.
From October 2019 to March 2020, Clearview offered free trials of its search tool to the Australian Federal Police (AFP), as well as to the police services of Victoria, Queensland and South Australia. Members of each of these police services used the search tool on a free trial basis, uploading images of people to test the effectiveness of the tool. Uploaded images, known as ‘probe images’, included photographs of both suspects and victims in active investigations, including children.
The OAIC found that Clearview had breached APPs 1.2, 3.3, 3.5, 5 and 10.2. One of the arguments raised by Clearview was that the information at issue did not constitute ‘personal information’ for the purposes of the Privacy Act.
The AFP case
Officers from the AFP used the Clearview search tool on a free trial basis. Those officers did so without entering into any formal arrangements with Clearview, and the Clearview search tool was not subject to the AFP’s normal procurement or due diligence processes. The OAIC found that the AFP had breached APP 1.2, as well as a separate requirement under a Code issued specifically for Australian government agencies, which mandates the conduct of a Privacy Impact Assessment prior to commencing any high privacy risk activities. While it does not appear that the AFP argued otherwise, the OAIC canvassed whether the data at issue was ‘personal information’ for the purposes of the Privacy Act.
The arguments about identifiability and ‘personal information’
7-Eleven had argued that the facial images and faceprints it collected were not ‘personal information’ because they were not used to identify any individual.
However the OAIC found that even though individuals could not necessarily “be identified from the specific information being handled”, the information was still ‘reasonably identifiable’ – and thus within the scope of ‘personal information’ – because the faceprints were used as an ‘identifier’ which “enabled an individual depicted in a faceprint to be distinguished from other individuals whose faceprints were held on the Server”.
Similarly, Clearview argued that ‘vectors’ could not constitute ‘personal information’. From the three billion raw images scraped from the web, Clearview retained metadata about the source of each raw image, and a vector for each raw image: a digital representation generated from the raw image, against which users could compare a new vector (i.e. a new digital file created by running the tool’s facial recognition algorithm over an uploaded probe image), in order to find a potential match. Clearview argued that the vector and metadata held in their database neither showed an individual’s face, nor named or otherwise directly identified any individual. They claimed that their tool merely distinguished images, and did not ‘identify’ individuals. (Any image ‘matches’ would simply present a link to the URL for the source of the original raw image.)
However the OAIC disagreed. First, the OAIC noted that the definition in the Privacy Act does not require an identity to be ascertained from the information alone, thanks to an amendment to the definition in 2014.
Second, the OAIC noted that because “an individual … is uniquely distinguishable from all other individuals in the respondent’s database”, it was irrelevant that the respondent did not retain the original image from which the vector was generated, nor any identity-related information about the individual.
The OAIC thus determined that both the raw image and the vector generated from it constituted ‘personal information’ for the purposes of the Privacy Act.
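The kind of matching at issue can be sketched in highly simplified form. Again, this is illustrative only: Clearview’s actual vector format and matching algorithm are proprietary, so the three-element vectors, the nearest-neighbour comparison and the example URLs below are all hypothetical.

```python
import math

# Illustrative sketch only: Clearview's real vectors and matching
# algorithm are proprietary. Each database entry holds a vector and
# source metadata (a URL) -- no name, no raw image -- mirroring the
# argument described above. The URLs are hypothetical.

database = [
    ([0.2, 0.7, 0.1], "https://example.com/photo-1"),
    ([0.8, 0.3, 0.5], "https://example.com/photo-2"),
    ([0.4, 0.4, 0.9], "https://example.com/photo-3"),
]

def best_match(probe_vector, entries):
    """Return the source URL of the single closest vector.

    Even though no identity information is stored, the probe singles
    out exactly one record from all the others -- the sense in which
    the OAIC found such vectors to be 'reasonably identifiable'.
    """
    return min(entries, key=lambda e: math.dist(probe_vector, e[0]))[1]

print(best_match([0.21, 0.69, 0.12], database))  # https://example.com/photo-1
```

The sketch shows why the ‘we only distinguish, we don’t identify’ argument failed: distinguishing one record from all others, plus a link back to the source material, is exactly the capability the OAIC held to fall within the definition.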
In the AFP case, the OAIC reiterated that being able to distinguish an individual from the group will render an individual ‘identified’ in privacy law.
Lesson 1: identifiability is not to be considered in a vacuum
The Australian definition of personal information is broader in its scope than the American phrase beloved by information technology professionals and vendors: PII or ‘personally identifiable information’. The American / IT industry test asks whether someone can be identified from this piece of information alone. By contrast, the Australian legal test asks whether someone can be identified from this piece of information alone, or once it is combined with other available information.
In the Clearview case, the OAIC stated: “An individual will be ‘reasonably’ identifiable where the process or steps for that individual to be identifiable are reasonable to achieve. The context in which the data is held or released, and the availability of other datasets or resources to attempt a linkage, are key in determining whether an individual is reasonably identifiable”.
This formulation is not novel. In guidance published in 2017, the OAIC explained that an individual can be ‘identifiable’ “where the information is able to be linked with other information that could ultimately identify the individual”.
The identifiability test therefore depends on considering not only the particular information at issue, but also any other information that is known or available to the recipient, and the practicability of using that other information to identify an individual. Who will hold and have access to the information is therefore a relevant consideration when assessing whether an individual will be ‘reasonably identifiable’.
Lesson 2: an individual can be identifiable without learning their identity
The second lesson is that ‘identifiability’ in law does not necessarily require that a person’s name or legal identity can be established from the information. Instead, it implies uniqueness in a dataset. This is similar to the GDPR’s notion of ‘singling out’.
Again, since 2017, the OAIC has maintained that: “Generally speaking, an individual is ‘identified’ when, within a group of persons, he or she is ‘distinguished’ from all other members of a group.”
What is novel about the 7-Eleven case is that the OAIC has now applied that reasoning to data from which there is slim to no chance of re-constructing a person’s name or legal identity, such as vectors generated from faceprints, but which is nonetheless useful for separating one individual from another and subjecting them to different treatment.
In other contexts, the OAIC has noted that it is not only identifiers like biometric vectors which can ‘reasonably identify’ someone; browser or search history are two examples of behavioural or pattern data which could lead to an individual being rendered unique in a dataset.
Conclusion: the implications
While significant, these cases demonstrate a line of reasoning which is entirely consistent with what the OAIC has been saying for many years, since the definition of personal information was updated in 2014.
The Australian legal test for what constitutes ‘personal information’ – and thus what is within scope for regulation under privacy law – includes two elements which may surprise many organisations handling data:
- the data is not to be considered in a vacuum, and
- data can be identifiable without revealing identity: being able to distinguish an individual from the group will render an individual ‘identified’ for the purposes of privacy law.
While not surprising for those who follow OAIC guidance closely, the implications of these cases are far reaching. The logical conclusion is that Australian privacy laws, like the data protection laws of the European Union, extend to data which can be used to disambiguate customers or other individuals and subject them to differential treatment, even in online environments where organisations may not have the facility to trace back to find out the individual’s legal identity.
Regulated entities will face a legal compliance risk if they do not appreciate the breadth of data which is covered by their obligations under the Privacy Act. In particular, organisations should be wary of technology vendors, supplying products used in applications from customer profiling and targeted marketing to security and identity authentication, who may be pitching their products as ‘compliant’ or ‘privacy protective’ on the basis that no-one is identifiable from that data alone.
The correct legal test in Australia suggests that data which can be linked to other data sources, such that an individual can be distinguished from the group and then treated differently, will constitute ‘personal information’, and restrictions on the collection, use or disclosure of that data will apply accordingly.
An earlier version of this article was first published in LSJ Online.