On 25 October 2021 the Australian government released a Discussion Paper crammed full of proposals to amend the national privacy law, as well as a Bill intended to progress certain reforms ahead of the rest.
Here’s what you need to know, to help you prepare for what’s likely ahead, or to draft a submission in response to the proposals.
The power of social media and online platforms, AI, the Internet of Things and the boom in all things digital point to the need for privacy law to keep up with the challenges posed to individual privacy by new technologies. In 2019 the Australian Competition and Consumer Commission (ACCC) published the final report from its Digital Platforms Inquiry, which considered the behaviour of the major platforms such as Facebook and Google. The ACCC’s report highlighted risks for both consumers and businesses from the business models followed by major technology companies, which primarily rely on the collection and analysis of consumer data as the source of their wealth and power. Amongst their other recommendations, the ACCC suggested that the Australian Government should conduct a review into whether the Privacy Act remains fit for purpose in this digital age.
In late 2019 the Government agreed to review and reform the Privacy Act, which led to an Issues Paper released in October 2020. That Issues Paper called for submissions on whether the Privacy Act and its enforcement mechanisms remain fit for purpose.
Twelve months and 200 submissions later, the Attorney General’s Department has released a Discussion Paper, containing both specific proposals and less settled options for reform, clustered around 28 topics, each with its own chapter.
At 217 pages it’s not a quick read, so here are the highlights, followed by our take on key elements of the proposals: the good, the bad and the ugly.
The proposals in the Discussion Paper
Not surprisingly, given the European Parliament’s moves on AdTech, Google’s phasing out of third party cookies, Apple’s lifting of the veil on third party online tracking, and wave after wave of public revelations about the toxic impact of Facebook’s activities, the Discussion Paper has much to say about digital harms, targeted advertising, personalised content and the role of online identifiers.
First, the Discussion Paper proposes a re-drafting of the threshold definition of ‘personal information’, so that it explicitly recognises and includes online identifiers and technical data, and encompasses the use of data with individuated effects. By moving closer to the GDPR’s model which includes online identifiers, indirect identification and the notion of ‘singling out’, this proposal alone will help strengthen and modernise Australia’s privacy laws.
Second, there is an intention to reduce reliance on the ‘notice and consent’ self-management model of privacy regulation, in favour of stricter limits on collection, use and disclosure. In a proposal likely to gain plenty of attention, the Discussion Paper suggests a ‘fair and reasonable’ test to be applied to collection, use and disclosure, on top of existing rules around collection necessity and purpose limitation.
Third, consent. While moving away from requiring consent for routine activities, it appears consent will remain as an option for authorising some types of information handling practices. The Discussion Paper proposes to tighten the legal tests for what constitutes a valid consent, by building into the legislation what has to date been guidance from the Office of the Australian Information Commissioner (OAIC): that consent must be voluntary, informed, specific and current, and requires an “unambiguous indication through clear action”. Combined with another proposal, which is to require ‘pro-privacy defaults’ when choices are to be offered to users, these proposals should spell the end of companies using dark patterns to trick people into sharing their personal information, and then claiming ‘consent’ as their lawful basis for collection, use or disclosure.
Fourth, the Discussion Paper proposes to abolish an existing rule about using or disclosing personal information for direct marketing (Australian Privacy Principle 7), in favour of applying the same standards as for other activities (APP 6). But then direct marketing is mentioned again elsewhere, which leads us to the next significant proposal.
Fifth, without yet landing on a firm model, the Discussion Paper suggests some options for regulating how organisations deal with scenarios which inherently pose a higher privacy risk. The Privacy Act currently sets some slightly tougher tests for handling certain categories of data known as ‘sensitive information’, such as information about an individual’s health or disability, ethnicity, religion and sexuality. However the Discussion Paper seeks to broaden out this idea to a notion of restricted acts, to which higher standards will apply. What is potentially within scope includes not just the handling of ‘sensitive information’, but also some additional types of data such as location data and information about children, and some particular types of practices such as direct marketing, and automated decision-making with legal or significant effects. The Discussion Paper also asks for further submissions on whether the best way to regulate these types of higher risk practices is by self-management (i.e. requiring individuals to consent), or by organisational accountability and risk management (i.e. requiring organisations to conduct Privacy Impact Assessments or take other steps to identify and mitigate the risks posed by their practices).
One of the themes running through this review process is the need to ensure that the Privacy Act is brought closer into line with the GDPR, in the hope that Australia could finally secure an ‘adequacy’ decision from the European Commission, which would beautifully simplify matters for businesses, universities and other organisations which deal with customers or service providers in Europe. To date, an adequacy ruling has escaped Australia, primarily because of a number of carve-outs from the Privacy Act’s coverage of the private sector, including exemptions for small businesses, employee records, political parties and media organisations. Yet the Discussion Paper has not directly proposed removing these carve-outs; instead, it raises a number of issues and options, and calls for yet more submissions on the pros and cons of abolishing those four exemptions. So expect to see significant debate, with further pushback from organisations currently benefitting from the exemptions.
Also showing evidence of looking to other jurisdictions for influence and ideas, the Discussion Paper proposes introducing some GDPR-type individual rights, such as the right to erasure and the right to object.
Finally, the Discussion Paper has floated a few different models to improve access to justice, including consideration of a statutory tort of privacy (though without yet committing to a particular model, if any), and/or a direct right of action for individuals with a complaint about a breach of a privacy principle. At present complainants can only approach the OAIC, whose backlog of complaints creates delays and operates as a barrier to resolution. The ability to take a complaint to a court with the power to order compensation – as happens now under some State privacy laws – could see a meaningful improvement in access to justice for those individuals keen to have their day in court.
Our two cents’ worth
OK, I would like to think that our views are worth more than just two cents, but here’s a taste of what the Salinger Privacy submission on the Discussion Paper will focus on.
Overall I believe the proposals represent some sensible ways to strengthen the law to deliver on both political promises and community expectations to modernise the Act to effectively deal with digital harms, but there are some opportunities not yet grasped, and a few things in need of a fix.
The definition of personal information
In chapter 2, the Discussion Paper proposes some minor edits to the definition of personal information:
Personal information means information or an opinion that relates to an identified individual, or an individual who is reasonably identifiable:
a) whether the information or opinion is true or not; and
b) whether the information or opinion is recorded in a material form or not.
An individual is ‘reasonably identifiable’ if they are capable of being identified, directly or indirectly.
By amending the definition to cover information that “relates to” an individual, instead of the current test which is “about” an individual, the proposed reforms will address some of the confusion caused by the Grubb v Telstra line of cases, as well as bring the Privacy Act into line with the newer Consumer Data Right (CDR) scheme. This is good news.
Another welcome development is a proposed non-exhaustive list of what will make someone “capable of being identified, directly or indirectly”, with examples including location data, online identifiers, and “one or more factors specific to the physical, physiological, genetic, mental, behavioural (including predictions of behaviours or preferences), economic, cultural or social identity or characteristics of that person”.
Importantly, the Discussion Paper states that the new definition “would cover circumstances in which an individual is distinguished from others or has a profile associated with a pseudonym or identifier, despite not being named”. This is a very important and positive development, to help address the types of digital harms enabled by individuation – that is, individualised profiling, targeted advertising or messaging, and personalised content which can cause harm, but which currently escapes regulation because organisations can claim that they don’t know who the recipient of their messaging is.
However, I would like to see this language actually used in the definition itself, to be absolutely sure that ‘identifiable’ in law incorporates the notion ‘distinguished from others even if identity is not known’. (For more on how the GDPR’s notion of ‘singling out’ may or may not include people whose identity is not knowable, see our research paper on the subject.)
As Sven Bluemmel, the Victorian Information Commissioner, put it recently: “I can exploit you if I know your fears, your likely political leanings, your cohort. I don’t need to know exactly who you are; I just need to know that you have a group of attributes that is particularly receptive to whatever I’m selling or whatever outrage I want to foment amongst people. I don’t need to know your name. And therefore, arguably depending on how you interpret it, I don’t need ‘personal information’. I just need a series of attributes that allows me to exploit you.”
That’s why we need the definition of personal information to indisputably cover individuation, as well as identification, of individuals.
Some of the other aspects of the proposals are a mixed bag. Sticking with the threshold test that a person must be ‘reasonably’ identifiable will not address current weaknesses in the definition. The word ‘reasonably’ waters down the scope of the definition compared with other international privacy laws, which set the threshold at any degree of identifiability.
Whether or not someone is ‘reasonably’ identifiable is not a measure of the likelihood that someone will suffer harm, but is a test based on ‘reasonable’ levels of resources and motivation. This leaves a gap between the test applicable to the data holder, and the reality of whether or not an individual can actually be identified from the data, such as by a motivated party willing to go beyond ‘reasonable’ steps. The OAIC has said that an individual “will be ‘reasonably’ identifiable where the process or steps for that individual to be identifiable are reasonable to achieve”. So even where re-identification of patients from publicly released MBS/PBS data was demonstrated by a team of experts, the OAIC found that the steps the experts took to achieve actual re-identification were more than ‘reasonable’, and therefore the data did not meet the definition of ‘personal information’.
Yet the Discussion Paper also says that on the flipside, to apply de-identification such as to fall outside the scope of the definition of ‘personal information’, an organisation must meet a test which is that there is only an “extremely remote or hypothetical risk of identification”.
In my view there is a gap between the test arising from the definition of personal information (“not reasonably identifiable”) and the test in the proposed definition of de-identified data (“extremely remote or hypothetical risk of identification”), creating a legislative no-man’s land of data which is not personal information but nor is it de-identified. There should not be a gap between the two.
Not acting to close that gap would represent a missed opportunity to bring within scope for regulation the types of harm evidenced by various submissions made to the review thus far. Bad actors will continue to argue that because no one is ‘reasonably’ identifiable in their data, they are not regulated by the Act at all.
It’s not difficult to anticipate the argument from AdTech and others: ‘Well it wasn’t reasonably identifiable information because we cleverly used hashed email addresses to match up customer records from different devices and different apps and share user attributes between different companies’.
(I say it’s not difficult to anticipate this argument because that’s how data broker LiveRamp, formerly known as Acxiom, says they draw data from multiple publishers, sites, devices and platforms (aka “gain second-party segments or third-party data”), build customer profiles and then target ads to around 7 million Australians online. Their website claims to offer ‘data anonymization’ because “LiveRamp removes personally identifiable information (PII) and replaces it with pseudonymous record keys during our matching process so you can use data with confidence”.
Um, what? As the GDPR makes abundantly clear, use of pseudonymous record keys which enable data linkage does not ‘anonymization’ make. This marketing double-speak about ‘anonymization’ makes me feel like Inigo Montoya in The Princess Bride: “You keep using that word, but I do not think it means what you think it means”.
So maybe individual identities are hidden during the matching process, but the end result is still that Company A can find out new information about their customers, or individually target people who are not their customers but who have ‘lookalike’ characteristics, using data collected by Companies B, C and D. This is the kind of online tracking, profiling and targeting of individuals across the web that the phase-out of third party cookies is supposed to stop.)
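To make the mechanics concrete, here is a minimal sketch (in Python, with entirely made-up data – real AdTech matching pipelines are proprietary) of why hashed email addresses are pseudonymous record keys rather than anonymisation: any two companies that hash the same email the same way end up with the same key, and can link their records about that person without ever exchanging a name.

```python
import hashlib

def pseudonymous_key(email: str) -> str:
    # Normalise the email, then hash it. The hash hides the address itself,
    # but is deterministic: the same email always yields the same key.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Company A and Company B each hash their own customer lists independently...
company_a = {pseudonymous_key("jane@example.com"): {"purchases": ["running shoes"]}}
company_b = {pseudonymous_key("Jane@Example.com "): {"interests": ["marathons"]}}

# ...yet because the keys match, the two datasets join up into one profile,
# no name required:
for key in company_a.keys() & company_b.keys():
    combined_profile = {**company_a[key], **company_b[key]}
    print(combined_profile)
```

The point of the sketch is that linkability, not the presence of a name, is what makes a record key pseudonymous rather than anonymous.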
So Salinger Privacy will be arguing that the word ‘reasonably’ in the definition needs to go the way of the dinosaurs, and that the line between identifiable and not should be based on the “extremely remote or hypothetical risk of identification” test.
The Discussion Paper also proposes to add a definition of ‘collection’ that expressly covers “information obtained from any source and by any means, including inferred or generated information”. This would be an improvement, but I would argue that the definition of ‘collection’ needs to be pitched not in relation to the nature of the information, but in relation to the act of generating or inferring information.
Also, I suggest that inferred or generated data should be included in the list of things which comprise ‘personal information’. Otherwise here’s the likely conclusion from AdTech and similar players: ‘The inferences we drew happened some time after we collected the data, so that’s not a ‘collection’ but a ‘use’, and the Act doesn’t say that APP 6 (which regulates ‘use’) applies to inferred information, so woo hoo we’re off the hook’.
(I know that’s not what the OAIC or the Discussion Paper mean when they talk about ‘collection by creation’, but instead of letting those arguments play out in some expensive litigation between the OAIC and Big Tech in the future, let’s nip them in the bud now with some clear legislative drafting.)
Again, I’m not just hypothesising here about what certain players might say. Take a look at Facebook’s submission on the Issues Paper, which says that the information it infers about people is not, and should not be, regulated as ‘personal information’. Facebook wants to protect its investment of “time, money and resources” in developing and using its inferences about people, which instead of being treated as personal information worthy of legal protection are characterised in the submission as the company’s “intellectual property” which should be protected from “inappropriate interference”, by which it means having to comply with the APPs.
The ‘fair and reasonable’ test
In chapter 10, the Discussion Paper proposes the introduction of a new requirement: that “collection, use or disclosure of personal information under APP 3 and APP 6 must be fair and reasonable in the circumstances”.
This is proposed in relation to routine activities (e.g. use or disclosure for a primary purpose, or a directly related secondary purpose), and activities authorised on the basis of the individual’s consent. It is not proposed to apply to activities authorised under a different law, or under an exemption such as those relating to law enforcement or research purposes.
To supplement this ‘fair and reasonable’ test, the proposal includes factors which could be legislated as relevant to any application of the test. The draft list is:
- Whether an individual would reasonably expect the personal information to be collected, used or disclosed in the circumstances
- The sensitivity and amount of personal information being collected, used or disclosed
- Whether an individual is at foreseeable risk of unjustified adverse impacts or harm as a result of the collection, use or disclosure of their personal information
- Whether the collection, use or disclosure is reasonably necessary to achieve the functions and activities of the entity
- Whether the individual’s loss of privacy is proportionate to the benefits
- The transparency of the collection, use or disclosure of the personal information, and
- If the personal information relates to a child, whether the collection, use or disclosure of the personal information is in the best interests of the child
This is a welcome suggestion, but in my view it still needs some strengthening. Otherwise imagine the argument from tech platforms about why content which might harm teenage girls or push vulnerable people towards extremism is still being fuelled by algorithms designed to generate ‘engagement’: ‘Well our free services need ad revenue to operate, for ads to be successful we need high levels of engagement with the platform, to get high levels of engagement we need users to see certain content which we know will engage them, and so in those circumstances this [anorexia-promoting / conspiracy-theory fuelled / hate-filled / extremist / genocide-promoting / do I need to keep going about the types of harms here] content is “reasonably necessary to achieve the functions and activities of” our company, and anyway we can’t foresee which of our users are at “risk of unjustified adverse impacts or harm” from that content, but just in case we have included something in our T&Cs to set expectations and be transparent, so we have now met the “fair and reasonable” test’.
Also, I would argue that the ‘fair and reasonable’ test should apply to all instances of collection, use and disclosure, including where the collection, use or disclosure is authorised by another law, or under an exemption. The ‘fair and reasonable’ test should be able to flex to the circumstances of the use case. Think about the data hungry activities of Australian Government agencies: the likes of the ATO, Centrelink and the NDIA often operate on the basis of specific legislative authority to collect, use or disclose personal information. Shouldn’t we expect those activities to also be ‘fair and reasonable’?
Perhaps then agencies wouldn’t be able to get away with releasing deeply intimate information about a person’s relationship history, tax affairs and social security benefits to a sympathetic journalist, in response to some public criticism about their agency.
And don’t we want our law enforcement agencies to also only use personal information in a ‘fair and reasonable’ manner? Legitimate investigations and even covert surveillance will be ‘fair and reasonable’ in the right circumstances. After all, police forces with nothing to hide will have nothing to fear, right?
Accountability for high privacy impact activities
Another significant proposal is the idea to create a list of ‘restricted practices’, which while not prohibited will require additional steps from organisations to identify and mitigate privacy risks.
The draft list (at Proposal 11.1) is:
- Direct marketing, including online targeted advertising on a large scale
- The collection, use or disclosure of sensitive information on a large scale
- The collection, use or disclosure of children’s personal information on a large scale
- The collection, use or disclosure of location data on a large scale
- The collection, use or disclosure of biometric or genetic data, including the use of facial recognition software
- The sale of personal information on a large scale
- The collection, use or disclosure of personal information for the purposes of influencing individuals’ behaviour or decisions on a large scale
- The collection, use or disclosure of personal information for the purposes of automated decision making with legal or significant effects, or
- Any collection, use or disclosure that is likely to result in a high privacy risk or risk of harm to an individual.
While not explicitly saying so, this proposal looks a lot like the introduction of mandatory Privacy Impact Assessments for certain activities. (Proposal 11.2 also suggests alternatives to organisational accountability which instead rely on self-management options like requiring consent, explicit notice or opt-outs, but they are clearly not the favoured option and we know that notice and consent is broken, so let’s not even go there.)
The Australian Government Agencies Privacy Code already makes PIAs mandatory for the public sector in relation to ‘high privacy risk’ activities, with the OAIC maintaining a list of the types of activities it considers to inherently pose high levels of risk. This new proposal looks set to extend the requirement to the private sector as well.
Through its latest determinations against 7-Eleven and Clearview AI, the OAIC was already signalling that PIAs are now expected under APP 1 for what it is calling ‘high privacy impact’ activities, as a way for organisations to demonstrate that they have effective privacy risk management processes in place.
The Salinger Privacy submission will argue that this list of ‘restricted practices’ should be incorporated into APP 1, and be the trigger for a mandatory PIA to be conducted. However even better would be to adopt the GDPR model, which is that if, after the conduct of a PIA and the implementation of all mitigation strategies, there is still a residual level of high risk, then the regulator must be consulted, and the regulator has the power to stop or prohibit the activity. (Now that might have stopped a company like Clearview AI in its tracks sooner.)
I will also suggest a tweaking of the list of ‘restricted practices’. For example instead of just “online targeted advertising on a large scale”, I would throw in behavioural tracking, profiling and the delivery of personalised content to individuals. (Netflix and the ABC’s iView would otherwise be able to say ‘Well we don’t show ads so this list does not apply to our activities’.)
Conversely, I would not consider all direct marketing to be a high privacy impact, even when delivered at scale. A brochure mailout or email newsletter delivered to the first party customers of a retailer poses very low privacy risk if there is no personalisation of messaging or pricing, or tracking of engagement or conversions.
Some further food for thought is whether or not the OAIC should be able to add to the list of restricted practices, and/or whether or not some ‘restricted practices’ should instead be prohibited, either by the Act or via OAIC developing guidance over time about ‘no-go’ zones. Recent calls for a moratorium on the use of facial recognition in government come to mind.
Kids’ privacy is getting a lot of attention in these and related proposals from the Australian Government. Whether or not a proposed activity is in the best interests of a child gets a mention in the list of factors relevant to applying the ‘fair and reasonable’ test (Proposal 10.2), and processing personal information about children on a large scale is included in the list of ‘restricted activities’ which will require additional risk mitigation steps (Proposal 11.1).
Plus Proposal 13 raises the curly and interrelated issues of children’s capacity, parental consent, and age verification. The Discussion Paper proposes two options on which the Government is seeking feedback: require parents to consent on behalf of children for all instances of handling personal information about children under 16, or only for those instances where the lawful basis for collecting, using or disclosing the information is ‘with consent’ in the first place.
In my view, the first option is utterly unworkable. So many legitimate and routine activities need to happen in a child’s life without stopping to ask for a parent’s consent for every separate thing. Imagine a school contacting a parent to ask ‘do we have your consent to collect and use information about what little Johnny did in the playground at recess today?’ (If the parent says ‘no’, then what?) Such a legal requirement would either cause routine activities to grind to a halt, or organisations will implement horrible unwieldy bundled ‘consents’, which will make a mockery of Proposal 9 – which is to spell out in legislation that every consent must be voluntary (i.e. not part of the conditions of use), informed, current, specific (i.e. not bundled), and an unambiguous indication through clear action.
The Discussion Paper is also asking for feedback on whether organisations should be permitted to assess capacity on an individualised basis, rather than taking a fixed date – the child’s 16th birthday – as the magical day on which they transform from helpless to capable of making independent decisions.
Plus there’s plenty more about kids’ privacy to be found in the Online Privacy Bill, discussed further below.
Regulation and enforcement
There’s a whole lot going on under this heading in the Discussion Paper (chapters 24-28).
Some of the proposals seek to fix long-standing enforcement problems, or propose sensible measures like a tiered civil penalty regime. (That will be particularly important if small businesses are brought into the fold.) So far so good.
Some are more radical ideas like industry funding of the OAIC, as happens now with the corporate regulator ASIC, and splitting apart the OAIC’s functions so that a ‘Privacy Ombudsman’ handles the complaints function. This idea of splitting policy / strategic / advisory functions off from the complaints-handling / enforcement functions is pretty funny, when you consider that the OAIC was created explicitly to bring those functions all under the one roof for privacy and FOI. (Just fund the OAIC properly will be my submission in response.) I should probably move this idea into the ‘Bad’ pile. Which brings us to…
Ugh, the criminalisation of re-identification rears its head again! First prompted in 2016 by some egg-on-faces in the Australian Government when the MBS/PBS dataset was shown to have not been properly de-identified before its public release, instead of penalising, say, the release of poorly de-identified data in the first place, the Government moved to criminalise the conduct of researchers and security specialists who conduct re-identification attacks on data. This terrible, horrible, no good, very bad idea was rightly criticised by the Privacy Commissioner and opposed in Parliament due to fears of its impact on public interest research and cybersecurity efforts.
Why re-introduce the idea now (Proposal 2.6)? Just… no. If you’re worried about malicious re-identification attacks on public data, introduce a statutory tort. Don’t penalise the white hat hackers.
Also: dear governments, please stop drinking the Kool-Aid on the wonders of open data. De-identification is not a magic solution to privacy compliance, and unit record level data is unlikely to ever be safe for public release unless treated with some pretty complex differential privacy techniques, as was demonstrated in 2016 (MBS/PBS), 2018 (Myki), and 2020 (Flight Centre).
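For readers unfamiliar with how those re-identifications work, here is a toy sketch (hypothetical data, Python) of the classic linkage attack: a ‘de-identified’ unit record dataset keeps quasi-identifiers like postcode, birth year and sex, which an attacker joins against an auxiliary source that carries names.

```python
# A "de-identified" health dataset: names removed, quasi-identifiers kept.
deidentified = [
    {"postcode": "2041", "birth_year": 1972, "sex": "F", "diagnosis": "asthma"},
    {"postcode": "2041", "birth_year": 1985, "sex": "M", "diagnosis": "diabetes"},
]

# A public auxiliary source (think electoral roll, social media bio) with names.
auxiliary = [
    {"name": "Jane Citizen", "postcode": "2041", "birth_year": 1972, "sex": "F"},
]

# Join the two on the quasi-identifiers. A unique match re-identifies a record.
for person in auxiliary:
    matches = [
        record for record in deidentified
        if (record["postcode"], record["birth_year"], record["sex"])
        == (person["postcode"], person["birth_year"], person["sex"])
    ]
    if len(matches) == 1:
        print(person["name"], "->", matches[0]["diagnosis"])
```

Removing names alone does nothing to prevent this; that is why unit record releases need techniques like differential privacy, which perturb the data so that no individual record can be singled out by a join.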
A direct right of action that’s not very… direct
Chapter 25 discusses the idea of a ‘direct right of action’. The ACCC recommended that “individuals be given a direct right to bring actions and class actions against APP entities in court to seek compensatory damages as well as aggravated and exemplary damages (in exceptional circumstances) for the financial and non-financial harm suffered as a result of an interference with their privacy under the Act”.
The Discussion Paper noted a number of submissions made about the OAIC’s lack of resources, which has caused complaint-handling delays, and means it operates as a ‘bottleneck’. Unlike in other jurisdictions, the OAIC is effectively the gatekeeper, and can dismiss complaints without proceeding to either conciliation or a formal determination, thus quashing the complainant’s appeal rights.
So you would think a direct right of action would fix that, right? Er, no. Proposal 25.1 is to create a right of action which is only triggered if the complainant first goes to the respondent, then to the OAIC, and then can only proceed to the Federal Court if the OAIC first determines that the complaint is suitable for conciliation. Too bad if they dismiss it instead, or if it languishes in the queue so long that the respondent has skipped town in the meantime.
For a ‘direct right of action’ it’s not very… direct. Nor is it very accessible to most people. Hands up who wants to pay a QC in the Federal Court and be exposed to costs orders if you lose?
Other jurisdictions do this better. NSW for example allows privacy complainants to lodge a complaint in a no-cost tribunal, so long as they first complained in writing to the respondent and the respondent did not resolve the matter to the complainant’s satisfaction within 60 days. The NSW Privacy Commissioner has a right to be heard in the tribunal, but does not operate as a brake or a bottleneck on matters proceeding. A cap on compensation keeps things manageable for respondents.
There are some aspects of the proposals which are messy, or about which the politics could get messy.
The bits they squibbed
The Discussion Paper kicked the can down the road on the four major exemptions: small businesses, employee records, political parties and media organisations. Rather than propose specific outcomes, chapters 4-7 of the Discussion Paper dance around these contentious areas, while calling for further submissions on a number of questions.
(So if you have a view, make a submission!)
For example, consideration of the small business exemption includes whether, rather than simply bringing all businesses within the scope of the Act as comparable jurisdictions do, certain ‘high risk’ practices should instead be prescribed into coverage. In my view, creating yet more exceptions to an exemption will create confusion, and would be unlikely to lead to an ‘adequacy’ ruling from the European Commission.
Then there’s the idea of a statutory tort of privacy (chapter 26), which has been kicking around for what seems like forever, but which never quite makes it over the line, despite enjoying pretty widespread support other than from the media and other businesses afraid of being sued for serious invasions of privacy. The Discussion Paper throws up four options, one of which is not to introduce a tort, but instead to extend the application of the Act to “individuals in a non-business capacity for collection, use or disclosure of personal information which would be highly offensive to an objective reasonable person”.
Individuals doing offensive things are hardly going to respond to a letter from the OAIC. Nor will this resolve the problem for victims who have suffered harm at the hands of organisations which are exempt, or at the hands of rogue employees, whose employers get to escape liability.
OK, so I know that the proposed rights of objection (Proposal 14) and erasure (Proposal 15) will generate a lot of attention, but I just can’t get too excited about them. We already have a right to opt out of direct marketing, and we can withdraw consent to activities which were originally based on our consent, like participation in a research project. We also already have a right of correction, which the OAIC has said can include deletion in some circumstances.
While I’m not opposed to introducing more rights, the right to erasure in particular is mostly privacy theatre. It will cause messy compliance headaches, but deliver little of substance for individuals. Better to prohibit or prevent bad practices by organisations in the first place, than rely on individuals having to clean up afterwards.
The Online Privacy Bill
And now we come to the messiest bit of all: the law reform you have when you’re still in the middle of consulting about law reform!
The government has, for some years, been flagging its intention to significantly increase penalties for breaches of the Privacy Act, to levels closer to the GDPR and equal to those under the CDR scheme and the Australian Consumer Law. So, as expected, the Government is proposing to increase the civil penalties for an interference with privacy (such as a breach of an APP), from the current maximum of $2.1M, to whichever is greatest out of $10M, three times the value of the benefit gained by the organisation from its conduct, or 10% of domestic annual turnover.
But rather than include that in the Discussion Paper, the Government is moving on penalties ahead of the rest of the review, with a Bill also out for public consultation at the same time as the Discussion Paper.
Great, I thought – let’s do it!
But not so fast. There is a world of difference between Schedules 1, 2 and 3 of the Online Privacy Bill.
Schedules 2-3, or what is described in the Explanatory Paper as ‘Part B’ of the Privacy Legislation Amendment (Enhancing Online Privacy and Other Measures) Bill 2021, involve increasing the civil penalties as outlined above, as well as some tweaks to the OAIC’s powers when conducting investigative or enforcement activities.
Schedules 2-3 of the Bill will also improve the Privacy Act’s extra-territorial reach, by removing the condition that – to be within reach of the law – an organisation has to collect or hold personal information from sources inside Australia. So foreign companies which collect personal information of Australians from a digital platform that does not have servers in Australia will more clearly be subject to the Privacy Act.
Schedules 2-3 of the Bill get the big tick of approval from me.
Schedule 1 on the other hand…
Schedule 1, or what is described in the Explanatory Paper as ‘Part A’ of the Bill, creates a space in the Privacy Act for the introduction of a binding ‘Online Privacy Code’, which would create new obligations for certain kinds of bodies: social media companies, data brokers, and large online platforms. Either the industry would need to develop the Code within 12 months, or the OAIC can step in and develop it.
The content of the Code would need to flesh out how some of the APPs will apply in practice to those industries, and would cover three broad areas:
- Upgrading the APPs in relation to privacy policies, collection notices and what consent means
- Introducing a right to object (i.e. the ability for a consumer to ask a company to cease using or disclosing their personal information), and
- Some vague ideas about how to protect children and vulnerable groups (plus one concrete but terrible idea).
The Discussion Paper for the main review process says that the Online Privacy Bill “addresses the unique and pressing privacy challenges posed by social media and online platforms”. But in reality most of those issues, like the role of notice and consent and how to protect children, are not unique to social media or online platforms, and – if you have read this far you will know – most of these issues are already being addressed in the broader Discussion Paper.
The one big thing that’s in Schedule 1 of the Online Privacy Bill that’s not also in the Discussion Paper is age verification for the use of social media, along with a requirement for parental consent to sign up users under 16.
You know what that means, right? It means age verification for everyone, not just the kids. And age verification usually means identity verification, which means giving Big Tech more personal information. Which is not very privacy-friendly, for a Bill supposed to be about privacy.
So where has this come from, and why is it not part of the rest of the reform and review process?
Age verification and parental consent is part of a bigger political crackdown on Big Tech, which is driven by reactive politics rather than sensible policy. It fits snugly alongside the Prime Minister’s ‘real names in social media’ and ‘voter identification’ thought bubbles, which play well with voters but which are terrible ideas that create more problems than they solve.
Here is my bold prediction: age verification will fail, as it always does. But meanwhile this issue alone will prove to be a furphy which distracts from the bigger issues raised by the wider Act review.
This is some bad politics. Schedule 1 of the Bill plays into the hands of social media companies, who can sit back and enjoy the debate about age verification and online anonymity, while doing precisely nothing about the underlying business model which causes harms, not only to children.
(Also excuse me but politicians who voted against revealing who funded Christian Porter’s blind trust don’t get to complain about anonymity online.)
Plus, besides the anti-privacy age verification bit of the Bill, I have some more pragmatic concerns.
First, making parents consent before letting kids under 16 loose on social media will do nothing to change the data exploitative business model underpinning social media, or the harms that flow from it.
Second, the most toxic of the bad actors will drag out the process for developing the Code, then they will argue that they’re not covered by the Code, then they will argue about the definition of personal information some more. (The US experience with the Californian privacy law suggests that we will end up in arguments about what it means to ‘trade’ in personal information, what it means to ‘enable online social interaction’, and so on.)
Third, the whole idea of an Online Privacy Code massively over-complicates the regulatory environment. Just fix the Privacy Act for all players, instead of introducing a two-tier regulatory system. One of the strengths of the Privacy Act is its technology and industry neutral position. Why mess with that? For example, any new provisions for protecting children and vulnerable groups, or for clarifying the elements needed to gain a valid consent, should apply to all sectors – as is already proposed in the Discussion Paper.
Politically, the government is keen to be seen to beat up on Big Tech ahead of the election, so the Online Privacy Bill makes it look like they are doing something, while ignoring the bigger issues which show the need to reform the Privacy Act for all players.
Submissions on the Online Privacy Bill are due by 6 December, so get your skates on for that one. (Sorry, there goes the weekend.)
Submissions on the Discussion Paper are due by 10 January.
2022 will no doubt bring plenty of robust discussion about the shape of privacy regulation in Australia, as we attempt to mould our legislation into a more contemporary design, to reflect the realities of the digital economy.