To counter the myriad problems with automated decision-making, including gender and racial bias, inaccurate data, skewed outputs and opaque logic, ‘algorithmic transparency’ is the latest buzzword.
But like many buzzwords, it doesn’t mean much. ‘Transparency’ could mean everything or nothing. And the proposal included in the Australian Attorney-General’s Department’s Discussion Paper, as part of the review of the Privacy Act, is at the ‘nothing’ end of that scale.
Proposal 17.1 simply suggests that privacy policies be required to “include information on whether personal information will be used in ADM which has a legal, or similarly significant effect on people’s rights”.
I did not support this proposal, because it is pointless. Transparency over the fact that automated decision-making is being used – via a document almost no-one ever reads anyway – achieves nothing.
In 2014, the World Economic Forum described ‘the algorithm’ as “a new nexus of control and influence”. Algorithms “generate the predictions, recommendations and inferences for decision-making in a data-driven society”.
From setting insurance premiums to deciding who gets a home loan, from predicting the risk of a person re-offending to more accurately diagnosing disease, algorithmic systems – especially those turbo-charged by AI – have the ability to re-shape our lives. However, far from being neutral, algorithms carry the bias or agenda of their makers, and can reflect the preferences, gender and ethnicity of the user. As the use of algorithmic systems increases, so too does the need for appropriate auditing, assessment and review.
Not the only voice
I am by no means the only voice suggesting that much more significant reform is needed.
Research conducted for the OAIC in 2020 showed that 84% of Australians believe that they should have the right to know if a decision affecting them is made using AI, and 78% believe that when AI is used to make or assist decisions, people should be told which factors and personal information are considered by the algorithm.
The European Commission’s submission in response to the Discussion Paper indicates that the current proposal is not sufficient to protect individuals’ rights. (And since the European Commission will decide on the ‘adequacy’ of Australia’s reformed privacy regime for the purpose of regulating data flows out of the EU, we should listen.)
The Commission suggested that Proposal 17.1 should “be complemented by including a right for individuals significantly affected by decisions based solely on automated processing (e.g. rejection of an online credit, e-recruiting, etc.) to at least receive an explanation about the underlying ‘logic’ of such decisions, to be able to challenge them and obtain their review by a human being.”
In addition to similar rights found under the GDPR, the Bill currently before the Canadian Parliament to update that country’s privacy laws includes new rights. If enacted, section 63 of the new Consumer Privacy Protection Act (Canada) will add the following to the existing Access and Correction rights:
Automated decision system
(3) If the organization has used an automated decision system to make a prediction, recommendation or decision about the individual that could have a significant impact on them, the organization must, on request by the individual, provide them with an explanation of the prediction, recommendation or decision.
Explanation
(4) The explanation must indicate the type of personal information that was used to make the prediction, recommendation or decision, the source of the information and the reasons or principal factors that led to the prediction, recommendation or decision.
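To make the shape of such an explanation concrete, here is a minimal Python sketch of the kind of record an organisation might hand back. The field names and sample values are mine, not the Act’s; they simply map onto the three elements the provision names (the type of information used, its source, and the principal factors).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DecisionExplanation:
    """Illustrative only: the Act prescribes the content of an explanation,
    not any particular format or field names."""
    decision: str                   # what was predicted, recommended or decided
    personal_info_types: List[str]  # the types of personal information used
    info_sources: List[str]         # where that information came from
    principal_factors: List[str]    # the reasons or principal factors

example = DecisionExplanation(
    decision="online credit application declined",
    personal_info_types=["income history", "repayment history"],
    info_sources=["application form", "credit reporting body"],
    principal_factors=[
        "debt-to-income ratio above the lender's threshold",
        "two missed repayments in the previous 12 months",
    ],
)
```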
In the USA, President Biden recently released a ‘blueprint’ for an ‘AI Bill of Rights’. The proposed rights include not only telling people that an automated system is being used to make decisions about them, but also explaining how it works, and ensuring there is a human who can quickly remedy problems.
Why existing laws are not enough
Some will argue that such issues should be left to discrimination law and/or consumer protection law. However, neither of those approaches is sufficient. Consumer protection law and the ACCC do not address questions of public sector decision-making, while discrimination law only covers certain protected grounds. Further, discrimination law requires that people wait for discrimination to occur (and then they must be able to prove it); administrative law is similar in relation to public sector matters. Privacy law, by contrast, is – or could be – better placed to require the root cause of algorithmic unfairness to be addressed.
As an example of why discrimination law is not sufficient, consider citizenship or visa status, which are not protected attributes under discrimination law. A Dutch algorithm was built to detect fraud in the allocation of childcare benefits. A report released by Amnesty International last year found that a design decision had been made to include the citizenship of the parent as a data field, with non-Dutch citizens automatically considered a higher risk of committing fraud. In other words, the bias was built in before the machine learning process even started, and it was later exacerbated by the opaque nature of the algorithm.
The result was horrific: “For over 6 years, people were often wrongly labeled as fraudsters, with dire consequences … Authorities penalized families over a mere suspicion of fraud based on the system’s risk indicators. Tens of thousands of families — often with lower incomes or belonging to ethnic minorities — were pushed into poverty because of exorbitant debts to the tax agency. Some victims committed suicide. More than a thousand children were taken into foster care.”
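To see how a single design decision can bake bias in before any learning happens, consider the following deliberately simplified sketch. It is not the actual Dutch system; the feature names and weights are invented purely to illustrate the mechanism Amnesty describes.

```python
# Purely illustrative: a hand-written scoring rule in which nationality itself
# raises the risk score, so every non-Dutch applicant starts out "riskier"
# regardless of their conduct. Feature names and weights are invented.

def fraud_risk_score(application: dict) -> float:
    score = 0.0
    if application.get("income_inconsistency"):
        score += 0.3
    if application.get("late_paperwork"):
        score += 0.2
    # The contested design choice: citizenship as a risk indicator.
    if not application.get("dutch_citizen", True):
        score += 0.4
    return score

identical_conduct = {"income_inconsistency": False, "late_paperwork": False}
print(fraud_risk_score({**identical_conduct, "dutch_citizen": True}))   # 0.0
print(fraud_risk_score({**identical_conduct, "dutch_citizen": False}))  # 0.4
```

Two applicants with identical conduct receive different risk scores on the basis of citizenship alone; any model subsequently trained on, or calibrated against, those scores inherits that starting point.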
The manifest problems with the ‘Robodebt’ system here in Australia offer another example of algorithmic unfairness which does not relate to any protected attribute found in discrimination law. Algorithms, or the data on which they are designed or used, may reflect biases on grounds such as socio-economic status, which are not protected by discrimination law.
What does transparency even mean?
The idea of transparency in algorithmic systems generally focuses on ensuring both that people are aware when they are interacting with an algorithmic system, and that the system and its outcomes are explainable. What this may mean in practice is not always consistent; for example, it may mean communicating to affected individuals any of the following (a rough sketch of such a notice appears after this list):
- the fact that the system they are interacting with is automated or uses AI
- the specification and design of an algorithm
- the system’s purpose
- the features and weightings the system uses
- the kinds of data inputs used
- where those data inputs came from
- the kinds of outputs it generates and how the outputs are used
- the logic, model or reasons used to generate the outputs
- the level of human intervention in the system
- whether the system has been tested, validated, certified or audited, and/or
- whether the system has implemented a fairness model.
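To make that list less abstract, a system-level transparency notice covering a handful of these items might look something like the sketch below. The structure, field names and values are invented for illustration; no law or standard prescribes this format.

```python
# Hypothetical example of a system-level transparency notice.
transparency_notice = {
    "system_is_automated": True,
    "purpose": "triage of benefit applications for manual review",
    "data_inputs": ["application form fields", "payment history"],
    "input_sources": ["applicant", "agency records"],
    "outputs_and_use": "a risk ranking used to prioritise human case review",
    "human_intervention": "all adverse outcomes reviewed by a case officer",
    "testing_and_audit": "independently audited annually",
}
```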
Clearly communicating about data practices is vital, but it is only one part of the challenge of building a just algorithmic system. Explainability for AI systems in particular can be technically challenging.
There can also be tension between explaining the logic of an algorithmic system to the public for transparency purposes, and concerns about protecting proprietary information. However, research conducted by the UK Information Commissioner’s Office has shown that the risk of exposing proprietary information when being transparent about algorithms is quite low.
A further challenge in relation to transparency is that algorithmic systems, especially AI systems, have the ability to extract or even create new meanings, insights or outcomes. In many cases, these insights will be used for purposes which go beyond the purpose for which the information was collected in the first place. To be truly effective, transparency must deliver comprehensive explanations and offer the ability for individuals to make meaningful choices about the use of their personal information.
Assessing algorithmic systems through a published Algorithmic Impact Assessment can provide an added level of transparency to how these systems work, and provide insight for people into how their information is being used.
Transparent does not mean fair
I also want to sound a note of caution about transparency: the fact that an organisation is being transparent about its practices does not necessarily protect privacy, or prevent harm, especially if customers have no alternative to choose from. Transparency alone does not mean oversight of algorithmic systems, and it doesn’t mean that those systems are inherently better. Transparency over how an automated system works won’t fix problems with its design.
That’s why arguing for more ‘transparency’ – alone – is an example of tech bros coming up with weak solutions so as not to actually change the status quo.
What we need
Transparency alone will not challenge, change, or bring oversight to algorithmic systems; transparency offers individuals very little if the systems themselves are harmful.
I argued in the Salinger Privacy submission to the review of the Privacy Act that we need:
- a right to human review of automated decision-making
- a requirement for algorithmic explainability, and
- a requirement for algorithmic auditability.
In particular, rights should focus on ensuring that both the design and deployment of algorithmic decision-making systems, and the data on which they are trained, designed or implemented, are fair and fit for purpose.
Here’s hoping the final outcomes of the review of the Privacy Act offer more than just window-dressing when it comes to algorithmic rights.
For more on ways in which genuine accountability and transparency can be achieved in practice for algorithmic systems, see Algorithms, AI, and Automated Decisions – A guide for privacy professionals.
To keep up to date with what’s happening with the review of the Privacy Act, keep an eye on our Privacy Act Reforms resources page.