Imagine reading an ethical framework for organising birthday parties, which says that it will be important to meet legal requirements in terms of not making too much noise, that matching napkins and paper plates are fundamental to planning your party, but that the success of your party could potentially be affected by a tsunami hitting your house.
The framework fails to mention some key things to plan for such as the number of guests, food, drink, music, decorations, lighting, dress code, the birthday cake, speeches, or wet weather backup plans, let alone any actual ethical questions such as deciding whether you have to invite your boorish brother-in-law, or whether you should cater specially for guests with a gluten intolerance.
You would be a bit worried about the utility of such a framework, right? Over-stating the importance of some factors, over-stating the risk of others, and missing some really key things to consider in your planning. Plus, not actually addressing any ethical questions at all.
That’s how I felt when reading Artificial Intelligence: Australia’s Ethics Framework, a Discussion Paper by CSIRO’s Data61, released by the Department of Industry, Innovation and Science on 5 April, the objective of which is “to encourage conversations about AI ethics in Australia”.
Its definition of what is covered by the Privacy Act is wrong; it fails to mention other critical privacy laws which could apply to organisations developing or applying artificial intelligence (AI) or machine learning (ML); its assumptions about what matters in the application of privacy law in practice are wrong; it misses the bulk of what is important; and it throws into the mix a random consideration which is unrelated to the discussion at hand and the risk of which is overstated.
Surely, a discussion of ethics must begin with foundational concepts and accurate explanations of the law, and then move on to ethical dimensions which challenge how to apply the law in practice, or which ask difficult questions about topics which go beyond the requirements of the law. A framework which does not achieve this – and which could lead its audience into misunderstanding their legal obligations – could be worse than no framework at all.
I was worried enough to gather a couple of other privacy pros, with whom I prepared a joint submission to the Department of Industry. That submission is reproduced below, along with the names of additional colleagues working in the privacy field who agree with its sentiments.
You can make your own submission until 31 May 2019 at the Department’s website.
UPDATE, NOVEMBER 2019: In November the Department replaced its lengthy Discussion Paper with a short set of 8 AI Ethics Principles, without further explanatory or supporting material. The Principles comprise motherhood statements (such as this gem, the principle dealing with privacy: “Throughout their lifecycle, AI systems should respect and uphold privacy rights and data protection, and ensure the security of data”) which, despite being valueless statements of the bleeding obvious, will somehow now be tested by industry. The sole suggestion for “maintaining privacy” is to use “appropriate data anonymisation”. This v2 approach to resolving privacy risks (‘We can fix everything with anonymisation!’) is as simplistic and wrong as the v1 approach (‘We can fix everything with consent!’).
The Department states that as a result of the submissions received on their Discussion Paper they “analysed the submissions and engaged with AI experts from business, academia and community groups to help analyse the feedback. This enabled us to develop the revised set of AI ethics principles”. I note however that none of the authors of our joint submission (below) were contacted.
The Principles don’t even start to touch on whether personal data should be being used to train AI in the first place, let alone how the application of the results will impact on human dignity or autonomy. The end result looks like the Department dealt with our critique of the Discussion Paper (along with 129 other submissions, including multiple others also critical of their approach to privacy) by simply backing away from any discussion of the law, or of the ethical complexities and moral nuance of using ML/AI, and producing some airy-fairy gumpf in place of pragmatic guidance. That’s a big chunk of $29.9M of your tax dollars at work, folks. – Anna Johnston.
THE SUBMISSION AS LODGED
Introduction
The world of machine learning (ML) and artificial intelligence (AI) is set to dominate the technology of the future, and reframe human interactions. Ensuring that there is a strong legal and ethical framework to underpin the development and implementation of ML and AI is critical to ensuring that technology serves humans in a manner that is fair, non-discriminatory, and beneficial.
For these reasons, we are pleased to see the Australian Government’s commitment to starting the discussion on this important topic. While the principles espoused in the Artificial Intelligence: Australia’s Ethics Framework Discussion Paper are a good start, we submit it lacks a firm or accurate basis.
We are only just beginning to understand what ML and AI could do. But we must thoroughly understand what we are lawfully allowed to do, before we can truly understand what we should do.
Any discussion of ethics must therefore begin with foundational concepts and accurate explanations of the law, and then move on to ethical dimensions which challenge how to apply the law in practice, or which ask difficult questions about topics which go beyond the requirements of the law.
It is our submission that unfortunately, the Artificial Intelligence: Australia’s Ethics Framework Discussion Paper does neither.
This submission focuses on our area of expertise, which is privacy law, privacy management in practice, and the ethics of collecting, using or disclosing personal information. While mention of privacy is only a small part of the Discussion Paper, an understanding of privacy law, both in theory and in practice, is essential to developing an ethical framework which is accurate and beneficial for its audience, as well as protective of the humans who will be affected by the development of AI.
Our concern is that the Discussion Paper, as it stands, requires substantial re-writing to accurately reflect Australia’s legal privacy landscape.
The Discussion Paper’s definition of what is covered by the Privacy Act 1988 (Cth) (Privacy Act) is wrong; it fails to mention other critical privacy laws which could apply to organisations developing or applying ML or AI; its assumptions about what matters in the application of privacy law in practice are wrong; it misses the bulk of what is important; and it throws into the mix a random consideration which is unrelated to the discussion at hand and the risk of which is overstated.
This submission relates primarily to Chapter 3 of the Discussion Paper, titled ‘Data governance’, as this is our particular area of expertise. It also comments on the proposed Principles and Risk Assessment Framework in Chapter 7, in relation to privacy compliance and privacy risks.
This submission seeks to provide context for our answers to the following questions posed in the Discussion Paper:
- Are the principles put forward in the Discussion Paper the right ones? Is anything missing?
Our submission: There is so much missing in the description of privacy law as to be misleading. A sound ethical framework cannot be developed in the absence of a robust understanding of privacy law.
- As an organisation, if you designed or implemented an AI system based on these principles, would this meet the needs of your customers and/or suppliers? What other principles might be required to meet the needs of your customers and/or suppliers?
Our submission: No. The description of privacy law is so inaccurate as to be misleading.
- Would the proposed tools enable you or your organisation to implement the core principles for ethical AI?
Our submission: No. The fourth Principle at Chapter 7 introduces a concept not reflective of privacy law: “private data”. Believing privacy law is only about ‘private’ data is a common misunderstanding. Its repetition here will not assist your audience. This misconception could leave those entities engaging in AI and ML activities vulnerable to data breaches and sanctions.
Phrases such as “protected and kept confidential” relate only to a sliver of what is covered by privacy law, and are too vague to be edifying. It is not only data breaches or unauthorised disclosures which could cause privacy harm to a person. It is the very nature of data collection for ML development, or the re-purposing of existing datasets for ML development, or the application of algorithms derived from such data in AI or automated decision-making, which could generate privacy harms. Such activities underpin important decisions made by businesses and government that directly and significantly impact people’s lives. This has been either misunderstood or downplayed by the authors of these principles.
- Further, the Risk Assessment Framework at chapter 7.2 contains an over-reliance on ‘consent’ as if it were the sole mechanism by which the collection, use or disclosure of personal information may be lawfully authorised. Consent is not a magic bullet. It is not even the rule when it comes to collecting, using or disclosing personal information; it is the exception to the rule. Additionally, any consent must be freely given, informed, and specific: elements of validity that are very difficult to attain in the contexts described.
As such, the Risk Assessment Framework is seriously misleading for its users, and not fit for purpose as a risk assessment tool.
Getting the law right
The content of the section on privacy in Chapter 3 is so under-developed and inaccurate as to be misleading.
A discussion of ethics must begin with foundational concepts and accurate explanations of the law, and then move on to ethical dimensions which challenge how to apply the law in practice, or which ask difficult questions about topics which go beyond the requirements of the law.
This Discussion Paper does neither.
Its definition of what is covered by the Privacy Act is wrong; it fails to mention other critical privacy laws which could apply to organisations developing or applying AI or ML; its assumptions about what matters in the application of privacy law in practice are wrong; it misses the bulk of what is important; and it throws into the mix a random consideration which is unrelated to the discussion at hand and the risk of which is overstated.
In any final guidance to be offered on this topic, we suggest that the following errors must be corrected, at the very least:
- Privacy, while not defined in legislation, is defined in Office of the Australian Information Commissioner (OAIC) resource materials as ensuring that individuals have transparency and control regarding the way in which organisations and government handle their personal information, and can where practicable choose to interact with those entities without identifying themselves.
- Privacy laws in Australia cover ‘personal information’, not ‘personal data’.
- The Privacy Act is not the only privacy law which will apply to organisations working with AI. For instance, State and Territory-based privacy laws will apply to those working in the public university sector (other than ANU), and State and Territory-based health privacy laws will apply to organisations managing health information, whether or not those organisations are also covered by the federal Act or other State and Territory privacy laws. The European General Data Protection Regulation (GDPR) will apply to organisations in Australia which have an establishment in the EU, or which offer their goods or services to, or monitor the behaviour of, people in the EU.
- Although noted in the Bibliography at 58., we submit that a thorough review of the Office of the Victorian Information Commissioner (OVIC)’s Artificial intelligence and privacy: Issues paper (June 2018) in the formulation of this Discussion Paper would have assisted with terminology nuances and an understanding of the breadth of the privacy law landscape in Australia. The OVIC issues paper, for example, reflects considerations in relation to the State of Victoria’s privacy law and references the fair information handling principles underpinning that law (and others worldwide) first set out by the Organisation for Economic Co-operation and Development (OECD) in 1980.
- At the outset, the Discussion Paper makes no distinction between personal information and sensitive information, nor the differing legal requirements relating to the two. Personal information, in most privacy laws in Australia, has a sub-set known as sensitive personal information, to which higher standards apply. Using the language of ‘sensitive’ to describe all personal information confuses readers about the actual legal definitions, and the differing obligations that arise from those definitions.
- The Discussion Paper comments that there may be a need to explore what privacy means in the digital world. The authors of this paper may have benefitted from exploring the meaning of privacy and its iteration in the Australian privacy law landscape by reviewing the extensive research on this topic compiled by the Australian Law Reform Commission in its Report 108.
- In terms of exploring key concepts, we additionally submit that the authors of the Discussion Paper should further explore what consent means, particularly in terms of its proper application in existing privacy law (both in Australia and internationally) and the hallmarks of a true consent. The matters discussed in 3.1 (Consent and the Privacy Act) and the related sub-sections confuse the topic entirely.
- Consent is not the sole privacy issue, nor is it the sole solution to all privacy problems. “Protecting the consent process” is not “fundamental to protecting privacy”.
- If it is truly intended to collect, use and disclose personal information in accordance with the law (e.g., the Privacy Act), the authors of the Discussion Paper must first understand that consent is not ‘the rule’; it is an exception to the rule – the ‘rule’ here being the restrictions or limitations on collection, use and disclosure of personal information as set out in the law.
- Consent is just one of many exceptions that may be applied, as appropriate, during decision making processes.
- Privacy law creates privacy obligations covering the entire life cycle of handling personal information, and in many cases consent is utterly irrelevant. For a plain language explanation of the role of consent in privacy law, we refer you to a paper by one of the authors of this submission, ‘Why you’ve been drafting your Privacy Policy all wrong’.
- Consent is only explicitly required under the APPs at the time of collection where the information being collected is ‘sensitive’ personal information. More generally, consent may or may not be sought at the time of collecting personal information; it can also be sought later, before an unrelated secondary use or disclosure occurs. Nor is consent everlasting, whenever it was sought. Suggesting that consent must be gained at the time of collecting personal information conflates and confuses consent with the requirement to offer a ‘collection notice’, which is a separate legal obligation, unrelated to whether or not consent is needed for the proposed data use, and which is indeed required at the time of collection.
- The paper confuses the act of providing a collection notice, and the transparency obligations in privacy law more generally (which are about “making people aware”), with consent, which is a separate act of seeking permission or agreement to stray from the rules set out in privacy law.
- In any case, consent is not a ‘get out of jail free’ card and is a significant problem where there is misuse of personal information in a way that compromises individuals’ trust, even if otherwise lawful. The HealthEngine incident is a good example of this. Further, consent must be reviewed periodically in a way commensurate with the sensitivity and risk, so as to ensure that it remains current.
- Conceptually, consent is an even less appropriate means to authorise data flows in the context of AI than in other contexts. Consent would likely be ineffective where AI is concerned; most people would be unaware of the impacts of AI or its possible consequences and as a result ‘informed’ and ‘specific’ consent would be near impossible to achieve.
- The introduction of the topic of consent in the Discussion Paper, without any context for that discussion, raises an obvious question: consent to what? Any discussion about privacy laws must first explain what they actually do, which is to regulate both data flows and data governance.
- Data flows are regulated to the extent that privacy principles say when personal information can be collected, used or disclosed. For each of these, the privacy principles offer various grounds on which personal information may be lawfully collected, used or disclosed, and ‘with the consent of the individual’ is but one of those grounds. In the context of AI and ML, it is likely the least useful ground.
- Much more challenging for organisations developing or applying AI or ML are compliance with the rules around the use or disclosure of personal information for secondary purposes. The datasets on which ML is trained will almost certainly have been created in the first place for a primary operational purpose related to the individual (e.g. to treat a patient, to transport a passenger from A to B, to connect a customer’s phone call). Re-use of that dataset for training ML is a secondary use, unrelated to the primary operational purpose. The starting point in privacy law is that secondary uses are not allowed, unless an exception to that rule applies. ‘With the consent of the individual’ is one such exception, but is generally not pragmatic in the case of large datasets. There are other exceptions such as ‘for law enforcement purposes’ which will generally not apply, which leaves research exceptions as the most likely path for the development of ML in particular. However research exceptions (which differ in scope between the federal Privacy Act and State and Territory-based privacy laws) typically define ‘research’ narrowly; require elaborate processes to balance and test the ethical implications of allowing the secondary use or disclosure of personal information without consent; and raise additional questions about whether the proposed research is in the public interest, such as beneficence and impact on vulnerable populations. An ethical framework which fails to mention the process by which Human Research Ethics Committees must wrestle with the ethical implications of an AI or ML project, before allowing it to proceed lawfully, seriously underplays the legal and ethical requirements of organisations working in the AI or ML fields.
- Data governance includes the need for transparency, amongst other matters such as enabling rights of access and correction. This includes notice to individuals about how their personal information will be collected, used or disclosed. Notice is not the same as consent. The delivery of a meaningful notice poses considerable challenges in the context of the application of technologies quite removed from the individual whose personal information is at issue, such as AI and ML. This should be an important focus of any discussions around privacy and AI.
- The paper conflates the currency of consent with the absence of a ‘right to be forgotten’, as though consent can always be considered current unless someone has asked to be erased. This is just nonsense. The ‘right to be forgotten’, which is a unique feature of the EU General Data Protection Regulation (GDPR), is unrelated to the issue of consent. It is related to the rights of access and correction.
- The right to be forgotten is overstated in this Discussion Paper in terms of topics for organisations to worry about. Even in the GDPR, the right to be forgotten is not an absolute right, and should not impact on the business practices of companies which are only collecting or using personal information lawfully and fairly, and still have a current need for it.
- The right to be forgotten may not explicitly exist in the Australian Privacy Act, but APP 11.2 requires that personal information no longer needed for the purposes for which it was collected be destroyed or de-identified. Not doing so, irrespective of any specific request from an individual, would be a breach of the Act.
- The Facebook/Cambridge Analytica case study is less about consent, and more an illustration of the failure of either of those companies to adhere to legal limitations on the secondary use or disclosure of personal information, beyond the expectations of the individual.
- Describing the Facebook/Cambridge Analytica case study with language such as “this incident … demonstrates that it may not be sufficient to merely follow the letter of the law” implies that Facebook complied with privacy law when it allowed members to ‘consent’ on behalf of their friends to let Cambridge Analytica scrape and re-use their data, but that these practices somehow fell foul of ethical considerations beyond the law. This is seriously misleading: Facebook has already been found to have breached privacy law by the UK privacy regulator, and more recently by the Canadian privacy regulator, with investigations in other jurisdictions still open as at the time of writing, such as in the United States, where Facebook is reported to be expecting a fine of between $3 billion and $5 billion.
- The Discussion Paper contains a very poor description of the notifiable data breach (NDB) scheme and its requirements. In any case, the scheme provides only a retrospective remedy for breaches, whether or not they involve AI; and AI proceeds at such speed that significant damage is likely to be done in the time it takes for any breach to be detected and the AI activity ceased.
- The NDB scheme covers ‘personal information’, not ‘personal data’. Nor is the scheme limited to personal information: it also covers two other types of data, Tax File Numbers and credit-related information. And it is not limited to unauthorised access or disclosure; it also covers loss of relevant data.
- Organisations may also be subject to additional data breach notification schemes under other privacy laws, most notably the GDPR, which has a broader definition of what constitutes a data breach, and stricter timeframes for reporting.
- Hypothesising that human-error data breaches are indicative of ‘security gaps’ in the same way as malicious or criminal attacks is wildly misleading.
- The impending Consumer Data Right (CDR) would not significantly change the privacy landscape for the consumer, nor provide any effective safeguards or transparency in the specific use of AI. Consumers already have a right to access their information under the APPs, and the CDR would simply provide another mechanism by which businesses could share customer information and use it for a range of purposes, including those that utilise AI.
- The ‘Key Points’ at 3.5, by focusing only on ‘consent’ as a mechanism for resolving any and all privacy risks, fail to deliver even a baseline level of explanation about what organisations must do in order to meet their legal requirements, in terms of either authorising data flows, or enabling data subject rights as part of routine data governance.
- Critically, the proposed Principles outlined in Chapter 7 do not reflect the scope of privacy law, let alone grapple with ethical considerations beyond the law.
- Chapter 7 proposes the following Principle: “Privacy protection. Any system, including AI systems, must ensure people’s private data is protected and kept confidential plus prevent data breaches which could cause reputational, psychological, financial, professional or other types of harm to a person.”
- This introduces a concept not reflective of privacy law: “private data”. Believing privacy law is only about ‘private’ data is a common misunderstanding. Its repetition here will not assist your audience.
- Further, phrases such as “protected and kept confidential” relate only to a narrow understanding of what is covered by privacy law, and are too vague to be edifying. It is not only data breaches or unauthorised disclosures which could harm a person; it is the very nature of a data collection, its use for ML development, or its application in AI or automated decision-making, which could lead to privacy harms. We discuss this further below. This has been either misunderstood or downplayed by the authors of these principles.
Implementation issues
Further, in our view:
- The discussion of de-identification and re-identification is overly simplistic. Further, de-identification is not the panacea for compliance with privacy law, particularly if the de-identification is not permanent and irreversible. Where that is the case, de-identification is merely a protective measure, but does not remove the information from the obligations of privacy law, nor the community’s expectations about how their information should be used.
- The discussion of the risks from location data is overly simplistic. A more nuanced discussion could be developed from considering the fallout from the public release of Strava fitness data, as one example. For more on the privacy risks posed by location data, see the following papers by one of the authors of this submission:
- The Discussion Paper suggests a number of impractical solutions to legal and ethical problems. For example:
- It poses a strange and impractical distinction around ‘training data’, as though it could always be a discrete dataset and AI would not continue to use and learn from live, ongoing data.
- The suggestion that a Code could be applied to data scientists, whilst helpful, does not resolve the issue. AI is becoming so accessible that every employee with access to data is a ‘data scientist’, even though that is not their primary function.
- Similarly, the suggestion that AI systems can be isolated and regulated is impracticable. AI is becoming so embedded in regular business practices that ‘system’ regulation is very difficult.
- The Risk Assessment Framework at chapter 7.2 repeats the over-reliance on ‘consent’ as if it were the sole mechanism by which the collection, use or disclosure of personal information may be lawfully authorised.
- In terms of outcomes, the Discussion Paper does not express in any real terms how consent can be translated into a reliable governance approach; rather, it seems to presume a generic level of appropriateness associated with using a consent model, based on a view (however formed) that having consent will both address privacy compliance risks and allay privacy-related fears in the community. In this way, consent is treated as both the primary mechanism to (get the community to) allow a thing and the benchmark for success in terms of the proposed Risk Assessment Framework for AI Systems. On the former, it presupposes a level of engagement and sophistication within the populace from whom consent will be sought. On the latter, it fails to address the intricacies and risks in decision making where personal information is concerned. While it is clear that consent is intended to be a decisive, rigorous and universal proposition, in the context of this Discussion Paper it appears to be an ill-considered broad brush approach to a complex area of public policy.
- Reliance on consent also pre-supposes that data scientists know what data they will be using and what insights or results it will generate. Most often, they don’t. They start with an unfettered dataset and let the AI create and apply algorithms from and to the data. You can’t get valid pre-consent from individuals for the future use of their personal information when the data scientists don’t even know what they’re looking for.
- Consent is not a magic bullet. As such, the Risk Assessment Framework is seriously misleading for its users, and not fit for purpose as a risk assessment tool.
Ethical considerations in relation to privacy
Once you have guidance which at least starts with an accurate description of the law, then you could move on to ethical considerations.
The ethical issues considered in the Discussion Paper need greater breadth and depth.
An example of an ethical issue involving privacy in the context of ML and AI is how the data used to train ML was obtained in the first place. For example, the questionable ethics of scraping personal information from the web was recently highlighted in an NBC News investigation of IBM’s development of facial recognition algorithms from photos taken, without either the subjects’ or the copyright owners’ consent, from the photo-sharing site Flickr.
Another example of ethical issues in the collection of the data used to train ML is the recent revelation that humans listen to and transcribe conversations heard by digital assistants.
Any discussion of these types of examples must however start with the recognition that what may be lawful (if unethical) behaviour by technology companies in the United States would not necessarily be lawful in other parts of the world with more comprehensive privacy laws, such as Australia.
An example of an ethical issue not covered by privacy law is the practice of individuating individuals without identifying them. The rights protected by privacy laws currently stop at the point where an individual can be claimed to be no longer identifiable. Leaving aside the vexed question of whether data can ever be truly described as unidentifiable, privacy harms can still be done to humans, such as through targeting an individual for communication, intervention, or denial of goods or services, even when the human is not identifiable. Organisations involved in the development or application of AI or ML must grapple with the ethical implications of activities which can cause privacy harms, even if legal.
For more on the topic of individuation, see the paper by one of the authors of this submission: ‘Individuation – Re-thinking the scope of privacy laws’.
The extent to which predictive analytics can narrow or alter people’s life choices, in ways that are not necessarily transparent to the affected individual, must also be more comprehensively considered in any serious discussion of ethical issues in ML and AI. For more on the topic of predictive analytics, see the paper by one of the authors of this submission: ‘How do you solve a problem like Facebook?’.
We also refer you to the work of international legal scholar Daniel Solove, who has written extensively on the taxonomy of privacy harms.
Discussion of this topic must also grapple with issues of community expectations around privacy, and the importance of gaining a social licence for the use of people’s personal information or impacts on their privacy. Thus the objective of any research or development activity is relevant, as will be the likely applications of that development in the real world. The use of ML to train AI applications to detect treatable cancers more effectively than humans or current technologies can, would likely sit high on a measure of social licence, while the use of AI or ML in other scenarios, from how to prioritise child protection interventions, to decisions around policing, bail or sentencing, to which potholes should be fixed first, will be more problematic.
We note that the new Ethics Guidelines for Trustworthy AI from the European Commission have a more definitive position, which embraces a fuller understanding of existing privacy law, and the need to adopt a protective position. The Commission’s Privacy summary says: “Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.”
Further, where the ethical principles are concerned, the principle of ‘Do No Harm’ is poorly named and described, as well as impracticable. ‘Do no harm’ is not the same as ‘minimise harm’, nor the same as ‘design without any intention of harm’, as the definition in the paper suggests. AI and ML activities often have no indicators of harm until they are applied in the real world.
Conclusion
Downplaying or inaccurately describing legal requirements does not assist those working in the AI/ML fields to understand where their legal requirements end, and their ethical requirements start.
It is our submission that the Risk Assessment Framework is seriously misleading for its users, and not fit for purpose as a risk assessment tool.
By presenting ‘consent’ as a mechanism for resolving any and all privacy risks, this Discussion Paper fails to deliver even a baseline level of explanation about what organisations must do in order to meet their legal requirements, in terms of either authorising data flows, or enabling data subject rights as part of routine data governance. We suggest that for the most part, consent will be irrelevant to the development of ML or AI technologies, and that other privacy compliance considerations come into play. These privacy compliance requirements demand nuanced solutions, not a misplaced faith that ‘getting consent will fix everything’. Consent is not a magic bullet.
Once you have guidance which starts with an accurate description of the law, then you could move on to ethical considerations which help flesh out the application of the law in practice, or which grapple with ethical considerations beyond legal requirements.
We suggest that the CSIRO and Department of Industry should engage with privacy regulators, and practitioners with specialist expertise in privacy law and practice, to assist in a redrafting of the Principles and Framework, as well as the contextual discussion underpinning them.
This submission was authored by:
- Anna Johnston, Director of Salinger Privacy, and former Deputy Privacy Commissioner of NSW, CIPM, CIPP/E, FIP;
- Nicole Stephensen, Principal of Ground Up Consulting, and Executive Director (Privacy and Data Protection) at the Internet of Things Security Institute; and
- Nicole Hunt, Privacy and Ethics Specialist, former Director of Privacy for the Australian Digital Health Agency, Senior Privacy Advisor for the NBN, and Deputy Director at the Office of the Privacy Commissioner.
Each author is an experienced privacy specialist.
In addition, the following individuals, also experts in the privacy field, lend their name to this submission, in order that CSIRO and the Department of Industry appreciate the importance of accurate and nuanced discussion of privacy law and privacy-related ethical dimensions in any guidance, principles or risk assessment frameworks being developed for industry and academia working in the fields of machine learning and artificial intelligence.
- Malcolm Crompton, AM, FAICD, CIPP. Privacy Commissioner of Australia 1999-2004, Founder and Lead Privacy Advisor to Information Integrity Solutions Pty Ltd
- Melanie Marks, Principal, elevenM Consulting
- Sophie Bradshaw, Principal, Elgin Legal
- Dr Monique Mann, Vice-Chancellor’s Research Fellow, Technology and Regulation, Faculty of Law, Queensland University of Technology
- Kara Kelly LLB, CIPM
- Dr Roger Clarke, Visiting Professor, UNSW Faculty of Law, Visiting Professor, ANU Computer Science, and Principal, Xamax Consultancy Pty Ltd
- Stephen Wilson, Principal of Lockstep Consulting
- Nathan Mark, BA LLB, LLM Research Student focussing on inter-jurisdictional data and digital evidence
- Andrea Calleia, Privacy Learning Manager, Salinger Privacy, CIPM
- Nathan Kinch, Co-founder and CEO of Greater Than X and inventor of Data Trust by Design
- Mathew Mytka, Chief Platform Officer of Greater Than X, former Head of Platform Product at Meeco
Photograph (c) Adobe Stock