Businesses and institutions need professionals who can evaluate AI, curate standards that apply to their enterprises, and implement strategies for complying with applicable laws and regulations.
The AI Governance Professional (AIGP) training course prepares you with baseline knowledge and strategies for responding to complex risks associated with the evolving AI landscape. This training program meets the rapidly growing need for professionals who can develop, integrate and deploy trustworthy AI systems in line with emerging laws and policies.
An AIGP-trained and certified professional will know how to implement, and effectively communicate across teams, emerging best practices and rules for the responsible management of the AI ecosystem.
The AIGP program was developed by the International Association of Privacy Professionals (IAPP), the world’s largest and most comprehensive global information privacy community.
Course Overview
This comprehensive course is led by our expert facilitator. Learn alongside like-minded peers to prepare thoroughly for the AIGP exam. See below for the program in detail.
This course will enable you to:
- Understand the technological foundations of artificial intelligence and the AI development lifecycle
- Evaluate AI’s effects on people and apply appropriate principles
- Know how current and emerging laws apply to AI systems
- Implement responsible AI governance and risk management, and
- Contemplate ongoing issues and concerns.
What’s included
- The benefit of learning in an environment where you can share first-hand experiences and questions with privacy-experienced peers
- Guidance from our expert facilitator
- Comprehensive course notes in a Participant Guide (digital copy + hard copy shipped within Australia and to NZ)
- Exam voucher, and
- IAPP Membership
Note: Your exam can be scheduled at a date and location to suit you, and online proctoring is also available. If you are already an IAPP Member, your membership will be extended by 12 months upon registering for this course.
Who should attend?
- Anyone involved with implementing AI governance and risk management in their organisation
- Privacy Officers
Next available programs
ONLINE (LIVE VIRTUAL CLASSROOM) – The course runs over 4 half-days.
NEXT DATES: 11 + 13 + 18 + 20 March 2025. 9am to 12:30pm each day, AEDT.
Early bird pricing ends 28 January 2025: $2,800 + GST per person
Book in 3+ participants together for a 5% discount, automatically applied at online checkout.
Please contact us to request a quote for training a team in-house, across Australia or New Zealand, or subscribe to our newsletter to be notified of future dates.
Our Facilitator
Salinger Privacy is a market leader in privacy training, consulting and pragmatic compliance tools. Salinger Privacy was established in 2004 by Anna Johnston, one of Australia’s foremost experts on privacy law and practice. Salinger Privacy has delivered training on behalf of the Australian, NSW and Victorian Privacy Commissioners, and the International Association of Privacy Professionals, Australia / New Zealand.
This training will be facilitated by Andrea Calleia, our Director of Learning. Andrea has extensive experience in the learning and development field, and has specialised in developing and delivering privacy training since 2003. Andrea managed the education program for the NSW Privacy Commissioner’s Office, and on behalf of Salinger Privacy facilitates Privacy Officer training for the Office of the Australian Information Commissioner. Andrea has sat on the IAPP’s ANZ Advisory Board since 2020, to promote and serve the privacy profession in Australia and New Zealand.
The program in detail
The course is broken into eight modules:
Module 1: Foundations of artificial intelligence
Defines AI and machine learning, presents an overview of the different types of AI systems and their use cases, and positions AI models in the broader socio-cultural context.
Module 2: AI impacts on people and responsible AI principles
Outlines the core risks and harms posed by AI systems, the characteristics of trustworthy AI systems, and the principles essential to responsible and ethical AI.
Module 3: AI development life cycle
Describes the AI development life cycle and the broad context in which AI risks are managed.
Module 4: Implementing responsible AI governance and risk management
Explains how major AI stakeholders collaborate in a layered approach to manage AI risks while acknowledging AI systems’ potential societal benefits.
Module 5: Implementing AI projects and systems
Outlines mapping, planning and scoping AI projects, testing and validating AI systems during development, and managing and monitoring AI systems after deployment.
Module 6: Current laws that apply to AI systems
Surveys the existing laws that govern the use of AI, outlines key GDPR intersections, and provides awareness of liability reform.
Module 7: Existing and emerging AI laws and standards
Describes global AI-specific laws and the major frameworks and standards that exemplify how AI systems can be responsibly governed.
Module 8: Ongoing AI issues and concerns
Presents current discussions and ideas about AI governance, including awareness of legal issues, user concerns, and AI auditing and accountability issues.