How do you design a trustworthy system?
We help our clients build fairness, ethics, accountability and transparency (‘FEAT’) into AI projects, algorithmic systems and automated decision-making.
Our Algorithmic Impact Assessments use the Four D's Framework to assess privacy risk across the four stages of design, data, development, and deployment.
For an assessment of an algorithmic system to be robust, it should encompass:
- Legal compliance – ensure the algorithmic system is lawful, with particular focus on privacy, anti-discrimination, and consumer protection laws
- Social impacts – consider the social, political, and economic context for a deeper appreciation of potential privacy-related harms, and
- Technical considerations – integrate testing for accuracy, performance, fairness and bias.
As well as examining the various types of privacy-related harm that can arise from algorithmic systems, we partner with the Gradient Institute to test for accuracy, performance, fairness and bias in training data, ML models and AI systems.
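To make the fairness-and-bias testing mentioned above concrete, here is a minimal, illustrative sketch of one common check an assessment might run: the demographic parity difference, i.e. the gap in favourable-outcome rates between two demographic groups. The function names and example data are hypothetical, and this is not a description of any specific methodology used by us or the Gradient Institute.

```python
def selection_rate(outcomes):
    """Fraction of positive (e.g. 'approved') decisions in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in selection rates between two demographic groups.

    0.0 means the model selects both groups at the same rate; larger
    values flag a potential disparate-impact concern worth human review.
    """
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Hypothetical binary decisions (1 = favourable outcome) for two groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # selection rate 0.625
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # selection rate 0.25
print(demographic_parity_difference(group_a, group_b))  # 0.375
```

A real assessment would apply several such metrics (and their legal thresholds, such as the four-fifths rule in some jurisdictions) across all relevant protected attributes, alongside the legal and social analysis described above.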
For more about the Four D’s Framework, see Algorithms, AI, and Automated Decisions – A guide for privacy professionals.