Man made software in His own image

July 23, 2015, Stephen Wilson

In 2002, a couple of Japanese visitors to Australia swapped passports with each other before walking through an automatic biometric border control gate being tested at Sydney airport. The facial recognition algorithm falsely matched each of them to the other's passport photo. These gentlemen were in fact part of an international aviation industry study group, and were in the habit of trying to fool the biometric systems then being trialled around the world.

When I heard about this successful prank, I quipped that the algorithms were probably written by white people – because we think all Asians look the same. Colleagues thought I was making a typical sick joke, but actually I was half-serious. It did seem to me that the choice of facial features thought to be most distinguishing in a facial recognition model could be culturally biased.

Since that time, border control face recognition has come a long way, and I have not heard of such errors for many years. Until today.

The San Francisco Chronicle of July 21 carries a front-page story about the cloud storage services of Google and Flickr mislabelling some black people as gorillas (see updated story). It's quite an incredible episode. Google has apologised. Its Chief Architect of Social, Yonatan Zunger, seems mortified, judging by his tweets as reported, and is investigating.

The newspaper report quotes machine learning experts who suggest programmers with limited experience of diversity may be to blame. That seems plausible to me, although I wonder where exactly the algorithm R&D gets done, and how much control is to be had over the biometric models and their parameters along the path from basic research to application development.

So man has literally made software in his own image.

The public is now being exposed to Self Driving Cars, which are heavily reliant on machine vision, object recognition and artificial intelligence. If this sort of software can’t tell people from apes in static photos given lots of processing time, how does it perform in real time, with fleeting images, subject to noise, and with much greater complexity? It’s easy to imagine any number of real life scenarios where an autonomous car will have to make a split-second decision between two pretty similar looking objects appearing unexpectedly in its path.

The general expectation is that Self Driving Cars (SDCs) will be tested to exhaustion. And so they should. But if cultural partiality is affecting the work of programmers, it's possible that testers suffer the same blind spots without knowing it. Maybe the offending photo labelling programs were never verified with black people. So how are the test cases for SDCs being selected? What might happen when an SDC ventures into environments and neighbourhoods where its programmers have never been?
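One way to make such blind spots visible, at least at the level of test data, is to audit the test set itself: count how many cases fall into each context (demographic group, lighting condition, neighbourhood type) and compare error rates across them. A context that never appears in the test set yields no error statistics at all, which is exactly the blind spot described above. The sketch below is purely illustrative; the TestCase fields and the classify() stub are hypothetical placeholders, not drawn from any real vendor's test harness.

```python
# Minimal sketch: auditing a classifier's test set for coverage and
# per-context error rates. TestCase and classify() are hypothetical
# stand-ins, not any real SDC or photo-labelling API.
from collections import Counter, defaultdict
from dataclasses import dataclass

@dataclass
class TestCase:
    image_id: str
    true_label: str   # e.g. "pedestrian", "animal", "road sign"
    context: str      # e.g. demographic group, lighting, neighbourhood type

def classify(image_id: str) -> str:
    """Stand-in for the model under test."""
    raise NotImplementedError

def audit(test_cases):
    coverage = Counter(tc.context for tc in test_cases)
    errors = defaultdict(int)
    totals = defaultdict(int)
    for tc in test_cases:
        totals[tc.context] += 1
        if classify(tc.image_id) != tc.true_label:
            errors[tc.context] += 1
    for ctx in sorted(totals):
        rate = errors[ctx] / totals[ctx]
        print(f"{ctx}: {totals[ctx]} cases, error rate {rate:.1%}")
    # Contexts absent from `coverage` are the real blind spots:
    # the test run says nothing about them at all.
    return coverage
```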

Everybody in image processing and artificial intelligence should be humbled by the racist photo labelling. With the world being eaten by software, we need to reflect really deeply on how such design howlers arise. And, frankly, we need to double-check whether we're ready to let computer programs take the wheel.

Photograph © Shutterstock
