The hidden dangers of facial recognition technology

As artificial intelligence becomes more prevalent in society, computers are increasingly making autonomous decisions that affect us all. Not surprisingly, it turns out that computer software can be just as biased in its decision-making as the humans who program it.

In a recent New York Times opinion piece, MIT researcher Joy Buolamwini wrote that artificial intelligence can “reinforce bias and exclusion, even when it’s used in the most well-intended ways.” She cited her own research on facial analysis technology from IBM, Microsoft and Face++, noting that “on the simple task of guessing the gender of a face, all companies’ technology performed better on male faces than on female faces and especially struggled on the faces of dark-skinned African women.”

Last week Microsoft responded to these concerns, announcing that it has reduced the error rates of its facial recognition technology by up to 20 times “for men and women with darker skin” and by nine times for all women.

It was a critical fix to make, because the use of facial analysis technology is rapidly increasing globally. Here in New Zealand, there are already examples of the technology being used by government departments and prominent local corporations.

In February the Ministry for Primary Industries announced that an AI avatar, called Vai (Virtual Assistant Interface), will help international visitors arriving at Auckland Airport. MPI’s press release called the avatar “a digital biosecurity officer.”

The technology behind Vai was provided by Kiwi company FaceMe, which noted in a blog post that “FaceMe’s avatar technology uses biometrics to learn human interactions and will interact accordingly to ease the customer’s experience.”

It’s important to point out that Auckland Airport’s Vai system doesn’t use facial recognition technology to identify individual people. However, FaceMe did confirm to me that its technology is capable of facial recognition and that it is working with other customers to implement it.

FaceMe is very aware of the privacy issues around facial recognition software. Chief Operating Officer Bradley Scott said by email that FaceMe has “applied the principle of ‘Privacy by design’ and [we] aim to meet the highest global regulatory standards for privacy protection.” He said the company has “paid particular attention to the new GDPR standards from the European Union.”

You may ask what’s new about Auckland Airport using FaceMe’s software to analyse biometric data about a person. After all, New Zealand Immigration already uses biometrics. It collects and stores photographs and fingerprints for everyone entering the country.

The difference is that facial recognition software goes much further – it scans a person’s face, maps its distinguishing features into a set of data points, and then analyses that data using AI.

Like Auckland Airport, Air New Zealand is also using software capable of interpreting biometric data from people. Its prototype digital customer service rep, Sophie, was created by Auckland company Soul Machines. In a blog post last September, Soul Machines claimed that Sophie exhibited “advanced emotional intelligence and responsiveness as she answered questions about New Zealand as a tourist destination and the airline’s products and services.”

I reached out to Soul Machines Chief Business Officer, Greg Cross, to find out more. He first clarified that the company’s technology isn’t being used for “biometric identification.” In other words, the software does not identify individuals. However, Soul Machines does gather “biometric signals” – data which it has termed “Emotional Intelligence.”

The goal of Soul Machines’ digital humans is to respond appropriately, in real-time, to real humans. Cross said that the technology includes “key sensory systems, brain models [and] virtual neurotransmitters which allow for real-time responsiveness.”

“If you smile at our Artificial Humans, they will smile back in the same way we respond as humans,” Cross told me.

Like FaceMe, Soul Machines is keen to show it is protecting the privacy of the actual humans its avatars encounter. “We believe that consumers should own and control their own data,” Cross said, “and should be able to choose who, when and how much they choose to share at any time.”

You may think you’ll never willingly share your biometric data – especially for identification – given the concerns about bias and privacy outlined above. But we’re entering an era where there will be potentially big benefits to sharing your biometric data. You may be more easily swayed than you think.

For one thing, your biometric data adds an extra layer of security to your online interactions. Why use easy-to-hack passwords or easy-to-forget PINs when a computer can verify your identity simply by scanning your face?

Facial analysis might also speed up your shopping expeditions.

In China, Alibaba has developed a “smile to pay” system for KFC restaurants. A customer simply smiles at a 3-D camera after placing an order. Their face is then scanned and payment is verified through the Alipay app.

Why would these KFC customers allow a facial recognition system to scan their faces? Perhaps because it’s easy and safer than carrying around a wallet. If you’re still not convinced, consider this statistic: according to a Visa survey last year, 86 percent of respondents were “interested in using biometrics to verify identity or to make payments.” Ease of use and better security were cited as the two main reasons.

Also consider that internet technology has a history of getting users to willingly cough up personal data in exchange for benefits such as ease of use, utility and social networking. Google and Facebook are prime examples. Both corporations know a tremendous amount about our personal lives, yet most of us continue to use their products. We do so because Google is so darn useful and Facebook connects us to other people. It’s not a huge leap from inputting your personal information into Facebook to using facial recognition software to buy a bucket of fried chicken.

While we haven’t yet implemented that level of facial analysis technology in New Zealand, last month it was revealed that our largest supermarket company is using facial recognition software to identify shoplifters. Foodstuffs, which runs the New World, Pak’nSave and Four Square brands, admitted to using the technology in some North Island stores, which it has declined to name.

Of course, there’s a difference between knowingly using biometrics (as in the Alibaba example) and not being aware you’re being scanned (as in the Foodstuffs example). But either way, there are real bias and privacy dangers that shouldn’t be ignored as facial analysis software becomes more widespread in our society.