
Should Technology Companies Exit Facial Recognition Business?

By Hubert Yoshida posted 06-11-2020 20:55

  



CNN has reported that IBM is canceling its facial recognition programs and calling for an urgent public debate on whether the technology should be used in law enforcement. While debate on whether and how facial recognition should be used in law enforcement is critical, cancelling facial recognition programs will deny society a technology that can deliver significant social and economic benefits in terms of identification (who are you?), authentication (are you really who you say you are?), and well-being (how are you feeling?). Facial recognition is already used to access applications, prevent crime, protect restricted areas, diagnose disease, and provide other valuable services.

 

The primary concerns about facial recognition are that it can be used at any time without seeking one’s permission to capture one’s image, and that once an image is captured there is little one can do to control its sharing, use, and alteration. The more immediate concern is the use of facial recognition for racial profiling, mass surveillance, and the identification of protestors for possible retaliation. These concerns should be addressed by legislation such as the Commercial Facial Recognition Privacy Act of 2019, proposed in Washington, D.C., in March 2019, which would prohibit commercial users of facial recognition technology from collecting and resharing data for identifying or tracking consumers without their consent.

 

Technologies such as pixelation and lidar can also be applied to facial recognition to anonymize the identity of the individual while still monitoring activity for safety and security. Hitachi’s Smart Video solutions can protect privacy and improve transparency even as they gather and analyze vast amounts of data. Hitachi uses AI software that automatically obscures and protects people in surveillance videos in real time through pixelation, while their movements and actions remain recognizable. The sketch below illustrates the general pixelation technique.

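The following is a minimal sketch of face pixelation using OpenCV's bundled Haar cascade face detector. It illustrates only the general anonymization technique described above; it is not Hitachi's Smart Video implementation, and the detector, block size, and video source are illustrative assumptions.

```python
# Minimal sketch of face pixelation for privacy, using OpenCV's bundled
# Haar cascade face detector. Illustrative only; not Hitachi's product code.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def pixelate_faces(frame, blocks=12):
    """Return a copy of the frame with every detected face pixelated."""
    out = frame.copy()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        face = out[y:y + h, x:x + w]
        # Downscale to a coarse grid, then upscale: identity is obscured,
        # but the person's position and movement remain visible.
        small = cv2.resize(face, (blocks, blocks), interpolation=cv2.INTER_LINEAR)
        out[y:y + h, x:x + w] = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
    return out

# Example: anonymize a live video stream frame by frame.
cap = cv2.VideoCapture(0)          # webcam, or a video file path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("anonymized", pixelate_faces(frame))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```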
 

 

In 2018 there were many articles published about the racial biases of facial recognition. A 2018 MIT study found that, when determining gender using three different facial recognition programs, the error rate for light-skinned men was 0.8%, while darker-skinned women were misgendered 20% to 34% of the time. Also in 2018, Amazon’s facial recognition tool, Amazon Rekognition, misidentified 28 members of Congress as people who had been arrested for crimes. When comparing the lawmakers’ faces against mugshot databases, Rekognition misidentified lawmakers of color at a rate of 39%, even though they made up only 20% of Congress. Errors like these could lead to disastrous consequences, such as denial of economic benefits, false imprisonment, or even police brutality. But the problem lies in the accuracy of the facial recognition algorithms and their training data, and correcting these errors in the technology could help make facial recognition beneficial.

 

Facial recognition algorithms improve by being supplied large datasets of faces. Existing facial recognition products work well on “pale males” because the algorithms were supplied datasets composed mostly of White men, reflective of the tech industry itself. An MIT and Stanford University study found that a widely used training set was more than 77% men and 83% White people. They also found that algorithms developed in Asia tend to perform well on Asian males and not as well on White males. Hitachi Vantara, being a global company, eliminates demographic biases in its training data.

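As one illustration of how dataset composition can be checked, the sketch below tallies the demographic breakdown of a hypothetical labeled face dataset. The file name, column names, and thresholds are assumptions for illustration only; they do not describe any specific product's training data.

```python
# Hypothetical sketch: audit the demographic composition of a labeled
# face dataset. The CSV file and its "gender" / "skin_tone" columns are
# assumed for illustration; real metadata schemas will differ.
import pandas as pd

labels = pd.read_csv("face_dataset_labels.csv")   # one row per face image

for column in ("gender", "skin_tone"):
    share = labels[column].value_counts(normalize=True).mul(100).round(1)
    print(f"{column} composition (%):\n{share}\n")

# A heavily skewed composition (e.g. >77% one gender or >83% one skin tone,
# as reported in the MIT/Stanford study) signals that a model trained on it
# may underperform on the underrepresented groups.
```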
 

Fortunately, those results from 2018 are being corrected. A NIST (National Institute of Standards and Technology) report published at the beginning of 2020 tested over 200 facial recognition algorithms and found them to be highly accurate, with vanishingly small differences in their rates of false-positive and false-negative readings across demographic groups. NIST found that the most accurate algorithms did not display a significant demographic bias. For example, 17 of the highest performing verification algorithms had similar levels of accuracy for Black females and White males: false-negative rates of 0.49 percent or less for Black females (equivalent to an error rate of less than 1 in 200) and 0.85 percent or less for White males (equivalent to an error rate of less than 1.7 in 200). The sketch below shows how such per-group false-negative rates are computed.

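To make the arithmetic behind those figures concrete, here is a small sketch that computes a false-negative rate per demographic group from verification results. The data structure and group labels are hypothetical; an FNR of 0.49% corresponds to roughly 1 missed match in 204 genuine comparisons, i.e. less than 1 in 200.

```python
# Hypothetical sketch: per-group false-negative rate (FNR) for a face
# verification algorithm. Each record is a genuine comparison (same person)
# with the person's demographic group and whether the algorithm accepted it.
genuine_comparisons = [
    # (group, match_accepted)
    ("Black female", True), ("Black female", False), ("White male", True),
    # ... many more records in a real evaluation ...
]

def false_negative_rates(records):
    """FNR per group = rejected genuine matches / all genuine matches."""
    totals, misses = {}, {}
    for group, accepted in records:
        totals[group] = totals.get(group, 0) + 1
        if not accepted:
            misses[group] = misses.get(group, 0) + 1
    return {g: misses.get(g, 0) / totals[g] for g in totals}

for group, fnr in false_negative_rates(genuine_comparisons).items():
    # e.g. an FNR of 0.0049 (0.49%) means about 1 missed match in 204 attempts.
    print(f"{group}: FNR = {fnr:.2%}")
```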
 

Mark Jules, Hitachi Vantara’s Global VP of Smart Spaces and Video Intelligence, had this to say about video analytics and privacy in a recent blog:

 

“However, people and political systems also play a critical role in protecting individual privacy and civil liberties. Governments and companies have long possessed tools that—if misused—could lead to negative outcomes. It is our collective responsibility to ensure that our governments, businesses and society use technology responsibly and for good purposes.

Video is just another tool in our collective societal toolbox to help uncover immensely valuable insights that can make our cities, businesses, public spaces and services safer and work better. The combination of technology and policy protections provides us with an incredible opportunity to transform our world into one that is safer, more sustainable and prosperous. We hope our industry peers will join Hitachi in doing our part to make this world possible.”


#Hu'sPlace
#Blog