Facial-recognition technology is showing great promise when it comes to security, unless you are a person of color, and especially if you are a Black woman or an older woman.
A new study from the National Institute of Standards and Technology shows that facial-recognition systems perform markedly worse on Black, Asian and Native American faces than on white faces, CNET reports.
The technology generated more false positives in one-to-one matching, the task of deciding whether two photos show the same person, which is used for things like unlocking a cell phone or checking a passport photo, according to CNET.
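To make the one-to-one idea concrete: such systems typically reduce each photo to a numeric embedding and declare a match when the two embeddings are similar enough. The sketch below is a minimal illustration only, not the method used by any algorithm in the NIST study; the embedding size, threshold value and function names are all assumptions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(emb_a: np.ndarray, emb_b: np.ndarray,
                   threshold: float = 0.6) -> bool:
    """One-to-one verification: declare a match if the two embeddings
    are similar enough. The 0.6 threshold is a made-up placeholder;
    real systems tune it to balance false positives against false negatives."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# Hypothetical usage: in practice these embeddings would come from a
# face-recognition model, not random numbers.
enrolled = np.random.rand(128)   # e.g., the passport photo's embedding
probe = np.random.rand(128)      # e.g., the live camera capture's embedding
print(is_same_person(enrolled, probe))
```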
The study included algorithms from high-profile companies such as Microsoft, Intel and Panasonic. Amazon, whose Rekognition software has been criticized for racial and gender bias, did not submit its algorithm for testing, CNET reports.
The testing showed that recognition errors were even more common for women’s faces and among the elderly.
“While it is usually incorrect to make statements across algorithms, we found empirical evidence for the existence of demographic differentials in the majority of the face recognition algorithms we studied,” Patrick Grother, a computer scientist with the institute and the report’s lead author, said in a statement provided to CNET.
“While we do not explore what might cause these differentials, this data will be valuable to policy makers, developers and end users in thinking about the limitations and appropriate use of these algorithms,” Grother said.
The report also noted higher rates of false positives for Black women in one-to-many searches, which Grother said could lead to false accusations by law enforcement.
“In a one-to-one search, a false negative might be merely an inconvenience — you can’t get into your phone, but the issue can usually be remediated by a second attempt,” he said. “But a false positive in a one-to-many search puts an incorrect match on a list of candidates that warrant further scrutiny.”
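Grother’s distinction between false negatives and false positives comes down to where a system sets its match threshold. The following sketch, using entirely synthetic similarity scores, illustrates the trade-off he describes: raising the threshold cuts false positives but produces more false negatives.

```python
import numpy as np

def error_rates(scores_same, scores_diff, threshold):
    """Compute false-negative and false-positive rates at a given threshold.
    scores_same: similarity scores for genuine pairs (same person);
    scores_diff: similarity scores for impostor pairs (different people)."""
    fnr = np.mean(np.asarray(scores_same) < threshold)   # genuine pairs rejected
    fpr = np.mean(np.asarray(scores_diff) >= threshold)  # impostor pairs accepted
    return fnr, fpr

# Synthetic scores for illustration: genuine pairs cluster high,
# impostor pairs cluster low.
rng = np.random.default_rng(0)
genuine = rng.normal(0.75, 0.1, 1000)
impostor = rng.normal(0.35, 0.1, 1000)

for t in (0.4, 0.5, 0.6):
    fnr, fpr = error_rates(genuine, impostor, t)
    print(f"threshold={t}: false-negative rate={fnr:.3f}, "
          f"false-positive rate={fpr:.3f}")
```

In the NIST results, these error rates were not uniform across demographic groups, which is precisely the differential the report documents.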
For the report, the institute analyzed more than 18 million photos of 8.5 million people, drawn from databases provided by the State Department, the Department of Homeland Security and the FBI, CNET reported.