Across the world and within the United States, the use of facial recognition technology is on the rise. Touted as a powerful security tool, facial recognition has been rolled out nationwide with little fanfare, and most citizens are unaware they are being surveilled.
Yet one critical issue remains largely unaddressed: facial recognition relies on algorithms that do not identify all faces equally well. The software has demonstrated clear racial bias, with some algorithms up to 100 times better at identifying white people than members of other groups.
RELATED: Detroit activists aim to ban racist facial recognition software
In December 2019, the National Institute of Standards and Technology (NIST), the US federal agency that, among other roles, benchmarks facial recognition systems, published a report examining 189 algorithms from 99 developers around the world, focusing on how well each one identified people from various demographics.
The agency’s testing showed that many programs were 10 to 100 times likelier to misidentify East Asian and Black faces than white ones. The algorithms struggled with Black women’s faces in particular, frequently matching the face in question to the wrong images in their databases.
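Disparities like these are typically expressed as false match rates (FMR) computed separately for each demographic group and then compared as a ratio. As a rough illustration only (the group labels, similarity scores, and threshold below are invented for this sketch and are not NIST data or methodology), a per-group comparison might be computed like this:

```python
# Minimal sketch: comparing false match rates (FMR) across demographic
# groups. All scores here are invented; real evaluations such as NIST's
# use millions of images and carefully calibrated thresholds.

def false_match_rate(impostor_scores, threshold):
    """Fraction of impostor (non-matching) pairs scored at or above the
    threshold, i.e. wrongly accepted as a match."""
    false_matches = sum(1 for s in impostor_scores if s >= threshold)
    return false_matches / len(impostor_scores)

# Hypothetical similarity scores for impostor (non-matching) face pairs,
# grouped by demographic.
impostor_scores_by_group = {
    "group_a": [0.10, 0.20, 0.15, 0.30, 0.95],  # 1 score >= 0.9
    "group_b": [0.40, 0.92, 0.95, 0.35, 0.91],  # 3 scores >= 0.9
}

threshold = 0.9
fmr = {group: false_match_rate(scores, threshold)
       for group, scores in impostor_scores_by_group.items()}

# Ratio of FMRs: how many times likelier one group is to be misidentified
# than the other at the same threshold.
ratio = fmr["group_b"] / fmr["group_a"]
print(fmr)              # {'group_a': 0.2, 'group_b': 0.6}
print(round(ratio, 2))  # 3.0
```

The key point the sketch captures is that a single global threshold can yield very different error rates for different groups, which is the kind of gap the NIST report quantified.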
The report is the third of several assessments in NIST’s Face Recognition Vendor Test, a program aimed at discovering the capabilities of different in-use algorithms. Craig Watson, an Image Group manager at NIST, told Scientific American that the reports were intended to “inform meaningful discussions and to provide empirical data to decision-makers, policymakers, and end-users to know the accuracy, usefulness, capabilities, and limitations of the technology.”
Despite the disparities facial recognition technology has shown across demographics, it is still used in the United States. Digital rights advocacy group Fight for the Future published a map offering a visual representation of how often US law enforcement agencies use the software to comb through millions of photos of Americans, often without consent.
While the use of facial recognition software in airports isn’t particularly surprising, examples such as Baltimore police using the technology to identify and detain people at protests are less expected and more worrisome, both from a privacy perspective and given the greater risk that people of color will be misidentified.
Protests against facial recognition in the US have led to some positive outcomes, including pushback against the technology on university campuses at Harvard, Columbia, and UCLA, among others.
Cities are also leading the change. San Francisco became the first US city to ban facial recognition technology, citing the threat to its citizens’ civil rights and liberties, and Somerville, Massachusetts, and Oakland, California, followed suit. More recently, Arvind Krishna, CEO of leading tech company IBM, spoke out against facial recognition in a statement that also called on the US Congress to combat entrenched and systemic racism.
In addition to the inherently troubling racial biases facial recognition technologies show, they represent a gross infringement on civil liberties. Because facial recognition often takes place in public spaces, there is no option to opt in; rather, the choice is made for citizens.
Use is not confined to the state, either. Companies including Walmart, McDonald’s, and many others are putting the technology through its paces in unsettling ways. The fast food giant, for instance, has scanned its servers’ faces in Japan to see whether they are smiling and providing good customer service.
RELATED: Amazon stops police use of facial recognition technology
Walmart, meanwhile, is working on a system that can detect dissatisfied shoppers at the checkout. It has also made noises about smart systems that detect where a shopper’s attention is focused to maximize the potential of sales and offers, and it already uses the tech to detect shoplifting. Walmart’s extensive use of facial recognition is not an anomaly; several leading retailers, including Target, have similar systems in place.
In the wake of ongoing protests sparked by George Floyd’s death at the hands of law enforcement, facial recognition’s troubling bias and privacy violations are under renewed scrutiny. The face of the future may be quite different if more Silicon Valley names join IBM’s CEO in turning their back on facial recognition technology.
Brad Smith is a technology expert at TurnOnVPN, a non-profit promoting a safe and free internet for all. He writes about his dream for a free internet and unravels the horrors behind big tech.