Black activists and technologists are increasingly raising the alarm about what one has called ‘the new frontier of civil rights’: growing concern over racism built into artificial intelligence and algorithmic technologies.
The algorithms behind major social media platforms have already been manipulated in attempts to influence black voters. The Mueller report outlined it plainly: Russian operatives deliberately targeted black activists online, using the algorithmic technology behind social media platforms to sow dissension among black voters in the 2016 election (page 32 of the Special Counsel’s report).
According to a report from Axios, these foreign actors gamed a range of tech platforms, from Instagram to Google and PayPal.
Social media platforms continue to be exploited in part because the opacity of the algorithms they use lets fictitious posts and misleading campaigns go viral, the Axios report noted.
How Algorithms Can Negatively Affect Black Lives
Algorithms are inextricably linked with artificial intelligence (AI) and Big Data. Algorithms, in a nutshell, are lines of code that tell computers what to do. Not all algorithms are used for AI, but they are what supply AI systems with their instructions, as TheNextWeb explains.
The problem with algorithms is that they are “inherently racist,” according to Mutale Nkonde, a US-based policy analyst and a 2018-19 fellow at Data & Society Research Institute in New York City.
“Algorithmic decision-making is based on historical data,” says Nkonde. “A system will look at, for example, how many people were evicted in Bed-Stuy [a Brooklyn neighborhood] over the last 10 years. When you are going for an apartment and then when your landlord does a credit check, if you’re black and it says there were historically huge amounts of evictions [among the black community], then you may not get the apartment.”
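To make Nkonde’s point concrete, here is a minimal, purely hypothetical sketch (not the code of any actual screening product; the neighborhoods, eviction rates, and weights are invented) of how a decision rule fed historical eviction data can penalize an applicant for where they live rather than for anything in their own record:

```python
# Hypothetical tenant-screening score. All neighborhoods, rates, and weights
# below are invented for illustration only.

# Historical eviction rate per neighborhood over the last 10 years (made up).
HISTORICAL_EVICTION_RATE = {
    "Bed-Stuy": 0.18,
    "Park Slope": 0.03,
}

def screening_score(credit_score: int, neighborhood: str) -> float:
    """Toy score: start from the applicant's own credit score, then subtract
    a penalty driven entirely by the neighborhood's historical eviction rate."""
    penalty = 400 * HISTORICAL_EVICTION_RATE.get(neighborhood, 0.10)
    return credit_score - penalty

# Two applicants with identical credit histories...
for hood in ("Bed-Stuy", "Park Slope"):
    print(hood, screening_score(credit_score=700, neighborhood=hood))
# ...receive different scores solely because of where they live, so past
# patterns of eviction get reproduced as future denials.
```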
Last month, the Department of Housing and Urban Development (HUD) charged Facebook with housing discrimination violations. Facebook’s ad-targeting technology allowed property owners to market their listings to Facebook users based on race and other factors.
Algorithms today can help determine whether you get a loan, a job, or insurance. Just about every business vertical uses algorithms to some extent for decision making. Couple that with the massive amount of information collected about people through social media apps and search engines, so-called ‘Big Data,’ and the potential to stealthily discriminate against black people becomes a serious concern.
“Algorithmic decision-making is at the core running all of society’s systems,” says Nkonde.
Even the prison industry has moved into the digital age. Superstar rapper Jay-Z recently invested in Promise, a startup dedicated to reducing incarceration rates. The startup offers ‘community-supervised alternatives to jail and prison,’ according to its website.
While the startup’s intent, keeping more black people out of the prison system, is a good one, Nkonde sees the technology behind it as problematic.
Jay-Z and “a bunch of other billionaires have bought into this company where instead of going to jail, you’re going to be given an ankle bracelet that will track you,” she says.
In a TechCrunch article featuring Promise’s co-founder and CEO, Phaedra Ellis-Lamkins, the startup’s technology was described: “Instead of a county paying to incarcerate someone simply because they can’t afford to post bail, they can use Promise to monitor compliance with court orders and better keep tabs on people via the app and, if needed, GPS monitoring devices. Counties, courts, case managers and other stakeholders can also access progress reports of individuals to monitor compliance.”
“The problem is there is secondary use for that data. The data that you generate can then be sold to various other businesses and that’s where the real money is,” says Nkonde. “That is a reinforcement of the ‘black code’ and reinforcement of the racist code that has always driven American society. Except now, it’s called ‘AI’.”
Ellis-Lamkins disputes Nkonde’s allegations. In an emailed statement to Black Enterprise, she called Nkonde’s statements “factually inaccurate.”
“We have not built and do not have any plans on building GPS monitoring technology. We believe that constant monitoring technology would create more injustice and be a violation of the foundation of our company. We built Promise so that there would be technology that improved the lives of those who were impacted by the criminal justice system,” she wrote. She also clarified her company’s position on GPS monitoring of those entangled in the legal system and the potential for data collection on those individuals.
“People are on GPS because the court mandates it. We do not provide the GPS. We do not recommend it and we do not have it in our app. The court has many conditions such as AA, parenting class, etc. We also do not provide those services. Because we do not collect the data, there is no way it could ever be sold,” stated Ellis-Lamkins via email.
Black Politicians Craft AI Bias Legislation
The problems of biased algorithms and AI are also being raised by politicians. Sen. Cory Booker and Rep. Yvette Clarke have introduced legislation addressing tech bias.
Nkonde, in fact, serves as a senior tech policy adviser for Congresswoman Clarke. Clarke and Booker, she says, have gone beyond the well-documented cases of facial recognition technology failing to correctly identify people of color; they have “gone for the money.”
“They’ve gone for the algorithm; they’ve gone for ‘what makes systems think?’ Those algorithms are protected by commercial law so this is actually the first time in history that anybody has really gone for how AI works.”
She says she loves the fact that Sen. Booker is involved with this cause because she feels he can make bias in technology a presidential issue. “I’m hoping that what journalists and activists and policymakers can do is really frame this as the new frontier of civil rights,” she says.
Last year, Nkonde and Clarke gave a briefing in DC on algorithmic bias. “We were looking at white supremacy online because the algorithms that feed you videos on YouTube were feeding white supremacists more extreme content and we were speaking about that,” says Nkonde.
Facebook has also taken measures to combat bias in AI. At its developers’ conference last year, a Facebook executive said the company is focused on “how to build fair and unbiased systems.” The company says one way to do so is by having diverse individuals involved in building its AI systems.
“If AI is built by a small group of technologists it will only see a narrow point of view,” said research scientist Isabel Kloumann at the event.
Still, activists say that Facebook and other tech companies are not doing enough, and that too few red flags are being raised about how harmful AI and algorithms can be to black people.
“The real issue,” says Nkonde, “is technologies do not serve everybody equally. Because algorithms/AI systems are built by people, those people encode their values in those systems.”
“What AI actually does is increase the likelihood for white, rich, cis men to succeed and decreases the likelihood for everybody else in society to succeed.”