
Facebook announced it will remove deepfakes on its platform, a move experts call a “baby step” in the right direction as the social media giant grapples with the spread of disinformation ahead of the 2020 presidential election.

The term “deepfake” refers to video or audio that has been altered, usually with artificial intelligence or deep-learning technology, to depict a person doing something they never did or saying something they never said.

The company will remove content that has been altered “in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said the words that they did not actually say,” Monika Bickert, the vice president of global policy management at Facebook, announced in a blog post Monday evening.

In addition, content will be removed if it is “the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”

Videos that are parody or satire, or that are edited only to omit words or change their order, are exempt from the policy changes.

The policy applies to ads as well, including those posted by politicians, according to Facebook.

In May, a video that appeared to be digitally altered to impair the speech of House Speaker Nancy Pelosi went viral, ringing alarm bells about the possible rise of this new digital threat as the 2020 election approached.

At the time, Facebook responded by saying its third-party fact-checking partners had deemed the video “false,” and it was “heavily reducing its distribution” in Facebook’s newsfeed.

While taking action against deepfakes is a good step for Facebook, one expert says it remains a small component of the disinformation problem.

“As we head into the 2020 elections, this is a good step forward,” Dipayan Ghosh, a former privacy and public policy advisor at Facebook and former technology and economic policy advisor in the Obama White House, told ABC News Tuesday.

Currently, deepfakes are not a major problem but are likely to “become a more significant one in the coming months and years,” said Ghosh, who is now a Shorenstein Fellow at the Harvard Kennedy School.

Deepfakes serve a “specific and narrow sort of purpose, whereby you are trying to impersonate someone else,” he said. “The use-case of that kind of tool and technology in the mind of a disinformation operator is quite limited.”

Ghosh noted, however, that Facebook is “among the first to have a policy against deepfakes in clear terms.”

“This is not something that it necessarily had to do,” he added. “So, I think we should give them credit for taking these baby steps in the right direction.”

Still, we should “acknowledge that this is a baby step,” Ghosh added. “And one that maybe doesn’t require Facebook to give up a lot of business.”

“We shouldn’t see it as a huge commitment from the company, it’s a bit of an obvious thing to do,” he said.

It’s not going to “eliminate the disinformation problem,” according to Ghosh. “It’s at best a small component of the disinformation problem as we approach the 2020 election.”

“The real problem lies in more traditional disinformation advertising,” Ghosh said. “And Facebook has not made the primary commitment that perhaps it should around limiting micro-targeting and limiting political advertising more broadly to address the threat of coordinated disinformation across its network.”

Others remain skeptical as well. Drew Hammill, deputy chief of staff for House Speaker Nancy Pelosi, said via Twitter that the “real problem” is “Facebook’s refusal to stop the spread of disinformation.”

Amnesty International also took to Twitter, offering its criticism. “We should be more worried about the power of Facebook’s algorithms to shape & manipulate online experience of billions of people,” the organization tweeted.

“Facebook’s new policy focuses on potential future harms while downplaying and ignoring the actual harms from manipulated media that we’ve already witnessed,” said Neera Tanden, president and CEO of the Center for American Progress, in a statement.

The new policy does “nothing” to address politicians’ and their campaigns’ willingness to “lie to the public on any of Facebook’s dominant platforms,” her statement continued. “Facebook’s policy leaves itself open to be a site for propaganda and disinformation rather than authenticity and community.”

 
