Mark Zuckerberg, at Aspen Ideas, says combatting election interference ‘above our pay grade’

Facebook founder Mark Zuckerberg addressed the Aspen Ideas Festival crowd on Wednesday.
Ian Wagreich / Aspen Ideas Festival

Facebook CEO Mark Zuckerberg told an Aspen audience Wednesday that it will take government action to tighten election security for the social-media giant, saying “that type of work is a little above our pay grade.”

Zuckerberg was a late addition to the lineup of the Aspen Ideas Festival; an announcement about his appearance came Monday afternoon. Nonetheless, a sizable crowd gathered at the Benedict Music Tent to hear him discuss the litany of issues that have dogged the company over the past few years, chiefly Russia’s interference into the 2016 presidential election, privacy breaches and content moderation.

“As a private company, we don’t have the tools to make the Russian government stop,” he told interviewer Cass Sunstein, a professor at Harvard, adding “we can defend as best as we can, but our government is the one that has the tools to apply pressure to Russia, not us.”

The Facebook co-founder, 35, added that individual countries — not social media companies — should be determining which types of election advertising are permitted and forbidden on social media.

In the run-up to Ireland’s referendum on its abortion ban last year, for example, some pro-life groups from the U.S. wanted to run political advertisements on Facebook, Zuckerberg said. The Irish government told Facebook it had no laws governing foreign political advertising, so Facebook took it upon itself not to allow the ads.

“During (Ireland’s) election, leading up to that referendum, a bunch of pro-life American groups advertised … to try to influence public opinion there,” Zuckerberg said. “And we went to the Irish and asked folks there, ‘Well, how do you want us to handle this? You have no laws on the books that are relevant for whether we should be allowing this kind of speech in your election, and really this doesn’t feel like the kind of thing a private company should be making a decision on.’”

Facebook also has been grappling with “deepfake” videos. A video of Nancy Pelosi circulated on social media last month showing the House speaker slurring her words. The video was doctored and drew outcry from Democratic leaders. Technically it was not considered a deepfake because artificial intelligence was not used to create it; rather, it appeared traditional editing techniques were used to alter the clip.

Facebook left the video up but flagged it with fact-checker warnings. Twitter also left it up, while YouTube removed it.

“Why not make the policy as of tomorrow be that if reasonable observers could not know that it’s fake, then it will be taken down?” Sunstein asked.

Zuckerberg conceded that company fact-checkers were slow to respond and flag it.

“It took a while for our systems to flag that and for fact-checkers to rate it as false,” he said. “During that time, it got more distribution than our policies should have allowed.”

Facebook’s challenge, he said, is removing content without compromising free speech rights — even when that content is patently false.

“This is a topic that can be very easily politicized,” he said. “People who don’t like the way that something was cut … will kind of argue that … it did not reflect the true intent or was misinformation. But we exist in a society … where we value and cherish free expression.”

Monitoring videos is a delicate practice for Facebook, Zuckerberg said, noting that videos can be used for satire or other purposes. In other cases, a person in an edited clip may simply dislike how they are portrayed.

“If (deepfake) is any video that is cut in a way that someone thinks is misleading, well, I know a lot of people who have done TV interviews that have been cut in ways they didn’t like, that they thought changed the definition or meaning of what they were trying to say,” he said. “I think you want to make sure you are scoping this carefully enough that you’re not giving people the grounds or precedent to argue that things that they don’t like or changed the meaning somewhat of what they said in an interview get taken down.”

Facebook does not itself patrol for false statements, instead relying on outside fact-checkers.

Concerning deepfake videos, Zuckerberg said the company might handle them differently than it does other misinformation. Facebook currently is studying the matter, he said.

“There is a question of whether deepfakes are actually just a completely different category of thing from normal false statements overall,” he said, “and I think there is a very good case that they are.”

rcarroll@aspentimes.com

via: Post Independent