
EU boss tells Musk, Zuckerberg and TikTok chief : NPR


In a series of letters to tech executives, European Union Commissioner Thierry Breton warned that financial penalties could be issued against social media platforms that do not follow new EU laws governing hate speech and disinformation.

LUDOVIC MARIN/AFP via Getty Images



A top European Union official has fired off letters to top social media executives Mark Zuckerberg, Elon Musk and Shou Zi Chew over the flood of misinformation on their platforms related to the Israel-Hamas war, warning that EU laws can impose severe financial penalties if the spread of falsehoods goes unchecked.

In the letters, European Union Commissioner Thierry Breton reminded the bosses of Meta, X (the platform formerly known as Twitter) and TikTok of their obligations to combat misinformation under an EU law known as the Digital Services Act, or DSA.

Violations could result in financial penalties of up to 6% of each tech company’s global revenue.

For Meta, that could be a multi-billion-dollar penalty. For X and TikTok, that could mean hundreds of millions of dollars each.

Breton told Musk that the EU is aware of illegal content and disinformation about the war ricocheting across X.

“Up to you to demonstrate that you walk the talk,” Breton wrote to Musk. “My team remains at your disposal to ensure DSA compliance, which the EU will continue to enforce rigorously.”

In a similar note to Zuckerberg, Breton gave Meta’s chief executive 24 hours to lay out the company’s plan to stanch the tide of war-related misinformation and AI-generated posts carrying fake content manipulated to look real.

The EU, Breton wrote to Zuckerberg, has “been made aware of reports of a significant number of deep fakes and manipulated content which circulated on your platforms and a few still appear online.”

Children on TikTok need protection against violent content

To TikTok’s Chew, Breton noted that the platform’s younger audience means it faces even stricter requirements.

“First, given that your platform is used extensively by children and teenagers, you have a particular obligation to protect them from violent content depicting hostage taking and other graphic videos, which are reportedly widely circulating on your platform without appropriate safeguards,” Breton wrote.

A spokesperson for TikTok did not immediately return a request for comment.

Since Hamas militants attacked Israel on Saturday morning, fabricated photos and video clips and other bogus content purporting to portray the violence in the region have been wreaking havoc on social media platforms, making sorting fact from fiction a daunting task.

The false content has included video game footage shared as if it depicted the Middle East; false claims about an Israeli commander being kidnapped; a doctored White House memo; and footage from Guatemala in 2015 purporting to show an Israeli woman being attacked in Gaza.

X and Meta respond to letters

X has seen content safeguards dismantled and a surge of misinformation under Musk’s leadership.

Adding to the confusion are the changes Musk made last year to the platform’s verification policies. Now anyone willing to pay a monthly fee can receive a blue check mark, a badge previously reserved for credible news organizations and notable people, and the paid-for “verification” mark boosts the reach of posts. Misinformation experts say the arrangement has contributed to considerable chaos on the site.

But on Thursday, Linda Yaccarino, CEO of X, said the platform has removed or labeled thousands of posts since the war erupted.

“X is proportionately and effectively assessing and addressing identified fake and manipulated content during this constantly evolving and shifting crisis,” Yaccarino wrote in her letter to Breton. “In good faith, we’ve diligently taken proactive actions to remove content that violates our policies, including: violent speech, manipulated media and graphic media.”

The swift response comes in the face of the EU’s Digital Services Act, one of the world’s toughest online safety laws, which took effect in August. The law carries stiff fines for tech companies that violate its rules.

Under the law, social media platforms like Facebook, Instagram, X and TikTok must quickly remove posts inciting violence, featuring manipulated media seen as propaganda, or containing hate speech, or be hit with financial penalties that far exceed anything a U.S. authority has ever imposed: up to 6% of a company’s annual global revenue.

Such a threat was enough to prompt Yaccarino to assure the EU that X was shifting its resources to combating the misinformation on the platform.

“There is no place on X for terrorist organizations or violent extremist groups and we continue to remove such accounts in real time, including proactive efforts,” Yaccarino wrote.

A Meta spokesman said that since the war broke out, the company has established a “special operations center” with Hebrew and Arabic speakers to closely monitor and tackle misinformation across its social media sites.

“Our teams are working around the clock to keep our platforms safe, take action on content that violates our policies or local law, and coordinate with third-party fact-checkers in the region to limit the spread of misinformation. We’ll continue this work as this conflict unfolds,” said Meta spokesman Al Tolan.

Breton noted in his letter to Zuckerberg that EU regulators are focused not only on content related to the Israel-Hamas war, but also on falsehoods that could spread in connection with upcoming elections in Europe, including in Poland and the Netherlands.

“I remind you that the DSA requires that the risk of amplification of fake and manipulated images and facts generated with the intention to influence elections is taken extremely seriously in the context of mitigation measures,” Breton wrote to Zuckerberg.




This story originally appeared on NPR
