Facebook’s Greater Threat Is the Law, Not Lawsuits


A flurry of legal challenges in the US won’t fundamentally change the company in the same way that new European laws will.

Meta Platforms Inc. has become a lightning rod for legal challenges in the US, from the FTC’s antitrust case to shareholder lawsuits alleging the company misled investors. Last week, eight complaints were filed against the company across the US, including allegations that young people who frequently visited Instagram and Facebook went on to commit suicide or experience eating disorders. (Facebook has not commented on the litigation, and has denied allegations in the FTC and shareholder complaints.)

While the lawsuits strike at the heart of Meta’s noxious social impact and could help educate the public on the details, they likely won’t force significant change at Facebook. That’s because Section 230 of the Communications Decency Act of 1996 shields Facebook and other internet companies from liability for much of what their users post. Unless US law changes — and there are no signs this is happening soon — Meta’s lawyers can continue to use that defense.       


But that won’t be the case in Europe. Two new laws coming down the pike promise to change how Meta’s algorithms show content to its 3 billion users. The UK’s Online Safety Bill, which could come into force next year, and the European Union’s Digital Services Act, likely coming into force in 2024, are both aimed at preventing psychological harms from social platforms. They’ll force large internet companies to share information about their algorithms with regulators, who will assess how “risky” they are.

Mark Scott, chief technology correspondent with Politico and a close follower of those laws, answered questions about how they’d work, as well as what the limitations are, on Twitter Spaces with me last Wednesday. Our discussion is edited below.

Parmy Olson: What are the main differences between the upcoming UK and EU laws on online content?

Mark Scott: The EU law is tackling legal but nasty content, like trolling, disinformation and misinformation, and trying to balance that with freedom of speech. Instead of banning [that content] outright, the EU will ask platforms to keep tabs on it, conduct internal risk assessments and provide better data access for outside researchers. 

The UK law will be maybe 80% similar, with the same ban on harmful content and requirement for risk assessments, but it will go one step further: Twitter and others will also be legally required to have a “duty of care” to their users, meaning they will have to take action against harmful but legal material.

Parmy: So to be clear, the EU law won’t require technology companies to take action against the harmful content itself?

Mark: Exactly. What they’re requiring is to flag it. They won’t require the platforms to ban it outright.

Parmy: Would you say the UK approach is more aggressive?

Mark: It’s more aggressive in terms of actions required by companies. [The UK] has also floated potential criminal sentences for tech executives who don’t follow these rules.

Parmy: What will risk assessments mean in practice? Will engineers from Facebook have regular meetings to share their code with representatives from [UK communications regulator] Ofcom or EU officials?   

Mark: They will have to show their homework to the regulators and to the wider world. So journalists or civil society groups can also look and say, “OK, a powerful, left-leaning politician in a European country is gaining mass traction. Why is that? What is the risk assessment the company has done to ensure [the politician’s] content doesn’t get blown out of proportion in a way that might harm democracy?” It’s that type of boring but important work that this is going to be focused on.

Parmy: Who will do the auditing?

Mark: The risk assessments will be done both internally and with independent auditors, like the PricewaterhouseCoopers and Accentures of this world, or more niche, independent auditors who can say, “Facebook, this is your risk assessment, and we approve.” And then that will be overseen by the regulators. The UK regulator Ofcom is hiring around 400 or 500 more people to do this heavy lifting.

Parmy: What will social-media companies actually do differently, though? Because they already put out regular “transparency reports” and they have made efforts to clean up their platforms — YouTube has demonetized problematic influencers and the QAnon conspiracy theory isn’t showing up in Facebook News Feeds anymore.

Will the risk assessments lead tech companies to take down more problem content as it comes up? Will they get faster at it? Or will they make sweeping changes to their recommendation engines?

Mark: You’re right, the companies have taken…
