Holding Social Media Responsible: Time To Change Section 230?

I have long held that efforts to tamper with Section 230 of the Communications Decency Act of 1996 are dangerously misguided. It is this section that immunizes online service providers from liability for the third-party content that they carry. I have also argued that attempts to mandate “age verification” for social media will spectacularly backfire on social media users in general while failing to actually protect children, and I continue to believe that age verification systems cannot achieve their stated goals and will cause dramatic collateral damage.

One of my key concerns in both cases is that, over time, such measures would cause major social media platforms to drastically curtail the third-party content that they host, eliminating as much as possible of anything that could be considered even remotely controversial, in an effort to avoid liability.

I still believe this is true: such curtailment would be the likely outcome of any significant alteration of Section 230, of widespread implementation of the sorts of age verification systems under discussion, or of both.

But I’m now wondering whether this would necessarily be such a bad outcome, because the large social media platforms appear to have increasingly abandoned all pretense of social responsibility, making it likely that the damage they have done over the years through the spread of misinformation, disinformation, racism, and all manner of other evils will only become much, much worse going forward.

Seeing billionaire Mark Zuckerberg today nonchalantly proclaim changes to Meta platforms (Facebook, Instagram, etc.) that will inevitably increase the level of harmful content (he essentially said so explicitly) is, I believe, a “jumping the shark” moment for all major social media.

I feel it is time to have a serious discussion regarding potential changes to Section 230 as it applies to large social media platforms, with the aim of forcing them to take responsibility for the damage that the content on their platforms, whether third-party or their own, causes to society.

I would also add, though this extends beyond the formal scope of Section 230 and social media, that firms that have deployed Generative AI systems (chatbots, AI Overviews, etc.) should be held responsible for damage done by misinformation and errors in the content that those systems generate and provide to users.

It is obvious that the major social media platforms now at best pay only lip service to the concept of social responsibility, or have effectively abandoned it entirely, for their own political and financial expediency, and the situation is rapidly getting worse.

We must make it clear to these firms that they serve us, not the other way around. Changes to Section 230 as it applies to the large social media platforms may be the most practical way to convince the (typically billionaire) CEOs of these firms that our willingness to be victimized has come to an end.

–Lauren–
