AI is Amplifying Social Media Disinformation — and Making Big Tech Civilly Liable May Be the Key to Stemming It

Sousa Brothers
2 min read · 2 days ago


The era of generative artificial intelligence has made it dramatically easier to spread false and misleading information through social media platforms. Misinformation and disinformation proliferate swiftly on these platforms, especially during major public events such as the COVID-19 pandemic and recent U.S. presidential elections.

The rise of AI-generated deepfakes exacerbates the problem, lowering the barrier to spreading deceptive content at scale. Legal experts argue that meeting this challenge will require either new federal regulation or stronger voluntary self-regulation by the tech companies that run these platforms.

Experts advocate revising existing internet law, notably Section 230, which shields social media companies from civil liability for content their users post. They argue that a legal framework crafted almost three decades ago no longer adequately addresses the consequences of unregulated social media spaces.

Concerns about misinformation and disinformation on social media have been compounded by the threats posed by AI, as highlighted in a recent U.S. Department of Homeland Security assessment warning of potential interference in the upcoming 2024 presidential election.

Efforts to address misinformation and disinformation on social media have been slowed by First Amendment considerations, resistance from tech giants, and the difficulty of even defining the content to be regulated. Crafting legislation that clears these hurdles while respecting free speech rights is a formidable task for lawmakers.

While regulating misinformation and disinformation on social media remains contentious, some experts suggest that change is more likely to come from the internal practices of tech companies than from legislation alone. Pushing platforms toward stronger self-regulation, whether through public pressure or through statutory amendments such as a revised Section 230 that restores civil accountability, may be the more feasible path.

Proposed levers include requiring algorithmic transparency, regulating the use of private data, labeling AI-generated content, and curbing the proliferation of bots, all aimed at giving social media companies concrete incentives to combat misinformation. Any legal amendments along these lines, however, would likely face rigorous court challenges and scrutiny.
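
To make "labeling AI-generated content" concrete, here is a minimal sketch of the kind of metadata check a platform could run at upload time. Everything in it is illustrative: the key names and software strings are hypothetical examples of markers some image generators are known to embed, not a reliable detector, and a production system would lean on provenance standards such as C2PA rather than this heuristic.

```python
# Illustrative sketch only: flag an uploaded image if it carries metadata
# markers that some AI image generators embed. The key and software names
# below are hypothetical examples, not a complete list, and the absence of
# markers proves nothing -- metadata is trivially stripped.
from PIL import Image

SUSPECT_TEXT_KEYS = {"parameters", "prompt"}  # example PNG text-chunk keys
SUSPECT_SOFTWARE = ("stable diffusion", "dall", "midjourney")

def looks_ai_generated(path: str) -> bool:
    img = Image.open(path)
    # PNG tEXt/iTXt chunks, exposed by Pillow as a dict on PNG files
    chunks = getattr(img, "text", {}) or {}
    if SUSPECT_TEXT_KEYS & {k.lower() for k in chunks}:
        return True
    # EXIF tag 305 is "Software"; some tools identify themselves here
    software = str(img.getexif().get(305, "")).lower()
    return any(name in software for name in SUSPECT_SOFTWARE)

if __name__ == "__main__":
    import sys
    print(looks_ai_generated(sys.argv[1]))
```

The point of the sketch is the policy shape rather than the heuristic itself: a labeling obligation only has teeth if platforms run some check like this, or a provenance verification, before content reaches users' feeds.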

Efforts to address misinformation and disinformation on social media underscore the critical role of accurate information in a healthy democracy. The ongoing debate over Section 230 reform reflects bipartisan concern about the power tech companies wield in moderating content and the challenge of balancing free speech with accountability.

Major social media platforms have adopted their own policies against misinformation, emphasizing the removal of harmful content and deceptive media that could mislead users. Even so, misleading material, including AI-generated content, continues to circulate on platforms like Meta, TikTok, and X, a reminder of how difficult effective enforcement remains.
