AI bots easily bypass some social media safeguards, study reveals.

Sousa Brothers
2 min read · Oct 15, 2024


A recent study by researchers at the University of Notre Dame has revealed that artificial intelligence (AI) bots can easily bypass some social media safeguards, raising concerns about how effectively current policies and enforcement mechanisms protect users from malicious bot activity.

The researchers analyzed the AI bot policies and mechanisms of eight social media platforms: LinkedIn, Mastodon, Reddit, TikTok, X (formerly known as Twitter), and the Meta platforms Facebook, Instagram, and Threads. They attempted to launch bots on each platform to test how its bot policies are enforced and found that while some platforms presented challenges, others were trivially easy to bypass.

The study found that the Meta platforms were the most difficult to launch bots on, requiring multiple attempts to get past their policy enforcement mechanisms; even so, the researchers succeeded in launching a bot and publishing a "test" post on their fourth attempt. TikTok posed a modest challenge due to its frequent use of CAPTCHAs, but three platforms, Reddit, Mastodon, and X, provided no challenge at all. The researchers noted that it was very easy to get a bot up and running on these platforms, despite their stated policies and technical mechanisms.

The study’s findings are alarming, as they highlight the ease with which malicious bots can be created and deployed on social media platforms. The researchers emphasized that the current policies and mechanisms are insufficient to keep users safe from harmful bot activity. They argued that laws, economic incentive structures, user education, and technological advances are needed to protect the public from malicious bots.

The researchers used Selenium, a suite of tools for automating web browsers, and OpenAI’s GPT-4 and DALL-E 3 to create the bots. The study was led by Kristina Radivojevic, a doctoral student at Notre Dame. The research was published on the arXiv preprint server and underscores the need for more robust measures to prevent the misuse of AI bots on social media.
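To illustrate what "using Selenium to automate a browser" means in practice, here is a minimal sketch of how a script might fill in and submit a post form. This is not the researchers' code: the URL, CSS selectors, and helper names (`compose_post`, `post_update`) are illustrative assumptions, and real platforms use different page structures and defenses.

```python
def compose_post(text: str, limit: int = 280) -> str:
    """Trim generated text (e.g. from a language model) to a platform's
    character limit, appending an ellipsis if it was cut short."""
    return text if len(text) <= limit else text[: limit - 1] + "…"


def post_update(compose_url: str, text: str) -> None:
    """Drive a real browser to a compose page and submit a post.
    Selectors below are hypothetical placeholders."""
    # Imported lazily so the pure-text helper above works without a
    # browser or the selenium package installed.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()  # requires a local ChromeDriver
    try:
        driver.get(compose_url)
        box = driver.find_element(By.CSS_SELECTOR, "[role='textbox']")
        box.send_keys(compose_post(text))
        driver.find_element(By.CSS_SELECTOR, "[data-testid='postButton']").click()
    finally:
        driver.quit()


if __name__ == "__main__":
    # Hypothetical target; a study like this one would point at a
    # real platform's compose page after logging in.
    post_update("https://example.com/compose", "test")
```

The point of the sketch is how little code is involved: a browser-automation library plus a text generator is enough to produce and publish posts, which is why weak enforcement is so easy to bypass.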

In conclusion, the study’s results indicate that social media platforms need to strengthen their policies and enforcement mechanisms to prevent the spread of malicious bots and ensure user safety. This includes requiring platforms to identify human versus bot accounts, as well as implementing more stringent measures to detect and block suspicious activity.

source: https://techxplore.com/news/2024-10-ai-bots-easily-bypass-social.html
