

AI in Social Media and Third-Party Fact-Checking
This blog explores how AI is flooding social media with misinformation, especially as platforms like X and Facebook drop third-party fact-checkers. With fake images and unchecked hate speech spreading, many users don’t even realize that what they’re seeing isn’t real. While AI has creative potential, some level of fact-checking is needed to keep social media from becoming a chaotic mess of false and harmful content.
EDUCATION TECHNOLOGY
Olivia Ludwick
2/23/2025 · 2 min read
AI is one of the most heavily contested topics of the 21st century. From Elon Musk's Tesla to OpenAI's ChatGPT, artificial intelligence is everywhere we look, from Siri on your iPhone to the random posts that show up on your TikTok feed.
Artificial intelligence often produces illogical and incorrect information, and social media users have ready access to it at their fingertips. For example, Instagram now includes a feature called Meta AI, which lets you generate AI responses and AI images of pretty much any picture you desire. This can be seen as a positive, since it amplifies users' creativity and offers easy access to information, but falsified AI material is also flooding sites like Facebook and Instagram, which no longer use any third-party fact-checkers.
This move follows the model of X (formerly Twitter, renamed by Elon Musk), where moderation is left almost entirely to the community out of concern that fact-checkers were silencing certain political opinions and creating a one-sided platform. X relies on a feature called Community Notes, which lets a member of X's community publish a note on a post indicating whether its claims are factual or false. Although users have to apply to publish Community Notes, harmful information often goes unflagged and stays up on the platform, and often that information is AI-generated.
My mom is an avid Facebook user, and she often sends me videos and pictures that are clearly AI-generated without realizing it. Facebook no longer even flags posts to warn users that content may be AI-generated. Even if the AI content is not inherently harmful, leaving users unaware of the kind of content they are consuming is unethical.
Additionally, X does not take down posts that contain harmful rhetoric, as the platform believes hate speech should not be banned. Though hate speech may be difficult to ban outright, the platform has made people comfortable hiding behind a screen while spreading blatantly racist and dehumanizing rhetoric. For example, Ye (Kanye West) recently went on a rant on X claiming that Jewish people are terrible, calling himself a Nazi, and applauding Elon Musk for his salute at Donald Trump's inauguration rally. These posts remain public and push harmful misinformation about certain religious and ethnic groups.
Because social media platforms are drifting away from third-party fact-checking, these sites have become places of hate, misinformation, and undisclosed AI content. To ensure the safety of American citizens, at least some level of fact-checking should be restored on these sites, guaranteeing a platform that empowers all voices and prevents people from believing false content.