Amid pressing concerns over misinformation and hate speech on social media, Meta CEO Mark Zuckerberg announced last week that all of the company’s platforms will switch from third-party fact-checkers to a system similar to X’s Community Notes.
In a statement released by Meta on Jan. 7, the company holds that the switch is motivated by a need for “more speech” and “fewer mistakes.”
The original third-party fact-checking mechanism, introduced in 2016, used anonymous checkers who could take down any post they believed to be misinformation and hide posts without users’ knowledge through shadow bans. Meta claims that in December 2024, millions of pieces of content were taken down each day, and that “one to two out of every 10 of these actions may have been mistakes.”
This new form of fact-checking is a “crowdsourced approach to reviewing online content.” Instead of relying on designated checkers, platforms will rely on their users to both write notes and rate their helpfulness. A note rated helpful by a group of users with histories of differing opinions and political views will be displayed; notes that fail to draw ratings from a diverse range of users will likely never be shown.
When Excalibur interviewed York students about their experience with X’s Community Notes, the responses spanned a spectrum: some were concerned, others satisfied, and a few simply entertained.
Second-year English studies student Nicki sees the humorous side of Community Notes, describing them as “pretty useful, and also kind of funny.” She “gets notifications when someone adds a note to a post [she] liked.” One example she recalls is an amusing correction of a tweet claiming that the title of the anime Jujutsu Kaisen did not contain the letter N; the note underneath confirmed that it did.
On X, users with an active phone number who have spent at least six months on the platform and have no violations can volunteer to become anonymous contributors. They can flag misleading posts and add notes providing context for other users. However, these notes only become visible to regular users after being approved by other contributors through a voting process that determines their helpfulness.
The speed at which these community notes appear matters: a note that surfaces too late does little to stop users from being misled. Visibility is another issue, since a community note that “isn’t rated as helpful by a diverse-enough group of contributors” will never be shown by the algorithm. These are the main concerns with this form of fact-checking.
York psychology student Ava says that Community Notes “always or most probably” provide a link to more information when a claim is flagged as incorrect, mainly to news websites. Asked whether Community Notes are capable of eliminating all possible forms of bias, Ava says, “No, but that’s the whole point of X. It’s supposed to be biased. You’re supposed to have your own opinion. It’s very subjective […] it’s not a news website.”
Until now, Meta platforms have used automated checkers to detect guideline violations, a practice the company argues has led to censorship. Meta will now act on less severe violations only after users report them, while continuing to use automated systems for “high-severity violations” such as drugs, fraud, terrorist threats, and child sexual exploitation.
This switch places the responsibility for identifying violations on the community, raising serious questions about whether misinformation and hate speech will actually be reported.
Fourth-year theatre student Celine Daaboul finds the introduction of Community Notes “concerning,” as “the spread of misinformation has extreme political consequences and holds the greater possibility of harmful propaganda being spread.”
“Considering the rise in far right ideologies in young men, I am extremely concerned about this power falling into the wrong hands,” Daaboul adds.
Given these varying perspectives, only time will tell whether this new method of fact-checking will prove useful in stopping the spread of misinformation or will instead feed confirmation bias.