1st Platform: Twitter/X
For this week’s blog post, I’ve decided to research two social media sites that I use, Twitter/X and YouTube. Researching X’s attempts to curb misinformation was interesting because I found a lot of information showing that, since Elon Musk’s takeover, many of the misinformation policies the platform once had have been rolled back. The current policies related to misinformation seem to fall under the ‘Authenticity’ category of X’s rules page.
One of the ‘Authenticity’ subcategories that X uses to curb misinformation is Civic Integrity, which states “You may not use X’s services for the purpose of manipulating or interfering in elections or other civic processes. This includes posting or sharing content that may suppress participation or mislead people about when, where, or how to participate in a civic process.”
Civic processes include political elections, censuses, major referenda, and ballot initiatives. Violations of this policy include misleading information about how to participate, suppression, intimidation, and false or misleading claims of affiliation. For example, a tweet falsely claiming that people can vote by text message would violate these rules. Offending posts can be downranked, excluded from search results, and restricted from receiving likes, replies, and reposts.
X also restricts misleading and deceptive identities, stating, “You may not misappropriate the identity of individuals, groups, or organizations or use a fake identity to deceive others.” Parody, fan accounts, and commentary are not in violation of this. Consequences can include suspension and/or being required to modify your profile.
Another policy that helps to curb misinformation on X is the policy on synthetic and manipulated media, which states, “You may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm (“misleading media”). In addition, we may label posts containing misleading media to help people understand their authenticity and to provide additional context.” An example of this would be inauthentic or fictional media being presented as reality or fact. Consequences include post deletion, labeling, or the account being locked.
These policies have merit because, in my experience, I’ve never come across voting misinformation, and any manipulated videos or images I’ve seen are usually labeled as such. But since Musk’s takeover, several policies have been rolled back. In November 2022, Twitter/X stated it would no longer enforce its COVID-19 misinformation policy, under which posts making false claims about COVID-19 vaccines were removed. In my view, X should have a policy under which all false health claims are removed, so that the spread of health misinformation is stopped.
In September 2023, X removed a feature that allowed users to self-report political misinformation. This seems to contradict X’s civic integrity policy: if users come across a post containing false information meant to suppress voting, they now have no way to report it.
Despite these rollbacks, I believe that one of the positive things Musk has instituted is the Community Notes feature. Contributors write notes that add context and/or provide evidence disproving a claim made in a post. In my own Twitter/X experience, I’ve come across posts that I might have believed if not for the Community Notes feature.
The screenshot below shows a post with a picture claiming to be from the solar eclipse on April 8th, 2024, but users added a Community Note explaining that it is an AI-generated image. A drawback of Community Notes is that a note is only shown on a post if it’s rated helpful by enough people. In my experience, Community Notes appear mostly on posts that go viral, so many posts slip through the cracks.

I think one of the things X can do to improve upon its existing efforts is to reinstate the policies it has rolled back. How can X have a policy that prohibits misinformation regarding civic integrity while removing the feature that allows users to self-report misinformation they come across?
Another recommendation would be to remove repeat offenders instead of allowing them back on the platform, as Musk has done. It would also be helpful to ‘nudge’ users to think about accuracy with an accuracy prompt. X has made some good attempts at curbing misinformation, but it could do better.
2nd Platform: YouTube
Unlike Twitter/X, YouTube has multiple pages that specifically describe its efforts to combat misinformation. In 2022, YouTube announced that it had invested in a program it calls the 4 Rs of Responsibility, which combines human reviewers and machine learning to combat misinformation on the platform. The 4 Rs are to “remove violative content quickly, raise up authoritative sources, reduce the spread of problematic content,” and reward content creators who follow the rules.
YouTube’s misinformation policies page states that its misinformation policies are based on a clear set of facts. For example, it relies on expert consensus from local and international health organizations regarding COVID-19 medical misinformation. Newer misinformation, for which no factual consensus yet exists, is harder to detect.
Such content typically falls into the category of “borderline content,” which is content that comes close to violating community standards but doesn’t quite cross the line. Borderline content is not recommended to users on the platform, which helps prevent its spread.
When researching YouTube, I found an article from The Guardian describing how a group called Doctors for Truth had posted videos spreading health and election misinformation on YouTube. I went on YouTube and searched for this channel/group, and I didn’t find these videos. I think this shows that YouTube takes its misinformation policies seriously, because these videos were either removed from the platform or are prevented from surfacing in search and recommendations.
Another example of YouTube enforcing these policies is its announcement in August 2023 that it had begun a mass takedown of videos spreading cancer misinformation. YouTube stated that it would specifically remove videos that encouraged people not to seek professional medical treatment and/or promoted cancer treatments that have been proven ineffective or harmful. I think this is a positive step for YouTube in ensuring that health misinformation isn’t spread on its platform.

Like Twitter/X, YouTube has specific policies regarding misinformation about elections, COVID-19, and vaccines. But in June 2023, YouTube announced that it would “stop removing content that falsely claims the 2020 election or other past U.S. presidential elections were marred by ‘widespread fraud, errors or glitches.’” There is a consensus among experts, backed by evidence, that the 2020 election was not fraudulent, and allowing content that spreads this misinformation seems to run counter to YouTube’s own policies on misinformation.
When conducting research, I found YouTube’s misinformation policies to be much clearer than Twitter/X’s. Twitter/X’s policies don’t specifically mention ‘misinformation’; you have to read between the lines, whereas YouTube clearly states how it handles misinformation. Overall, I found YouTube’s policies to be better, but I think it should reverse its decision to stop removing videos that claim the 2020 election was fraudulent.
Regarding the conspiratorial content that remains available on the platform, an article from Forbes argues that YouTube and Google should explain “the specific criteria its algorithms use to rank, recommend, and remove content—as well as how often and why those criteria change and how they are weighted relative to one another.”
The article also recommends that YouTube improve and expand its content moderation system by adding more human reviewers. These recommendations would help YouTube catch misinformation more easily, especially borderline content, which is harder to detect.