MCO 426

Blog Assignment: Future of Digital Media – Who Owns Our Conversations?

Who owns our conversations on digital media platforms such as social networks? Under copyright law, we own the content we post to social media, but we grant the platforms a license to use it, as spelled out in their terms and conditions. In practice, that means social platforms can use our content as they see fit.

There are centralized and decentralized platforms. Centralized platforms are owned and controlled by a single entity; these include Facebook, Twitter/X, Instagram, and YouTube. As described by CMR Berkeley, content on centralized platforms is stored on centralized servers, where the platform retains ownership and control over user data and content.

They also describe the benefits of centralized platforms as having a well-established user interface and a large user base that allows for networking. Some drawbacks are that centralized platforms can be more susceptible to censorship and privacy is often a concern, as these platforms can use user data however they’d like.

I use one centralized social media platform, Twitter (now X). Elon Musk’s takeover has shown me that we cannot depend on the goodwill of the owners of these centralized social media platforms. Elon Musk made drastic changes that led to chaos, such as a paid blue check mark system.

Journalists, government organizations, etc. lost their verification status if they didn’t pay for it. This led to imposter accounts and made it harder to tell if information was coming from a reliable source. This showed me that my experience on a centralized social media platform is beholden to the current owner.

What can we do about this? I believe that government intervention here raises concerns about censorship and overreach. I think the easiest and most beneficial change we could make is to join decentralized social media platforms. Decentralized platforms are not controlled by a single entity; they are distributed across multiple independent servers. Users have more control over their data and privacy, and decentralized platforms are more resistant to censorship.
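
To make the architectural difference concrete, below is a minimal sketch in Python. The server names and data are invented for illustration, and real platforms are far more complex; the point is only that in a centralized model one operator's server holds every user's data, while in a decentralized (federated) model each user's data lives on whichever independent server they chose.

```python
# Toy contrast between centralized and decentralized (federated) storage.
# All names and data here are hypothetical, purely for illustration.

# Centralized: a single company-run store holds every user's content,
# so the operator controls all of it.
centralized_server = {
    "alice": ["post 1", "post 2"],
    "bob": ["post 3"],
}

# Decentralized/federated: many independent servers, each holding only
# the content of the users who chose it as their home server.
federated_servers = {
    "server-a.example": {"alice": ["post 1", "post 2"]},
    "server-b.example": {"bob": ["post 3"]},
}

def find_posts(user, servers):
    """Look up a user's posts across the independent servers."""
    for host, users in servers.items():
        if user in users:
            return host, users[user]
    return None, []

host, posts = find_posts("bob", federated_servers)
print(f"bob's data lives on {host}: {posts}")
```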

However, decentralized platforms must become more appealing and widespread among users. People want to use social media networks that their friends are on. As of now, decentralized platforms do not have the user base that centralized giants such as Facebook and Twitter (X) have. Since many of these platforms are still in development, they can have limited features compared to centralized platforms.

Particularly after the recent election, I noticed many of the people and groups that I follow on Twitter (X) were moving to a decentralized social media platform called Bluesky. In November 2024, Bluesky reached 20 million users. I joined this platform and have liked the experience of using a decentralized network where I have more control over my privacy and data protection. The Conversation describes how Bluesky is set up like the old Twitter and allows users to host a server where they’re able to control and store their data.

Decentralized platforms have their drawbacks, though. An article from Medium describes how decentralized social media faces challenges including scalability and tough competition from centralized giants. The blockchain technology that some decentralized platforms use comes with a learning curve that can discourage new users and slow mass adoption.

I believe that if these challenges can be overcome, we can encourage more users to join decentralized social networks. As Bluesky's growth shows, there is real demand for social networks that give users more control. We should own our conversations and decide how our data is used.

MCO 426

Blog Assignment #5: Experiment with AI Text Generation

This week, I experimented with using AI text generation. I used ChatGPT and the prompt I gave it was “in 250 words, describe depression and its effects on a person.”  I chose to use depression as my topic for this assignment because it is something I struggle with in my own day-to-day life. I’ve done a lot of reading on this topic, trying to educate myself on it so that I know how to deal with it.

Screenshot of my prompt and the response given by ChatGPT.
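
For anyone who would rather reproduce this experiment programmatically than through the web interface, a minimal sketch using OpenAI's Python client is below. It assumes an API key is set in the environment, and the model name is my assumption; it is not necessarily the model behind the ChatGPT web app.

```python
# Minimal sketch: send the same prompt to a chat model via OpenAI's
# Python client. Requires the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat model would do
    messages=[
        {
            "role": "user",
            "content": "In 250 words, describe depression and its effects on a person.",
        }
    ],
)

print(response.choices[0].message.content)
```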

As someone who is admittedly pessimistic about all things AI, I was surprised by how good the response to my prompt was. I noticed that the factual details were accurate, but the AI tended to generalize in its wording. For example, one of the lines it gave was, “The effects of depression are far-reaching…” I changed this to “…can be far-reaching.”

The AI text left out how depression is different for everyone. Not everyone has the same symptoms or intensity of symptoms, or responds to the same treatments. I included these points in my revisions: just because you CAN have a certain symptom doesn't mean you WILL have it. Here's the AI response and my revisions.

Screenshot of ChatGPT response and some of my revisions on Google Docs.

I also added context to my revisions by including key facts from credible sources. The sources I used include the World Health Organization, Cleveland Clinic, National Institute of Mental Health, and Healthline. The facts given in the AI response matched the information about depression from these sources.

From this experiment, I learned that it's a good rule of thumb to scrutinize the material in an AI response. I don't trust the software enough to just accept what it has written, because it can make mistakes. Even though the material in my response was correct, there is always a chance it won't be. I also learned how AI tends to generalize, which has made me more inclined to comb through an AI text response.

At the end of my revisions, I also added a line about my struggle with depression. I felt that this added a human element, rather than the text sounding robotic, as if a computer wrote it. In the end, a computer or large language model can't suffer from depression; it is a human experience.

MCO 426

Wikipedia Assignment

This week we were tasked with editing Wikipedia. Since Wikipedia is an online encyclopedia that's been around for a couple of decades now, I had a hard time finding something I could contribute. I tried to find something I was interested in that was niche and perhaps not covered as thoroughly. One of my guilty pleasures is watching soap operas, and I learned that Wikipedia has a vast amount of information on soap opera storylines, characters, actors, and more. Most U.S. soap operas have been on for 50+ years, and Wikipedia documents a lot of that history.

I started looking at articles about soap operas I watch, trying to see if there was anything that I could add. As someone who watches Days of Our Lives, I came across an article listing all the characters that have been added to the show in the 2020s. I noticed that a new character that recently appeared in October 2024 wasn’t on this list. Finally, I felt like I had something I could add to Wikipedia!

I knew I needed to find a relevant and credible source to back up this information. Wikipedia's verifiability policy requires that any material added to Wikipedia be backed up with a credible source. This ensures that information added to Wikipedia is based on facts and not on opinions. I found an article from TV Insider, a website that covers news related to television. It backed up all the information I wanted to add: the character's name, the character's first air date, the character's connection to other notable characters, the actor portraying the character, and so on.

Next, I went to the talk page and added a new topic, explaining what I thought should be added and linking to the TV Insider article. As of now, I haven't received any replies, so I decided to make the edit myself. Even though Wikipedia doesn't have firm rules, there are a lot of policies and preferred ways of doing things, which made editing a bit confusing at first. I feared messing up the article, but following Wikipedia's citation guidelines helped, and I feel like I got the hang of it. Here is the article with my contribution added (it's still there as of this writing).

Comment I added to the article’s talk page.
My contribution to the article is highlighted in blue.
The citation I used for my information is highlighted in blue in the article’s references list.

In school, I was told not to use Wikipedia because anyone could edit information on there, and that it wasn’t reliable. In theory, anyone can contribute to Wikipedia, but they must back up what they’re saying with a reliable source. My view of Wikipedia is more positive, and I now believe that Wikipedia is a great starting point for research. As Pete Forsyth explains, Wikipedia should be seen as a platform rather than a publisher.

Instead of citing Wikipedia as a source, he recommends using it “as a guide to find more reliable sources, and then citing those sources directly.” This experience has helped me think of Wikipedia as a reference aggregator. If I'm researching a topic, I can go to that topic's Wikipedia article and dig into the references at the bottom of the page to use in my research.

I believe that Wikipedia can be a useful media literacy tool. When checking the validity of a claim or source, we can use the SIFT method (Stop, Investigate the source, Find better coverage and supporting evidence, and Trace claims back to their original context). Wikipedia is a tool that can be used as part of the investigative step of SIFT. This was an interesting experience, although I don’t think I’ll become a Wikipedia editor. However, I will continue to be a consumer.

MCO 426

Blog Assignment #4: Post with Video

For this week’s video blog, I made a tutorial on how to get started with diamond painting. Diamond painting is a craft where you put different colored diamonds (also called drills) onto a sticky surface, which creates a beautiful image once you’re finished. I hope that you’re encouraged to try diamond painting or find a craft that you enjoy!

Diamond painting has been a rewarding hobby for me. Diamond Art Club describes the six benefits of diamond painting as minimizing stress and anxiety, stimulating creativity, boosting one's artistic confidence, fine-tuning motor skills, joining a community, and unplugging from technology.

Diamond painting (and crafting in general) is part of MBAT (mindfulness-based art therapy), according to Diamond Art Club. In a paper available through the National Library of Medicine, authors Beerse, Lith, Pickett, and Stanwood say MBAT “combines mindfulness practices with art therapy to promote health, wellness, and adaptive responses to stress.”

Heartful Diamonds also describes how the repetitive and rhythmic movements of diamond painting can have therapeutic benefits: focusing on the task allows you to relax, which “can help to create a sense of calm and tranquility, while also allowing you to tap into your creative side and create a beautiful work of art.”

MCO 426

Blog Assignment #3: Post with Pictures

Why Photography is Better Than AI-Generated Images:

This week I looked at three different sources of images, all with desert sunsets as the subject. The first photo is one I took on my iPhone from my backyard, the second is a stock photograph, and the third is an AI-generated image that tries to follow the same subject. This experience has led me to conclude that human photographs are better than images generated by AI.

A photo of a sunset taken from my backyard. It includes light blue and orange colors. At the bottom are dark silhouettes of houses, trees, and a brick wall.
“Backyard Sunset” Photo taken by me (Katelyn Davidson) using iPhone 13 on 9 Nov. 2024.

There is a kind of relationship between an artist or photographer and their work. With the photograph I took, I had to physically go to where I wanted to take the picture, plan what time and angle, etc. I assume the photographer of the stock photograph did this as well.

Stock image of a sunset that's blue and orange. At the bottom is a dark silhouette of a Joshua tree and other desert plants.
“glowing high desert sunset-001” by NancyFry is licensed under CC BY 2.0.

But I didn't have this personal relationship with the AI-generated image. I typed in a prompt, and it spit out an image. I had a hard time getting the AI-generated image to look like mine and the stock photo, and I still found the result to look fake, not like an actual photograph.

This is because AI cannot mimic the process of photography. In his blog, Mindscape FX, David Wilson describes how photographs are created by light falling onto a photosensitive surface, where it is converted into an image either digitally or chemically.

AI-generated image of a sunset with clouds. Includes blue, pink, orange, yellow, and purple colors. There's a silhouette of houses, mountains, and plants at the bottom.
“A photograph of a sunset of a clear sky with light blue, light yellow, and light orange with a black silhouette of houses on the bottom” prompt, Nightcafe, 9 Nov. 2024, https://creator.nightcafe.studio/ (AI-generated image)

In an article from Wild Eye, Justin Black explains that one of the things that makes photography better than AI is that it encourages us to see and explore the world. Photography captures the memories of the places we’ve been to and the experiences we’ve had. Another of Black’s points is that photography allows us to add our creative vision from start to finish. But no matter how much we add to an AI-image generator prompt, the final image is the product of AI and its training.

Natalie Zepp, who runs a photography blog and business, sums it up best by saying, “While AI can enhance certain aspects of photography, it lacks the intuition, emotion, and creative insight that are integral to the human experience. Photography is not solely about capturing images but also about conveying stories, emotions, and perspectives that are deeply rooted in the human condition.” Even as AI-generated images improve, they still won’t have the human elements that make photographs special.

MCO 427

We Got This! Misinformation Education Creation Activity

Infographic by Katelyn Davidson.

For my misinformation education activity, I've chosen to explain how an aspect of digital technology works to a specific audience. The topic I'm covering is how social media algorithms are optimized for engagement. This topic is important because engagement-optimized algorithms play a big role in why social media is addictive. The way these algorithms are constructed also determines which content we see and which we don't.

My target audience is teenagers of all genders aged 13-17. I’ve chosen this age range because social media companies require that children be at least 13 years old to create an account. This complies with the Children’s Online Privacy Protection Act (COPPA).

Social media is extremely popular with teenagers, and I think that many of them aren’t educated on how the algorithms work and how they are designed for constant engagement. Knowing this information could help teenagers be aware of how this makes social media addictive and affects what content they do and don’t see.

I created two infographics on Canva that explain how social media algorithms are optimized for engagement. As a visual learner myself, I find infographics useful in how they break down information in an organized, visually appealing way. An example that I took inspiration from is an infographic from the Cornell University Library that explains how to spot fake news.

An infographic can also be effective as it can be shared on social media platforms. Since there’s a limited amount of space on an infographic, I included the information I thought was most important for my audience to know. I will provide more background context in this blog post.

Social media algorithms are optimized for engagement: the algorithm is trained to recommend content similar to what the user has already interacted with. An article from Sprout Social explains that social media algorithms use signals, data, and rules to control the platform's operation. The algorithm determines how content is selected, filtered, ranked, and recommended to users.

Sprout Social explains how the algorithms match users with relevant content by collecting user engagement signals such as likes, comments, and shares. These actions tell the platform what content a user finds interesting and relevant. The algorithms are AI-driven and filter content for users “based on explicit (follows and likes) and implicit (video-watching time) details to personalize content recommendations.”

The first infographic I created (above) is a guide for teenagers on how social media algorithms are optimized for engagement. Every social media platform has its own unique algorithm that we only know so much about, because social media companies keep the details secret; they consider the algorithm the 'secret sauce' of their business.

Even though each platform's algorithm is different, they generally work the same way. An article from Hootsuite explains that the algorithms are all based on machine learning and a set of factors called ranking signals. These signals rank every piece of content's value for each user at a given point in time.

Despite using advanced technological systems, their overall function is to “scan the entire pool of content available, then score and rank it to determine what appears in a user's feed.” All social media algorithms treat liking, commenting on, or sharing a piece of content as positive signals of what a user wants to see. Negative signals, such as stopping a video or blocking or hiding a post, tell the algorithm what a user does not want to see, as explained in an article by QuickFrame.
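
As a rough illustration of that score-and-rank loop, here is a toy Python sketch. The signals and weights are invented for illustration; real platforms use machine-learning models trained on many more ranking signals than this.

```python
# Toy engagement-based feed ranking: score every post, then sort.
# Signals and weights are invented; real systems are ML models.

posts = [
    {"id": 1, "likes": 120, "comments": 30, "shares": 10, "hidden_by_user": False},
    {"id": 2, "likes": 500, "comments": 5, "shares": 2, "hidden_by_user": True},
    {"id": 3, "likes": 80, "comments": 60, "shares": 25, "hidden_by_user": False},
]

def engagement_score(post):
    """Positive signals raise a post's score; negative signals sink it."""
    score = (1.0 * post["likes"]
             + 2.0 * post["comments"]  # comments weighted above likes
             + 3.0 * post["shares"])   # shares weighted highest
    if post["hidden_by_user"]:         # negative signal: user hid similar posts
        score *= 0.1
    return score

# "Scan the entire pool of content available, then score and rank it."
feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])  # -> [3, 1, 2]
```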

For my second infographic (below), I focused on one social media platform. I chose to specifically use this infographic to educate teens on how TikTok’s algorithm is optimized for user engagement. A study from Statista found that TikTok was used by 63% of U.S. teens aged 13-17 (my target audience) in 2023.

I also chose TikTok for the second infographic because, as described in an article by Buffer, its algorithm differs from those of other platforms such as YouTube and Instagram. Its recommendation system serves up a mix of quality content by creators the user follows as well as new content. This has helped give TikTok its addictive character.

Educating teens about how social media algorithms are optimized for engagement can give them a better understanding of how this makes social media addictive and shapes what they do and don't see on their feeds. The more users engage with the platform, the better it is for the platform's bottom line.

Infographic by Katelyn Davidson.

MCO 427

Assessing Platforms’ Current Attempts to Curb Misinformation

1st Platform: Twitter/X

For this week’s blog post, I’ve decided to research two social media sites that I use, Twitter/X and YouTube. Researching X’s attempts to curb misinformation was interesting because I found a lot of information showing that since Elon Musk’s takeover, many of the misinformation policies the platform used to have were rolled back. The current policies related to misinformation seem to fall under the ‘Authenticity’ category of X’s rules page.

One of the ‘Authenticity’ subcategories that X uses to curb misinformation is Civic Integrity, which states “You may not use X’s services for the purpose of manipulating or interfering in elections or other civic processes. This includes posting or sharing content that may suppress participation or mislead people about when, where, or how to participate in a civic process.”

Civic processes include political elections, censuses, major referenda, and ballot initiatives. Violations of this policy include misleading information about how to participate, suppression, intimidation, and false or misleading affiliation. For example, a tweet falsely claiming that people can vote by text message would violate these rules. Violating posts can be downranked, excluded from search results, and restricted from likes, replies, and other engagement.

X also restricts misleading and deceptive identities stating, “You may not misappropriate the identity of individuals, groups, or organizations or use a fake identity to deceive others.” Parody, fan accounts, and commentary are not in violation of this. Consequences of this can include suspension and/or your profile being modified.

Another policy that helps to curb misinformation on X is the policy on synthetic and manipulated media, which states, “You may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm (“misleading media”). In addition, we may label posts containing misleading media to help people understand their authenticity and to provide additional context.” An example of this would be inauthentic or fictional media being presented as reality or fact. Consequences include post deletion, labeling, or the account being locked.

These policies have merit; in my experience, I've never come across voting misinformation, and any manipulated videos or images I've seen are usually labeled as such. But since Musk's takeover, several policies have been rolled back. In November 2022, Twitter/X stated it would no longer enforce its COVID-19 misinformation policy, which had removed posts making false claims about COVID-19 vaccines. In my view, X should have a policy of removing all false health claims so that it stops the spread of health misinformation.

In September 2023, X removed a feature that allowed users to self-report political misinformation. This seems to contradict X's civic integrity policies: if users come across a post with false information meant to suppress voting, they have no place to report it.

Despite these rollbacks, I believe one of the positive things Musk has instituted is the Community Notes feature. Contributors write notes that add context and/or provide evidence disproving a claim made in a post. In my own Twitter/X experience, I've come across posts that I might have believed if not for the Community Notes feature.

The screenshot below is of a post with a picture claiming to be from the solar eclipse of April 8th, 2024, but users added a Community Note describing how it is actually an AI-generated image. A drawback of Community Notes is that a note is only shown on a post if enough people rate it as helpful. In my experience, Community Notes appear on posts that go viral or are seen by a lot of people, so many posts slip through the cracks.

Screenshot of post from @the_moon_lovers that shows the use of Community Notes.
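
For a rough sense of how such a visibility threshold could work, here is a toy sketch. The real Community Notes scoring algorithm is open source and far more sophisticated (it scores notes so that raters with differing viewpoints must agree); the rule and numbers below are invented for illustration.

```python
# Toy version of "a note appears only after enough helpful ratings."
# The real Community Notes algorithm is much more involved; these
# thresholds are invented for illustration.

def note_is_visible(helpful, not_helpful, min_ratings=5, min_helpful_ratio=0.8):
    total = helpful + not_helpful
    if total < min_ratings:  # too few ratings yet: note stays hidden
        return False
    return helpful / total >= min_helpful_ratio

print(note_is_visible(2, 0))   # False: low-traffic posts slip through the cracks
print(note_is_visible(40, 5))  # True: enough raters found the note helpful
```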

I think one of the things X can do to improve upon its existing efforts is to reinstate the policies it has rolled back. How can X have a policy that prohibits misinformation regarding civic integrity, yet remove the feature that let users report such misinformation when they came across it?

Another recommendation would be to remove repeat offenders instead of allowing them back on the platform, as Musk has done. It would also be helpful to 'nudge' users to think about accuracy with an accuracy prompt. X has made some good attempts at curbing misinformation, but it could do better.

2nd Platform: YouTube

Unlike Twitter/X, YouTube has multiple pages that specifically describe its efforts to combat misinformation. In 2022, YouTube announced that it had invested in a program it calls the 4 Rs of Responsibility, which combines human reviewers and machine learning to combat misinformation on the platform. The 4 Rs are “remove violative content quickly, raise up authoritative sources, reduce the spread of problematic content,” and reward content creators that follow the rules.

YouTube's misinformation policies page states that the company bases its misinformation policies on a clear set of facts. For example, it relies on expert consensus from local and international health organizations regarding COVID-19 medical misinformation. Newer misinformation, where no consensus of facts has formed yet, is harder to detect.

This typically falls into the category of “borderline content,” which is content that “comes close to – but doesn't quite cross the line of – violating community standards.” Borderline content is not recommended to users on the platform, which helps prevent its spread.

While researching YouTube, I found an article from The Guardian describing how a group called Doctors for Truth had posted videos on YouTube containing health and election misinformation. I went on YouTube and searched for this channel/group, and I didn't find these videos. I think this shows that YouTube takes its misinformation policies seriously: the videos were either removed from the platform or are being kept out of search and recommendations.

Another example of YouTube enforcing these policies: in August 2023, it announced a mass takedown of videos spreading cancer misinformation. YouTube stated that it would specifically remove videos that encouraged people not to seek professional medical treatment and/or promoted cancer treatments proven to be ineffective or harmful. I think this is a positive step for YouTube in ensuring that health misinformation isn't spread on its platform.

Screenshot of what a user sees when trying to access a video that has been removed from YouTube for violating its rules.

Like Twitter/X, YouTube has specific policies regarding misinformation about elections, COVID-19, and vaccines. But in June 2023, YouTube announced that it would “stop removing content that falsely claims the 2020 election or other past U.S. presidential elections were marred by ‘widespread fraud, errors or glitches.’” There is a consensus among experts, backed by evidence, that the 2020 election was not fraudulent, and allowing content that spreads this misinformation seems to run counter to YouTube's own policies on misinformation.

When conducting my research, I found YouTube's misinformation policies to be much clearer than Twitter/X's. Twitter/X's policies don't specifically mention 'misinformation'; you must read between the lines, whereas YouTube clearly states how it handles misinformation. Overall, I found YouTube's policies better, but I think the company should reverse its decision to stop removing videos that claim the 2020 election was fraudulent.

Regarding why there’s conspiratorial content available on the platform, an article from Forbes describes how YouTube and Google must explain “the specific criteria its algorithms use to rank, recommend, and remove content—as well as how often and why those criteria change and how they are weighted relative to one another.”

The article also states that YouTube should improve and expand its content moderation system by adding more human reviewers. These recommendations would allow YouTube to catch misinformation more easily, especially the harder-to-catch borderline content.

MCO 427

Claim Analysis

Claim: Over Easter weekend, a claim circulated on social media that President Biden was replacing the Easter holiday with Transgender Day of Visibility.

Screenshot of Tweet from @MehekCooke

To analyze this claim, I recommend using the SIFT method developed by Mike Caulfield, along with lateral reading, where you open different tabs to “verify what you're reading as you're reading it.”

Step #1: Stop (SIFT). I am immediately skeptical of this claim because it comes from social media, where anyone can post anything. It wasn't written by a journalist for a news outlet, and journalistic practices and ethics aren't required for social media posts. Another red flag is that it's a bold, partisan claim. An article from Tech Policy reports that Twitter use increases political polarization, outrage, and a sense of belonging.

Step #2: Investigate the source (SIFT). I'm unfamiliar with the user who tweeted this, so I open a new tab and Google her. Per Mehek Cooke's website, she is a Republican lawyer and political strategist/consultant. This gives me some insight into why she would share this claim: she has a vested interest in the Republican Party.

Screenshot of Google search of Mehek Cooke, which brings up her personal website.

Politicians and partisan commentators tend to post these kinds of claims on purpose because they trigger instant emotions. Forbes has reported that outrage spreads faster and drives more engagement on social media. This is why it's important to investigate claims made on social media; they may be information taken out of context to push a particular political narrative.

Step #3: Find better coverage (SIFT). Doing a Google search, I don't see any articles from credible news agencies reporting that President Biden has replaced Easter with Transgender Day of Visibility.

Step #4: Trace the claims (SIFT). In her tweet, Cooke screenshotted President Biden's 2024 proclamation that March 31st is Transgender Day of Visibility. I Googled this proclamation and found it. It proclaims March 31st, 2024, Transgender Day of Visibility; there is no mention of this replacing Easter, and Easter isn't mentioned in the proclamation at all.

Step #5: On the same White House website, I find that President Biden issued a separate statement for Easter, where he sent warm wishes to Christians on March 31st, 2024. This statement doesn’t mention Transgender Day of Visibility and there’s no mention of Easter being replaced as a holiday.

Screenshot of President Biden’s Easter Statement on whitehouse.gov

Step #6: I search “Transgender Day of Visibility” on Google and go to the Wikipedia page for it. Wikipedia is a useful resource for analyzing a claim and getting more background knowledge on a source. The Wikipedia page tells me that this is an international event that was created by Rachel Crandall Crocker in 2009.

Screenshot of Wikipedia page for International Transgender Day of Visibility.

Step #7: Next, I conducted a Google search on Rachel Crandall Crocker and found an NPR article detailing her and the observance. There is no mention of Easter or of Easter being replaced in this article. Rather, NPR describes how Crandall Crocker created the event in 2009 to celebrate the transgender community. Since 2009, the event's date has been March 31st because she wanted a date that wasn't too close to Transgender Day of Remembrance or Pride Month in June.

Screenshot of NPR article about the Transgender Day of Visibility’s founder.

Step #8: I go back to the Wikipedia page for International Transgender Day of Visibility, look at the references, and see a Newsweek article reporting that President Biden was the first to issue a proclamation for Transgender Day of Visibility, in 2021.

Screenshot of the Wikipedia references for International Transgender Day of Visibility. The highlighted one in blue is the article from Newsweek.
Screenshot of Newsweek article.

Step #9: Next, I Googled the date of Easter in 2021 and found that it was April 4th. The claim that President Biden was replacing Easter with Transgender Day of Visibility doesn't make sense: he has been issuing Transgender Day of Visibility proclamations since 2021, and in 2021 Easter was on April 4th. Transgender Day of Visibility always takes place on March 31st, while Easter falls on a different Sunday each year.

Screenshot of Google search for date of Easter 2021.
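
This date logic can even be checked in code. Easter's date is set by a fixed calendar algorithm known as the computus; the sketch below implements the Anonymous Gregorian algorithm, which confirms that Easter fell on April 4th in 2021 and on March 31st, coinciding with Transgender Day of Visibility, in 2024.

```python
# Gregorian Easter date via the Anonymous Gregorian algorithm (computus).
import datetime

def easter(year):
    a = year % 19
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return datetime.date(year, month, day + 1)

print(easter(2021))  # 2021-04-04 -> Easter was April 4th in 2021
print(easter(2024))  # 2024-03-31 -> coincides with March 31st in 2024
```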

Step #10: I Googled President Biden's Transgender Day of Visibility proclamations and saw that he has issued one every year since 2021. Transgender Day of Visibility and Easter simply happened to fall on the same date this year.

Screenshot of Google search for President Biden’s Trans Day of Visibility proclamations.

Conclusion/Verdict: My verdict is that the claim is false: President Biden did not replace Easter with Transgender Day of Visibility. Easter falls on a different Sunday each year, while Transgender Day of Visibility is on March 31st every year. It was a coincidence that both fell on the same day this year, and President Biden acknowledged both events.

When fact-checking something you suspect may be misinformation, use the SIFT method and lateral reading to trace the claim, and use Wikipedia and Google to get background on the source. It's important to take the time to investigate a claim, especially one seen on social media, which politicians and partisan commentators use to take information out of context and spread political narratives that fit their agendas.

MCO 427

Evaluating Misinformation Education Tools

The first tool I'll be evaluating is the News Literacy Project's RumorGuard. This is an interactive tool that fact-checks viral rumors spreading online. RumorGuard features a screenshot of the circulating rumor and fact-checks its claims based on five factors: source, evidence, context, authenticity, and reasoning. For each factor there are also techniques, in the form of quizzes, lessons, videos, and infographics, that show users how to apply these checks themselves.

On the homepage, you can click on one of the fact-checked rumors or search by topic, such as #TikTok, to see fact-checks of rumors circulating on TikTok. One of the most recent fact-checked items on RumorGuard is a video clip in which Donald Trump appears to refer to his wife, Melania, as 'Mercedes.' Three of the five factors (source, evidence, and context) were used to show that this claim is false: the video was taken out of context, as Trump was referring to Mercedes Schlapp, his former White House Director of Strategic Communication, not his wife.

This tool is effective in teaching participants about misinformation because it fact-checks viral rumors in a simple but thorough way. An article from Mashable describes how this tool differs from others because it engages future learning and goes beyond simply debunking a claim: it gives the user the choice to look at the details of how the content failed the five-factor test. The tool thus both fact-checks viral claims seen on social media and equips users to conduct fact-checks of their own.

Screenshot of rumor about Trump from RumorGuard.
Screenshot of RumorGuard’s fact-check of a rumor about Trump.
Screenshot of 5 Factors analysis used by RumorGuard for rumor about Trump.
Screenshot of techniques for fact-checking under the rumor about Trump from RumorGuard.

The second tool I'll be evaluating is a game called Bad News. In this game, you are the fake news mogul: you begin with a small social media following that you build up by posting fake news content in the form of tweets, memes, and so on. You play the bad guy, learning the tricks used to spread fake news.

You must strike a balance between keeping your followers entertained and earning new ones, while not posting things so outlandish that you lose credibility with your followers. Badges are earned for mastering each of the fake news techniques: impersonation, emotion, polarization, conspiracy, discredit, and trolling.

The game uses 'inoculation theory,' a psychological theory about persuading people to resist future persuasion attempts. An article from Science Alert compares this to a vaccine, where “exposing people to a weak argument can help them develop a defense system, whereby stronger arguments are not so contagious or harmful in the future.” The goal of the game is to immunize users against misinformation tactics by exposing them to a weakened dose of them.

In a research essay in Misinformation Review, the game's designers found that after playing, people became less susceptible to commonly used misinformation techniques, an approach referred to as prebunking. The same study found that “social impact games rooted in insights from social psychology can boost psychological immunity against online misinformation across a variety of cultural, linguistic, and political settings.”

Screenshot of Bad News gameplay, showing my follower count and credibility.
Screenshot of ‘Impersonation’ badge that I earned during gameplay of Bad News.

Interactive tools and games are a great way to teach people about misinformation and to counter it. Rather than reading an article about misinformation techniques and fake news, people can play a game or use an interactive tool, which is often more engaging and motivating.

An article from Classcraft describes how educational games in general can fit different types of learners; teach collaboration, critical thinking, and problem-solving; and reinforce social-emotional learning.

MCO 427

24-Hour Media Diet: Spotting Misinformation

My 24-Hour Media Diet:

Monday March 18th, 2024

8:30 AM- I turn off the alarm on my phone. Still groggy from sleep, I check my ASU email and my personal AOL email. It's my daily habit to check these every morning in case there's anything that needs my immediate attention.

8:45 AM- I go downstairs and make some oatmeal for breakfast. Sitting down to eat, I turn on the TV and put it on one of my local news channels, 3TV. As I eat, I log onto Twitter/X on my phone and scroll through my feed.

9:30 AM- After eating breakfast and scrolling my feed for a while, I look at the trending topics. I like to do this to get an idea of anything important going on; it makes me feel connected with what's happening in the world.

10:00 AM- One of the topics that's caught my eye is #RoyalAnnouncement. Full disclosure: reading about the royals is a guilty pleasure of mine. I see tweets claiming there will soon be a royal announcement because, apparently, King Charles died the previous day (Sunday, March 17th).

Tweet from @iam_aleeraza.
Tweet from @UAKSpeaks.

I'm instantly suspicious about the accuracy of this. If King Charles had died, it would be major news everywhere, yet Twitter is the only place I'm hearing of it.

11:00 AM- I go on my laptop and peruse some of the headlines on the Washington Post's home page. I click on a headline explaining that 81 million people in the U.S. will suffer from allergies this spring. As an allergy sufferer, I'm interested in this topic. The article cites a study showing that the warmer climate is contributing to a longer pollen season, which makes us more sensitized to allergens. Climate change is having a direct impact on our seasonal allergies.

12:00 PM- My mom texts me a link to an article from KTAR (an Arizona news radio station) about the murder suspects in the Preston Lord case. My family has been following this story as it happened in my local community. Last year, a gang of teenage boys and young men beat a 16-year-old boy to death and the suspects have recently been arrested.

12:45 PM- I make some lunch and sit down to eat. As I eat, I log onto YouTube on my laptop and watch a video from Kyeluh about the Kate Middleton conspiracy theories. Kyeluh (the YouTuber) gives a rundown of the conspiracy theories that have been going around on social media.

There aren't links in the description, but she says which news sites she got some quotes and headlines from, and she includes a video from the BBC. Even though she says where she got some of the information, she is ultimately giving her own opinion on what she thinks is going on and what she finds suspicious. I don't consider this news, but rather commentary/opinion.

Screenshot of YouTube video from Kyeluh.

2:00 PM- My mom and I take a walk to our nearest Starbucks. After we get our drinks and sit down, I scroll through my Twitter feed again.

4:00 PM- After walking home, I get started on my homework and reading for my MCO 427 class.

6:30 PM- While sitting down to eat dinner with my family, we watch several episodes of The Office on Peacock until bedtime. My phone is almost dead, so I put it on the charger. I take this time to give myself a break from being on my phone.

10:00 PM- Back in my room relaxing for the night, I scroll some more on Twitter. When I click a tweet, I almost always get an ad for Liver King, an influencer who claims he lives on raw meat and that this is why he has a muscular build. He never provides any data to back up these claims. The Community Notes below the tweet show that this has been flagged by others as misinformation because he uses steroids, not just raw meat as he claims. Links are also provided that prove his steroid use and that eating raw meat is unhealthy.

Tweet of an ad from @liverking.
2nd half of Tweet of an ad from @liverking with Community Notes.

11:30 PM- I get ready for bed, and that's the end of my day.

Flagging & Fact-Checking: I immediately found the tweets announcing King Charles' death suspicious. The tweets were not coming from journalists or news outlets, just random users. When I perused the headlines from the Washington Post, I noticed there was no mention of King Charles' supposed death; this would be major world news, and there would be something on the homepage if it had happened. I also went to the BBC's website, one of the first places where this news would break, and there was no mention of it.

I would also flag the advertisements I get on Twitter for Liver King. A diet of raw meat sounds unhealthy, and Liver King doesn't provide any data or evidence that it is effective. The Community Notes did my fact-checking for me, providing the context that Liver King uses steroids to achieve his muscular build. Links were provided that proved this, and one included a CDC article describing how eating raw meat is unhealthy.

Community Notes are a great addition to Twitter because they can stop the spread of misinformation and supply context that's missing from a tweet. But lots of tweets slip through the cracks, because a note only appears once enough users have rated it as helpful. So I am skeptical of any information I come across on Twitter.

Analysis: My media diet has shown me that I only use Twitter and YouTube when I'm bored and want to pass the time. I am more inclined to fact-check content on Twitter because anyone can post anything there. By contrast, when I'm reading an article from the Washington Post or a local news outlet, I trust it a lot more. For example, the article about allergies cited studies providing evidence for its claims.

Even though the Washington Post and local news outlets can make mistakes and get things wrong, I trust them because they practice traditional journalism: journalistic ethics and procedures went into those articles, rather than someone just tweeting. I'm also more inclined to fact-check information on YouTube because a lot of it is commentary/opinion, and again, anyone can post anything. Because of this, I will continue to fact-check the information I see on social media.

The amount of questionable content I saw was what I expected, because I have come across a lot of misinformation on social media before. I'm more inclined to fact-check social media because users don't have to adhere to journalistic ethics and procedures. As we learned in one of the module 2 lecture videos, people share misinformation because they find it interesting, because it would be interesting if true, for fun, or because they believe they are being helpful.