How Social Platforms Curb Misinformation - Facebook & TikTok
- kelseynwindham
- Apr 16, 2023
- 5 min read
Social media has become a significant part of our daily lives, connecting us like never before. However, with this increased connectivity comes the spread of misinformation, which can have serious consequences.
In recent years, Facebook and TikTok, two of the most widely used social media platforms, have come under fire for their handling of misinformation. Both companies have taken steps to address the issue, but how effective have these attempts been? Let's take a look at each platform and the significant steps it has taken to combat this problem.

FACEBOOK:
We all know Facebook doesn't have a great reputation when it comes to misinformation.
According to this blog post from Meta, these are the three key areas where the company is looking to fight the spread of misinformation:
disrupting economic incentives because most false news is financially motivated;
building new products to curb the spread of false news; and
helping people make more informed decisions when they encounter false news.
First, let's look at the key area of Disrupting Economic Incentives.
According to the same blog, these are the steps they will be taking to remove the economic incentives:
Better identifying false news through our community and third-party fact-checking organizations so that we can limit its spread, which, in turn, makes it uneconomical.
Making it as difficult as possible for people posting false news to buy ads on our platform through strict enforcement of our policies.
Applying machine learning to assist our response teams in detecting fraud and enforcing our policies against inauthentic spam accounts.
Updating our detection of fake accounts on Facebook, which makes spamming at scale much harder.
Second, let's look at Building New Products.
They are looking at ranking improvements, easier reporting, and working with partners.
Now let's talk about this for a second. They want to rank articles based on how people engage with them, but can engagement truly be a good indicator that something is misinformation?
Easier reporting makes it simpler for users to flag something as "fake news." This is concerning: who is to say that someone wouldn't report an article simply because they disagree with it or dislike it?
Lastly in this area, they are working with partners such as third-party fact-checking organizations. Meta states, "If the fact-checking organizations identify a story as false, it will get flagged as disputed and there will be a link to a corresponding article explaining why. Stories that have been disputed also appear lower in News Feed." This doesn't actually remove the misinformation, which means it can still spread online.
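To make concrete why flagging-but-not-removing still lets content circulate, here is a minimal sketch of that flow in Python. Every name and number here (the `Story` class, the demotion factor, and so on) is my own illustration of the behavior Meta describes, not Facebook's actual code:

```python
# Hypothetical sketch: fact-checkers mark a story as false, the story
# gets a "disputed" flag plus a link to the explanation, and its feed
# ranking is demoted -- but the story is never removed.
from dataclasses import dataclass

DEMOTION_FACTOR = 0.2  # assumed penalty; the real value is not public


@dataclass
class Story:
    title: str
    base_score: float          # normal ranking score from engagement
    disputed: bool = False
    explanation_url: str = ""  # link to the fact-check article


def apply_fact_check(story: Story, is_false: bool, explanation_url: str) -> None:
    """Flag a story as disputed and attach the fact-checker's explanation."""
    if is_false:
        story.disputed = True
        story.explanation_url = explanation_url


def feed_score(story: Story) -> float:
    """Disputed stories still appear in the feed, just ranked lower."""
    if story.disputed:
        return story.base_score * DEMOTION_FACTOR
    return story.base_score
```

In this sketch a disputed story's score drops sharply, but `feed_score` never returns zero and nothing deletes the story, which mirrors the criticism above: demotion slows the spread without stopping it.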
Lastly, let's look at the area of Helping People Make More Informed Decisions:
Ironically enough, Meta has been partnering with the Walter Cronkite School of Journalism and Mass Communication to help decide what research and projects to fund. As a student in this school at ASU, I found this very interesting and intriguing.
I think this aspect is neat; however, as a Facebook user and a Cronkite student, it was completely unknown to me. I know the work is there and in progress, but it makes you question what is actually being done and shared with everyone else, unless you dig deep and research it as I did today.
While these steps represent progress, Facebook still faces criticism for not doing enough to combat misinformation on its platform. There still isn't full transparency, and how reliable can these third-party fact-checkers actually be?

TIKTOK:
TikTok, on the other hand, is a newer platform that has quickly grown in popularity, especially among today's younger generation. Like Facebook, TikTok has also faced criticism for its handling of misinformation. However, the company has taken several measures to address the issue. One of the most significant steps is partnering with third-party fact-checkers to review and flag false information. TikTok adds warning labels to videos that contain misinformation, and users are required to acknowledge that they have read the label before they can share the video.
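That acknowledgment step is essentially a gate on the share action. Here is a minimal sketch of how such a gate could work; the class and function names are illustrative assumptions on my part, not TikTok's actual API:

```python
# Hypothetical sketch of TikTok's warning-label share gate: a video
# flagged with a misinformation label can only be shared after the
# user acknowledges having read the label.
class Video:
    def __init__(self, title: str, flagged: bool):
        self.title = title
        self.flagged = flagged  # set when fact-checkers flag the video


def try_share(video: Video, acknowledged_label: bool) -> str:
    """Return the outcome of a share attempt."""
    if video.flagged and not acknowledged_label:
        return "blocked: read and acknowledge the warning label first"
    return "shared"
```

The design point is friction, not prohibition: a flagged video can still be shared, but only after the user has been forced to pause and see the warning.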
In the company article "Combating Misinformation and Election Interference on TikTok," published August 5, 2020, general manager Vanessa Pappas addressed these concerns and outlined the steps TikTok is taking to combat misinformation. These steps are as shown below:
We're updating our policies on misleading content to provide better clarity on what is and isn't allowed on TikTok.
We're broadening our fact-checking partnerships to help verify election-related misinformation, and adding an in-app reporting option for election misinformation.
We're working with experts including the U.S. Department of Homeland Security to protect against foreign influence on our platform.
Beyond these warning labels, TikTok has launched educational campaigns to help users identify and avoid misinformation, and the company has removed numerous videos that spread misinformation or violate its Community Guidelines.
Another thing that was mentioned in this article was that TikTok is "committed to being transparent about how we execute our policies and safeguard our platform. With our Transparency and Accountability Center now available for virtual tours, along with the release of regular Transparency Reports and our new Transparency webpage, we aim to give users, lawmakers, and experts unprecedented insight into how we work to keep TikTok safe and secure."
While these steps are taking place and both platforms talk about their transparency, are they actually being transparent? The steps are going in the right direction, but should you have to dig for this "transparency," or could it be more visible to users? That's the real question. Why do these platforms still essentially bury this information? Yes, it is available to the public, but it's not where you would think to look.
The effectiveness of these measures is difficult to truly gauge, and both companies will likely need to keep refining their approaches to combat misinformation effectively and consistently, especially since the internet is ever-changing.
In conclusion, both Facebook and TikTok have taken significant steps to reduce the spread of misinformation on their ever-growing platforms. However, there is still much work to be done, and real transparency will be crucial to building trust with users. As social media platforms continue to grow, companies will need to stay consistent and keep developing new strategies to address the wildfire that is misinformation on the internet.



