Our Approach to Preventing the Spread of False Information
As the world continues to battle the latest developments in the COVID-19 pandemic, it is more important than ever to ensure the public has access to accurate, credible information. The rapid spread of false information can pose serious threats to our institutions and public health, and we believe we’re in a moment in which companies, organizations, and individuals should take stock of their efforts to help prevent it.
In that spirit, we thought it would be helpful to walk through our long-held approach to preventing the spread of false information on Snapchat, and the ways we are working to improve.
Our approach has always started with the architecture of our platform. Snapchat was originally built to help people talk to their close friends, rather than provide the opportunity to broadcast messages across the app. And we have always felt a deep responsibility to make sure that the news and information our community sees on Snapchat is credible, from trusted and clear sources.
These underlying principles have informed our product design and policy decisions as Snapchat has continued to evolve over the years.
Across our app, we don’t allow unvetted content the opportunity to ‘go viral.’ Snapchat does not offer an unmoderated open newsfeed where unvetted individuals or publishers can broadcast false information. Our content platform, Discover, only features content from vetted media publishers and content creators. Our entertainment platform, Spotlight, is proactively moderated before content can reach a large audience. We offer Group Chats, but they are limited in size, are not recommended by algorithms, and are not discoverable on our platform if you are not a member of that Group.
Our guidelines have long prohibited the spread of false information. Both our Community Guidelines, which apply equally to all Snapchatters, and our content guidelines, which apply to our Discover partners, prohibit the spread of misinformation that can cause harm, including conspiracy theories, denial of the existence of tragic events, unsubstantiated medical claims, and attempts to undermine the integrity of civic processes. We regularly review and update our policies as new forms of misinformation become more prevalent: for example, ahead of the 2020 election, we updated our guidelines to make clear that manipulated media intended to mislead -- or deepfakes -- were prohibited.
Our approach to enforcing against content that includes false information is straightforward -- we don’t label it, we completely remove it. When we find content that violates our guidelines, our policy is to simply take it down, which immediately reduces the risk of it being shared more widely.
We evaluate the safety and privacy impacts of all new features at the front end of the product development process -- which includes examining potential vectors for misuse. We have internal measures in place to evaluate the potential impact of a new feature on the safety, privacy, and wellbeing of both individual Snapchatters and society -- and if we think it will become an avenue for bad actors to share false information, it doesn’t get released.
We use human review to fact-check all political and advocacy ads. As with all content on Snapchat, we prohibit false information and deceptive practices in our advertising. All political ads, including election-related ads, issue ads, and advocacy ads, must include a transparent “paid for” message that discloses the sponsoring organization. We use human review to fact-check all political ads, and provide information about all ads that pass our review in our Political Ads library.
We are committed to increasing transparency into our efforts to combat false information. Our most recent Transparency Report, which covered the second half of 2020, included several new elements, including data about our efforts to enforce against false information globally. During this period, we took action against 5,841 pieces of content and accounts for violations of our policies on false information -- and we plan to provide more detailed breakdowns of these violations in our future reports.
As we keep working to remove incentives for sharing false information, both through our product design choices and our policies, we’re also focused on partnering with experts to promote factual health and safety information. Since the beginning of the pandemic, we have worked closely with public health officials and agencies, including the World Health Organization and the Centers for Disease Control and Prevention, to publish regular safety updates, and our news partners around the world have produced constant coverage of the pandemic. Earlier this spring, as vaccines became available for young people in the US, we launched a new effort with the White House to help Snapchatters answer common questions, and in July, we teamed up with the UK’s National Health Service on a similar effort.
Doing our part to help our community stay safe and healthy is an ongoing priority for us, and we will continue to explore innovative approaches to reach Snapchatters where they are, while strengthening our efforts to protect Snapchat from the false information epidemic.