As part of our new Safety School program, we are making an effort to share Meta's best practices more widely in a series of webinars, videos and more. With this series, you can learn more about the policies, resources, and specific tools available for improving account security and combating impersonation, bullying and harassment.
So far, we’ve connected with creators in more than 27 countries on these subjects, and we’re excited to share what we’ve learned. This four-part video series we’re sharing today is aimed at empowering creators and publishers to make the most of Meta’s safety and well-being tools. Each of these videos dives into important tips that help you keep your communities safe. Stay tuned as we expand Safety School to more creators this year!
Hear from Corey, an Instagram Well-being Lead, about actionable ways you can use Instagram's safety tools. Learn about Hidden Words, which can help you proactively filter out unwanted comments and direct messages, and Limits, which can hide comments and messages from people who don't follow you (or who only recently followed you). Watch a quick walkthrough of these tools as well as other moderation features that allow users to turn off commenting on a post or restrict interactions from specific accounts.
Familiarize yourself with account-level and moderation tools, and know when to use them.
Learn how to use them before you need them.
Plan out how you want to respond to unwanted interactions.
Learn more about the tools available to support the well-being of Facebook users and combat bullying and harassment on the platform. Hear from Cayley, a Product Manager focused on well-being at Facebook, about tools that can help prevent harm, including response tools for hiding and deleting concerning comments and blocking users.
Create your own keyword filter. Facebook’s keyword filter allows users to add a custom list of keywords they do not want to see on their Facebook Page.
Enable Facebook’s profanity filter. The profanity filter is a curated list of commonly reported obscene words. It is currently available in English only, with plans to support more languages later this year.
Familiarize yourself with Facebook’s post-level safety tools, which allow users to choose who can comment; delete, hide, and report comments; and block and ban users.
In this episode, hear from Brett on the Partner Operations Team about how reporting content on Facebook and Instagram works. Learn how the Reporting tool can be used to flag content, accounts, messages, and posts that users believe violate the Community Standards. Community Standards are the rules users must follow on the platforms, including rules against violence and fraud. Typically, these reports are sent to an operations team that reviews the reported content and removes it from the platform if it is found to be violating. Reporting also helps improve Meta’s own systems of detection and enforcement.
As Brett explains, Reporting is a tool available to every user, and it can help make individuals and the community at large safer.
Familiarize yourself with Facebook and Instagram’s Community Standards, which detail what behavior is and isn’t allowed on those platforms.
Be intentional about reporting. If you believe a specific piece of content, comment, or message violates the Community Standards, report that specific content (rather than the entire account that posted it).
In this video, hear insights from Adam on the Community Product Operations team about account compromise, how it happens, and how to help prevent it. Adam explains which behavioral and account vulnerabilities to look out for, and why.
Use a password manager to create strong passwords that you don’t have to remember yourself.
Enable two-factor authentication on your account.
Scrutinize potential phishing emails and other messages from untrusted sources, especially if they ask for your account credentials.