In an effort to prevent and disrupt real-world harm, we do not allow organisations or individuals that proclaim a violent mission or are engaged in violence to have a presence on our platforms. We assess these entities based on their behaviour both online and offline – most significantly, their ties to violence. Under this policy, we designate individuals, organisations and networks of people. These designations are divided into two tiers that indicate the level of content enforcement, with Tier 1 resulting in the most extensive enforcement because we believe that these entities have the most direct ties to offline harm.
Tier 1 focuses on entities that engage in serious offline harm – including organising or advocating for violence against civilians, repeatedly dehumanising or advocating for harm against people based on protected characteristics, or engaging in systematic criminal operations. Tier 1 includes hate organisations, criminal organisations, including those designated by the United States government as specially designated narcotics trafficking kingpins (SDNTKs), and terrorist organisations, including entities and individuals designated by the United States government as foreign terrorist organisations (FTOs) or specially designated global terrorists (SDGTs). We remove Glorification, Support and Representation of Tier 1 entities, their leaders, founders or prominent members, as well as unclear references to them.
In addition, we do not allow content that glorifies, supports or represents events that Meta designates as violating violent events – including terrorist attacks, hate events, multiple-victim violence or attempted multiple-victim violence, serial murders or hate crimes. Nor do we allow (1) Glorification, Support or Representation of the perpetrator(s) of such attacks; (2) perpetrator-generated content relating to such attacks; or (3) third-party imagery depicting the moment of such attacks on visible victims. We also remove content that Glorifies, Supports or Represents ideologies that promote hate, such as Nazism and white supremacy. We remove unclear references to these designated events or ideologies.
Tier 2 includes Violent Non-State Actors that engage in violence against state or military actors in an armed conflict but do not intentionally target civilians. It also includes Violence-Inducing Entities that are engaged in preparing or advocating for future violence but have not necessarily engaged in violence to date. These entities may also repeatedly engage in violations of our Hate Speech or Dangerous Organisations and Individuals policies on or off the platform. We remove Glorification, Material Support and Representation of these entities or of their leaders, founders or prominent members.
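Taken together, the two tiers map to different sets of content treatments. As a rough illustration only – the tier names and treatments come from the policy text above, while the data structure and function names are hypothetical and not Meta's actual systems – the mapping could be sketched like this:

```python
# Illustrative sketch only: tier names and enforcement treatments are taken
# from the policy text above; the structure and names below are hypothetical
# and not Meta's actual implementation.

from enum import Enum

class Tier(Enum):
    TIER_1 = 1  # Most direct ties to offline harm; most extensive enforcement
    TIER_2 = 2  # Violent Non-State Actors and Violence-Inducing Entities

# Content treatments removed for each tier, per the policy text above.
ENFORCEMENT = {
    Tier.TIER_1: {
        "glorification",
        "support",             # all Support
        "representation",
        "unclear_references",  # unclear or contextless references
    },
    Tier.TIER_2: {
        "glorification",
        "material_support",    # Material Support only
        "representation",
    },
}

def should_remove(tier: Tier, treatment: str) -> bool:
    """Return True if content giving this treatment to an entity of the
    given tier is removed under the policy as summarised above."""
    return treatment in ENFORCEMENT[tier]

# Example: unclear references are removed only for Tier 1 entities.
assert should_remove(Tier.TIER_1, "unclear_references")
assert not should_remove(Tier.TIER_2, "unclear_references")
```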
We recognise that users may share content that includes references to designated dangerous organisations and individuals in the context of social and political discourse. This includes content reporting on, neutrally discussing or condemning dangerous organisations and individuals or their activities.
News reporting includes information that is shared to raise awareness about local and global events in which designated dangerous organisations and individuals are involved.
Neutral discussion includes factual statements, commentary, questions and other information that do not express positive judgement around the designated dangerous organisation or individual and their behaviour.
Condemnation includes disapproval, disgust, rejection, criticism, mockery and other negative expressions about a designated dangerous organisation or individual and their behaviour.
Our policies are designed to allow room for these types of discussions while simultaneously limiting risks of potential offline harm. We thus require people to clearly indicate their intent when creating or sharing such content. If a user's intention is ambiguous or unclear, we default to removing content.
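The intent rule above amounts to a default-deny decision: content with a clearly indicated intent of news reporting, neutral discussion or condemnation is allowed, and anything ambiguous is removed. A minimal sketch of that flow, using hypothetical names and a much-simplified review signal (the intent categories come from the policy text; everything else is an assumption for illustration):

```python
# Hypothetical sketch of the decision flow described above. The intent
# categories come from the policy text; the names and structure are
# assumptions for illustration, not Meta's actual review logic.

from enum import Enum, auto

class Intent(Enum):
    NEWS_REPORTING = auto()      # raising awareness about events involving designated entities
    NEUTRAL_DISCUSSION = auto()  # factual statements, commentary, questions without positive judgement
    CONDEMNATION = auto()        # disapproval, criticism, mockery and other negative expressions
    AMBIGUOUS = auto()           # intent not clearly indicated

ALLOWED_INTENTS = {Intent.NEWS_REPORTING, Intent.NEUTRAL_DISCUSSION, Intent.CONDEMNATION}

def review(references_designated_entity: bool, intent: Intent) -> str:
    """Apply the default described above: clearly indicated allowed intent
    stays up; ambiguous or unclear intent defaults to removal."""
    if not references_designated_entity:
        return "no action under this policy"
    if intent in ALLOWED_INTENTS:
        return "allow"
    return "remove"  # default when intent is ambiguous or unclear

# Example: a post condemning a designated entity is allowed...
assert review(True, Intent.CONDEMNATION) == "allow"
# ...but an unclear reference defaults to removal.
assert review(True, Intent.AMBIGUOUS) == "remove"
```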
In line with international human rights law, our policies allow discussions about the human rights of designated individuals or members of designated dangerous entities, unless the content includes other glorification, support or representation of designated entities or other policy violations, such as incitement to violence.
Please see our Corporate Human Rights Policy for more information about our commitment to internationally recognised human rights.
We remove Glorification, Support and Representation of various dangerous organisations and individuals. These concepts apply to the organisations themselves, their activities and their members. These concepts do not proscribe peaceful advocacy for particular political outcomes.
Glorification, defined as any of the below:
We remove glorification of Tier 1 and Tier 2 entities, as well as designated events.
For Tier 1 and designated events, we may also remove unclear or contextless references if the user's intent was not clearly indicated. This includes unclear humour, captionless or positive references that do not glorify the designated entity's violence or hate.
Support, defined as any of the below:
We remove all Support of Tier 1 and Material Support of Tier 2.
Representation, defined as any of the below:
We remove Representation of Tier 1 and 2 Designated Organisations and designated events.
Tier 1: Terrorism, organised hate, large-scale criminal activity, attempted multiple-victim violence, multiple-victim violence, serial murders and violating violent events
We do not allow individuals or organisations involved in organised crime, hate or terrorism to have a presence on the platform. This includes entities designated by the United States government as specially designated narcotics trafficking kingpins (SDNTKs), Foreign Terrorist Organisations (FTOs) or Specially Designated Global Terrorists (SDGTs). We also don't allow other people to represent these entities. We do not allow leaders or prominent members of these organisations to have a presence on the platform, symbols that represent them to be used on the platform, or content that glorifies them or their acts, including unclear references to them. In addition, we remove any support for these individuals and organisations.
We do not allow content that glorifies, supports or represents events that Meta designates as terrorist attacks, hate events, multiple-victim violence or attempted multiple-victim violence, serial murders, hate crimes or violating violent events. Nor do we allow (1) content that glorifies, supports or represents the perpetrator(s) of such attacks; (2) perpetrator-generated content relating to such attacks; or (3) third-party imagery depicting the moment of such attacks on visible victims.
We also do not allow Glorification, Support or Representation of designated hateful ideologies, as well as unclear references to them.
Terrorist organisations and individuals, defined as non-state actors that:
Hate entity, defined as an organisation or individual that spreads and encourages hate against others based on their protected characteristics. The entity's activities are characterised by at least some of the following behaviours:
Criminal organisations, defined as associations of three or more people that:
Multiple-victim violence and serial murders
Hateful ideologies
Tier 2: Violent Non-State Actors and Violence-Inducing Entities
Organisations and individuals designated by Meta as Violent Non-State Actors or Violence-Inducing Entities are not allowed to have a presence on our platforms or have a presence maintained by others on their behalf. As these communities are actively engaged in violence against state or military actors in armed conflicts (Violent Non-State Actors) or are preparing, advocating for or creating conditions for future violence (Violence-Inducing Entities), material support of these entities is not allowed. We will also remove glorification of these entities.
Violent non-state actors, defined as any non-state actor that:
Violence-Inducing Entities are defined as follows:
A Violence-Inducing Entity (General) is a non-state actor that:
A Violence-Inducing Conspiracy Network is a non-state actor that:
A Hate Banned Entity is a non-state actor that:
See some examples of what enforcement looks like for people on Facebook: reporting something that you don't think should be on Facebook, being told that you've violated our Community Standards, and seeing a warning screen over certain content.
Note: We're always improving, so what you see here may be slightly outdated compared to what we currently use.
Prevalence: percentage of times that people saw violating content.
Content actioned: number of pieces of violating content that we took action on.
Proactive rate: percentage of violating content that we found before people reported it.
Appealed content: number of pieces of content that people appealed after we took action on it.
Restored content: number of pieces of content that we restored after we originally took action on it.
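The percentage metrics above are simple ratios over raw counts. A brief sketch of the arithmetic, with hypothetical field names and made-up example numbers (Meta's published methodology is more detailed than this):

```python
# Illustrative arithmetic for the metrics listed above, assuming simple raw
# counts. Field names are hypothetical; the published definitions are more
# detailed than this sketch.

def prevalence(violating_views: int, total_views: int) -> float:
    """Percentage of times that people saw violating content."""
    return 100.0 * violating_views / total_views

def proactive_rate(found_before_report: int, total_actioned: int) -> float:
    """Percentage of actioned violating content found before people reported it."""
    return 100.0 * found_before_report / total_actioned

# Example with made-up numbers: 50 violating views out of 1,000,000 total
# views gives a prevalence of 0.005%.
print(f"{prevalence(50, 1_000_000):.3f}%")   # 0.005%
print(f"{proactive_rate(970, 1_000):.1f}%")  # 97.0%
```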
We have an option to report, whether it's on a post, a comment, a story, a message or something else.
We help people report things that they don't think should be on our platform.
We ask people to tell us more about what's wrong. This helps us send the report to the right place.
Make sure that the details are correct before you click Submit. It's important that the problem selected truly reflects what was posted.
After these steps, we submit the report. We also lay out what people should expect next.
We remove things if they go against our Community Standards, but you can also unfollow, block or unfriend to avoid seeing posts in future.
After we've reviewed the report, we'll send the reporting user a notification.
We'll share more details about our review decision in the Support Inbox. We'll notify people that this information is there and send them a link to it.
If people think we made the wrong decision, they can request another review.
We'll send a final response after we've re-reviewed the content, again to the Support Inbox.
When someone posts something that doesn't follow our rules, we'll tell them.
We'll also address common misperceptions and explain why we made the decision to enforce.
We'll give people easy-to-understand explanations about the relevant rule.
If people disagree with the decision, they can ask for another review and provide more information.
We set expectations about what will happen after the review has been submitted.
We have the same policies around the world, for everyone on Facebook.
Our global team of over 15,000 reviewers works every day to keep people on Facebook safe.
Outside experts, academics, NGOs and policymakers help inform the Facebook Community Standards.
Learn what you can do if you see something on Facebook that goes against our Community Standards.