United Kingdom Online Safety Information


Information from X

The X Rules help ensure all people can participate in the public conversation freely and safely. Visit our Help Center for tips on using X, resources to help you stay safe on X, and information about account safety and security. We also encourage you to review this getting started guide and know how to report potential violations of the X Rules.

If you need account support, submitting a support form ensures your request is routed to the appropriate team. If you ever need to appeal an enforcement decision, click here.

Content moderation practices

Our content moderation systems are designed to protect users without unnecessarily restricting the use of our service or fundamental rights, especially freedom of expression. Content moderation is anchored in principled policies and leverages a diverse set of interventions to ensure that our actions are reasonable, proportionate and effective. Our systems blend automated and human review, paired with a robust reporting and appeals process that enables users to quickly raise potential moderation errors or anomalies.


To enforce our Rules, we use a combination of machine learning and human review. Our systems surface content to human moderators, who use important context to make decisions about potential violations. This work is led by an international, cross-functional team with 24-hour coverage and the ability to work across multiple languages. We also have a complaints process for any potential errors that may occur.

Automated content enforcement

X employs a combination of heuristics and machine learning algorithms to automatically detect content that we believe violates the X Rules and the policies enforced on our platform. We use a combination of natural language processing models, image processing models and other sophisticated machine learning methods to detect potentially violative content. These machine learning models vary in complexity and in the outputs they produce. For example, the machine learning model used to detect abuse on the platform is trained on abuse violations detected in the past. Heuristics are simple rule-based models (patterns of behaviour, text, or keywords that may be typical of a certain category of violations), typically utilised to enable X to react quickly to new forms of violative behaviour that emerge on the platform. Content flagged by machine learning models and heuristics is either reviewed by human content reviewers before an action is taken or, in some cases, automatically actioned, based on the measured historical accuracy of the specific method.
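The routing logic described above can be sketched in simplified form. This is a hypothetical illustration only, not X's actual system: the threshold, method names and precision figures are invented for the example. The idea is that a flag is auto-actioned only when the detection method that produced it has a high measured historical accuracy; otherwise it is queued for human review.

```python
# Hypothetical sketch of routing flagged content based on the measured
# historical accuracy of the detection method. All names and numbers
# below are assumptions for illustration, not real X values.

AUTO_ACTION_PRECISION = 0.99  # assumed threshold for auto-actioning

# Assumed historical precision per detection method (heuristic or model).
method_precision = {
    "spam_keyword_heuristic": 0.995,
    "abuse_text_model": 0.92,
}

def route_flag(method: str) -> str:
    """Auto-action only when the flagging method has proven highly
    accurate in the past; otherwise queue the item for human review."""
    precision = method_precision.get(method, 0.0)
    if precision >= AUTO_ACTION_PRECISION:
        return "auto_action"
    return "human_review"
```

Under this sketch, a well-proven heuristic's flags would be actioned automatically, while flags from a less accurate model, or from an unknown method, would default to human review.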

User controls

To protect your experience, we provide tools designed to help you control what you see, so that you can personalise your experience on X. We make it easy for you to take action on a post. Tap the icon at the top of any post on your timeline to quickly access options including:

  • Following and unfollowing: Users can choose to either see or stop seeing someone’s posts on their timeline. 
  • Muting: Users can mute an account if they do not want to see that account’s posts. 
  • Blocking: Users can block accounts to prevent engagement by blocked users and to stop seeing the blocked user’s posts. 
  • Reporting: Users can report accounts or posts they think are in violation of the X Rules to X. 

We also provide tools to help you decide what others can see about you. These include, but are not limited to, our discoverability settings, controls for sharing your location in posts, and media settings that allow you to mark your posts as containing sensitive content.

Find more details here.

Information for parents and minors

X, as a service, is not primarily for children, but anyone aged 13 or over can sign up for the service. We recognise that minors are a more vulnerable group by virtue of their age. Learn more.

Accounts belonging to known minors will, by default, be restricted to receiving Direct Messages (DMs) only from accounts they follow. If such accounts disable this default setting, they can receive DMs from accounts they do not follow, which could lead to unwanted or unsolicited messages from unknown accounts. However, users can still block accounts to prevent engagement by blocked users and to stop seeing the blocked user's posts.

Additionally, X provides in-app DM reporting functionality with specific and nuanced reporting options relevant to the protection of minors. This reporting capability ensures that reports are immediately reviewed by the relevant teams and prioritised for enforcement.

Accounts belonging to known minors will be defaulted to ‘protected’ posts. This means that known minors will receive a request when new people want to follow them (which they can approve or deny), that their posts will only be visible to their followers, and that their posts will only be searchable by them and their followers (i.e. they will not appear in public searches). 

Further information

NOTE: Informational resources available on our Help Center are not intended for law enforcement use and such requests should be directed to X's Legal Request Submission site, available at t.co/lr or legalrequests.x.com.
