Facebook’s Remove, Reduce, Inform policy

If you haven’t already heard, Facebook has a Remove, Reduce, Inform policy. It helps the social platform cut out potentially damaging posts and accounts, and it’s set up to stop misleading information from spreading.

Facebook have had the Remove, Reduce, Inform system in place for quite some time, and over the years it has been updated multiple times to keep it performing to the required standard. The aim is to prevent harmful content from appearing on the platform.

If a post could potentially offend, or put any user in danger, then it has to go. However, if it isn’t harmful but could potentially be misleading, it needs to be reduced. Reducing comes down to limiting the number of people who see it.

Facebook previously said: “This involves removing content that violates our policies, reducing the spread of problematic content that does not violate our policies and informing people with additional information.”

Remove

If any posts or accounts contain harmful content, they can be removed. Facebook aren’t required to provide a warning, especially if the content puts others at risk. You can find the Remove strategy within Facebook’s Community Standards.

By adding this section to their guidelines, Facebook can refer users back to it. If a post is removed and a user wants to complain, they will be directed to the guidelines. Facebook have also introduced a Group Quality feature, which lets users see why groups have been flagged.

Reduce

In an aim to reduce the amount of harmful content that falls through the cracks, Facebook have expanded their third-party fact-checking programme. Independent fact-checkers review posts and flag false or misleading content so that its reach can be limited.

The company are also aiming to reduce the number of groups posting false information. Clickbait has become the norm, and unfortunately this means many users fall for these traps. Spreading misleading information can be extremely damaging.

Inform

You’ve likely come across posts that say things such as “this content may be sensitive”, where you can either approve the post and view it anyway, or ignore it. Facebook introduced these warnings to let users know about the content they’re about to view.

Often, posts might not go against Facebook’s terms of service and are therefore allowed to stay online. But they might still be triggering or hurtful to certain groups of users. Because of this, Facebook have found a way to flag potentially harmful content.

The tool should also help users avoid spam accounts. Far too many scams have happened through social sites like Facebook. By warning users before they continue, Facebook are trying to prevent anything damaging from happening.
