Facebook is expanding its campaign to prevent the platform from being used to spread dangerous misinformation, affirming that it will remove bogus posts likely to incite hate and violence.
The new policy, rolled out across the global social network, was tested in Sri Lanka, which was recently rocked by inter-religious violence over false information posted on Facebook.
“There are certain forms of misinformation that have contributed to physical harm, and we are making a policy change which will enable us to take that type of content down,” a Facebook spokesman said after a briefing on the policy at the company's campus in Silicon Valley.
“We will begin implementing the policy during the coming months.”
For instance, Facebook may remove inaccurate or misleading content, such as doctored photos, created or shared to ignite volatile situations in the real world. Facebook said it is collaborating with partners and local authorities to identify when posts are likely to prompt hate or incite violence.
Misleading information removed in Sri Lanka under the new policy included content falsely contending that Muslims were poisoning food given or sold to Buddhists, according to the social network.
Hate speech and threats are already considered violations of Facebook's rules and were removed on that basis.
The new policy goes a step further, eliminating content that may not be explicitly violent but which seems likely to encourage such behaviour.
Facebook has been lambasted for allowing rumours or blatantly false information to circulate that may have contributed to violence.
Many say Facebook has in recent years been used as an instrument for spreading false information and hate speech, and for inciting violence.
Facebook has implemented a series of changes aimed at fighting the use of the social network to spread misinformation, from fabrications that incite violence to untruths that sway elections.