Twitter’s Head of Safety and Integrity, Yoel Roth, wrote in a blog post that Twitter is rolling out a new “crisis disinformation” policy designed to address “armed conflict situations, public health emergencies, and large-scale natural disasters.” The announcement of the new policy comes even as Twitter enters into an acquisition deal with Tesla chief Elon Musk, who has made clear his views on “modifying content” across several tweets and posts. Musk also insisted that the deal with Twitter could not go ahead until the platform confirmed the number of bots or fake users; Twitter pegs the figure at 5 percent, a claim that Musk doesn’t buy.
Twitter defines crises as “situations in which there is a widespread threat to life, physical integrity, health, or basic subsistence.” In order to determine whether a claim is misleading, the post adds, the platform will rely on “verification from multiple credible, publicly available sources, including evidence from conflict monitoring groups, humanitarian organizations, open-source investigators, journalists, and more.”
This will be a global policy that will “help ensure that viral misinformation is not amplified or recommended” by the platform during crises, the blog adds. The post notes that once Twitter has evidence “that a claim may be misleading, we will not amplify or recommend such content” across the platform.
This means such content will not be surfaced in the Home timeline, or in Search and Explore, in the app or on the website. Twitter will also “prioritize adding warning notices to highly visible Tweets and Tweets from high profile accounts, such as state-affiliated media accounts, verified, or official government accounts” that contain such misinformation.
Tweets that violate the crisis misinformation policy will be placed behind a warning notice that reads: “This Tweet violated the Twitter Rules on sharing false or misleading info that might bring harm to crisis-affected populations. However, to preserve this content for accountability purposes, Twitter has determined this Tweet should remain available.”
To be clear, Twitter will not delete potentially misleading information; it will only limit access to it.
According to the blog post, examples of content that may get a warning notice for false or misleading information include:
- false coverage or reporting of events, or information that misrepresents conditions on the ground as the conflict develops;
- false allegations about the use of force, an incursion into territorial sovereignty, or about the use of weapons;
- manifestly false or misleading allegations of war crimes or mass atrocities against certain populations;
- false information about the international community’s response, sanctions, defensive actions, or humanitarian operations.
Strong commentary, efforts to debunk or fact-check, and personal anecdotes or first-person accounts do not fall within the scope of the policy.
So what happens when Twitter adds a warning notice to a piece of misinformation? Users will still be able to view the content after clicking through the warning notice, but it will not be “amplified or recommended across the service.” Furthermore, Twitter will disable the options to like, retweet, or share that particular piece of content.
“We have found that not amplifying or recommending certain content, adding context through labels, and in severe cases, disabling engagement with the Tweets, are effective ways to mitigate harm, while still preserving speech and records of critical global events,” the blog post adds.
The first iteration of the policy focuses on international armed conflict, beginning with the war in Ukraine, and Twitter plans to “update and expand the policy to include additional forms of crisis.” “The policy will supplement our existing work deployed during other global crises, such as in Afghanistan, Ethiopia, and India,” the company said.