Twitter suspends 377,000 accounts for pro-terrorism content

LONDON, ENGLAND - NOVEMBER 07: In this photo illustration, the Twitter logo is displayed on a mobile device as the company announced its initial public offering and debut on the New York Stock Exchange on November 7, 2013 in London, England. Twitter went public on the NYSE opening at USD 26 per share, valuing the company at an estimated USD 18 billion. (Photo by Bethany Clarke/Getty Images)

SAN FRANCISCO — Twitter suspended nearly 377,000 accounts in the last six months of 2016 for promoting terrorism.

The company made the announcement Tuesday in its twice-annual transparency report, which publishes data on requests Twitter has received from governments and other legal entities to police content on its platform.

Of the 376,890 accounts Twitter suspended for posting terrorism-related content, just 2% were the result of government requests to remove data. Twitter said 74% of extremist accounts were found by “internal, proprietary spam-fighting tools.”

This marks the first time Twitter included its efforts to combat violent extremism in its transparency posts since the company began publishing the reports in 2012. And the company said it plans to continue including that information in future reports.

Twitter first announced efforts to combat extremism in 2015 and doubled down on those efforts last year, announcing in February 2016 several initiatives including partnering with outside organizations, training its policy team and attending government-sponsored summits.

In total, Twitter has suspended 636,248 accounts for extremism between August 2015 and December 2016.

Social media companies are tackling violent pro-terrorism content in a variety of ways as it proliferates on social networks. Facebook and Google use automated tools to identify and remove extremist videos, for example. Facebook also encourages “counter-speech” on its platform, or creating and distributing content that contradicts hate speech messaging.