Wednesday, 22 March 2017

Twitter Suspended More Than 635,000 Terrorism Accounts Between August 2015 And December 2016

Social media giant Twitter has revealed it suspended a total of 376,890 accounts between July 1 and December 31 last year for violations related to the promotion of terrorism. It says the majority of the suspensions (74%) were surfaced via its own “internal, proprietary spam-fighting tools”.

The figures are revealed in a new section of its biannual Transparency Report which also details government requests to remove content deemed to be promoting terrorism and thus in violation of Twitter’s Terms of Service.

Government TOS requests pertaining to terrorism represented less than 2% of all account suspensions in the reported time period, with Twitter saying it received 716 reports, covering 5,929 accounts, and deemed 85% to be in violation.

Twitter also notes that it suspended a total of 636,248 accounts over the period of August 1, 2015 through December 31, 2016, and adds that it will be sharing “future updates on our efforts to combat violent extremism by including them in this new section of our transparency report”.

Twitter had previously reported suspending 125,000 accounts for promoting terrorism between mid-2015 and early 2016 — and a further 235,000 suspensions were made for this reason in the six months after that, to August 2016.

Recall that in December 2016, Twitter, Microsoft, Facebook and YouTube announced a collaboration on a shared industry database aimed at identifying terrorist content spreading across their respective platforms to speed up takedowns.

But it’s fair to say that the issue of terrorist takedowns is just the tip of the political iceberg that has crashed into social media giants in recent times.

Twitter and Facebook, for example, have come under increasing pressure to do more to combat trolling, ‘fake news’ and hate speech circulating on their platforms, especially in the wake of the US election last year — when commentators accused social media companies of skewing political discourse by incentivizing the sharing of misinformation and enabling the propagation of far-right extremist views.

Twitter’s response to criticism of how it handles accounts that are doling out abuse in tweet-form has so far included updating its abuse policy and adding more muting/filtering tools for users, although last month it ended up rolling back some additional anti-abuse measures after they were criticized by users.

It also continues to be called out for failing to combat botnets — such as those working to amplify far-right political views via automated spam tactics.

On the issue of political bot-troll armies, it’s fair to say Twitter appears far more agnostic, or at least less interested in taking a stand. A study by two US universities last month suggested 15% of Twitter accounts are bots — which gives the veteran ‘pro-free speech’ company a pretty sizable reason to tread carefully here.

And with user growth an ongoing problem for Twitter, it’s hardly going to be keen to nix millions of robot accounts. (Plus of course not all bots are seeking to subvert democracy — even if some demonstrably are.)

When one 350,000-strong botnet was discovered in January, the company told the BBC it has a clear policy on automation that is “strictly enforced”, although it also said it relies on user reports to combat spam.

This month it has also said it’s working on building tools to identify accounts engaging in abusive behavior even where the accounts haven’t been reported by users — such as identifying accounts that are repeatedly tweeting without solicitation at non-followers, or engaging in patterns of abusive behavior.
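To make the idea concrete, here is a minimal sketch of the kind of heuristic being described — flagging accounts that repeatedly @-mention users who don’t follow them. This is purely a hypothetical illustration, not Twitter’s actual system: the data shapes, field names and thresholds are all assumptions.

```python
from collections import Counter

def flag_unsolicited_mentioners(mentions, followers, min_mentions=20, non_follower_ratio=0.9):
    """Flag accounts that repeatedly @-mention users who do not follow them.

    mentions  -- iterable of (author, target) pairs observed in a sample window
    followers -- dict mapping author -> set of accounts that follow that author
    Thresholds are illustrative guesses, not known Twitter values.
    """
    total = Counter()
    unsolicited = Counter()
    for author, target in mentions:
        total[author] += 1
        # Count mentions aimed at accounts that don't follow the author back.
        if target not in followers.get(author, set()):
            unsolicited[author] += 1

    flagged = []
    for author, count in total.items():
        if count >= min_mentions and unsolicited[author] / count >= non_follower_ratio:
            flagged.append(author)
    return flagged

if __name__ == "__main__":
    # Tiny made-up example: "spammy_bot" mentions strangers, "normal_user" talks to a follower.
    sample_mentions = [("spammy_bot", f"user{i}") for i in range(25)] + [("normal_user", "friend")] * 5
    sample_followers = {"normal_user": {"friend"}, "spammy_bot": set()}
    print(flag_unsolicited_mentioners(sample_mentions, sample_followers))  # ['spammy_bot']
```

A real system would of course work on far richer signals (timing, content similarity, network structure) rather than a single ratio, but the basic shape — pattern detection without waiting for a user report — is the point Twitter is making.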

Automating the fight against bots is likely the only approach that will scale to meet the noise of spammers, political or otherwise — yet Twitter is clearly conflicted, with a spokesperson arguing earlier this month that “many bot accounts are extremely beneficial”.

Safe to say the company has shown no appetite for wading into fine-grained judgements about what constitutes “beneficial” vs malicious spam. Yet user reporting clearly will not scale to meet the spam challenge, which means Twitter is going to continue to have a political bot-troll problem.

Additionally, in this — its tenth — Transparency Report, Twitter has added another new section under legal removals covering requests to remove content posted by verified journalists and other media/news outlets.

“Given the concerning global trend of various governments cracking down on press freedom, we want to shine a brighter light on these requests,” it writes. “During this reporting period, Twitter received 88 legal requests from around the world directing us to remove content posted by verified journalists or news outlet accounts.

“We did not take any action on the majority of these requests, with limited exceptions in Germany and Turkey, the latter of which accounted for 88% of all requests of this nature. For example, we were compelled to withhold Tweets sharing graphic imagery following terror attacks in Turkey in response to a court order.”

Culled from: TechCrunch
