Wednesday, 10 December 2014

Anti-terror Measures: How Technology Helps Fight The Counter-Terrorism War

Davey Winder examines how technology can help and hinder in the fight against terrorism.

Science is being used to counter "technically aware terrorists", as part of a wider technology push for countering international terror threats, according to the UK government's recent Protecting the UK Against Terrorism policy document.

Because of the nature of the counter-terrorism beast, exactly what technology is being used and how it is being implemented is not in the public domain. That doesn't mean, however, we are unaware that communication monitoring techniques are at the very heart of the surveillance and interception policy and have been for many years.

Indeed, the 1994 Intelligence Services Act and the 2000 Regulation of Investigatory Powers Act give law enforcement and security agencies fairly sweeping authority to intercept and monitor everything from mobile phone calls to email and social media usage.
I'm not going to cover old ground, on the basis that everyone knows about the Edward Snowden revelations by now. However, only a fool would think spies are going to stop spying; it's what they do, and when 'the threat' could be any one of us, they will spy on all of us.

I don't like it, and there's a very fine line between counter-terrorism and wholesale State monitoring of citizens. Which is why, post-Snowden, the big players have woken up to encryption and the newly enlightened public's desire to embrace it.
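
To make concrete why that shift matters to the interception debate, here's a deliberately simplified Python sketch of what end-to-end encryption means in practice: anyone sitting between the two endpoints sees only ciphertext. It uses the third-party cryptography package, and the message and key handling are purely illustrative, not any particular vendor's scheme.

```python
# Minimal sketch: why end-to-end encryption frustrates interception.
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # shared secret held only by the two endpoints
cipher = Fernet(key)

plaintext = b"meet at the usual place"
ciphertext = cipher.encrypt(plaintext)   # this is all an interceptor ever sees

print(ciphertext)                        # opaque token, useless without the key
print(cipher.decrypt(ciphertext))        # only the key holder recovers the message
```

The point of the design is simple: the service carrying the message never holds the key, so there is nothing meaningful for it to hand over.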

It's also why Robert Hannigan, the UK spymaster general at GCHQ, has accused technology companies of providing platforms that have become "the command and control networks of choice" for terrorist groups like ISIS.

Breaking encryption is, one would assume (and if you'll excuse the pun), another key to counter-terrorism success. Which is why, during a recent visit to Professor Andrew Blyth, director of the Information Security Research Group based at the University of South Wales, I wasn't surprised to learn that government agencies have already expressed an interest in his lab's ability to break the device encryption employed by iOS 8.
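
To give a flavour of why passcode-based device encryption is so awkward to attack, consider the sketch below. It is a simplified illustration, not Apple's actual key hierarchy: the salt, iteration count and passcode are all invented, and real devices additionally bind the derivation to hardware. The principle it shows is that every brute-force guess has to run through a deliberately slow key-derivation step.

```python
# Illustrative only: brute-forcing a passcode-derived key via slow KDF.
import hashlib
import itertools
import time

SALT = b"per-device-salt"   # stand-in for a device-unique value (assumption)
ITERATIONS = 200_000        # work factor; real devices also tie this to hardware

def derive_key(passcode: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), SALT, ITERATIONS)

target = derive_key("0042")   # the key we pretend protects the device's data

start = time.time()
for guess in ("".join(digits) for digits in itertools.product("0123456789", repeat=4)):
    if derive_key(guess) == target:
        print(f"found passcode {guess} in {time.time() - start:.1f}s")
        break
# A four-digit space is trivial; longer alphanumeric passcodes and hardware-bound
# keys push the same arithmetic from minutes towards centuries.
```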

Intercepting communications will be at the heart of every counter-terrorist investigation, just as it always has been, because, as one intelligence officer so aptly put it, "terrorists have to communicate." How they communicate, of course, is also part of the growing problem for the counter-terrorism guys.

Looking beyond the cloak and dagger, encoded messages and dark-corner conversations, some terrorist communication is much more open. Use of the web, via YouTube videos and social media, has become de rigueur when it comes to distributing extremist material, propaganda and misinformation alike.

All of which are powerful weapons in the terrorist arsenal, and ones that ISIS has recently used to devastating effect, both to showcase executions and to recruit new fighters to its cause. In an attempt to proactively defend against such tactics, the Metropolitan Police established a dedicated Counter Terrorism Internet Referral Unit (CTIRU) in 2010 to deal with public reports of online content "of a violent extremist or terrorist nature." Since it started, CTIRU has removed some 55,000 pieces of content, 34,000 of them in the last year alone.

More controversially, the UK government is putting pressure on Internet Service Providers to block 'extremist' content at source, so that customers would not be able to see it. If it happens, this blocking would take the form of optional filtering, similar to that already in place for pornographic content.
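
For readers wondering what 'blocking at source' tends to mean in practice, the Python sketch below shows the basic shape of an opt-in hostname filter. The blocklist entries and the opted_in flag are invented for illustration; real ISP deployments layer this over DNS and proxy infrastructure and consume curated category feeds rather than a hard-coded set.

```python
# Sketch of an opt-in, network-level hostname filter (illustrative only).
from urllib.parse import urlparse

BLOCKLIST = {"example-blocked-site.invalid", "another-blocked-host.invalid"}

def is_blocked(url: str, opted_in: bool) -> bool:
    """Return True if the customer opted in to filtering and the host is listed."""
    if not opted_in:
        return False                      # filtering is optional, per the proposal
    host = urlparse(url).hostname or ""
    # match the host itself or any subdomain of a listed entry
    return any(host == h or host.endswith("." + h) for h in BLOCKLIST)

print(is_blocked("http://example-blocked-site.invalid/video", opted_in=True))   # True
print(is_blocked("http://example-blocked-site.invalid/video", opted_in=False))  # False
```

The design choice that matters here is the opt-in flag: the filter only ever applies to customers who have asked for it, which is exactly why its reach is questioned below.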

Quite how effective this might be is one legitimate question being raised by both the ISPs themselves and internet rights groups, given that opt-in filters have not proved that popular with the public and methods of circumventing them are readily available to anyone who cares to Google for them.

Another legitimate question is: who determines what content is extremist, and how is that determination reached? Whenever we talk of political censorship we have to be very careful to be transparent and open; otherwise it's a very slippery slope towards state-controlled media.

I appreciate such a view will, no doubt, have some readers on the verge of an aneurysm, but there has to be a method of knowing who is blocked and why, and an appeals mechanism for when the system inevitably screws up.

According to CTIRU, examples of what is currently considered extremist include speeches or essays calling for racial or religious violence, videos of violence with messages of 'glorification' or praise for terrorists, postings inciting people to commit acts of terrorism or violent extremism, and messages intended to generate hatred against any religious or ethnic group.

Of course, even if such a method of filtering online material were agreed, it would need to be implemented by every UK-based ISP to be effective. And therein lies the next stumbling block, one the likes of CTIRU already has to deal with: most social media organisations are not based in the UK and are not obligated to remove content when asked to by UK law enforcers.

I'm not saying they won't or don't, but the process is an entirely voluntary one, and that's why definitions of extremism and terrorism have to be front and centre of the counter-terrorism tech debate.

The international element also raises another concern when counter-terrorism measures flow into the social media and online realm: who gets to participate in the takedown process? Free speech is a two-way street, and with social media being a global game there are players whose politics and definitions of extremism and terrorism differ from ours.

Social media is no longer a two-horse race with just Facebook and Twitter to deal with; smaller players quickly gain traction with the young (who are, remember, the main targets of extremist propaganda) and may be less amenable to kicking the freedom-of-speech ball out of play, especially in a post-Snowden world where tech companies have seen the backlash when any hint of being in bed with the spymasters becomes public.

Source: itpro.co.uk
