AMBER RUDD has unveiled an AI-powered tool to block Jihadist content from the internet.
And if you’ve got a mental picture of the Home Secretary in a top hat pulling the cloth off a Heath-Robinsonesque machine that blows bubbles, then welcome to our world.
Ms Rudd has warned that she hasn't ruled out making tech firms use it. The news comes as she visits tech giants in the US to discuss the idea, though the government says the tool is primarily aimed at smaller content providers that cannot afford this type of policing themselves.
The government spent £600,000 on the tool, which its designers, ASI Data Science, trained to recognise content related to IS. Suspect material is flagged to a human reviewer, who decides whether it should pass or not.
The company says that on an average day the tool would flag 250 IS videos for deletion, with a false-positive rate of 0.005 per cent.
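That classifier-plus-human-reviewer arrangement can be sketched in a few lines. This is purely an illustrative assumption, not ASI Data Science's actual system: the scorer, the threshold value, and the clip names are all hypothetical, standing in for whatever model and confidence cut-off the real tool uses.

```python
# Hypothetical human-in-the-loop triage: a classifier scores each video,
# and anything at or above a confidence threshold goes to a human review
# queue rather than being removed automatically.
FLAG_THRESHOLD = 0.94  # assumed cut-off, not the real system's value

def triage(videos, score_fn, threshold=FLAG_THRESHOLD):
    """Split videos into a human-review queue and a pass list."""
    review_queue, passed = [], []
    for video in videos:
        if score_fn(video) >= threshold:
            review_queue.append(video)  # a human makes the final call
        else:
            passed.append(video)
    return review_queue, passed

# Stand-in scorer: pretend the model's confidences are precomputed.
scores = {"clip_a": 0.99, "clip_b": 0.10, "clip_c": 0.95}
queue, ok = triage(scores, scores.get)
# queue holds clip_a and clip_c for human review; clip_b passes untouched
```

The point of the threshold is the trade-off Rudd's figures imply: set it high and fewer innocent videos land in the queue, set it low and fewer IS videos slip through.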
To get an idea of how the system works, imagine the AI is Alex in A Clockwork Orange – only forced to watch IS videos until the sight of one makes him retch.
But as with Burgess's dystopian vision, there are questions over whether this kind of power would circumvent free will. Indeed, if the government has a filter for the internet, what would stop it being misused to quash opposition? It would, in essence, be a Great Firewall like the one that envelops China.
Rudd has said that she doesn't plan to force the tool on anyone unless it's necessary – but who gets to define "necessary"?
Jim Killock, Executive Director of the Open Rights Group, has his doubts, not least because AI has a very poor sense of irony:
“While tools may be helpful, context is everything. Is a video being used for propaganda, news or satire? Computers will find it very hard to know for a very long time. There will always be mistakes.
“Amber Rudd’s job is to ensure the law is followed. Decisions must be accountable and subject to independent appeals. At the moment, police decisions to remove content are completely unaccountable. This project risks making the same assumption that mistakes won’t be made, or that it doesn’t matter if they are.”
There are currently no details on a timeline to roll out the service. µ
Source: Inquirer