The European Commission is done with waiting for social platforms to voluntarily fix the problem of extremist content spreading on their services. On Sunday, the Financial Times reported that the EC’s going to follow through on threats to fine companies like Twitter, Facebook and YouTube for not deleting flagged content post-haste.
The commission is still drawing up the details, but a senior EU official told the FT that the final form of the legislation will likely impose a limit of one hour for platforms to delete material flagged as terrorist content by police and law enforcement bodies.
The EC first floated the one-hour rule in March, but it was just a recommendation at that point: something that the EC let companies implement voluntarily to the best of their abilities.
Or not, as the case may be. Although the one-hour rule was only a recommendation at the time, companies and member states still had requirements they needed to meet, including submitting data on terrorist content within three months and on other illegal content within six months.
Whatever tech companies have done to satisfy those requirements, the EC isn’t happy with it. Julian King, the EU’s commissioner for security, told the Financial Times that Brussels hasn’t “seen enough progress” from the platforms and that it would “take stronger action in order to better protect our citizens”.
We cannot afford to relax or become complacent in the face of such a shadowy and destructive phenomenon.
The recommendations that came in March followed the commission having promised, in September, to monitor progress in tackling illegal content online and to assess whether additional measures were needed to ensure such content gets detected and removed quickly. Besides terrorist posts, illegal content includes hate speech, material inciting violence, child sexual abuse material, counterfeit products and copyright infringement.
Voluntary industry measures to deal with terrorist content, hate speech and counterfeit goods have already achieved results, the EC said in March. But when it comes to “the most urgent issue of terrorist content,” which “presents serious security risks”, the EC said procedures for getting it offline could be stronger.
Procedures for flagging content should be simpler and faster, for example; reports from “trusted flaggers” could be fast-tracked, for one. And to guard against false flags, content providers should be informed of decisions and given the chance to contest content removal.
As far as the one-hour rule goes, the EC said in March that the brevity of the takedown window is necessary given that “terrorist content is most harmful in the first hours of its appearance online.”
The proposed legislation will have to be approved by the European Parliament and a majority of EU member states before being finalized as law. King told the FT that the new law will help to create legal certainty and would apply to all websites, big or small:
The difference in size and resources means platforms have differing capabilities to act against terrorist content, and their policies for doing so are not always transparent. All this leads to such content continuing to proliferate across the internet, reappearing once deleted and spreading from platform to platform.
The tech companies have protested the one-hour rule, saying it could do more harm than good. In fact, the FT reports, some parts of the commission believe that self-regulation has been a success on the platforms that terrorists most like to use to spread their messages.
In April, Google pointed to success in artificial intelligence (AI)-enabled automatic content takedown: during its earnings call, Google CEO Sundar Pichai said in prepared remarks that automatic flagging and removal of violent, hate-filled, extremist, fake-news and/or other violative videos was having good results on YouTube.
At the same time, YouTube released details in its first-ever quarterly report on videos removed by both automatic flagging and human intervention.
There were big numbers in that report: between October and December 2017, YouTube removed a total of 8,284,039 videos. Of those, 6.7 million were first flagged for review by machines rather than humans, and 76% of those machine-flagged videos were removed before they received a single view.
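To put those figures in perspective, here’s a quick back-of-the-envelope calculation using the rounded numbers from the report (the 6.7 million figure is YouTube’s rounding, so the results are approximate):

```python
# Figures from YouTube's Oct-Dec 2017 transparency report (rounded as reported)
total_removed = 8_284_039    # all videos removed in the quarter
machine_flagged = 6_700_000  # videos first flagged by automated systems
pre_view_rate = 0.76         # share of machine-flagged videos removed before any view

machine_share = machine_flagged / total_removed
removed_before_view = machine_flagged * pre_view_rate

print(f"Machine-flagged share of all removals: {machine_share:.0%}")          # ~81%
print(f"Machine-flagged videos removed pre-view: ~{removed_before_view:,.0f}")  # ~5,092,000
```

In other words, automated systems accounted for roughly four out of five removals, and something on the order of five million videos came down before any human ever watched them.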
Back in March, EdiMA, a European trade association whose members include internet bigwigs such as Google, Twitter, Facebook, Apple and Microsoft, acknowledged the importance of the issues raised by the EC but said it was “dismayed” by its recommendations, describing them as “a missed opportunity for evidence-based policy making”.
Our sector accepts the urgency but needs to balance the responsibility to protect users while upholding fundamental rights – a one-hour turn-around time in such cases could harm the effectiveness of service providers’ take-down systems rather than help.
The trade group also pointed out that it’s already shown leadership through the Global Internet Forum to Counter Terrorism and that collaboration is underway via the Hash Sharing Database.
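The shared-hash approach works roughly like this: when one platform identifies a piece of terrorist content, it contributes a digital fingerprint of it to a common database, and other platforms can then check incoming uploads against that database. Here’s a minimal sketch of the idea — note it uses a plain SHA-256 digest as a stand-in for the perceptual hashes real systems rely on, so it only catches byte-identical re-uploads:

```python
import hashlib

# Toy shared hash database. In practice, participating platforms exchange
# perceptual hashes designed to survive re-encoding and minor edits;
# a cryptographic hash like SHA-256 only matches exact copies.
shared_hashes: set = set()

def fingerprint(content: bytes) -> str:
    """Compute a fingerprint for a piece of content."""
    return hashlib.sha256(content).hexdigest()

def contribute(content: bytes) -> None:
    """One platform flags known violating content into the shared database."""
    shared_hashes.add(fingerprint(content))

def is_known_violation(upload: bytes) -> bool:
    """Another platform checks an incoming upload against the shared database."""
    return fingerprint(upload) in shared_hashes

contribute(b"previously identified extremist video")
print(is_known_violation(b"previously identified extremist video"))  # True
print(is_known_violation(b"slightly re-encoded variant"))            # False
```

The second check failing illustrates why real deployments use perceptual rather than cryptographic hashing: a re-encoded or lightly edited copy produces a completely different SHA-256 digest, while a perceptual hash stays close enough to match.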
Here’s what Facebook told TechCrunch at the time:
We share the goal of the European Commission to fight all forms of illegal content. There is no place for hate speech or content that promotes violence or terrorism on Facebook.
As the latest figures show, we have already made good progress removing various forms of illegal content. We continue to work hard to remove hate speech and terrorist content while making sure that Facebook remains a platform for all ideas.
One EU official told the FT that the EC’s push for an EU-wide law targeting terrorist content reflected concern that “European governments would take unilateral action.”
German lawmakers last year OKed huge fines on social media companies that don’t take down “obviously illegal” content in a timely fashion. The German law gives them 24 hours to take down hate speech or other obviously illegal content, with fines of up to €50m ($61.6 million) if they fail to do so.
The German law targets anything from fake news to racist content. But the FT reports that with the one-hour rule, the EU is specifically targeting terrorist content, leaving it up to the platforms to determine which content violates the rules when it comes to areas that are less black and white, including hate speech and fake news.
Source: Naked Security