This is the argument that Facebook tried to make in the case of a nude photo of a 14-year-old girl that was repeatedly published on a “shame page”: yes, Facebook said, that photo was published, but every time it was reported, we took it down.
Her lawyers’ response: why was reposting possible at all, given that there’s technology that can assign hashes to known child abuse imagery and prevent those images from being reposted?
That’s a good question, and it may well have helped the Northern Ireland teenager and her legal team to prevail in out-of-court negotiations with Facebook.
On Tuesday, the BBC reported that the girl, who can’t be named, has agreed to a confidential settlement with Facebook that included her legal costs.
The teen also sued the man who posted the image in 2014 and 2016, claiming that he got the photo through blackmail. Before the settlement, Facebook had been facing claims of misuse of private information, negligence and breach of the Data Protection Act.
This is what her lawyer told the High Court in Belfast on Tuesday, according to the BBC:
I’m very happy to be able to inform Your Lordship that the case has been settled.
I’m happy too. I’ll be happier when the alleged sextortionist is brought to justice. And I’m extremely happy that this case, or at least cases like it, undoubtedly pushed Facebook into adopting what sounds like photo hashing in order to stop this type of abuse.
In November 2017, Facebook asked people to upload their nude photos if they were concerned about revenge porn. It didn’t give many details at the time, but it sounded like it was planning to use hashes of our nude images, just like law enforcement uses hashes of known child abuse imagery.
A hash is created by feeding a photo into a hashing function. What comes out the other end is a digital fingerprint that looks like a short jumble of letters and numbers. You can’t turn the hash back into the photo, but the same photo, or identical copies of it, will always create the same hash.
So a hash of your most intimate picture is no more revealing than a random-looking string of characters.
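A minimal Python sketch makes the point; the "photo" here is just stand-in bytes, not a real image file, but the properties are the same: identical input always produces the same digest, and the digest reveals nothing about the input.

```python
import hashlib

# Stand-in for the raw bytes of a photo (illustrative only;
# real systems hash the actual image file's bytes).
photo = b"example image bytes"
copy_of_photo = b"example image bytes"

digest = hashlib.sha256(photo).hexdigest()
same = hashlib.sha256(copy_of_photo).hexdigest()

print(digest)          # a short jumble of letters and numbers
print(digest == same)  # True: identical copies always match
```

Note that a plain cryptographic hash like this only matches exact copies; even a one-pixel change produces a completely different digest, which is why matching systems also need the image-tolerant approach described below.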
Since 2008, the National Center for Missing &amp; Exploited Children (NCMEC) has made available to ISPs and technology companies a list of hash values for known child sexual abuse images. The list enables companies to check large volumes of files for matches without having to keep copies of the offending images themselves, and without actually prying open people’s private messages.
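The matching step itself is simple: a company hashes each file it handles and looks the digest up in the list. A hedged sketch, with a hypothetical one-entry stand-in for the real NCMEC list:

```python
import hashlib

# Hypothetical stand-in for the real hash list, which companies
# receive from NCMEC; real deployments hold thousands of entries.
known_bad = {hashlib.sha256(b"known abusive image bytes").hexdigest()}

def is_known(file_bytes: bytes) -> bool:
    # Only the file's fingerprint is compared; the known images
    # themselves never need to be stored by the company doing the check.
    return hashlib.sha256(file_bytes).hexdigest() in known_bad

print(is_known(b"known abusive image bytes"))  # True
print(is_known(b"an innocent holiday photo"))  # False
```

Because the lookup is a set-membership test on a short string, it stays fast even across enormous volumes of uploads.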
PhotoDNA creates a unique signature for an image by converting it to black and white, resizing it, and breaking it into a grid. In each grid cell, the technology finds a histogram of intensity gradients or edges from which it derives its so-called DNA. Images with similar DNA can then be matched.
Given that the amount of data in the DNA is small, large data sets can be scanned quickly, enabling companies including Microsoft, Google, Verizon, Twitter, Facebook and Yahoo to find needles in haystacks and sniff out illegal child abuse imagery. It works even if the images have been resized or cropped.
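PhotoDNA itself is proprietary, but the idea of a compact, tamper-tolerant fingerprint can be illustrated with a much cruder cousin, the "average hash": shrink the image to a small grayscale grid, then emit one bit per cell depending on whether it is brighter than the grid’s mean. The 4×4 grids below are hand-made stand-ins for downscaled images, not real photo data.

```python
# A simplified perceptual hash in the spirit of (but far cruder than)
# PhotoDNA: one bit per grid cell, set if the cell is brighter than average.
def average_hash(grid):
    pixels = [p for row in grid for p in row]
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

def hamming(a, b):
    """Number of differing bits; a small distance means a likely match."""
    return sum(x != y for x, y in zip(a, b))

# Two 4x4 "images": the second is the first with slight brightness tweaks,
# standing in for a recompressed or lightly edited copy.
original = [[ 10,  20, 200, 210],
            [ 15,  25, 190, 205],
            [180, 190,  30,  20],
            [175, 185,  25,  15]]
tweaked  = [[ 12,  22, 198, 212],
            [ 17,  27, 192, 203],
            [182, 188,  28,  22],
            [177, 183,  27,  13]]

h1, h2 = average_hash(original), average_hash(tweaked)
print(h1)               # 0011001111001100
print(hamming(h1, h2))  # 0: the tweaks didn't change the fingerprint
```

Where an exact cryptographic hash would treat the tweaked copy as a completely different file, this kind of fingerprint stays stable under small edits, which is what lets systems like PhotoDNA catch resized or cropped copies.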
Why so much detail on hashing? Because there was a lot of victim-blaming when the girl’s case first came to light. Hashing technology seems to be a far more productive approach than blaming victimized children who are under the age of consent for getting talked into nude photos.
It’s shocking to think of a 14-year-old being subjected to sextortion, but kids even younger – we’ve heard of those as young as 11 – have been victims of revenge porn.
When it comes to keeping your kids safe online, there are tools that can help. These include parental controls that let you set your children’s privacy settings, control whether they can install new apps, enforce ratings restrictions on what they can buy on iTunes, and even limit what types of apps they can use.
We’ve got more tips to keep your kids safe online here.
And if you’re not even sure what your kids are up to online, this could help.
Source: Naked Security