F. and E. were sexually abused as children. A digital trail of the crimes continues to haunt the sisters a decade later. Kholood Eid for The New York Times

Child Abusers Run Rampant as Tech Companies Look the Other Way

Though platforms bar child sexual abuse imagery on the web, criminals are exploiting gaps. Victims are caught in a living nightmare, confronting images again and again.

The two sisters live in fear of being recognized. One grew out her bangs and took to wearing hoodies. The other dyed her hair black. Both avoid looking the way they did as children.

Ten years ago, their father did the unthinkable: He posted explicit photos and videos on the internet of them, just 7 and 11 at the time. Many captured violent assaults in their Midwestern home, including him and another man drugging and raping the 7-year-old.

The men are now in prison, but in a cruel consequence of the digital era, their crimes are finding new audiences. The two sisters are among the first generation of child sexual abuse victims whose anguish has been preserved on the internet, seemingly forever.

Exploited

Articles in this series examine the explosion in online photos and videos of children being sexually abused. They include graphic descriptions of some instances of the abuse.

This year alone, photos and videos of the sisters were found in over 130 child sexual abuse investigations involving mobile phones, computers and cloud storage accounts.

The digital trail of abuse — often stored on Google Drive, Dropbox and Microsoft OneDrive — haunts the sisters relentlessly, they say, as does the fear of a predator recognizing them from the images.

“That’s in my head all the time — knowing those pictures are out there,” said E., the older sister, who is being identified only by her first initial to protect her privacy. “Because of the way the internet works, that’s not something that’s going to go away.”

[Read an interview with the two sisters.]

Horrific experiences like theirs are being recirculated across the internet because search engines, social networks and cloud storage are rife with opportunities for criminals to exploit.

The scope of the problem is only starting to be understood because the tech industry has been more diligent in recent years in identifying online child sexual abuse material, with a record 45 million photos and videos flagged last year.

But the same industry has consistently failed to take aggressive steps to shut it down, an investigation by The New York Times found. Approaches by tech companies are inconsistent, largely unilateral and pursued in secret, often leaving pedophiles and other criminals who traffic in the material with the upper hand.

To report online child sexual abuse or find resources for those in need of help, contact the National Center for Missing and Exploited Children at 1-800-843-5678.

The companies have the technical tools to stop the recirculation of abuse imagery by matching newly detected images against databases of the material. Yet the industry does not take full advantage of the tools.

Amazon, whose cloud storage services handle millions of uploads and downloads every second, does not even look for the imagery. Apple does not scan its cloud storage, according to federal authorities, and encrypts its messaging app, making detection virtually impossible. Dropbox, Google and Microsoft’s consumer products scan for illegal images, but only when someone shares them, not when they are uploaded.

And other companies, including Snapchat and Yahoo, look for photos but not videos, even though illicit video content has been exploding for years. (When asked about its video scanning, a Dropbox spokeswoman in July said it was not a “top priority.” On Thursday, the company said it had begun scanning some videos last month.)

The largest social network in the world, Facebook, thoroughly scans its platforms, accounting for over 90 percent of the imagery flagged by tech companies last year, but the company is not using all available databases to detect the material. And Facebook has announced that the main source of the imagery, Facebook Messenger, will eventually be encrypted, vastly limiting detection.

“Each company is coming up with their own balance of privacy versus safety, and they don’t want to do so in public,” said Alex Stamos, who served as chief of information security at both Facebook and Yahoo. “These decisions actually have a humongous impact on children’s safety.”

Tech companies are far more likely to review photos, videos and other files on their platforms for facial recognition, malware detection and copyright enforcement. But some businesses say looking for abuse content is different because it can raise significant privacy concerns.

Tech companies are loath to be seen going through someone’s photos and videos, and imagery flagged in automated scans is almost always reviewed by a person later.

“On the one hand, there is an important imperative to protect personal information,” said Sujit Raman, an associate deputy attorney general in the Justice Department. “On the other hand, there is so much stuff on the internet that is very damaging.”

The main method for detecting the illegal imagery was created in 2009 by Microsoft and Hany Farid, now a professor at the University of California, Berkeley. The software, known as PhotoDNA, lets computers recognize photos, even altered ones, and compare them against databases of known illegal images. Almost none of the photos and videos detected last year would have been caught without systems like PhotoDNA.

But this technique is limited because no single authoritative list of known illegal material exists, allowing countless images to slip through the cracks. The most commonly used database is kept by a federally designated clearinghouse, which compiles digital fingerprints of images reported by American tech companies. Other organizations around the world maintain their own.

Even if there were a single list, however, it would not solve the problems of newly created imagery flooding the internet, or the surge in live-streaming abuse.

How PhotoDNA Works

The uploaded image — in this instance a photograph of Dr. Farid — is turned into a square and colors are removed, making the process faster and consistent across images.

An algorithm finds the edges in the image, which are key to identifying unique features.

The result is split into a grid …

… and each square in the grid is assigned a value based on its visual features to generate the image’s fingerprint. The values shown here are for illustration purposes.

The system compares the newly generated fingerprint against those of known illegal images.

If two fingerprints are similar enough, the system reports a match. PhotoDNA is able to account for subtle differences between images, such as color changes, resizing and compression.

The process is near-instantaneous, allowing for millions of comparisons per second.

By Rich Harris
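
PhotoDNA itself is proprietary, and the steps above are a simplified description. For a concrete sense of the general technique, the sketch below shows a minimal perceptual hash in Python. It is a basic difference hash, not Microsoft’s algorithm; the grid size, match threshold and function names are illustrative assumptions.

```python
# A minimal perceptual-hash sketch (illustrative only, not PhotoDNA):
# shrink the image, drop color, derive a fingerprint from brightness edges,
# then compare fingerprints by counting differing bits.
from PIL import Image  # assumes the Pillow library is installed

GRID = 16             # side length of the downsized grid (assumed value)
MATCH_THRESHOLD = 10  # max differing bits to call a match (assumed value)

def fingerprint(path: str) -> int:
    """Return a compact bit fingerprint for the image at `path`."""
    img = Image.open(path).convert("L").resize((GRID + 1, GRID), Image.LANCZOS)
    pixels = list(img.getdata())
    bits = 0
    # Record whether each pixel is brighter than its right-hand neighbor,
    # a crude stand-in for the edge features described above.
    for row in range(GRID):
        for col in range(GRID):
            left = pixels[row * (GRID + 1) + col]
            right = pixels[row * (GRID + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def is_match(fp_a: int, fp_b: int) -> bool:
    """Fingerprints match if they differ in only a few bits, which tolerates
    recompression, resizing and small color changes."""
    return bin(fp_a ^ fp_b).count("1") <= MATCH_THRESHOLD
```

In a deployed system, a service would hold millions of fingerprints of known illegal images from clearinghouse databases and compare every newly detected image against them, which is why the near-instantaneous matching described above matters.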

For victims like E. and her sister, the trauma of the constantly recirculating photos and videos can have devastating effects. Their mother said both sisters had been hospitalized for suicidal thoughts.

“Every hope and dream that I worked towards raising my children — completely gone,” she said. “When you’re dealing with that, you’re not worried about what somebody got on a college-entrance exam. You just want to make sure they can survive high school, or survive the day.”

And because online offenders are known to seek out abused children, even into adulthood, the sisters do not speak publicly about the crimes against them. Their emotional conversations with The Times were the first time they had spoken publicly about the abuse.

“You get your voice taken away,” E. said. “Because of those images, I don’t get to talk as myself. It’s just like, Jane Doe.”

Searching for Abuse

Joshua Gonzalez, a computer technician in Texas, was arrested this year with over 400 images of child sexual abuse on his computer, including some of E. and her sister.

Mr. Gonzalez told the authorities that he had used Microsoft’s search engine, Bing, to find some of the illegal photos and videos, according to court documents.

Microsoft had long been at the forefront of combating abuse imagery, even creating the PhotoDNA detection tool a decade ago. But many criminals have turned to Bing as a reliable tool of their own.

A report in January commissioned by TechCrunch found explicit images of children on Bing using search terms like “porn kids.” In response to the report, Microsoft said it would ban results using that term and similar ones.

The Times created a computer program that scoured Bing and other search engines. The automated script repeatedly found images — dozens in all — that Microsoft’s own PhotoDNA service flagged as known illicit content. Bing even recommended other search terms when a known child abuse website was entered into the search box.

Finding Illegal Images


The Times wrote a computer program that used an invisible browser to check search engines for child sexual abuse material. It scanned for images without downloading or displaying them.

The program searched more than three dozen terms related to child sexual abuse, including terms suggested by the search engines.

While all images were blocked from reaching the browser, the program captured their web addresses.

The web addresses were sent to Microsoft’s PhotoDNA service, which is used by technology companies to identify known abuse imagery.

PhotoDNA compared the results found in The Times’s searches with fingerprints of known illegal images. Many of them matched.

By Rich Harris

While The Times did not view the images, they were reported to the National Center for Missing and Exploited Children and the Canadian Center for Child Protection, which work to combat online child sexual abuse.

One of the images, the Canadian center said, showed a naked girl on her back spreading her legs “in an extreme manner.” The girl, about 13, was recognized by the center’s analysts, who regularly review thousands of explicit images to help identify and rescue exploited children and scrub footage from the internet. The analysts said the authorities had already removed the girl from danger.

Similar searches by The Times on DuckDuckGo and Yahoo, which use Bing results, also returned known abuse imagery. In all, The Times found 75 images of abuse material across the three search engines before stopping the computer program.

Both DuckDuckGo and Yahoo said they relied on Microsoft to filter out illegal content.

After reviewing The Times’s findings, Microsoft said it uncovered a flaw in its scanning practices and was re-examining its search results. But subsequent runs of the program found even more illegal imagery.

A spokesman for Microsoft described the problem as a “moving target.”

“Since the NYT brought this matter to our attention, we have found and fixed some issues in our algorithms to detect unlawful images,” the spokesman said.

Hemanshu Nigam, who served as a director overseeing child safety at Microsoft from 2000 to 2006, called the findings a “major failing,” and added that “it looks like they’re not using their own tools.” Mr. Nigam said it showed the company was seemingly unaware of how its platforms could be manipulated by criminals.

Child abusers are well aware of Bing’s vulnerabilities, according to court records and interviews with law enforcement officials. Going back years, pedophiles have used Bing to find illegal imagery and have also deployed the site’s “reverse image search” feature, which retrieves pictures based on a sample photo.

The same computer program, when run by The Times on Google’s search engine, did not return abuse content. But separate documentation provided by the Canadian center showed that images of child sexual abuse had also been found on Google and that the company had sometimes resisted removing them.

One image captured the midsections of two children, believed to be under 12, forced into explicit acts with each other. It is part of a known series of photos showing the children being sexually exploited.

The Canadian center asked Google to take down the image in August last year, but Google said the image did not meet its threshold for removal, the documents show. The analysts pressed for nine months until Google relented.

Another image, found in September 2018, depicts a woman touching the genitals of a naked 2-year-old girl. Google declined to take down the photo, stating in an email to the Canadian analysts that while it amounted to pedophilia, “it’s not illegal in the United States.”

When The Times later asked Google about the image and others identified by the Canadians, a spokesman acknowledged that they should have been removed, and they subsequently were. The spokesman also said that the company did not believe any form of pedophilia was legal, and that it had been a mistake to suggest otherwise.