- There was a sharp increase in child sexual abuse imagery online in 2020, data shows.
- Facebook said it detected 13 million images from July to September alone.
- Coronavirus lockdowns and livestreamed abuse fueled the increase, an expert told Insider.
There was a sharp increase in child sex abuse imagery being posted and shared online during the coronavirus pandemic, much of it hosted on Facebook and Instagram, according to data shared exclusively with Insider.
Figures from the National Center for Missing and Exploited Children (NCMEC) showed a 31% increase in the number of images of child sexual abuse reported to it in 2020.
The figure was up by around 5 million, from 16 million reports in 2019 to 21 million in 2020, said Yiota Souras, the lead counsel at the NCMEC.
In 2019, Facebook recorded more child sexual abuse material than any other tech company, and was responsible for around 99% of all reports to the NCMEC.
Though a breakdown of the 2020 NCMEC figures is not yet available, Facebook said that it detected 13 million images on Facebook and Instagram from July to September alone. The figure suggests the problem remains rampant and may be worsening.
The vast majority of material is hosted on Facebook's platforms
The NCMEC data comes from its CyberTipLine, which collects reports of child abuse images, videos, and other material found online.
Some reports are one-offs from members of the public. Others are sent in bulk by tech platforms that have agreed to help fight the proliferation of sex abuse imagery on their services.
The NCMEC shares the data with law enforcement agencies to find those who make and share the material, both of which are crimes.
The NCMEC's 2019 data showed that the vast majority of the images, videos, and other child sexual exploitation material reported to it was hosted by Facebook, which accounted for 15.9 million of the 16 million cases.
Souras said that the NCMEC expects to see the same levels of material on Facebook in 2020, a product both of the scale of its platforms and of its proactive efforts to find and remove such material.
Google followed, with 450,000 cases. The 2020 dataset is due to be published in February.
In a statement to Insider, a Google spokesperson said that the company uses "cutting-edge technology, supported by specialized teams of human reviewers, to detect, remove, and report such content to authorities."
Facebook proactively scans for abuse images, but many tech companies do not
In August, Facebook told the UK's Sky News that remote working during the coronavirus pandemic had diminished its capacity to identify and remove child sexual exploitation content.
A source at the company told Insider that between July and September alone it had identified and removed 13 million child sexual exploitation images, 99% of which were found by the platform's moderators rather than by members of the public.
The company uses technology that scans its platforms for known child sexual abuse images, which are then removed, the source said. It also uses separate technology to proactively monitor the site for new imagery as it's uploaded.
Many other companies do not proactively scan their platforms for such content.
Facebook said it employs 15,000 content reviewers in addition to its automated systems.
In a statement to Insider, a Facebook spokesperson said: "Content which sexually exploits or endangers children is not allowed on our platforms. Using industry-leading technology, over 99% of child exploitation content we remove from Facebook and Instagram is found and taken down before it's reported to us.
"Our safety and security team of over 35,000 people investigate reports from our community and work to keep our platforms safe. Our teams also work closely with child protection experts and law enforcement, reporting content directly to specialists including UK Child Exploitation and Online protection Command and NCMEC."
Livestreamed abuse fuels spike in reports
Souras, the NCMEC lead counsel, said that coronavirus lockdowns and an increase in livestreamed abuse were behind the surge.
The pandemic "created real increases in the victimization of children online and their vulnerability, because they are online a lot more, often unattended, often at a much earlier age than their parents anticipated, putting them online for hours a day," she said.
Souras said that children are often abused by someone close to them, and the lockdown rules introduced to slow the spread of the coronavirus had restricted the ability of some children to get help.
"We know from looking at some of the analysis behind the reports we receive that [victims] often have a familial or close adult relationship with their abuser.
Most content is created by the caretakers of children
"So it might be a family member in the house or it might be a spouse or partner, or a roommate, a coach, a babysitter," said Souras. "That is the vast majority of creation of content. It is someone in the most trusting relationship a child can have: a caretaker that is also an abuser."
She said that a new trend fueling the increase was abuse livestreamed via webcam rather than pre-recorded content.
"A relatively newer part of the child exploitation spectrum we see which is around livestreaming. With individuals in some countries paying for children to be abused via livestreaming - often countries like the Philippines and Southeast Asia. And they will pay for that, they will pay for acts to be done to them, they will pay for the kind of abuse they want to view," said Souras.
"This is on really any one of multiple companies that enable people to connect via video," said Souras.
Mainstream platforms are hosting vast numbers of images
In December, New York Times columnist Nicholas Kristof reported that the pornography website Pornhub was monetizing content depicting minors being sexually abused.
In response, leading credit card companies severed ties with it, and Pornhub purged vast numbers of videos from its site.
Insider last year reported that lockdown restrictions had resulted in an increase in "revenge porn," where sexually explicit material of a person is shared online without their permission.
There is currently no legal requirement in the US for tech companies to proactively seek out child sexual abuse material on their platforms, though some do so anyway. When such material is found, companies are legally obliged to report it to the NCMEC and remove it.
Though many pedophiles find and distribute material on the dark web, a part of the internet accessible only with specialized software and not indexed by normal search engines, Souras said the vast majority of reports concerned material found on platforms used by millions every day, such as Facebook and Google.
"People often say "it's on the dark web, [which is] not where I go," but you look at our list of companies, and you've probably been on half of these platforms. They are social media platforms and gaming, and other companies that provide business services," she said.
The findings confirm trends reported by the UK's Internet Watch Foundation, which logs reports of online child sexual abuse content in the UK.
In December, the Guardian reported that the charity had received a record number of reports in 2020.
One US bill proposes stripping companies of legal protection if they are found hosting child abuse content
Souras said that in the US, tighter legislation was needed to compel platforms to proactively detect and remove child sexual abuse images.
She said the NCMEC backed the EARN IT Act, introduced in Congress last year by Sen. Lindsey Graham.
The act aims to roll back the legal protections tech companies have under Section 230 of the Communications Decency Act for hosting third-party content, allowing them to be sued for hosting child sex abuse material.
The bill has been criticized by some as a concealed bid to pursue the Trump administration's vendetta against tech platforms. But it earned enough bipartisan support for an amended version to clear the Senate Judiciary Committee last year.
Toby Tyler, a spokesperson for Graham's office, said the bill would be reintroduced in the new Congress, which was sworn in on January 3.