UK’s first line of defence against child abuse images faces AI threat

By James Titcomb

BRITAIN’S first line of defence against child abuse images has warned that it risks being overwhelmed by a flood of fake content generated by artificial intelligence.

The Internet Watch Foundation (IWF), which monitors cases of child sexual abuse material and works to block illegal content, said real victims would be put at increased risk if its staff were deluged by industrialised production of fake images.

Police and politicians have warned that lifelike pictures created by paedophiles using AI image generation programs are a growing threat. The IWF is responsible for tracking down thousands of illegal images of child abuse each week, alerting police when it suspects that a child could be in danger and alerting overseas counterparts when an image is hosted abroad.

Dan Sexton, the organisation’s chief technical officer, said: “Our focus is on protecting children. If a child can be identified and safeguarded, that is always a priority for analysts.

“If AI imagery of child sexual abuse does become indistinguishable from real imagery, there is a danger that IWF analysts could waste precious time attempting to identify and help law enforcement protect children who do not exist, to the detriment of real victims. AI generated imagery is an emerging technology which we are keeping a very close eye on. We know criminals will, and do, abuse any technology they can to distribute and make imagery of child sexual abuse.

“Regardless of how it is created we would still want to find and remove it from the internet. Far from being a victimless crime, this imagery serves to normalise and ingrain the sexual abuse of children in the minds of offenders.”

Use of AI image generation tools has exploded in recent months, letting users create photo-realistic images in seconds with just a few written instructions. While the most prominent programs have introduced restrictions on generating illegal or pornographic material, users have shared guides on bypassing these controls or turned to “open source” alternatives without such restrictions.

ActiveFence, a company that monitors online forums, said it had identified 68 sets of AI-created child abuse images on one website in the first four months of this year, against 25 in the last four months of 2022. There have also been fears that illegal content may have featured in the datasets of billions of photos used to “train” the software. Fake child abuse images are illegal to own and distribute in Britain.

One content moderation executive said that AI-generated images created a huge problem for investigators because they would not be recognised by software that is used to tell if an illegal image had been reported before. This would mean a number of fake images could be treated as new, leading to false concerns that a child is in danger.

A spokesman for the National Crime Agency said: “We constantly review the impact that new technologies, such as synthetic media [including that generated using AI], can have on the child sexual abuse threat.”

Baroness Kidron, the child safety campaigner, has pushed for AI-generated abuse to be included in the Government’s Online Safety Bill, which will introduce fines for companies that fail to tackle harmful content. She said she introduced an amendment after police told her they had seen an explosion in AI-generated abuse images this year.

AI companies said this week that policymakers should focus on an “extinction” threat on the scale of pandemics and nuclear war, which critics said distracts from current issues. Rishi Sunak is expected to discuss AI with President Joe Biden in Washington next week.

Business

2023-06-03T07:00:00.0000000Z

https://dailytelegraph.pressreader.com/article/282338274261671

Daily Telegraph