An organization that monitors online content has warned of the danger of artificial intelligence being used to produce highly realistic images depicting the sexual abuse of children.
The Internet Watch Foundation (IWF) confirmed that it had already found AI-generated images so realistic that many people could not distinguish them from real photographs, according to the British network Sky News on Tuesday.
The organization investigated web pages, some of them reported by users, that showed images of children as young as three years old, and said the material puts real children at risk.
The IWF describes itself as an organization dedicated to finding and removing online content that depicts child sexual abuse.
The organization's chief executive, Susie Hargreaves, called on British Prime Minister Rishi Sunak to treat the issue as a top priority when Britain hosts a global summit on artificial intelligence later this year.
She said the number of such images is not yet large, but that criminals clearly have the potential to produce abusive images of children that violate their rights.
She added that this would be devastating for Internet safety and for the safety of children online.
Producing images that depict the sexual abuse of children is prohibited under British law.
The organization says that artificial intelligence technology in this area is developing rapidly and becoming easier to access, which will make the problem difficult to tackle under the law.
Britain's National Crime Agency says the risk is growing and stresses that it takes the matter very seriously.