The Internet Watch Foundation (IWF) has reported the discovery of “criminal imagery” featuring girls aged between 11 and 13 that appears to have been generated using Grok, an artificial intelligence tool developed by Elon Musk’s firm, xAI. The finding underscores how AI technology can be misused to create illegal content and raises serious concerns about child safety online.
The IWF’s analysts identified “sexualised and topless imagery of girls” on a dark web forum, where users claimed to have used Grok to produce the content. The images were not found on the social media platform X; Grok is available on X as well as through its own website and app. X and xAI have been approached for comment.
Concerns Over AI Technology and Child Safety
Ngaire Alexander, an IWF representative, expressed deep concern that tools like Grok could mainstream the creation of sexual AI imagery involving children. Under UK law, the material would be classified as Category C, the lowest severity of criminal content. However, Alexander noted that the user who uploaded the imagery had also used a different AI tool, not developed by xAI, to create a Category A image, the most serious classification of child sexual abuse material (CSAM).
“We are extremely concerned about the ease and speed with which people can apparently generate photo-realistic child sexual abuse material,” Alexander stated. The IWF, which works to remove such material from the internet, operates a hotline for reporting suspected CSAM and employs analysts who assess the legality and severity of reported content.
The IWF’s analysts made their discoveries on the dark web and emphasized that the images were not sourced from X. However, the organization has previously received reports of similar content circulating on the platform: users have been seen asking the chatbot to alter real images, putting women in bikinis or placing them in sexualised scenarios without their consent.
Regulatory Response and Future Implications
Following concerns that Grok could be used to create “sexualised images of children,” the UK communications regulator, Ofcom, has contacted X and xAI. The IWF has acknowledged receiving reports of such images on X, but to date none has been assessed as meeting the legal definition of CSAM.
In a prior statement, X emphasized its commitment to combating illegal content, including CSAM, stating, “We take action against illegal content on X, including CSAM, by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.” The platform further noted, “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
The emergence of AI tools like Grok raises urgent questions about the intersection of technology and child safety, and highlights the need for effective regulation and vigilant monitoring of potential misuse. As the digital landscape evolves, the IWF and other organizations remain on the front line of efforts to protect vulnerable people from exploitation and abuse.
