The rollout of software capable of generating non-consensual sexual images and potential child sexual abuse material (CSAM) on the platform X (formerly Twitter) has brought widespread condemnation, drawing further attention to the use cases and misuses of generative AI image technology. The gravity of this issue raises serious questions for the Irish government’s continued use of the platform for official communications.
Over the Christmas period, Grok, the AI software integrated into the platform X, appears to have been updated to allow users to generate or manipulate photographs in a highly sexualised manner. Creating such images without the explicit consent of the individual depicted is referred to as image-based sexual violence (IBSV). These images are often created with the explicit intention of humiliating, degrading and bullying the subject. This happened to Ashley St Clair, the mother of one of Elon Musk’s children, who reported this week that sexualised images of her as a child were created by Grok and remained on the platform for hours, removed only after media attention was drawn to the matter.
Photos of a recently deceased Holocaust survivor were manipulated to replace their clothes with a “swastika bikini”. Other users report that Grok’s “imagine” tool allows them to create hardcore pornography.
Photos of Irish politicians and public figures, manipulated to appear more sexually suggestive or to bully and humiliate their subjects, are already circulating on X. Researchers have collated data sets demonstrating the sexual nature of users’ requests and of the sexualised images the software generates.
X’s response to the volume of IBSV and potential CSAM generated by its software tool Grok has been woeful. The company’s owner, Elon Musk, reportedly posted laugh-cry emojis in response to some of the images generated. While X has now publicly pledged to suspend users who generate Child Sexual Abuse Imagery using its tool, the images are still being generated.
These images are being generated by a piece of software created, developed and deployed by X. Grok is not sentient. It is a piece of software. The decision to make this product available to the general public was made by humans, and humans should be held responsible.
The Harassment, Harmful Communications and Related Offences Act 2020 (Coco’s Law) makes it an offence to distribute or publish intimate images without consent, which includes images generated or manipulated by software such as Grok. Under section 6 of the Act, corporate bodies such as X can also be held criminally liable for wilful negligence if their product is used to distribute or publish intimate images without consent.
The Irish Online Safety Code classifies potential child sexual abuse material as “restricted indissociable user-generated content”, and places expectations on platforms to remove content that “bullies or humiliates another person” — which is the intent of most non-consensual image-based sexual violence. The Digital Services Act obliges platforms to act against illegal content. X/Twitter is in clear breach of Irish law and European regulations and must be held accountable.
News that both Coimisiún na Meán and the European Commission are now engaging with X regarding the generation of non-consensual sexual imagery and potential CSAM is welcome. However, this issue is not confined to one company and one piece of software. Both Google’s Nano Banana Pro and OpenAI's ChatGPT’s image generation software also appear to easily allow users to alter images of someone else’s likeness in a sexually suggestive or explicit manner.
This is an endemic problem within the generative AI imagery sector and requires a systemic response from regulators and governments. FuJo believes CnaM should initiate investigations into X/Twitter, Google and OpenAI over non-compliance with EU and Irish regulations on the generation of non-consensual sexual imagery, as all three companies’ European headquarters are based in Dublin. At present, there is no central authority in Ireland responsible for enforcing the EU AI Act. In our view, CnaM should engage with the generative AI image sector ahead of the formation of the National AI Office in August 2026.
Against a background of previous issues, this development raises serious ethical, moral and legal questions regarding this government’s continued use of X’s platform. The government has made clear that it intends to enact contentious legislation this year mandating age verification for social media users. The Minister for Communications, Patrick O’Donovan, recently stated, “There is no other right that trumps the right of a child to be protected, and no amount of convincing me that data protection is more important than child protection is ever going to win out.”
How can a government justify the institution of a mass surveillance regime supposedly necessary for the sake of child safety, while continuing to remain on a social media platform that offers a feature allowing its users to generate CSAM?
This is not an issue of free speech; no definition of free speech includes the right to distribute CSAM and IBSV. X is a private company; the government has no obligation to remain on the platform. Remaining on X is a decision the government must be asked to justify.
Statement prepared for the Institute for Media, Democracy and Society by Aidan O'Brien