But to what extent can technology also be used to prevent this explosion in the generation and sharing of deepfake content depicting real people without their knowledge or consent?

Modern AI image generators are typically built on diffusion models, which are trained by taking real images and gradually adding random visual distortion, known as noise, until the original image is no longer recognisable; the model then learns to reverse that process, reconstructing an image from pure noise. Once a model has been trained on imagery of real people, the capability to depict them is baked into the model itself. Retrospective alignment does not remove that capability; it simply limits what the image generator is allowed to output.

Research by Nana Nwachukwu, a PhD candidate at Trinity College Dublin's Centre for AI-Driven Digital Content Technology, highlighted the frequency of requests for sexualised images on Grok. In early 2024, non-consensual AI-generated sexual images of Taylor Swift, produced using publicly available tools, spread widely on X before being removed under a combination of legal risk, platform policy enforcement and reputational pressure.
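The noising step described above has a simple closed form: after t steps, an image is a weighted mix of the original signal and Gaussian noise, with the signal's weight shrinking toward zero. The sketch below illustrates that idea only; the linear noise schedule, the step count, and the toy 8x8 "image" are illustrative assumptions, not details of any particular generator.

```python
import numpy as np

def forward_diffusion(x0, t, betas, rng=None):
    """Apply t steps of Gaussian noising to x0 in one shot.

    Uses the closed form x_t = sqrt(abar_t)*x0 + sqrt(1 - abar_t)*eps,
    where abar_t is the cumulative product of (1 - beta) up to step t.
    """
    rng = rng or np.random.default_rng(0)
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)  # fresh Gaussian noise
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

# Toy example: the signal is progressively drowned in noise as t grows.
betas = np.linspace(1e-4, 0.02, 1000)  # a common linear schedule (assumption)
x0 = np.ones((8, 8))                   # stand-in for a real image
early = forward_diffusion(x0, 10, betas)   # still close to x0
late = forward_diffusion(x0, 999, betas)   # essentially pure noise
```

Training then teaches a network to predict the added noise at each step, which is what lets the finished model turn random noise back into an image.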
Source: Irish Examiner, 16 January 2026, 01:00 UTC