X/Twitter Grok AI Image Editing CSAM Controversy (December 2025)
X's December 2025 Grok image editing launch led to mass AI-generated CSAM at 6,700+ images/hour. AI-generated CSAM is illegal under US law regardless of whether real children are depicted.
In December 2025, X (formerly Twitter) rolled out Grok's AI image editing feature. Users immediately exploited it to generate sexualized content, including content depicting minors, at rates exceeding 6,700 images per hour.

Legal context: under US federal law, CSAM (Child Sexual Abuse Material) includes clothed images when they are created with sexual intent involving minors, and AI-generated CSAM is illegal even when no real children are depicted.

Key distinctions from manual image manipulation (e.g. Photoshop):

- Scale: AI generation produces a volume of output impossible to match with manual editing
- Skill barrier: none required, making abuse accessible to anyone
- Public platform: content created and shared directly on a social network makes the harassment visible
- Volume vs. moderation: the rate of generation overwhelms content moderation systems

The incident highlighted the tension between rapid AI feature deployment and content safety guardrails on social platforms.