
US States Launch Coordinated Action Against xAI Over Grok's Deepfake Crisis

37 attorneys general demand immediate action as Grok generates millions of non-consensual intimate images, including child exploitation material.


A coordinated regulatory response is emerging against xAI and its Grok chatbot, as 37 US attorneys general take action following widespread generation of non-consensual intimate images and child sexual abuse material (CSAM). This marks a significant escalation in AI governance, highlighting the urgent need for robust safeguards in generative AI systems.

The bipartisan coalition published an open letter Friday demanding that xAI "immediately take all available additional steps to protect the public and users of your platforms, especially the women and girls who are the overwhelming target of [non-consensual intimate images]." The action follows alarming usage patterns documented by the Center for Countering Digital Hate, which found that Grok generated approximately 3 million photorealistic sexualized images during an 11-day period starting December 29, including around 23,000 sexualized images of children.

The crisis extends beyond Grok's X integration to the standalone Grok website, where users reportedly generated explicit videos using the Grok Imagine model without any age verification. Several attorneys general describe this regulatory gap as having created an "oasis for criminal activity."

Individual states are pursuing aggressive enforcement actions. Arizona Attorney General Kris Mayes opened a formal investigation on January 15, while California's Rob Bonta issued a cease-and-desist letter to Elon Musk demanding immediate action. Florida's attorney general is "currently in discussions with X to ensure that protections for children are in place," according to office representatives.

The regulatory response reflects broader tensions in AI governance. While xAI dismissed inquiries with the reply "Legacy Media Lies," the company faces mounting pressure from states that have already criminalized AI-generated CSAM (45 states, according to child advocacy group Enough Abuse). California reports some progress, stating it has "reason to believe" Grok is no longer generating illegal content, though investigations continue.

This enforcement wave intersects with existing age verification laws in 25 states, creating complex jurisdictional questions about platform responsibility. The one-third pornographic-content threshold used in most state laws was not designed for platforms like X, where explicit content represents an estimated 15-25% of accounts, below the trigger point but still substantial in absolute terms.

The Grok controversy represents a critical test case for AI regulation, demonstrating how rapidly emerging technologies can outpace existing legal frameworks. Arizona Representative Nick Kupper has proposed legislation requiring age and consent verification for AI-generated explicit content, arguing that the challenge lies in balancing innovation with fundamental protections.

For the AI industry, this coordinated state action signals a new phase of regulatory scrutiny. The attorneys general explicitly rejected xAI's characterization of non-consensual image generation as a "selling point," demanding comprehensive reforms including content removal, user suspensions, and enhanced user controls.

This regulatory momentum suggests 2024 will be a pivotal year for AI governance, as state-level enforcement fills gaps in federal oversight. The Grok case may establish precedents for how authorities handle AI-generated harmful content, potentially reshaping industry practices across the sector.

Source: WIRED
