March 12, 2026
AI & Privacy
The Take It Down Act: what platforms need to know
The 2025 federal law took effect quietly. Its enforcement landscape is anything but quiet — and the early questions are not about removal timelines but about scope.
The Take It Down Act, signed into law in May 2025, established the first federal notice-and-removal framework in the United States for non-consensual intimate imagery, including content generated by artificial intelligence. The statute requires covered platforms to remove flagged content within forty-eight hours of a valid report, gives them until May 2026 to stand up the required removal process, and treats non-compliance as an unfair or deceptive practice enforceable by the Federal Trade Commission. The law's text is short. Its practical reach is broader than many platform operators initially recognized.
The early questions raised by clients across our practice fall into three categories. First, scope: which services qualify as "covered platforms" under the Act, particularly where a platform offers AI generation tools rather than hosting user-generated content directly. Second, evidentiary standards: what level of substantiation a takedown request must meet, and what documentation a platform should retain. Third, preemption and overlap: how the federal regime interacts with existing state law, including California's AB 602, Virginia's deepfake statute, and the patchwork of revenge-pornography statutes that pre-date the Act.
For platforms that intermediate user-generated content, the compliance posture is reasonably clear: implement a notice-and-removal workflow that meets the 48-hour standard, train moderation staff on AI-generated content indicators, and maintain audit logs sufficient to demonstrate good-faith engagement. For platforms that provide AI tools capable of generating intimate imagery, the analysis is materially different — and the question of whether the platform itself, rather than its users, is the proximate cause of the violation is unresolved.
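What that baseline reduces to in code is modest. The sketch below is illustrative only, not a reference implementation or legal advice: every name in it (TakedownReport, AuditLog, remove_content) is hypothetical, and it assumes the statutory 48-hour clock runs from receipt of a valid report and that all timestamps are timezone-aware UTC.

```python
# Minimal sketch of a notice-and-removal workflow against the Act's
# 48-hour clock. All names here are hypothetical stand-ins for a real
# moderation backend. Timestamps are assumed to be timezone-aware UTC.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)

@dataclass
class TakedownReport:
    report_id: str
    content_id: str
    received_at: datetime              # when the valid report arrived
    reporter_attestation: str          # the statement accompanying the report
    resolved_at: datetime | None = None

@dataclass
class AuditLog:
    entries: list[dict] = field(default_factory=list)

    def record(self, report: TakedownReport, action: str) -> None:
        # Retain enough detail to later demonstrate good-faith engagement.
        self.entries.append({
            "report_id": report.report_id,
            "content_id": report.content_id,
            "action": action,
            "at": datetime.now(timezone.utc).isoformat(),
        })

def remove_content(content_id: str) -> None:
    """Stub for the platform's actual removal call."""

def deadline(report: TakedownReport) -> datetime:
    # The statutory clock runs from receipt of a valid report.
    return report.received_at + REMOVAL_WINDOW

def process(report: TakedownReport, log: AuditLog) -> None:
    log.record(report, "report_received")
    remove_content(report.content_id)
    report.resolved_at = datetime.now(timezone.utc)
    log.record(report, "content_removed")
    if report.resolved_at > deadline(report):
        # Surface misses for counsel review; never absorb them silently.
        log.record(report, "deadline_missed")
```

The design point worth carrying into any real system is the final branch: a missed deadline should surface as a logged, reviewable event, not disappear into an ordinary completion record.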
We expect the first significant enforcement actions in late 2026 to focus on that second group: providers of AI generation tools rather than pure content hosts. Operators of image-modification services that lack meaningful safeguards against the generation of non-consensual content should not assume that user terms of service alone constitute a defense.
By Elena Marchetti · Partner, AI & Privacy
February 24, 2026
Privacy
GDPR Article 9 and image-based AI services
Biometric data has a specific definition under European law. Many AI services that process facial or bodily images may satisfy it without realizing they have crossed the line.
Article 9 of the General Data Protection Regulation prohibits the processing of "special categories" of personal data — biometric data among them — except under narrowly defined conditions. The provision is not new. What is new is the volume of online services that, often inadvertently, conduct exactly this kind of processing as part of their core functionality.
An AI service that ingests user-uploaded photographs and modifies them based on detected facial or bodily features almost certainly processes biometric data as the GDPR defines it. Under Article 4(14) and Recital 51, a photograph becomes biometric data once it is subjected to specific technical processing that allows or confirms the unique identification of a natural person, and the automated detection and mapping of facial or bodily features sits squarely within that description on the prevailing regulatory reading. Whether such processing is lawful depends on the legal basis claimed. Explicit consent is the most commonly cited basis, but consent must be specific, informed, and freely given, and consent obtained through a generic terms-of-service checkbox does not meet that standard. Critically, where the service processes images of third parties, that is, individuals other than the user uploading the image, no consent has been obtained from the data subjects whose biometric data is actually being processed.
This is the structural problem with so-called undress AI services and similar image-modification platforms. The user uploading an image cannot validly consent on behalf of the person depicted in it. The platform's legal basis for processing the depicted person's biometric data is, in most cases, simply absent. Where the depicted person is a minor, the analysis becomes more concerning still — and intersects with separate frameworks under the Digital Services Act and national child-protection law.
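The consent defect is easy to state as data. Below is a minimal sketch, with every name (ConsentRecord, UploadRequest, has_legal_basis) hypothetical, of what a legal-basis check would have to verify, and why it necessarily fails for third-party images.

```python
# Illustrative only. The rule this encodes: under Article 9(2)(a),
# valid explicit consent must come from each person depicted, and the
# uploader cannot supply it on anyone else's behalf.
from dataclasses import dataclass

@dataclass(frozen=True)
class ConsentRecord:
    data_subject_id: str       # the person whose biometric data is processed
    explicit: bool             # Article 9(2)(a) requires explicit consent
    specific_to_purpose: bool  # a generic ToS checkbox fails this test

@dataclass
class UploadRequest:
    uploader_id: str
    depicted_subject_ids: list[str]  # everyone identifiable in the image
    consents: list[ConsentRecord]

def has_legal_basis(req: UploadRequest) -> bool:
    """True only if every depicted person has given valid explicit consent."""
    valid = {
        c.data_subject_id
        for c in req.consents
        if c.explicit and c.specific_to_purpose
    }
    # The uploader's own consent record is irrelevant unless the uploader
    # is also among the people depicted.
    return all(subject in valid for subject in req.depicted_subject_ids)
```

For a third-party upload, depicted_subject_ids names someone for whom no valid ConsentRecord can exist in the platform's possession, so the check fails regardless of what the uploader clicked. That is the structural problem in code form.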
Platforms hosting reviews or listings of such services face their own questions. The DSA's transparency obligations and the broader expectation that intermediaries do not facilitate clearly unlawful conduct create genuine exposure where a platform retains a business profile for a service whose underlying activity is, on a conservative reading, unlawful in multiple jurisdictions.
By Elena Marchetti · Partner, AI & Privacy