Online Harms & Digital Compliance

Working against the services
that operate outside the law.

An independent compliance practice focused on online services that operate in violation of consent, privacy, and consumer-protection law — and the intermediaries whose participation keeps them online.

Wilmington, Delaware, United States

HL Compliance Group is an independent compliance practice based in Wilmington, Delaware. The practice works on online conduct that falls outside applicable consent, privacy, and consumer-protection frameworks — together with the intermediaries whose participation enables that conduct to reach users and clear payments.

Our work centres on matters involving such services rather than on a roster of client engagements. We surface them, document the conduct, and pursue action through the channels that actually move: registrar and hosting abuse reports, payment-processor complaints, app-store and review-platform escalations, and engagement with regulators where the underlying conduct crosses statutory lines — non-consensual intimate imagery, biometric-data processing without lawful basis, consumer-protection violations, and breaches of child-safety frameworks.

The focus is narrow by design. Working a small set of matters end-to-end — across multiple complaint pathways and for as long as it takes — produces outcomes that broader, lighter-touch engagement does not. A single registrar complaint, payment-processor report, or regulator notice rarely closes a service down on its own; the combination, pursued with persistence, often does.

What we do

Four pathways. One outcome.

01

Intermediary
& Platform Complaints

Structured complaints to registrars, hosting providers, payment processors, app stores, and review platforms — the intermediaries whose participation enables abusive services to reach users. We build evidence packages that match each provider's policy framework and follow them through to enforcement.

02

Consumer Protection
& Online Harms

Work grounded in deceptive-practice and consumer-protection frameworks — the legal basis for action against services that mislead users, generate non-consensual content, or operate outside disclosed terms. Engagement with FTC, state AG offices, and equivalent EU and UK consumer authorities.

03

Privacy & Biometric
Violations

Matters involving processing of personal and biometric data without lawful basis — particularly where services ingest images of third parties who have given no consent. GDPR Article 9, BIPA, US state privacy frameworks, and the cross-border data-transfer questions that follow.

04

AI & Synthetic
Media

Matters arising under emerging AI and synthetic-media frameworks — the Take It Down Act, the EU AI Act, the DSA's intermediary obligations, and child-safety regimes where the depicted person is a minor. Application across both the underlying services and the platforms that host or list them.

How we work

From signal to shutdown.

01

Surface

Matters reach us through tips, referrals from partner organisations, and our own monitoring of services known to operate outside disclosed terms. Every signal goes through an initial assessment for jurisdiction, applicable frameworks, and the realistic prospect of action.

02

Document

We build evidence packages that match each downstream recipient's policy framework — what a payment processor accepts as proof of unlawful conduct differs from what a regulator requires, and both differ from what a hosting provider's abuse desk will act on.

03

Action

Complaints filed across the channels the case calls for — intermediaries, payment infrastructure, app stores, review platforms, and regulators where the conduct crosses statutory lines. Multiple pathways pursued in parallel, not in sequence.

04

Follow-through

Most action is closed not by the first complaint but by persistence — re-filing on new grounds, escalating when the initial channel returns nothing, and tracking whether the underlying service has actually gone dark or simply moved.

Information shared with the practice — including from tip submissions and referrals — is handled with care for the privacy of those affected. We do not republish the content of reports, and we share material with downstream recipients only where doing so is necessary to advance the matter and lawful under the applicable framework.

Evidentiary work is built around verifiable, source-attributable documentation rather than aggregation or inference. Where a claim cannot be substantiated to that standard, we don't make it.

Four specialists. One brief.

Each member of the team leads a specific focus area. We work together on every matter, with one lead taking primary responsibility — no junior hand-offs, no committees, no diluted attention.

Robert Bennett

Lead — Escalation & Regulator Engagement

Robert works matters end-to-end alongside the rest of the team — initial review, complaints to the providers and platforms involved, and the follow-through when standard channels stall. He carries the load on escalation: regulator engagement, formal notices, and the cases that need pressure applied through more than one pathway before a service actually goes offline.

David Tanaka

Lead — Platform & Intermediary Accountability

David's focus is the legal and procedural grounds that compel intermediaries to act — the arguments that turn an evidence package into actual enforcement, whether the recipient is a host, a payment provider, an app store, or a review platform. He carries matters end-to-end and is the team's go-to on the cases where the abusive service itself is unreachable but the providers around it are not.

Elena Marchetti

Lead — Privacy & Data Protection

Elena leads on the privacy-based angle — the GDPR Article 9 and BIPA arguments that apply where services process personal or facial data of third parties without lawful basis. She carries matters end-to-end and runs that framing through whichever channels the case calls for, from intermediary-level complaints to data-protection authority engagement. The privacy lens is often decisive for image-modification platforms and similar services, where the depicted person has given no consent and the operator's legal basis is, on examination, simply absent.

James Whitfield

Specialist — Consumer Protection & Online Harms

James works the consumer-protection and deceptive-practice angle — the framing that applies where services mislead users, generate non-consensual content, or operate outside disclosed terms. He carries matters end-to-end, applying that lens across the full set of channels the team uses: intermediary and platform-level complaints, and engagement with FTC, state AG, and equivalent EU and UK consumer authorities. Consumer-harm grounds often move faster than parallel privacy or IP claims, and James leads on the cases where that framing is the cleanest route to action.

Insights

Notes from the practice.

March 2026 — The Take It Down Act: what platforms need to know — AI & Privacy
February 2026 — GDPR Article 9 and image-based AI services — Privacy

The Take It Down Act: what platforms need to know

The 2025 federal law took effect quietly. Its enforcement landscape is anything but quiet — and the early questions are not about removal timelines but about scope.

The Take It Down Act, signed into law in May 2025, established the first federal framework in the United States addressing non-consensual intimate imagery, including content generated by artificial intelligence. The statute requires covered platforms to remove flagged content within forty-eight hours of a valid report, and imposes meaningful civil penalties for non-compliance. The law's text is short. Its practical reach is broader than many platform operators initially recognised.

The early questions surfacing across the practice fall into three categories. First, scope: which services qualify as "covered platforms" under the Act, particularly where platforms host user-generated AI tools rather than user-generated content directly. Second, evidentiary standards: what level of substantiation a takedown request must meet, and what documentation a platform should retain. Third, the interaction between the federal regime and existing state laws — California's AB 602, Virginia's deepfake statute, and the patchwork of revenge-pornography statutes that pre-date the Act.

For platforms that intermediate user-generated content, the compliance posture is reasonably clear: implement a notice-and-removal workflow that meets the 48-hour standard, train moderation staff on AI-generated content indicators, and maintain audit logs sufficient to demonstrate good-faith engagement. For platforms that provide AI tools capable of generating intimate imagery, the analysis is materially different — and the question of whether the platform itself, rather than its users, is the proximate cause of the violation is unresolved.

We expect the first significant enforcement actions in late 2026 to focus on the second category. Operators of image-modification services that lack meaningful safeguards against the generation of non-consensual content should not assume that user terms of service alone constitute a defence.

GDPR Article 9 and image-based AI services

Biometric data has a specific definition under European law. Many AI services that process facial or bodily images may satisfy it without realising they have crossed the line.

Article 9 of the General Data Protection Regulation prohibits the processing of "special categories" of personal data — biometric data among them — except under narrowly defined conditions. The provision is not new. What is new is the volume of online services that, often inadvertently, conduct exactly this kind of processing as part of their core functionality.

An AI service that ingests user-uploaded photographs and modifies them based on detected facial or bodily features almost certainly processes biometric data under the GDPR's definition. Whether such processing is lawful depends on the legal basis claimed. Explicit consent is the most commonly cited basis, but consent must be specific, informed, and freely given — and consent obtained through a generic terms-of-service checkbox does not meet that standard. Critically, where the service processes images of third parties (individuals other than the user uploading the image), no consent has been obtained from the data subjects whose biometric data is actually being processed.

This is the structural problem with so-called undress AI services and similar image-modification platforms. The user uploading an image cannot validly consent on behalf of the person depicted in it. The platform's legal basis for processing the depicted person's biometric data is, in most cases, simply absent. Where the depicted person is a minor, the analysis becomes more concerning still — and intersects with separate frameworks under the Digital Services Act and national child-protection law.

Platforms hosting reviews or listings of such services face their own questions. The DSA's transparency obligations and the broader expectation that intermediaries do not facilitate clearly unlawful conduct create genuine exposure where a platform retains a business profile for a service whose underlying activity is, on a conservative reading, unlawful in multiple jurisdictions.

Frameworks we work under

The statutes and instruments that structure the work.

United States — Take It Down Act (2025) — Federal
European Union — EU AI Act (Regulation 2024/1689) — EU
European Union — Digital Services Act (Regulation 2022/2065) — EU
European Union — GDPR Article 9 (Special categories of personal data) — EU
United States — Biometric Information Privacy Act (BIPA, Illinois) — State
United Kingdom — Online Safety Act (2023) — UK

If you need help

Victim support and reporting resources.

We are not a victim-support hotline. If you or someone you know has been targeted by image-based abuse, the organisations below provide direct support, content removal, and reporting pathways.

Worldwide — StopNCII.org — hash-based removal of intimate imagery (Removal)
United States — Cyber Civil Rights Initiative — image-based abuse helpline and resources (Support)
United Kingdom — Revenge Porn Helpline — UK-based victim support and removal assistance (Support)
United States — NCMEC CyberTipline — reporting child sexual exploitation (Child safety)
United Kingdom — Internet Watch Foundation — reporting CSAM (Child safety)

Get in touch.

Tips and referrals received around the clock. Press and general correspondence reviewed during working days.

Tips & referrals — tips@hllegalgroup.com — for sending information about services, abuse reports, or matters you'd like us to look into.
Press — press@hllegalgroup.com — journalists, researchers, and other media enquiries.
Phone — +1 (302) 496-4062 — voicemail monitored regularly; for time-sensitive matters, please use tips@hllegalgroup.com.
Jurisdictions — United States · European Union · United Kingdom
Address 221 W 9th Street
Wilmington, Delaware 19801
United States