Europol Raids X Over Crimes That Don't Exist
A Macron loyalist filed a complaint because he didn't like X's algorithm, and now Europol is calling it a child exploitation investigation.
The European Union wants you to believe Elon Musk is running a child exploitation ring, and that is the implication when Europol announces they are "supporting" an investigation into "child sexual abuse material" on X. The press release mentions CSAM, deepfakes, and "content contesting crimes against humanity," which sounds serious until you actually read what they are investigating and realize the whole thing is political retaliation with a law enforcement facade.
On February 3rd, 2026, French authorities raided X's Paris offices with Europol support, and the brief press release they issued contains zero evidence, zero victims, zero specific incidents, and zero actual crimes described. It lists categories of bad things and announces that an investigation exists, which tells you everything you need to know about what this actually is.
The investigation started in January 2025, and the original complaint had nothing to do with child abuse. According to CNN, it began with "a lawmaker alleging that biased algorithms in X were likely to have distorted the operation of an automated data processing system." The lawmaker was Eric Bothorel, a member of Macron's Renaissance party, and his complaint specifically cited Musk's "personal interventions" in how the platform operates and the "reduced diversity of voices" in what the algorithm showed users.
The original crime was a billionaire running his own platform the way he wants to run it.
What They Actually Found
Four charges. Zero evidence.
They spent ten months adding charges after opening the investigation. In November 2025, Grok generated a response to a question about Auschwitz that questioned aspects of the official narrative, and the Auschwitz Memorial condemned it as "disgraceful." France has laws making it criminal to "contest crimes against humanity," and no other genocide in history has this legal protection. You can question the Armenian genocide, the Rwandan genocide, the Cambodian genocide, but asking questions about this one specific event is illegal in France. The charge exists because an AI raised questions that are illegal to raise.
The CSAM allegation is even more absurd. French prosecutors claim X's reports to the National Center for Missing and Exploited Children dropped 81.4% between June and October 2025, and they are using this as evidence of "complicity" in CSAM distribution. If X were actually hosting child abuse material, reports would increase as users found it and reported it, yet prosecutors are claiming a drop in reports proves guilt. A drop could mean X's prevention improved, or that CSAM on the platform decreased, or that reporting methodology changed, but prosecutors are assuming the worst interpretation with zero evidence to support it.
Techdirt reported yesterday that six months of "AI CSAM crisis" headlines were based on completely misunderstood statistics, with 78% of the alarming numbers coming from Amazon flagging known CSAM in training data rather than new AI-generated material. NCMEC's own executive admitted actual AI-generated CSAM comes in "really, really small volumes," but that context never makes it into the headlines.
The "deepfakes" charge centers on Grok's image editing feature, which could generate images of women in revealing clothing. CBS reported the controversy involved "revealing clothing such as bikinis," meaning bikini photos, and the "3 million sexualized images" statistic comes from the Centre for Countering Digital Hate, an advocacy organization with a documented history of targeting Musk specifically. They counted bikini edits as "sexualized deepfakes" to inflate the numbers, and every other AI image tool from Stable Diffusion to Midjourney can do the same thing or worse without facing Europol investigations.
Now look at the timeline. In January 2025, Bothorel files his "biased algorithm" complaint. The investigation opens for speech crimes. In November 2025, Grok hallucinates wrong historical info, so they add Holocaust denial. That same month, they add "deepfakes" for the bikini editing feature and pile on CSAM allegations based on fewer reports. In December 2025, the EU fines X 120 million euros for "transparency violations," and Musk publicly mocks European regulators. Two months later, Europol raids the Paris office and summons Musk for questioning.
How France Built a Case From Nothing
From "biased algorithms" to CSAM allegations in ten months
Original Complaint Filed — Speech Crime
MP Eric Bothorel files complaint about X's "biased algorithms" showing "reduced diversity of voices." The crime: a billionaire running his platform how he wants.

Grok Questions Narrative — Illegal Inquiry
AI raises questions about Auschwitz. France adds "contesting crimes against humanity" - a charge that applies to no other genocide in history.

CSAM + Deepfakes Added — Manufactured Charges
Prosecutors claim 81.4% drop in NCMEC reports proves "complicity." Bikini photos generated by Grok become "deepfakes." Charges pile up with zero evidence.

EU Fines X €120 Million — Financial Pressure
European Commission hits X with fine for "transparency violations." Musk publicly mocks European regulators. The fine goes unpaid.

Europol Raids Paris Office — Reputation Laundering
French authorities raid X with Europol support. Musk summoned for questioning. Headlines read "child abuse investigation" even though zero CSAM has been found.

After the raid, Bothorel posted on X: "Glad to see that my complaint from January 2025 is yielding results! In Europe, and particularly in France, the Rule of Law means that no one is above the law." He is openly celebrating using state power to punish a social media company that showed the wrong content to the wrong people.
The CSAM allegation exists so that headlines read "Musk investigated for child abuse" instead of "EU retaliates against platform that refused to censor." Attach the most serious possible crime to a political persecution and anyone who questions it looks like they are defending child exploitation, which is exactly the point. It is reputation laundering through law enforcement.
Europol's press release says nothing. An investigation "concerns the online platform X" in relation to "illegal content," they "deployed an analyst on the ground in Paris," and they "stand ready to continue supporting." Zero victims identified, zero evidence cited, zero specific crimes described. That is a press operation.
X called the investigation "politically motivated" and stated it "distorts French law to serve a political agenda and ultimately restrict freedom of expression." Given that the whole thing started because a Macron loyalist wanted different content in his feed, that assessment seems accurate.