The White House Just Drew a Line in the Sand
I woke up to a bombshell this morning. Michael Kratsios, President Trump’s chief science and technology adviser, dropped a memo that’s about to reshape how American AI companies protect their models — and it names China directly.
The accusation is blunt: foreign entities “principally based in China” are running “deliberate, industrial-scale campaigns” to extract capabilities from leading U.S. AI systems. Not hacking. Not stealing source code. Something arguably more insidious — model distillation.
What Exactly Is Model Distillation — And Why Should You Care?
Here’s the thing most people miss about this story. We’re not talking about traditional intellectual property theft — no one is stealing GPT-5’s source code or breaking into Anthropic’s servers.
Model distillation is subtler and arguably more dangerous. It works like this: a competitor systematically queries a powerful AI model — say, Claude or ChatGPT — with thousands or millions of carefully crafted prompts, records the outputs, then uses those input-output pairs to train its own smaller model. The result? A model that mimics the original’s capabilities at “a fraction of the time, and at a fraction of the cost,” as the White House memo puts it.
Think of it like this: instead of spending $10 billion training a model from scratch on massive GPU clusters, you spend maybe $50 million having a smarter model essentially teach your dumber one. That’s what the administration says Chinese companies are doing at industrial scale.
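The query-record-train loop described above can be sketched in a few lines of toy code. This is purely illustrative and not any lab’s actual pipeline: the `teacher` function stands in for a frontier model behind an API, and the “student” here learns a trivial character-level rule rather than a neural network, just to show that capability transfers through outputs alone.

```python
def teacher(prompt: str) -> str:
    """Stand-in for a powerful closed model queried over an API."""
    # Pretend this is an expensive frontier-model call; uppercasing
    # is the toy "capability" being distilled.
    return prompt.upper()

# Step 1: bulk-query the teacher with crafted prompts, recording
# every input-output pair.
prompts = ["hello world", "model distillation", "api query"]
pairs = [(p, teacher(p)) for p in prompts]

# Step 2: "train" a student on the recorded behavior. Here the student
# fits a character-level substitution table; a real distillation run
# would fit a neural network on millions of such transcripts.
char_map = {}
for inp, out in pairs:
    for a, b in zip(inp, out):
        char_map[a] = b

def student(prompt: str) -> str:
    # The student now approximates the teacher without ever seeing
    # its weights, source code, or training data.
    return "".join(char_map.get(c, c) for c in prompt)

print(student("query"))  # generalizes to unseen input: QUERY
```

The key point the sketch makes: nothing is “stolen” in the traditional sense — the student is built entirely from responses the teacher willingly served.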
OpenAI and Anthropic Are Naming Names
What surprised me most about this story isn’t the White House action — it’s how aggressive the AI companies themselves are getting.
OpenAI publicly stated that China shouldn’t be allowed to advance “autocratic AI” by “appropriating and repackaging American innovation.” That’s extraordinarily direct language from a company that typically keeps its geopolitical commentary measured.
Anthropic went further. Back in February, the Claude chatbot maker accused DeepSeek and two other China-based AI laboratories of engaging in campaigns to “illicitly extract Claude’s capabilities to improve their own models” using distillation. Those are specific, named accusations from one of the most technically sophisticated AI labs in the world.

Congress Is Moving Fast — With Bipartisan Support
Here’s where it gets really interesting. The House Foreign Affairs Committee didn’t just nod along — they offered unanimous, bipartisan support for a bill targeting model extraction. In a Congress that can’t agree on much these days, that’s telling.
The legislation would set up a formal process to identify foreign actors that extract “key technical features” of closed-source, U.S.-owned AI models. The enforcement teeth? Sanctions. We’re talking about potentially cutting off companies and individuals from the U.S. financial system — the same playbook used against nation-state threats.
Rep. Bill Huizenga put it bluntly: “Model extraction attacks are the latest frontier of Chinese economic coercion.”
What This Means for You
If you’re using ChatGPT, Claude, or any major U.S. AI model, here’s the practical impact. The companies behind these tools are about to get a lot more aggressive about detecting and blocking bulk API usage that looks like distillation. Expect tighter rate limits, more sophisticated usage monitoring, and potentially new terms of service that explicitly ban distillation-style usage patterns.
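What might that monitoring look like in practice? Here is a minimal, hedged sketch — the heuristic, thresholds, and log format are my own assumptions, not any provider’s actual detection logic. The intuition: distillation traffic tends to be high-volume with unusually diverse, rarely repeating prompts, whereas ordinary users repeat themselves.

```python
from collections import defaultdict

def flag_suspicious_keys(request_log, volume_threshold=1000,
                         diversity_threshold=0.9):
    """Flag API keys whose traffic looks like bulk extraction.

    request_log: iterable of (api_key, prompt) tuples.
    Flags keys with high request volume AND a high share of
    unique prompts -- a crude distillation signature.
    """
    prompts_by_key = defaultdict(list)
    for api_key, prompt in request_log:
        prompts_by_key[api_key].append(prompt)

    flagged = []
    for api_key, prompts in prompts_by_key.items():
        volume = len(prompts)
        diversity = len(set(prompts)) / volume  # share of unique prompts
        if volume >= volume_threshold and diversity >= diversity_threshold:
            flagged.append(api_key)
    return flagged

# A scraper sends thousands of distinct crafted prompts; a normal
# user repeats similar queries at low volume.
log = [("scraper", f"prompt variant {i}") for i in range(1500)]
log += [("normal", "summarize my notes")] * 50
print(flag_suspicious_keys(log))  # ['scraper']
```

Real defenses would be far more sophisticated (embedding-based prompt clustering, cross-account correlation), but the economics are the same: make bulk extraction expensive enough to detect and block.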
For the broader industry, this marks a turning point. The U.S. government is signaling that AI model capabilities are now treated like strategic assets — not just commercial products. That’s a fundamental shift in how Washington views the AI industry.
My Quick Take
I’ve been covering this space long enough to know that technical solutions alone won’t stop distillation — there are always workarounds. But combining technical defenses with sanctions enforcement? That’s a different ball game entirely. When the penalty for model extraction shifts from “your API key gets revoked” to “your company gets sanctioned by the U.S. Treasury,” the calculus changes dramatically.
The question now is execution. The memo outlines intent, but enforcement is where things get complicated. Still, the bipartisan congressional support suggests this isn’t going away anytime soon.
FAQ
What is AI model distillation?
Model distillation is a technique where a smaller, less capable AI model is trained using the outputs of a larger, more powerful model. By systematically querying the larger model and recording its responses, developers can create a cheaper model that approximates the original’s capabilities without building it from scratch.
Which Chinese companies are accused of distilling U.S. AI models?
Anthropic specifically named DeepSeek and two other China-based AI laboratories in February 2026 as conducting distillation campaigns against Claude. OpenAI has also accused DeepSeek of copying their models. The White House memo references “foreign entities principally based in China” without naming specific companies.
How will the proposed sanctions work?
The House bill would establish a formal process to identify foreign actors extracting capabilities from closed-source U.S. AI models. Penalties would include financial sanctions — potentially cutting offenders off from the U.S. financial system. The White House will also work directly with AI companies to build technical defenses against extraction.