Welcome back! Anthropic has accused three major Chinese labs of running a massive, organized operation to siphon intelligence from Claude to train their own models. We also have a look at how the biggest players are securing their futures. Meta just signed a chip deal so large it effectively makes the company a part-owner of AMD. Meanwhile, OpenAI is realizing that selling intelligence is hard, so it is hiring the world's biggest consulting firms to do the integration work for it.

In today’s Generative AI Newsletter:

  • Anthropic catches three Chinese labs stealing model data.

  • OpenAI hires consulting giants to deploy agents.

  • Meta signs a $100B chip deal with AMD.

  • GPT-Realtime-1.5 launches for live voice agents.

Latest Developments

Anthropic Report: 16M Exchanges Stolen By Chinese AI Lab Bots

Anthropic dropped a massive report alleging that DeepSeek, Moonshot AI, and MiniMax ran a coordinated operation to strip-mine the reasoning capabilities of Claude. Using over 24,000 fake accounts and 16 million exchanges, these labs used "distillation" to force the AI to explain its own homework until their models could mimic it. It is essentially the digital version of corporate espionage: instead of stealing physical blueprints, they are tracing the ghost in the machine to leapfrog American research.

How to copy a brain at scale:

  • DeepSeek's Strategy: Used 150K exchanges to reverse-engineer Claude’s reasoning and create censorship-safe alternatives for sensitive queries.

  • MiniMax’s Blitz: Ran the largest operation with 13M exchanges and pivoted to attack new models within 24 hours of their release.

  • Moonshot’s Target: Focused 3.4M exchanges on extracting agentic reasoning, coding logic, and computer vision through hundreds of proxies.

  • National Security: Anthropic warns that these stolen models often have safety guardrails stripped for use in military or surveillance systems.

The timing is particularly spicy because DeepSeek V4 is rumored to launch this Wednesday. Anthropic researcher Alek Dimitriev was blunt: "Leapfrogging happens through innovation, not distillation." However, this opens a messy legal and ethical Pandora's box. If a developer uses Claude Code to write functions for a public repo, does that count as illicit distillation when another lab crawls that repo?

Special highlight from our network

Clawdbot is going viral for hiring humans to finish its tasks. Meanwhile, Claude Opus 4.6 is analyzing huge codebases and producing near-ready financial models.

If you want to keep up, learn the workflows that make AI useful.

Outskill is hosting a free 2-day LIVE AI Mastermind: build automations, create personal agents, and sharpen your edge.
🧠 Saturday & Sunday | 🕜 10 AM–7 PM EST

Show up and unlock bonuses: Prompt Bible, AI monetization roadmap, toolkit builder.
Free for the next 72 hours. Seats are limited.

OpenAI Enlists Consulting Giants for Frontier Agents

OpenAI officially launched its Frontier Alliance this week, forming multi-year partnerships with McKinsey, BCG, Accenture, and Capgemini. This move is designed to solve the "last mile" problem of AI: companies have the models, but they don't know where to put them. By enlisting the world’s most powerful consulting firms, OpenAI is ensuring that its autonomous agents aren't just cool demos, but integrated new hires that live within existing corporate tech stacks.

Deploying the silicon workforce:

  • The Frontier Platform: A new enterprise hub that allows businesses to manage and deploy AI agents across their entire infrastructure.

  • Certified Integration: Partner firms are building dedicated teams to work alongside OpenAI engineers to embed agents into specific business workflows.

  • Corporate Training: Accenture has already begun massive training programs to get its staff "AI-certified" for enterprise-level deployment.

  • The Irony: In a race to automate white-collar work, OpenAI has hired the world’s most famous white-collar workers to help facilitate the transition.

This alliance marks the transition of AI from a chat feature to a fundamental piece of business infrastructure. OpenAI is no longer just selling a brain; they are selling a workforce, and the consulting giants are the ones who will write the job descriptions for it. While startups struggle to find moats, OpenAI is building a massive wall of corporate partnerships that makes their tech the default choice for the Fortune 500. We are moving toward a future where hiring an AI is a standard procurement process handled by a consultant.

Meta Bets $100B on AMD to Build Personal Superintelligence

Meta announced a multiyear deal on Tuesday to purchase up to $100 billion in AMD chips, an order large enough to fill roughly six gigawatts of data center capacity. As part of this unusual agreement, AMD gave Meta a warrant for 160 million shares at one cent each, effectively making Meta a major owner of its own supplier if performance milestones are hit. Zuckerberg is calling this the next step toward personal superintelligence, his term for AI that understands your life better than you do.

Buying the hardware moat:

  • GPU and CPU Mix: Meta will buy MI540 GPUs and the latest CPUs to escape Nvidia's premium pricing.

  • Equity for Silicon: The deal includes a performance-based stock award that vests as Meta hits specific buying targets.

  • Infrastructure Spend: Meta plans to spend $135 billion on AI hardware in 2026 alone to keep up with the scaling race.

  • Diversification: This follows a similar deal with Nvidia and shows Meta is desperate to avoid being locked into a single chip vendor.

The strategy reveals that personal superintelligence is less about a breakthrough in logic and more about a breakthrough in sheer electrical volume. Meta is essentially trading shareholder equity for the privilege of keeping its massive data centers warm. By becoming a part-owner of AMD, Zuckerberg is making sure that even if the AI fails to understand us, he still owns the shovel used to dig the digital grave. We are watching the transition from social media to a heavy industry business model built on gas-powered campuses and billion-dollar hardware swaps.

GPT-Realtime-1.5: OpenAI’s Low Latency Voice Model for Production Agents

gpt-realtime-1.5 is OpenAI’s latest flagship model for the Realtime API, built specifically for live voice agents and multimodal applications. It replaces earlier realtime previews and snapshots with stronger reasoning, more stable conversations, and improved tool execution.

Core functions (and how to use them):

  • Native audio input and output: Stream microphone audio directly and receive spoken responses without building a separate speech pipeline.

  • Improved reasoning and transcription: Higher accuracy in spelling names, reading numbers, and following complex spoken instructions.

  • Reliable tool calling: Define external tools such as booking systems or databases in the session so the model can trigger them mid-conversation.

  • Multilingual switching: Provide instructions in one language and switch fluidly during live dialogue.

  • Turn detection: Enable server-side voice activity detection so the system automatically knows when a user stops speaking.
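The features above map onto a single session configuration event. Below is a minimal sketch of a `session.update` payload; the field names (`modalities`, `voice`, `turn_detection`, `tools`) follow OpenAI's published Realtime session schema, and the `book_table` tool is a hypothetical example, so check the current API reference before relying on the exact shape.

```python
import json

# Hypothetical session.update payload for a Realtime voice agent.
# Field names assume OpenAI's documented Realtime session schema.
session_update = {
    "type": "session.update",
    "session": {
        "model": "gpt-realtime-1.5",
        "modalities": ["audio", "text"],            # native audio in and out
        "voice": "alloy",                           # pick a built-in voice
        "turn_detection": {"type": "server_vad"},   # server-side voice activity detection
        "tools": [
            {
                "type": "function",
                "name": "book_table",               # hypothetical booking tool
                "description": "Reserve a restaurant table.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "time": {"type": "string"},
                        "party_size": {"type": "integer"},
                    },
                    "required": ["time", "party_size"],
                },
            }
        ],
    },
}

# Sent as a JSON text frame over the Realtime WebSocket after connecting.
print(json.dumps(session_update, indent=2))
```

Once the session is configured this way, the model can emit a tool call for `book_table` mid-conversation, and your application executes it and streams the result back.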

Try this yourself:

Update the model string to gpt-realtime-1.5, connect via WebRTC or WebSockets, define session modalities as audio and text, set a voice, and stream live input. Add one simple tool and test whether the agent can call it during conversation.
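As a starting point for those steps, here is a small helper that assembles the WebSocket endpoint and auth headers for a Realtime session. The URL shape and the `OpenAI-Beta: realtime=v1` header are assumptions based on OpenAI's earlier Realtime WebSocket documentation; verify both against the current reference before use.

```python
import os

def realtime_connection_params(model: str = "gpt-realtime-1.5"):
    """Return the (url, headers) pair for opening a Realtime WebSocket.

    Assumes the endpoint shape from OpenAI's published Realtime docs;
    the OPENAI_API_KEY environment variable must hold a valid key.
    """
    url = f"wss://api.openai.com/v1/realtime?model={model}"
    headers = {
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        "OpenAI-Beta": "realtime=v1",  # beta header used by earlier Realtime releases
    }
    return url, headers
```

Pass the returned values to any WebSocket client (e.g. the `websockets` package), then send a `session.update` event to set modalities, voice, and tools before streaming audio.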
