
Welcome back! Elon Musk took the stand in California yesterday and admitted under oath that xAI trained Grok by distilling OpenAI's models, the very practice xAI's own terms of service would prohibit a competitor from doing to it. On the same day, OpenAI opened access to its strongest models for cybersecurity defenders and Anthropic moved Claude Opus 4.7 into a public-beta security product.
In today’s Generative AI Newsletter:
Musk on the stand: Did xAI use OpenAI's own models to build Grok, and does it matter?
OpenAI cyber: What changes when OpenAI opens its strongest models to security teams and governments?
Anthropic security: Can Claude Opus 4.7 really find the bugs your security team keeps missing?
The $700B bill: How long can Big Tech keep raising the AI capex number before investors lose patience?
Latest Developments
Musk testifies Grok is trained on OpenAI models

Elon Musk took the stand in a California federal court yesterday and testified that xAI used distillation techniques on OpenAI's models to train Grok. The admission came in the long-running litigation between Musk and OpenAI over the company's transition to a for-profit structure.
The details:
Distillation in plain English: A smaller model is trained on the outputs of a larger one, a practice that OpenAI's terms of service explicitly prohibit for commercial competitors.
What forced the answer: Cross-examination in Musk's case against OpenAI, which argues the company has betrayed its founding non-profit mission.
The wider picture: OpenAI banned several Chinese labs from its API last year on similar grounds, and DeepSeek faced public scrutiny over distillation. xAI has now publicly joined that list.
None of which slows Grok: xAI shipped Grok 4.3 Beta on April 17 with slide-creation features locked behind a $300 per month tier.
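Conceptually, distillation reduces to training the smaller model to match the larger model's softened output distribution. A minimal numpy sketch with toy logits (illustrative only, not any lab's actual pipeline; the temperature value and logit arrays are made up for the example):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Higher temperature spreads probability mass across classes,
    # exposing more of the teacher's "dark knowledge"
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Cross-entropy of the student's softened distribution against the
    # teacher's softened distribution -- the core objective in distillation
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature))
    return -(p_teacher * log_p_student).sum()

teacher = np.array([4.0, 1.0, 0.5])   # hypothetical "large model" logits
aligned = np.array([3.8, 1.1, 0.4])   # student that mimics the teacher
off     = np.array([0.2, 3.9, 1.0])   # student that does not

# Training drives the student toward the lower-loss (teacher-matching) outputs
assert distillation_loss(teacher, aligned) < distillation_loss(teacher, off)
```

In practice this loss is minimized over millions of teacher responses, which is why API terms of service ban harvesting outputs at scale for competitor training.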
Distillation is an open secret in the model-training world. The interesting question is whether OpenAI will move against xAI now that Musk has admitted to it under oath. Probably not, given both companies have bigger fights to pick. The cleaner takeaway is that the moral high ground in this lawsuit is no longer Musk's to claim, and the case has quietly shifted from a mission dispute to a fight over money.
Special highlight from our network
Coding agents are fast, capable, and completely context-blind.
MCPs give agents access to information, not understanding. The teams pulling ahead are using a context engine to surface exactly what agents need for every task, so they stay on track without the setup tax or the correction loops.
Join Unblocked in a live session on May 6 to see how a context engine solves for quality, efficiency, and cost.
OpenAI opens its frontier models to cybersecurity defenders

OpenAI said yesterday it is expanding access to its most advanced models for cybersecurity work, betting that putting frontier capabilities in the hands of defenders is the only way to keep pace with attackers using the same tools. The announcement targets businesses and government agencies that previously could not access those tiers without bespoke arrangements.
The details:
Who gets access: Enterprise security, fraud and trust teams, plus a wider set of US and allied government agencies. OpenAI did not specify which models qualify but framed it as opening up its strongest systems.
The strategic logic: OpenAI wants to be embedded in the defensive infrastructure of the institutions already buying its models, the same play that built the Microsoft Azure relationship in enterprise.
The timing is not coincidental: The announcement landed the same day Anthropic moved Claude Security into public beta. Both labs see the same opening at the same moment.
What this is not: OpenAI did not announce a dedicated security product. The move is about access and licensing, not a new SKU.
OpenAI is planting a flag. The lab that spent two years convincing enterprises to use ChatGPT is now telling those same buyers it is the only credible defense against the threats its tools are creating. The interesting concession is that OpenAI has stopped trying to be the one model for everything. It is now competing on verticals, and security is the one vertical where the buyer intent and the budgets both already exist.
Anthropic puts Claude Opus 4.7 in charge of your codebase security

Anthropic moved Claude Security into public beta for all Claude Enterprise customers yesterday, a dedicated defensive product that scans codebases for vulnerabilities and writes patches. The system runs on Opus 4.7 and ships with a partner roster that reads like a Gartner quadrant: CrowdStrike, Microsoft Security, Palo Alto Networks, SentinelOne and Wiz on the technology side, with Accenture, BCG, Deloitte, Infosys and PwC handling deployment.
The details:
What it actually does: Reasons across an entire codebase the way a security researcher would, tracing data flows, reading source and inspecting how components interact rather than pattern-matching against a CVE list.
New since the research preview: Scheduled scans for ongoing coverage, the ability to dismiss findings with documented reasoning for future reviewers, and CSV and Markdown exports for existing audit pipelines.
Where it lives: Inside Claude.ai under a new Security section at claude.ai/security, available now for Enterprise plans with Team and Max access coming shortly.
Why CISOs care: Opus 4.7 already powers vulnerability features inside Microsoft Security, CrowdStrike and Wiz, so getting it directly closes the loop between the vendor stack and in-house code review.
Anthropic is positioning Opus 4.7 as the defensive mechanism underneath the existing security stack. The implication for security leaders is uncomfortable: if your competitors are running this and you are not, your codebase is being reviewed by a weaker reasoner than theirs, and that gap compounds every week.
Big Tech's 2026 AI capex bill clears $700 billion

Microsoft, Amazon, Alphabet and Meta have now committed somewhere between $630 billion and $650 billion in AI capital spending for 2026, with combined Q1 capex above $130 billion. The full hyperscaler total is on track to clear $700 billion before the year is out. Meta's $145 billion guidance drew the sharpest reaction, with the stock dropping 7% after hours.
The details:
The headline number: Combined 2026 AI capex commitments now top $630 billion across the four largest hyperscalers, the largest infrastructure spending cycle in industry history.
The Meta swing: Meta posted 33% revenue growth, its fastest since 2021, but the $145 billion capex forecast wiped out the rally and pushed shares down 7% in after-hours trading.
Where the money goes: Data centers, GPUs and power. Mistral added $830 million in debt for an NVIDIA-powered Paris site. Starcloud just became a unicorn by promising to put data centers in low Earth orbit.
What investors keep asking: When the spending starts producing operating leverage. The answer from most of the hyperscalers is still not yet.
This is the second consecutive quarter where the AI capex number went up and the market mood went down. Investors believed the AI spending story for two years on faith and they have started asking for receipts. The hyperscalers can keep raising the number for a few more quarters because they have the cash flow to do it, but the patience window is closing. Either AI revenue starts compounding faster than capex inside 2027 or the trade gets repriced from growth to value.
Tool of the Day: Whisk

Whisk is a Google Labs experiment that lets you generate images using other images as the prompt. Drop in a reference for the subject, the style and the scene, and Whisk fuses them into a new image. It skips the prompt-engineering step that usually slows down image generation, which makes it useful for anyone who knows what they want visually but cannot describe it in words.
Try this yourself: open Whisk, upload three images (a subject, a style and a scene) and hit generate. Use it to mock up a hero image for an upcoming launch, a slide thumbnail or a quick placeholder for a product page. Five minutes is enough to tell you whether it changes how you brief designers.
Light Bytes
Microsoft ships Agent 365 at GA today: Microsoft pushed Microsoft 365 E7 and Agent 365 to general availability, giving IT admins the same identity, permissions and audit-log controls for AI workers that they already use for employees.
Mistral takes on $830M in debt: All of it earmarked for an NVIDIA-powered data center outside Paris.
Starcloud is a unicorn at 17 months: $170M to put AI data centers in low Earth orbit.
IBM ships Bob: A new AI coding platform with multi-model routing and human checkpoints baked in.
SoftBank spins up Roze AI: A robotics startup aimed at automating US data center construction.
Gemini lands in 4M GM cars: Cadillac, Chevrolet, Buick and GMC, model year 2022 and newer.