
Welcome back {{firstname}}! The frontier lab oligopoly is looking more contested than it has in months. An open-source model just caught the frontier. A Google co-founder has come out of retirement to chase Anthropic. The infrastructure bets underneath it all keep getting bigger.
In today’s Generative AI Newsletter:
Open-source: Which lab just made every frontier model look overpriced?
Google: Why has a retired co-founder come back to personally run a coding team?
Anthropic: What does five gigawatts of compute actually buy?
Apple: Can a hardware engineer fix a software problem in time?
Latest Developments
The best coding model on earth is free

Moonshot AI has open-sourced Kimi K2.6, and it matches or beats GPT-5.4, Claude Opus 4.6 and Gemini 3.1 Pro on the benchmarks that define the frontier. It runs autonomously for 12 hours, spins up 300 parallel agents and is free to download tonight.
The details:
Benchmarks: K2.6 outperforms GPT-5.4, Opus 4.6 and Gemini 3.1 Pro on Humanity's Last Exam with tools and on SWE-Bench Pro. These are the reasoning and coding tests the frontier labs use to rank themselves.
Autonomy: Long-horizon runs of 12-plus hours with more than 4,000 tool calls. Moonshot demonstrated Kimi refactoring an eight-year-old codebase in a live clip, and reports one internal agent that ran unattended for five days.
Agent swarms: Ships with native support for OpenClaw and Hermes. Swarms can now deploy 300 sub-agents on a single task, triple what K2.5 managed six months ago.
Licence and cost: Apache-style licence, free to self-host. A meaningful share of the price pressure on Claude, GPT and Gemini now comes from models anyone with a GPU cluster can run.
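Moonshot hasn't published the swarm API, but the "300 sub-agents on a single task" pattern is a familiar fan-out. A minimal Python sketch of the idea, where run_subagent and the sharding scheme are placeholders rather than anything from Kimi:

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder for a sub-agent call; a real swarm would dispatch each
# shard to a model endpoint instead of returning a string.
def run_subagent(subtask: str) -> str:
    return f"result for {subtask}"

def fan_out(task: str, n_subagents: int = 300) -> list[str]:
    # Split one task into n shards and run them concurrently,
    # mirroring the "N sub-agents on a single task" pattern.
    subtasks = [f"{task} / shard {i}" for i in range(n_subagents)]
    with ThreadPoolExecutor(max_workers=32) as pool:
        return list(pool.map(run_subagent, subtasks))

results = fan_out("refactor legacy module", n_subagents=8)
print(len(results))  # 8
```

The hard part in production is not the fan-out but merging 300 partial results back into one coherent answer, which is where orchestration frameworks earn their keep.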
Dario Amodei told the Financial Times yesterday that open-source and Chinese labs were 6-12 months behind the frontier. K2.6 arrived 24 hours later. For CTOs picking a model to run agentic coding in production, the relevant gap closed this week, on cost and on capability.
Special highlight from our network
Companies are going from one AI tool to dozens of agents… and things start to fall apart. Different answers to the same question, none of them accurate.
Why?
No shared context. No business understanding.
AI doesn't know which metrics matter, what "customer" means, or which data is outdated.
In 2026, the fix is the context layer: a structured source of truth for AI.
Atlan is unpacking this at Atlan Activate 2026 (Apr 29), a 60-minute live session where its founders demo how teams set this up and use it daily.
Register now
Special highlight from our network
Most people try to keep up with AI by consuming more content. Outskill flips it: one weekend, 16 live hours, and you leave with workflows and automations you can actually run on Monday.
Saturday and Sunday, 10 AM to 7 PM EST. Attendees also get a prompt library, an AI monetization roadmap, and a personalized toolkit builder (bonuses valued at $5,000+).
Free for the next 48 hours.
Google's retired co-founder is back, and he's gunning for Claude

Sergey Brin is personally running a new DeepMind strike team to close the internal coding gap between Gemini and Claude, according to The Information. Brin's pitch to staff is that code is the capability that gets Gemini to AI systems that train their own successors.
The details:
The team: Led by research engineer Sebastian Borgeaud, previously in charge of DeepMind pretraining. Reports to CTO Koray Kavukcuoglu and Brin himself.
The admission: DeepMind engineers rate Claude's code output above Gemini's internally. The strike team exists because Anthropic has already won that internal vote.
The leaderboard: Gemini engineers now have to use Google's own agent tools on complex tasks. Usage is tracked on a company leaderboard called Jetski.
The goal: In an internal memo, Brin frames the prize as AI that trains the next AI. Coding is the hinge capability that gets Google there.
Founders come out of retirement when the franchise is under threat. That Brin decided this needs his personal attention, and let the internal leaderboard be branded Jetski rather than buried, tells you more about where DeepMind thinks it sits than any keynote Sundar gives this year.
Anthropic locks in 5GW of compute

Anthropic has secured up to 5 gigawatts of capacity for training and deploying future models under an expanded infrastructure agreement with its cloud partner. That partner is investing up to $25 billion on top of existing commitments. Anthropic has committed to spending more than $100 billion on cloud services over the next decade.
The details:
The scale: 5GW is roughly a sixth of the total power drawn by every AI data centre on Earth today. It is the largest single compute commitment Anthropic has ever signed.
The money: $5 billion lands with Anthropic immediately. A further $20 billion is tied to commercial milestones over the life of the agreement.
The lock-in: The $100 billion in Anthropic compute spend runs over 10 years, tying the lab's economic fate to a single infrastructure partner for the whole decade.
The context: OpenAI's own contracted compute obligations with Microsoft, Oracle and SoftBank already cross $1 trillion. Anthropic has now moved into the same weight class.
Every frontier lab is now a financial instrument wrapped around long-duration power contracts. The number that determines which labs survive the next three years is gigawatts committed. Anthropic just added five.
A hardware engineer now has to fix Apple's AI problem

John Ternus will replace Tim Cook as CEO of Apple on 1 September, ending a 25-year climb from mechanical engineering intern to the corner office. Cook becomes executive chairman. Ternus inherits a company that is arguably the best hardware operator on earth and visibly last in the AI race between the major platforms.
The details:
The background: Ternus is a mechanical engineer who most recently ran hardware engineering across every Apple product line. He is the first CEO in the company's modern era to come from that discipline.
The timing: Apple's Siri overhaul has slipped repeatedly. iOS 26.4 is now set to ship a Gemini-powered Siri built on Google's custom 1.2-trillion-parameter model. Apple could not build it in-house.
The reorg: Hardware chief Johny Srouji is restructuring the division around silicon, platform architecture, advanced technologies, project management and hardware engineering. The goal is to make Apple chips AI-native rather than AI-adjacent.
The stakes: Wedbush is calling 2026 a make-or-break year for Apple's AI credibility. Google securing the AI layer on 1.5 billion iPhones is already worth billions to Alphabet.
Apple has survived the AI cycle so far by owning the device. Ternus inherits a company where the AI layer now belongs to a competitor. His first year will tell us whether Apple's vertical integration story still holds up, or whether the Google deal marks the moment Apple stopped being a platform and started being a distribution channel.
Tool of the Day: X-Pilot

X-Pilot turns PDFs, PowerPoint files and text documents into full video courses. Upload source material and the tool generates narration, scene breakdowns, on-screen visuals and a lesson flow that mirrors the structure of the original document. Output is ready to embed in a learning management system or share directly with colleagues.
Try this yourself:
Go to x-pilot.ai and upload a PDF you know well, like an internal onboarding doc or a training deck
Pick a narrator voice and a pacing preset, then let it render the first module
Cross-check the output against the source to see where it over-summarises and where it actually adds value
Start with internal training material before using it on marketing content. That is where it earns its keep fastest.
Light Bytes
Adobe goes agentic: Adobe unveiled CX Enterprise at Adobe Summit, a platform that orchestrates marketing, content and customer engagement through networks of agents. Marketing Agent plugs directly into ChatGPT, Claude, Gemini and Copilot.
Meta capex reset: Meta is laying off around 8,000 staff (roughly 10% of headcount) while raising 2026 AI capital expenditure to $115-135 billion, nearly double last year. The saved payroll is being redirected straight into data centres.
OpenAI Chronicle: OpenAI shipped Chronicle, a Codex preview feature that uses background agents to capture screen context and build persistent memory. Pro users on Mac only for now.
Recursive Superintelligence: A four-month-old lab backed by Google Ventures and Nvidia raised $500 million at a $4 billion valuation to build AI that automates AI research. Founders come from OpenAI and DeepMind.
Altman's World lands on Tinder and Zoom: Both platforms are rolling out proof-of-humanity badges via iris scans, designed to filter out bots and deepfakes ahead of a wave of generative-AI impersonation.
GitHub pauses Copilot signups: GitHub has halted new signups for Copilot Pro, Pro+ and Student plans citing soaring usage and rising inference costs. Existing subscribers keep access.
