
Welcome back! OpenAI started running Palantir's playbook on enterprise customers while Anthropic moved its entire Claude Platform inside AWS, a different read on the same enterprise grab. Google had to disclose that someone has already used AI to write a working zero-day that bypasses two-factor authentication, found in active attack traffic. Lovable, on a smaller stage, decided the look of an app is the product.
In today's GenAI newsletter:
OpenAI starts running Palantir's playbook on the enterprise.
Google disclosed a third-party attack by an unknown threat actor.
Anthropic moves the full Claude Platform inside AWS.
Lovable bets the taste of an app is the product.
Latest Developments

The simplest read of OpenAI's new deployment unit is that Sam Altman just hired Palantir's go-to-market team without paying the acquisition premium.
The OpenAI Deployment Company launched Monday to do one thing: embed OpenAI models inside enterprise and government customers' own operations.
Palantir spent the better part of two decades building itself into a public-markets giant on this exact mechanic.
Palantir's engineers spend months wiring its software into a customer's documents, code, customer records and approval chains. By the time anyone tries to rip the system out, removal is an operational crisis the procurement team cannot solve.
OpenAI now wants to run that play with one asset Palantir never had: the model layer everyone is already trying to license.
The part that should sit with you longer than the launch wrapper is what this implies about the next decade.
Forward-deployed AI vendors with access to the documents, decisions and code of every Fortune 500 and every government department they touch make for a different industry structure than "AI sold by the API call."
It looks more like a utility the rest of the economy plugs into, owned by three or four companies. The question is, are you comfortable with CEOs like Sam Altman and Elon Musk owning that crucial infrastructure governments and enterprises rely on?
We’ll have to wait and see how this plays out.
Special highlight from our network
Plenty of teams are experimenting with AI. Very few are seeing real returns. An MIT NANDA study found 95% of AI pilots never scale or show measurable ROI.
It comes down to:
Strong AI skills development
Faster path from prototype → production
Organization-wide support
From Udemy Business, this snapshot breaks down where and how AI investments are actually paying off.
Special highlight from our network
Anthropic ships something new almost every week. Cowork, Skills, Connectors, Design. Most people are still using Claude like it's just a chatbot.
The first Claude-a-thon is a 2-day live workshop on what Claude can actually do in 2026. You'll run real research, build artifacts and dashboards, design full presentations, and set up Connectors that automate things like job searches and inbox triage.
Saturday and Sunday, 10 AM to 7 PM EST. Free for the next 48 hours, 1,000 seats only.
If you use AI coding tools with ad-hoc prompts, you know the pattern: the code looks right, then breaks on integration. APIs get hallucinated. Architecture drifts.
Spec-Driven Development fixes this. You convert your idea into a structured spec first and let AI execute from that context.
Ship Reliable Software Faster with AI is a 2-week cohort with Sergio Pereira, CTO and Software Architect. You ship a real product by Session 4.
AI Wrote Its First Working 2FA Bypass, and Mythos Has Not Even Shipped Yet

It’s just Tuesday but it’s been a hellish week to be on a security team.
Google's Threat Intelligence Group disclosed Monday that a Python script found in active attack traffic, which bypasses two-factor authentication on a popular open-source admin tool, was likely developed with an artificial intelligence system. That makes it the first publicly attributed AI-built zero-day in the wild.
Add in the NPM ecosystem still digesting a separate supply-chain breach announced yesterday, and the mood inside SOCs this week is somewhere between "do we still have jobs after the next reorg" and "are we already being replaced by both sides at once."
The flaw Google disclosed was a hard-coded trust assumption in the admin tool, a class of semantic logic bug language models are unusually good at spotting.
(And btw, Google has explicitly said there is no evidence Gemini was used. Make of that what you will.)
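Google hasn't published the vulnerable code, but the bug class is easy to picture. Here is a minimal, entirely hypothetical Python sketch of a hard-coded trust assumption in a 2FA check; the function, the "source" field and the magic string are inventions for illustration, not the disclosed flaw:

```python
# Hypothetical sketch of the bug class, NOT the disclosed vulnerability.
# The verifier waves through any request that claims a hard-coded
# "trusted" source, so the second factor never runs for it.

TRUSTED_SOURCE = "internal-healthcheck"  # the hard-coded trust assumption

def verify_login(password_ok: bool, otp: str, expected_otp: str, source: str) -> bool:
    """Return True if the login should be accepted."""
    if not password_ok:
        return False
    # BUG: a semantic logic flaw, not a memory-safety bug. Any client
    # that sends the magic source string skips OTP verification entirely.
    if source == TRUSTED_SOURCE:
        return True
    return otp == expected_otp

# Normal users still need the one-time code...
print(verify_login(True, "000000", "123456", source="web"))
# ...but an attacker who finds the magic string walks past 2FA.
print(verify_login(True, "000000", "123456", source="internal-healthcheck"))
```

Nothing here requires exploiting the runtime; the code does exactly what it was written to do, which is why this class of bug tends to survive fuzzers and show up to a model reading the logic instead.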
This all happened before Mythos has even shipped. Whoever inside the White House has been pushing to keep Anthropic's next preview model on a controlled runway got a much easier morning.
If that is the floor for AI-driven offense without the top-shelf model in circulation, the case for keeping the ceiling locked for another quarter is more defensible this morning than it was last week.
And the fact that OpenAI also picked this week to launch Daybreak, a defensive cybersecurity product offering free vulnerability scans and a "GPT-5.5-Cyber" tier for authorized red teamers, tells you which side of the offense-defense ledger every lab is now planning around.

Anthropic flipped Claude Platform on AWS to general availability Monday, and the framing matters more than the press release suggests. AWS customers can now buy the entire Claude stack through their existing AWS billing and IAM, with no second-class status.
What's in the box:
Managed Agents, Skills, advisor strategy, MCP connector, Files, code execution, prompt caching, citations and batch.
AWS IAM auth, CloudTrail audit, single AWS invoice retiring existing commitments.
Opus 4.7, Sonnet 4.6 and Haiku 4.5 live from day one.
New features land same-day as the native Claude API.
The catch, and it is the whole story, is that Anthropic operates the service and data leaves the AWS boundary. Bedrock keeps inference inside the AWS perimeter, while Claude Platform on AWS hands data processing back to Anthropic.
Teams have spent a lot of time insisting AI inference stay inside that AWS boundary because that is what their auditors signed off on. Anthropic put a button next to it that says "or you can have everything."
The next negotiation between Anthropic and AWS about who keeps the data and the margin is going to be more interesting than the one before it.

Lovable shipped its Aesthetics Update this week. The pitch is that AI builders cracked function months ago and have been losing to design ever since.
What changes:
Component styling, typography, color, layout and spacing all reset.
An editorial pass Lovable is calling its house aesthetic.
Users no longer have to describe what "modern" looks like in design language.
Every Lovable app ships looking distinctly Lovable.
Figma is still where most teams hand-craft the look of a product and Lovable is arguing that step is the one that should disappear.
The bet underneath is that good taste sitting in the default layer beats a prompt-by-prompt fight for it.
Vercel did a version of this with v0 and the shadcn aesthetic. Lovable is going a step further, betting the taste is the product. The metric to watch when retention data comes back in a couple of months is whether users stay inside the house style or fight it from the first prompt.
Lindy is an AI work assistant that runs your inbox, calendar and meetings across the apps you already use. You delegate by text or in the web app, and it plugs into Gmail, Outlook, Slack, Notion, HubSpot and Salesforce. Around 400,000 professionals are already on it.
The part to test is the meeting loop. Lindy preps you before a call, joins the meeting, takes notes, drafts the follow-up email and updates your CRM, without making you open a second app.
Try it yourself:
Start the 7-day free trial at chat.lindy.ai/signup.
Connect Google or Outlook email and calendar.
Add your phone number to enable iMessage and SMS delegation.
Text Lindy "join my next meeting and send me notes" to put the loop on autopilot.
A Stockholm cafe has staffed itself with AI agents handling daily operations, billed as a test of how far autonomous systems can run a small consumer business.
Generative AI is starting to push live animal testing out of early drug toxicity research as models grow accurate enough to handle the first screening pass.
The Cannes Film Festival opened this week with AI and the future of Hollywood labor at the center of the off-screen conversation.
The World Council of Churches is calling for AI governance built on human rights and equal access.
OpenAI's new Daybreak product offers free vulnerability scans to enterprise security teams.







