Welcome back! The pivot is absolute. One CEO has officially gutted his virtual reality dream to bet the entire company's future on physical power plants. It is a massive shift from software to heavy industry. While the giants go big, the labs are trying to get small. Another leading firm just launched a "startup within a startup" to stop acting like a corporation and start tinkering again. Meanwhile, regulators across the Atlantic finally agreed on a rulebook, and a new tool lets you run AI without ever sending data to the cloud.

In today’s Generative AI Newsletter:

  • Meta cuts VR teams to fund a massive nuclear infrastructure push.

  • Anthropic launches a "skunkworks" lab to chase risky ideas.

  • FDA and EU align on 10 shared principles for AI in medicine.

  • NativeMind runs a private AI assistant directly in your browser.

Latest Developments

Mark Zuckerberg has officially liquidated the metaverse. Reports confirm a major course correction at Meta, involving 1,000 VR job cuts and the closure of key studios to fund a staggering $70B AI pivot. Through the newly launched Meta Compute initiative, the company is committing to $600B in U.S. infrastructure spending by 2028. It is a total gutting of Meta's former identity to secure the nuclear power and gigawatts of electricity needed to survive the AGI race.

Here’s what Meta is putting in motion:

  • Leadership Shift: Infrastructure chief Santosh Janardhan will co-lead with Daniel Gross, recruited from AI safety startup SSI to bring frontier focus.

  • Capital Commitment: Meta has pledged $600B in US infrastructure spending by 2028, including long-term nuclear power contracts for data centers.

  • Political Layer: New president Dina Powell McCormick will manage government relationships tied to energy, financing, and permitting.

  • Internal Tradeoff: The announcement follows deep cuts to Reality Labs, with roughly 10% layoffs expected as VR ambitions shrink.

Meta now operates in a visible paradox. While Alexandr Wang’s Superintelligence Labs prepares flagship models like Mango and Avocado, the company’s near-term advantage depends less on research velocity than on physical execution. Compute scale is no longer theoretical. It is constrained by electricity, land and public consent. Models can be trained quickly. Power plants cannot. The outcome will depend on whether Meta can align engineers, regulators and communities before those limits harden. 

Special highlight from our network

ChatGPT 5.2 is the shift from novelty to execution. This is where AI gets practical.

🗓️ Join us January 20
🔍 What you’ll learn:

  • Operations: How 5.2 upgrades your daily workflow

  • Accuracy: Connecting GPT to your data using eRAG

  • ROI: Turning AI reasoning into real results

Stop refreshing the headlines. Start building the strategy. 

Special highlight from our network

Mode was named Deloitte’s fastest-growing software company in North America for one reason: they’re turning phones into a tech-driven income engine.

With Mode, everyday mobile behaviour turns into real cash flow, optimising rewards, spending, and engagement, with 50M+ users and over $325 million earned or saved.

Now the model itself is investable, but only for accredited investors.

Anthropic is formalizing its tinkering phase by launching Anthropic Labs, a dedicated incubation unit for high-risk frontier products. The move shifts Instagram co-founder Mike Krieger into a hands-on building role focused on "high-risk, high-reward" projects. This new division is designed to move faster than the main company, acting more like a startup within a startup. It is a clear recognition that the most successful AI tools are often discovered through rapid prototyping and early user feedback rather than top-down corporate engineering.

The move toward experimentation:

  • Leadership Shuffle: Mike Krieger moves from the executive suite to the lab floor, while Ami Vora takes over the day-to-day product strategy.

  • The "Claude Code" Effect: The lab aims to replicate the success of recent experimental tools that became massive hits with very little marketing.

  • Separation of Powers: By splitting research from operations, the company can keep its core service stable while still chasing difficult breakthroughs.

  • Real-World Testing: The team will release early versions of tools to a small group of users to see what actually works before a global launch.

By isolating the discovery phase, the company avoids the inertia that often kills innovation in large tech organizations: the core Claude experience stays stable for enterprise clients while the next breakthrough is built in the shadows. The creation of Anthropic Labs shows a company trying to avoid the “big tech” trap of becoming too slow to innovate. By giving its best builders a sandbox to play in, Anthropic is trying to ensure it doesn’t just improve what already exists, but invents whatever comes next.

The European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) published 10 shared good AI practice principles for how drugmakers use AI across a medicine’s lifecycle. Regulators rarely synchronize this openly unless they repeatedly encounter issues like submissions with exaggerated claims, lack of evidence, and inadequate documentation. According to EU Health Commissioner Olivér Várhelyi, the principles are the first step of a renewed EU-US cooperation, built to keep patient safety intact while speeding access to medicines.

Here are four details that matter most:

  • Scope: It covers trial, manufacturing, and post-market safety evidence generation and monitoring.

  • Standard: A risk-based approach is applied, requiring heavier validation and oversight for higher-stakes AI.

  • Paperwork: Data governance, traceability, and a defined “context of use” are required, so submissions can’t lean on unsupported AI claims.

  • Lifecycle: Monitoring for drift shows regulators expect models to change in the real world.

This mirrors medical-device AI regulation, where requirements are set first. The upside is clearer, shared rules across the EU and US: more predictable cross-border expectations, less rework, and faster approvals. The downside is more compliance work, and a higher likelihood of companies merely asserting alignment with the principles. The real test comes when the principles are translated into guidance that regulators actually enforce.

NativeMind is a browser extension that lets you run an AI assistant on your own computer using local models through Ollama. It summarises articles, rewrites text in-place, and answers questions about what’s on your screen without sending page content to a cloud chatbot.

Core functions (and how to use them):

  • Website summaries: Create a skimmable summary, main takeaways and open questions from any large article or documentation page.

  • On-page Q&A: Ask the assistant page-specific questions like “What are the main claims?”, “What’s the pricing and fine print?”, or “What should I double-check?” and get answers grounded in the current page.

  • In-place rewriting: Highlight text in Gmail, Google Docs, Notion, or any editor, rewrite it for clarity, tone, or structure and paste it back in.

  • Full-page translation: For cross-language research and sourcing, translate a webpage while preserving its layout.

  • Visible-tab research: Search, open results in tabs, pull pertinent bits, and construct a summary you can verify by examining the sources.

Try this yourself:
Install Ollama, download a 7B or smaller lightweight model, and install NativeMind. Run this prompt on a tool’s pricing page: “Summarize the plan tiers in a table. List every limit and hidden rule. Then write 5 questions I should ask before paying.” You can verify the answer against the exact lines on the page and act on it instantly.
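For the curious, here’s what a fully local query like that looks like under the hood. This is a minimal sketch, not NativeMind’s actual code: it assumes Ollama is running on its default port (11434) with a small model such as mistral:7b pulled, and it talks to Ollama’s standard /api/generate endpoint using only the Python standard library. The helper names are illustrative.

```python
# Sketch: send a pricing-page prompt to a local Ollama model.
# Assumes `ollama serve` is running locally and mistral:7b is pulled;
# helper names are illustrative, not part of any real NativeMind API.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_summary_prompt(page_text: str) -> str:
    """Wrap extracted page text in the pricing-page prompt from above."""
    return (
        "Summarize the plan tiers in a table. "
        "List every limit and hidden rule. "
        "Then write 5 questions I should ask before paying.\n\n"
        f"Page content:\n{page_text}"
    )

def build_ollama_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(page_text: str, model: str = "mistral:7b") -> str:
    """POST the prompt to the local Ollama server; nothing leaves your machine."""
    payload = build_ollama_payload(model, build_summary_prompt(page_text))
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The whole round trip happens on localhost, which is the entire privacy argument: the page text never touches a cloud API.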
