Welcome back! Meta is renegotiating its dependence on NVIDIA by flirting with Google’s silicon. MIT is compressing years of wet lab work into algorithms that behave like engineers. The UN is warning that a handful of companies now command power rivaling states. AI is stepping out of the lab and into the institutions that decide what gets built, approved, and believed.

In today’s Generative AI Newsletter:

• Meta considers a massive TPU deal that challenges NVIDIA’s dominance
• MIT’s new model designs drugs by learning molecular geometry
• UN human rights chief warns AI could become a “modern-day Frankenstein’s monster”
• Tool of the Day: Andrew Ng’s Agentic Reviewer delivers instant paper feedback

Latest Developments

Meta is "in talks to spend billions" on Google's AI chips starting in 2027 and that it could begin renting these chips from Google Cloud as early as next year. For years, Meta relied on NVIDIA’s GPUs, which has played a role in turning NVIDIA into a multi-trillion dollar company. Meta’s plan to use Google's TPUs would give Meta a second source, so it is less exposed to NVIDIA's pricing, delivery schedules and supply crunches. This matters because Meta would be shifting more of its AI operations to NVIDIA’s rival data centers, betting that Google’s chip roadmap can keep pace with NVIDIA’s ‘generation ahead’ promise.

Here is what actually changes if Meta follows through:

  • Cheaper chips: Meta spends less on each AI query and can afford to run many more models and experiments.

  • New risk: More of Meta’s infrastructure gets rebuilt around Google’s hardware and software stack, making future changes slower and more expensive.

  • NVIDIA hit: One of NVIDIA's biggest customers is loosening its commitment, a small but real challenge to NVIDIA's lead in AI revenue.

  • User side effect: Cheaper compute makes it easier to flood feeds with AI-generated content, without any guarantee that what you see gets better.

It is a repeating pattern: big tech once stuck with a single cloud provider, then shifted to multiple clouds to regain bargaining power over prices and outages. The benefits are clear: lower prices, more competition, fewer supply bottlenecks. The downside is quieter and tougher to measure. A small group of companies will decide which chips are used to train and run the models that shape what billions of people see, click on, and believe. The main question is whether this choice among suppliers really helps users or mostly gives the biggest buyers a stronger hand in the same old game.

Special highlight from our network

Transform Enterprise Knowledge into Trusted, Verifiable Answers

A lot of teams still rely on search.
The next step is turning scattered enterprise knowledge into instant, trustworthy answers.

Progress Agentic RAG moves you from “here’s a document” to “here’s the next best action,”
with traceable sources, governed data, and modular RAG pipelines you can scale.

Join us on Nov 27 at 12:00 PM CET to see:

  • What Progress Agentic RAG is and where it fits

  • Why LLMs alone cannot fix data fragmentation or governance

  • How to structure enterprise data for quality, traceability and auditability

Want your AI to give trusted, verifiable answers instead of guesswork?
Register for the live session here

A team at MIT is reworking the slow, finicky drug discovery process to make it behave more like software development. The team started by building a model that analyzes the 3D shape of a protein and predicts how well a potential drug will attach to it. It then added a generative model that proposes new binders for targets chemists usually write off as ‘hopeless’ or ‘undruggable’. The goal is to speed up the development of treatments for serious or fatal diseases. Practically, it makes you wonder whether we can just print biology whenever we need it.

Here is how this new stack actually behaves in practice, with a rough code sketch after the list:

  • Design: The generator works at the level of single atoms and creates proteins and peptides that easily fit around specific targets.

  • Results: Initial lab tests have shown positive results with challenging targets after trying only a small number of designs.

  • Control: You can guide the system using rules about structure, binding sites and sequences.

  • Access: The release includes code and model weights, putting molecular engineering within reach of any lab with a GPU.
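
To make the pattern concrete, here is a minimal sketch of the generate-then-score loop described above. Everything in it is hypothetical: the function names, interfaces, and scoring are illustrative stand-ins, not MIT's released code.

    # Hypothetical sketch of the generate-then-score design loop.
    # The generator and predictor below are random stand-ins, not MIT's models.
    import random
    from dataclasses import dataclass

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    @dataclass
    class Candidate:
        sequence: str               # proposed peptide sequence
        predicted_affinity: float   # higher = predicted tighter binding

    def sample_binder(length: int = 12) -> str:
        """Stand-in for the atom-level generator conditioned on a target's 3D shape."""
        return "".join(random.choice(AMINO_ACIDS) for _ in range(length))

    def predict_affinity(sequence: str) -> float:
        """Stand-in for the geometry-aware model that scores binding strength."""
        return random.random()

    def design_binders(num_designs: int = 50, keep: int = 5) -> list[Candidate]:
        """Propose many candidates, score each, and keep the best few."""
        candidates = []
        for _ in range(num_designs):
            seq = sample_binder()    # generator proposes a design
            candidates.append(Candidate(seq, predict_affinity(seq)))
        candidates.sort(key=lambda c: c.predicted_affinity, reverse=True)
        return candidates[:keep]     # only the top handful reach the wet lab

    for c in design_binders():
        print(f"{c.sequence}  predicted affinity: {c.predicted_affinity:.2f}")

The shape of the loop is where the economics change: when proposing and scoring designs is cheap, wet-lab costs apply only to the few candidates that survive the filter, which helps explain how early tests could hit difficult targets after trying only a small number of designs.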

Silicon Valley has already seen what happens when powerful LLMs become widely available and easy to use. Biology is now entering the same phase, with molecules in place of text. Drug discovery platforms that market proprietary pipelines look less impressive when a public model can take over many of their critical steps. The real advantage shifts to private data, lab automation, and regulation. That is good for scientific progress, uncomfortable for certain business models, and somewhat alarming if you are thinking about what happens when prompt engineering moves past chatbots and starts touching living systems.

The UN’s top human rights official issued a rare public rebuke of the AI industry. Volker Türk, the UN High Commissioner for Human Rights, told governments and companies that generative AI is developing in ways that threaten core civil liberties, and that the world is drifting toward a moment where the technology escapes meaningful oversight.

Why the UN thinks AI is a human rights issue:

  • Human rights at risk: Clear threats to privacy, free expression, political participation, and work as AI spreads.

  • Dangerous incentives: Tech giants shaping AI for political and economic gain in ways that can manipulate, distort, and distract.

  • Corporate concentration: A handful of firms now hold power and wealth rivaling entire countries, raising concerns about unchecked influence.

  • Urgent regulation: Türk urged governments to act together before generative AI evolves into what he called a “modern-day Frankenstein’s monster.”

Regulators no longer treat AI as a novelty; they see a technology shaped by the incentives of a few firms with immense reach. AI systems are being deployed at global scale with minimal oversight. The tension mirrors the cautionary arc of the original Frankenstein story: the danger is not that the creation becomes evil, but that its makers lose control while insisting everything is under control.

Special highlight from our network

How GenAI, Data and Crowd Wisdom Change Business Planning

Most plans are out of date the moment they are approved.
Smarter planning starts when you can see what is coming next.

On December 9, join GigaSpaces for a live session on eRAG.
See how teams are combining GenAI, live operational data and crowdsourced forecasts to plan in real time.
Learn how organizations use this to move faster, make sharper decisions and stay ahead of change.

📅 LinkedIn Live · December 9 · 10 AM ET

Agentic Reviewer is a new tool from Andrew Ng designed to give researchers quick, high-quality paper reviews. It uses an agent that reads your paper, searches arXiv for context, and produces structured feedback. Early tests on ICLR data show its reviews track human reviewers about as closely as humans track each other.

Core functions:

  • Rapid review cycles: Delivers feedback in minutes instead of months, helping researchers iterate quickly.

  • Human-level alignment: Achieves a Spearman correlation of 0.42 with human reviewers, matching the 0.41 correlation between two humans (see the snippet after this list).

  • Context-aware analysis: Searches arXiv to ground comments in relevant literature, especially useful for AI research.

  • Structured suggestions: Highlights weaknesses, missed citations, unclear arguments, and areas for revision.

  • Simple workflow: Upload the PDF, get an email, and view the full AI review on the site.
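
For context on those numbers: Spearman correlation measures how similarly two reviewers rank papers, not whether their scores match exactly. A minimal sketch with made-up scores (illustrative only, not the ICLR evaluation data):

    # Illustrative only: invented scores, not data from the ICLR evaluation.
    from scipy.stats import spearmanr

    # Scores that two human reviewers and the AI gave the same five papers (1-10).
    human_a = [7, 4, 8, 5, 6]
    human_b = [6, 5, 9, 4, 7]
    ai_rev  = [8, 4, 7, 5, 6]

    # Spearman's rho compares rankings, so agreeing on which papers are
    # stronger matters more than matching exact scores.
    rho_humans, _ = spearmanr(human_a, human_b)
    rho_ai, _ = spearmanr(human_a, ai_rev)

    print(f"human vs human: {rho_humans:.2f}")
    print(f"human vs AI:    {rho_ai:.2f}")

Read the 0.42 figure that way: the AI orders papers about as consistently with a human as two humans do with each other, a bar for usefulness rather than a claim of superhuman judgment.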

Try this yourself:

Upload a draft paper or class project to papereview.ai. Compare the AI’s comments with your advisor’s or your own notes. Look for missed citations, unclear sections, or alternative framing suggestions. Use the feedback to make a stronger next version.
