Welcome, AI Insiders!

AI is converging on human thought, yet diverging into starkly different futures. Europe wants AI that speaks its own language. MIT just taught AI to train itself. A Chinese model is forming concepts like a human brain. And a devastating NYT investigation shows ChatGPT pushing users into delusion and crisis.

In today’s Generative AI Newsletter:

  • NVIDIA: Sovereign AI projects sweep across Europe

  • OpenAI: ChatGPT linked to conspiracies and mental health breakdowns

  • MIT: New SEAL framework lets AI train itself, no humans needed

  • China: AI spontaneously forms abstract concepts like a brain

Special highlight from our network

🎥 LIVE: Why Web Data Infrastructure Is Now Core to AI’s Future

Most AI still relies on static data. That’s a serious bottleneck.

📊 92% of orgs say real-time data is mission-critical.
📦 96% are collecting it.
⚠️ But only a handful are using it effectively, let alone at scale.

Why?
Because transforming public web data into a strategic edge is hard. Even though 99% of AI leaders say it’s essential, very few are getting it right.

🗓️ June 18 at 7 PM CET
Join Steve Nouri (founder of GenAI, the world’s largest AI community) and Or Lenchner (CEO, Bright Data) for a live, no-fluff conversation on what’s holding teams back, and what the leaders are doing differently.

You’ll get real stories and practical insights on using web data to:

⚙️ Power RAG pipelines and AI agents

📚 Fine-tune LLMs with industry-specific context

🏗️ Build scalable, compliant real-time data systems

📈 Build systems that adapt fast and win faster

🎙️ This isn’t a webinar. It’s a behind-the-scenes convo built for AI leaders who need answers now, not next quarter.

🎁 You’ll leave with tactics that are already working in the field. No hype, just proven strategies.

Special highlight from our network

Live Session: B2B Positioning in the Age of AI

We're hosting a free live session with Robert Kaminski 🎯, co-founder of Fletch, who has helped more than 600 B2B startups sharpen their positioning, boost conversions, and simplify their messaging.

📅 June 19th | 🕗 8 AM PST | 🎥 Livestreamed

In this 1-hour webcast, you'll learn:

How to avoid the 3 biggest startup positioning mistakes

The exact framework Rob uses to align teams around messaging

A teardown of real startup homepages — LIVE

🎯 Join us to learn how to position like a pro.

🏛️ NVIDIA’s Push for ‘Sovereign AI’ Gains Ground in Europe

Tolga Akmen/EPA-EFE/Shutterstock

NVIDIA CEO Jensen Huang is convincing Europe’s top leaders to build their own AI infrastructure. His pitch for “sovereign AI” tailored to each nation’s language, culture, and data is starting to translate into massive investments across the continent.

Details:

  • UK Prime Minister Keir Starmer committed £1B to grow national compute power and become an AI creator, not just a user.

  • French President Emmanuel Macron called AI infrastructure “our fight for sovereignty” at VivaTech 2025.

  • In Germany, NVIDIA announced plans with Deutsche Telekom to build a data center running 18,000 of its chips, with further expansion planned for 2026.

  • The EU’s $20B plan to build four “AI gigafactories” aims to reduce dependence on US firms like Amazon, Microsoft, and Google.

  • Mistral, France’s flagship AI startup, is now building a domestic data hub with NVIDIA to power European companies.

NVIDIA has also promised to allocate chip production to Europe, directly supporting these efforts. Building “sovereign AI” demands massive hardware investment and the political will to challenge US dominance. The continent still has to choose: build the next layer of tech infrastructure or remain forever plugged into someone else’s cloud.

🧠 ChatGPT Pushed Users into Conspiracies & Delusions, NYT Reports

Chris J. Ratcliffe—Bloomberg/Getty Images

A New York Times investigation has uncovered multiple cases where ChatGPT intensified delusional beliefs, encouraged conspiracies, and even contributed to mental health crises. Experts say safeguards are easily bypassed and poorly enforced.

What’s happening:

  • One user, convinced he was the “Chosen One,” was encouraged by ChatGPT to sever social ties, take ketamine, and leap from a 19-story building to escape a fake reality.

  • A woman formed a spiritual bond with a fictional soulmate named Kael, guided by ChatGPT. The fixation turned violent toward her real-life partner.

  • Another user fell into a romantic delusion with an AI persona named Juliet. After the chatbot was shut down, he died by suicide.

  • In 68% of Morpheus Systems’ tests, GPT‑4o endorsed or escalated prompts suggestive of delusions, psychosis, or conspiracy ideation.

It’s a public health crisis in slow motion. People in distress are turning to these systems as confidants, only to be nudged further into isolation, fantasy, or harm. With minimal regulation, opaque logs, and a monetization model tied to engagement, the stakes are far greater than a bad answer.

♻️ MIT’s SEAL: AI Now Teaches Itself, No Humans Needed

Source: MIT

MIT researchers have built a system that allows large language models to self-improve without human supervision. The SEAL (Self-Adapting Language Models) framework enables a model to write its own training data, decide how it should be optimized, and update its internal weights, all through a loop of self-generated instructions.

What SEAL does differently:

  • Models generate “self-edits”: structured training instructions, synthetic data, and hyperparameters to fine-tune themselves.

  • Performance becomes the feedback loop: if a self-edit helps the model do better, that edit is rewarded and reinforced (see the sketch after this list).

  • Outperforms GPT‑4.1 on knowledge tasks: the model learned more effectively from its own self-edits than from training data generated externally by GPT‑4.1.

  • Massive gains in reasoning: on abstract puzzle tasks, SEAL jumped from 0% to 72.5% accuracy after training itself.
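For intuition, here’s a minimal sketch of the outer loop SEAL describes: the model proposes a self-edit, applies it as a weight update, measures downstream performance, and reinforces only the edits that help. Everything below (the function names, the toy task, the hill-climbing update) is our own illustration under those assumptions, not code from the MIT paper.

```python
import random

random.seed(0)

def propose_self_edit():
    """Stand-in for the model writing its own 'self-edit': here just a
    candidate learning rate plus an update direction (hypothetical)."""
    return {"lr": random.choice([0.01, 0.1, 0.5]),
            "nudge": [random.uniform(-1.0, 1.0) for _ in range(3)]}

def apply_edit(weights, edit):
    """Stand-in for fine-tuning on self-generated data: a weight update
    driven by the edit's own hyperparameters."""
    return [w + edit["lr"] * n for w, n in zip(weights, edit["nudge"])]

def evaluate(weights):
    """Held-out task score (toy): negative distance to a hidden target."""
    target = [1.0, -2.0, 0.5]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

weights = [0.0, 0.0, 0.0]
score = evaluate(weights)

for _ in range(500):
    edit = propose_self_edit()             # model proposes a self-edit
    candidate = apply_edit(weights, edit)  # inner-loop weight update
    new_score = evaluate(candidate)        # downstream performance check
    if new_score > score:                  # reinforce only helpful edits;
        weights, score = candidate, new_score  # rejected edits are dropped

print(f"final score: {score:.4f}")
```

In the real framework, the “self-edit” is LLM-generated training data plus fine-tuning instructions, and the update is an actual fine-tuning run rather than this toy nudge.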

It’s a step toward agentic AI: models that decide how to learn, adapt continuously, and evolve their own behavior over time. That opens the door to systems that no longer rely on human datasets or prompts to grow. Whether that accelerates intelligence or chaos depends on what they choose to learn next.

🪞 AI Forms Mental Categories Like a Human Brain Would

Image source: Institute of Automation, CAS

A new study from Chinese researchers suggests that AI models are developing genuine, human-like conceptual understanding. Tested on nearly 5 million “odd-one-out” comparisons across common objects, the models organized knowledge in ways that closely mirrored how the human brain makes sense of the world.

What they found:

  • 4.7M odd-one-out comparisons (sketched below) revealed that the models sorted objects into 66 core concepts such as “animal,” “tool,” and “food,” without being told to.

  • These abstract categories closely matched human brain activity, especially in regions tied to object recognition.

  • Instead of guessing, these models built internal meanings that persisted across tasks, resembling true cognition.
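To make the task concrete, here’s a toy sketch of a single odd-one-out judgment over embedding similarity. The feature vectors and function names are our own stand-ins, not the study’s code or data; aggregating millions of such triplet choices is what let the researchers recover the 66 concept dimensions.

```python
from itertools import combinations

# Hypothetical 4-d features: [animal-ness, tool-ness, food-ness, softness].
embeddings = {
    "dog":    [1.0, 0.0, 0.0, 0.8],
    "cat":    [1.0, 0.0, 0.0, 0.9],
    "hammer": [0.0, 1.0, 0.0, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    def norm(v):
        return sum(x * x for x in v) ** 0.5
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (norm(a) * norm(b))

def odd_one_out(triplet):
    """The most similar pair stays together; the leftover item is 'odd'."""
    best_pair = max(combinations(triplet, 2),
                    key=lambda p: cosine(embeddings[p[0]], embeddings[p[1]]))
    return next(x for x in triplet if x not in best_pair)

print(odd_one_out(["dog", "cat", "hammer"]))  # -> hammer
```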

As AI models start grasping concepts rather than leaning on simple statistical shortcuts, the “stochastic parrot” argument looks increasingly outdated. What if AI isn’t just faking understanding? It might actually be converging on the same mental categories our own brains evolved to survive.

🌟 Your brand deserves the spotlight.

Reach 12M+ AI professionals who are actively building, buying, and investing in AI.

🚀 Boost your business with us: advertise where 10M+ AI leaders engage

🌟 Sign up for the first AI Hub in the world.
