Welcome back! The pioneers who built modern AI are walking in different directions. One wants to give machines a sense of the world, another wants to strip language of its illusion of intelligence, and two rival labs are proving that there’s more than one way to fund the future.

In today’s Generative AI Newsletter:

Yann LeCun leaves Meta to launch a new world-model AI startup
Fei-Fei Li says language has peaked and spatial intelligence is next
Anthropic and OpenAI reveal radically different growth strategies
GPT-5 solves full Sudoku puzzles, setting a reasoning milestone

Latest Developments

Yann LeCun, the Turing Award–winning scientist who invented convolutional neural networks in the 1980s, is leaving Meta after more than ten years to start his own AI company. The man who helped teach machines to see now wants to teach them to think. LeCun says the industry is chasing the wrong dream with oversized chatbots and wants to build AI that learns from the real world instead of just reading about it.

Why He’s Leaving Meta:

  • World model vision. His new startup will focus on world models: AI that learns by watching and predicting how the world behaves rather than by mimicking language.

  • Meta’s shake-up. His exit follows a major reorganization inside Meta, which created Meta Superintelligence Labs and hired dozens of engineers from rival firms.

  • Frustration inside FAIR. The reshuffle has frustrated researchers and limited LeCun’s own long-term lab, FAIR, once Meta’s center for foundational AI work.

  • Big-money moves. Meta recently invested $14.3B in Scale AI and appointed its CEO, Alexandr Wang, to lead its new superintelligence unit.

LeCun has never been shy about his skepticism toward large language models, once saying AI is still “nowhere near the intelligence of a house cat.” His departure feels like a turning point for AI’s founding generation. His next chapter could revive the academic curiosity that first shaped modern deep learning.

Special highlight from our network

Accelerating AI Adoption with Microsoft’s Data Assessment Frameworks

Microsoft is joining mindit.io for a live session on one of the hardest problems in AI: getting your data ready.

It’s a rare chance to see how Microsoft’s Data Assessment Frameworks actually work in practice.

You’ll learn how to evaluate your current data setup, spot the gaps that slow AI adoption, and design architectures that scale with real business impact.

In one hour, you’ll walk away with a clearer roadmap for AI readiness. Not theory, but steps you can act on.

If you lead data strategy, AI programs, or transformation projects, this is worth your time.
What’s the biggest barrier between your data and your AI goals right now?

Special highlight from our network

Stop formatting slides. Start shaping ideas.

CEOs spend a surprising share of their day reviewing presentations, and producing a polished, professional deck is rarely quick work.

Plus AI lets you convert Word documents into comprehensive PowerPoint presentations using structured templates, native charts, and layout logic designed for professional use.

It supports auto-formatting, language translation, and data-embedded slide types with no manual editing required. You can build client decks, internal briefings, or case studies in minutes.

It integrates natively with Google Slides and Microsoft PowerPoint, and outputs clean files for editing or sharing.

The goal is to reduce time spent on formatting and increase consistency across presentations.

Free trials are open: https://link.genai.works/rsOQ

How are your teams automating slide creation today? 

Fei-Fei Li, the Stanford professor and “Godmother of AI” who built ImageNet and helped launch the modern deep-learning era, just published an essay arguing that language models have peaked. She believes the next leap for AI will come from spatial intelligence, the ability to understand and reason about the 3D world the way humans and animals do.

What Li is saying:

  • Language hits a ceiling. Today’s AI can reason, code, and write fluently but is still blind to space, physics, and motion.

  • Her new blueprint. Spatial intelligence means systems that can perceive distance, depth, and cause and effect, the skills that let people move through real environments.

  • The path forward. Li calls for world models that can generate realistic 3D spaces, process visuals and actions, and predict how those worlds change.

  • What comes next. Her lab, World Labs, is already testing these ideas through tools like Marble, which builds explorable 3D worlds from text prompts.

Li’s message is clear: AI has mastered language but still lives in its head. To evolve, it must learn to see, touch, and move, building an internal sense of the world it describes. Quoting Wittgenstein, she writes: “The limits of my language mean the limits of my world.”

Two of the world’s biggest AI labs are heading for the same destination, but their paths look nothing alike. Anthropic expects to break even by 2028, while OpenAI will still be spending heavily through 2030 with more than $70B in cumulative losses. The story is less about balance sheets and more about belief systems.

Where the divide shows:

  • Different fuel. Anthropic plans to spend $27B on compute by 2028. OpenAI’s figure is $111B, a reflection of two very different growth philosophies.

  • Chip strategy. Anthropic spreads its workloads across Amazon, Google, and Nvidia. OpenAI stays largely within Nvidia’s infrastructure.

  • Steady growth. Anthropic projects $70B in revenue and positive cash flow by 2028, supported by enterprise APIs and partners such as Microsoft, Salesforce, and Deloitte.

  • Product expansion. OpenAI continues to pour resources into ChatGPT, Atlas, robotics, and new hardware, building reach at enormous cost.

This is the story of two kinds of ambition. Anthropic is building to endure, methodically and with control. OpenAI is scaling with velocity, betting that momentum itself will define the future of intelligence. If both succeed, one will prove that precision pays, and the other that speed conquers.

GPT-5 just became the first AI model to crack a full 9x9 Sudoku puzzle, according to Sakana AI’s Sudoku-Bench, a test built to measure spatial logic and deep reasoning. It’s a genuine milestone in pattern recognition, even if the model still struggles with the creative leaps humans take for granted.

Here’s what the research shows:

  • New milestone. GPT-5 solved an entire 9x9 puzzle, the first model ever to do so on Sudoku-Bench.

  • Improved accuracy. It achieved a 33 percent solve rate, nearly double the previous record set by OpenAI’s o3-mini.

  • Ongoing challenge. Sudoku-Bench’s modern variants mix shifting rules and visual logic, leaving 67 percent of puzzles unsolved.

  • Missing intuition. GPT-5 can follow deduction chains precisely but falters at the “aha” moments that human solvers use to break puzzles open.

Sudoku might look small next to AI’s grander goals, but it’s a mirror for how reasoning really works. GPT-5 can compute every possibility inside a grid, yet it still can’t see the solution the way a person can. Until models learn that kind of intuition, they’ll be masters of logic, not of insight.
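To see what “computing every possibility inside a grid” looks like in practice, here is the classic backtracking approach to standard 9x9 Sudoku: try a digit, recurse, and undo on failure. This is a minimal illustrative sketch of exhaustive search, not Sakana AI’s Sudoku-Bench harness (whose modern variants add shifting rules the classic algorithm cannot handle):

```python
def valid(grid, r, c, d):
    """Check row, column, and 3x3 box constraints for digit d at (r, c)."""
    if any(grid[r][j] == d for j in range(9)):
        return False
    if any(grid[i][c] == d for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != d for i in range(3) for j in range(3))

def solve(grid):
    """Fill zeros in-place by trying each candidate digit in the first
    empty cell and backtracking on dead ends. Exhaustive, not intuitive:
    every branch is mechanically checked, with no 'aha' shortcuts."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for d in range(1, 10):
                    if valid(grid, r, c, d):
                        grid[r][c] = d
                        if solve(grid):
                            return True
                        grid[r][c] = 0  # undo and try the next digit
                return False  # no digit fits here: backtrack
    return True  # no empty cells left: solved
```

The contrast with human solving is the point: this procedure never “sees” a pattern, it just enumerates candidates until the constraints leave exactly one grid standing.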

TOOL OF THE DAY

TIME has introduced the TIME AI Agent, a unified platform that merges natural language understanding, voice synthesis, translation, and search. The system is built in collaboration with Scale AI and draws on 102 years of TIME journalism to create a single interactive knowledge base.

Core functions:

  • Summarization: Generates concise overviews of articles in text or audio format.

  • Translation: Supports 13 languages including English, Spanish, German, Hindi, Japanese, and Arabic.

  • Audio generation: Converts written stories into briefings voiced in TIME’s editorial tone.

  • Semantic search: Retrieves information from TIME’s century-long archive through concept-based retrieval, not keyword matching.

  • Conversational response: Interprets reader intent and delivers contextually relevant answers drawn from TIME’s reporting.
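TIME has not published how its retrieval stack is built, but concept-based search generally means ranking documents by the similarity of embedding vectors rather than by keyword overlap. The sketch below illustrates the idea with hand-made 3-dimensional vectors; the archive titles and numbers are purely hypothetical, and a real system would use a neural text encoder:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors: direction, not magnitude."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "embeddings": in a real system a neural encoder places
# conceptually related texts near each other, even when they share
# no keywords. These hand-made vectors are purely illustrative.
archive = {
    "1927 profile of an aviation pioneer": [0.9, 0.1, 0.0],
    "2023 cover story on generative AI":   [0.1, 0.9, 0.2],
    "1969 report on the Moon landing":     [0.8, 0.2, 0.1],
}

def search(query_vec, docs, k=2):
    """Return the k archive titles whose embeddings best match the query."""
    ranked = sorted(docs.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [title for title, _ in ranked[:k]]

# A query embedded near "flight milestones" surfaces the aviation and
# Moon-landing stories, even though neither title contains "flight".
print(search([0.85, 0.15, 0.05], archive))
```

This is why a century-old archive becomes searchable by idea: a question about flight milestones can retrieve a 1927 profile that a keyword match would miss entirely.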

Try this yourself:

TIME’s AI Agent changes how people experience information. Ask a general-purpose chatbot to summarize TIME’s Person of the Year coverage across decades, and observe how it identifies historical patterns, recurring themes, and language shifts. Compare this with the TIME AI Agent’s structured retrieval to see how archival intelligence is applied in practice.
