
Thinking Machines Co-Founder Joins Meta’s AI Division

Image Credit: Getty Images
Andrew Tulloch, co-founder of Mira Murati’s Thinking Machines Lab (TML), has left the startup to rejoin Meta, marking one of the biggest AI talent moves of the year. The former OpenAI researcher returns as Meta consolidates teams under its Superintelligence Lab, a unit now central to its long-term AGI ambitions.
What’s behind the move?
Career shift: Tulloch cited personal reasons for leaving TML, which he co-founded with Murati in February; the startup went on to raise $2B and build a 30-person team.
Big offer: Reports say Meta pursued him this summer with a $1.5B compensation package over six years, though Meta disputes the figure.
Track record: Tulloch previously spent 11 years at Meta and helped scale major AI infrastructure before joining OpenAI.
Strategic timing: His return aligns with Meta’s planned $72B infrastructure push and ongoing reorganization of its AI divisions.
Tulloch’s exit lands just weeks after TML’s first product release, adding intrigue to Meta’s recruitment streak. For Zuckerberg’s lab, it signals not only a brain gain but also a deepening rivalry with the very startup ecosystem it helped inspire.
Microsoft Plots a GitHub Overhaul to Face Its New AI Rivals
Microsoft is reportedly planning a major overhaul of GitHub, transforming it from a code repository into a full-fledged platform for AI-powered software development. Internal audio obtained by Business Insider reveals plans to make GitHub “the center of gravity” for intelligent development, as the company races to maintain its dominance in the coding ecosystem.
What’s changing inside GitHub?
Unified workspace: AI tools will be embedded across browsers, terminals, VS Code, and other Microsoft products.
Agent control: The platform will evolve into a dashboard for managing multiple AI agents.
Infrastructure upgrades: Teams are reworking GitHub Actions, analytics, and security for enterprise-scale compliance.
Competitive push: Visual Studio is being rebuilt to rival new players like Cursor and Claude Code.
Executives say the update reflects a deeper shift. As Satya Nadella told staff, “AI is dissolving the old boundaries between apps, documents, and code.” GitHub, once a warehouse for open-source collaboration, is being redesigned to house something far more ambitious: perhaps the intelligence behind the next generation of software.
Can ChatGPT Become the Everything App?

Image Credit: OpenAI
OpenAI is testing whether conversation can replace the browser. The company has started rolling out an in-chat app ecosystem where users can open Spotify, Booking.com, Canva, Coursera, and Zillow directly inside ChatGPT. Everyday actions like planning a trip or exploring real estate now take place within one continuous chat.
How it works
Apps SDK: Developers can embed interactive tools that live inside ChatGPT.
Context awareness: Apps can read conversation details, take actions, and update results in real time.
Unified workspace: Each chat becomes a lightweight environment for both dialogue and execution.
New partners: Upcoming integrations include Uber, DoorDash, TripAdvisor, and Khan Academy.
A new commerce layer is also being tested, allowing users to make transactions directly in chat. ChatGPT head Nick Turley said the aim is to create a space where “work, discovery, and transactions happen in one place.” With 800 million weekly users, ChatGPT is evolving into a platform that blends interaction, creation, and utility in one continuous experience.
Anthropic Explains Why Memory, Not Prompts, Defines the Next AI Era

Image Credit: Anthropic
Anthropic has released its internal playbook on how the next generation of AI systems will think. The company argues that the true frontier is not in crafting better prompts but in managing context — what information a model keeps, retrieves, and discards as it reasons through a task. Every token, they note, consumes part of a model’s finite attention budget.
Key findings from Anthropic’s report
Context rot: Models lose precision as input length grows, causing attention to blur over time.
Just-in-time retrieval: Stronger agents fetch information when needed rather than preloading full datasets.
Efficient design: Tools like Claude Code already use commands such as grep and tail to scan data instead of storing everything in memory.
Context engineering: The next challenge is selecting only the highest-signal data to keep models sharp and efficient.
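The just-in-time retrieval pattern above can be made concrete with a short sketch: instead of loading an entire log file into a model's context window, an agent shells out to `grep` and keeps only the tail of the matches, the freshest and highest-signal slice. This is an illustrative sketch, not Anthropic's implementation; the log file, pattern, and helper function are hypothetical.

```python
import os
import subprocess
import tempfile

def retrieve_relevant_lines(path: str, pattern: str, max_lines: int = 5) -> str:
    """Fetch only lines matching `pattern`, capped at the last `max_lines`,
    rather than reading the whole file into the context window."""
    grep = subprocess.run(
        ["grep", pattern, path], capture_output=True, text=True
    )
    matches = grep.stdout.splitlines()
    # Keep only the tail of the matches — the most recent, highest-signal slice.
    return "\n".join(matches[-max_lines:])

# Example: a large log where only ERROR lines matter to the agent.
with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as f:
    for i in range(1000):
        f.write(f"INFO step {i}\n")
        if i % 250 == 0:
            f.write(f"ERROR failure at step {i}\n")
    log_path = f.name

context_slice = retrieve_relevant_lines(log_path, "ERROR", max_lines=3)
print(context_slice)  # prints the last three ERROR lines, nothing else
os.unlink(log_path)
```

The design choice mirrors the report's framing: the agent spends a few cheap shell calls to avoid spending thousands of tokens of finite attention budget on low-signal INFO lines.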
The report reframes how developers should think about intelligence. Prompt engineering focused on the right words. Context engineering focuses on what information deserves to exist in a model’s short-term mind. In Anthropic’s view, the future of AI depends not on what machines can recall, but on how elegantly they learn to forget.
