Welcome back! The most expensive partnership in tech history is starting to show some serious cracks. OpenAI is reportedly so frustrated with Microsoft's infrastructure that they are building their own version of GitHub. It is a massive flex that asks a dangerous question: what happens when the AI layer decides it wants to own the underlying developer ecosystem, too?

In today’s Generative AI Newsletter:

  • OpenAI is building its own GitHub alternative.

  • ClawCon hosts a lobster-themed open-source AI rebellion.

  • Claude 4.6 finds 22 severe security flaws in Firefox.

  • AutoResearch lets AI run overnight ML experiments.

Latest Developments

OpenAI Is Building Its Own GitHub to Ditch Microsoft. Can It Steal 100M Devs? 

The project reportedly kicked off after OpenAI engineers became increasingly frustrated by persistent GitHub outages. These disruptions are tied to GitHub's massive, multi-year migration of its legacy infrastructure to Microsoft Azure, a process GitHub's CTO admitted could take another two years. The relationship between Microsoft and OpenAI is arguably the strangest in tech history.

Key Project Details:

  • Codex Integration: Staff have discussed pairing the platform with Codex coding agents to consolidate the entire development process.

  • Commercial Potential: OpenAI staffers have floated the idea of opening the platform to outside paying customers in the future.

  • Infrastructure Independence: The project allows OpenAI to bypass GitHub's infrastructure, which is currently undergoing a slow two-year migration.

  • Investor Conflict: The move represents a further encroachment on Microsoft’s legacy developer moat, adding tension to an already complex partnership.

Microsoft is bankrolling a "national champion" that is now actively building a product to displace GitHub, the platform where over 100 million developers store their code. If OpenAI can convince devs that they don't just need a place to store code, but an AI-native environment to generate and manage it via Codex, Microsoft’s $7.5 billion acquisition of GitHub could look like a relic of the "pre-agentic" era.

ClawCon: The Lobster-Themed Rebellion Against Big AI

Supporters of the open-source tool OpenClaw gathered to discuss the platform’s role as a decentralized alternative to corporate AI models. While the tool presents significant risks, its community views it as a necessary challenge to the "walled gardens" of major tech labs. The event combined high-level technical strategy with a celebratory atmosphere.

Inside the world of the "Claw":

  • Grassroots Crusade: Devotees view OpenClaw as a necessary "escape hatch" from an AI industry currently controlled by a handful of leading labs.

  • Messaging Integration: Much of the platform's early viral success stemmed from allowing users to interact with their AI agents via Discord, WhatsApp, and Telegram.

  • Security Trade-offs: While celebrated for its openness, OpenClaw remains an unpredictable tool that carries significant security risks for its users.

  • Quirky Community: The event featured a distinct aesthetic involving lobster headdresses, necklaces, and a buffet piled high with actual lobster claws.

Organizers called the moment a "watershed," stating that the doors to AI development were now "busted down" for the public. Despite the modest budget, the event attracted over 1,300 sign-ups, indicating strong demand for transparent, non-corporate AI tools. As the tour moves on to cities like Tel Aviv, Tokyo, and Madrid, the OpenClaw movement continues to position itself as the unpredictable, lobster-loving underdog of the frontier AI race.

Claude 4.6 Digs Up 22 Firefox Security Flaws In 2 Weeks

Anthropic revealed that Claude Opus 4.6 spent two weeks analyzing the Firefox codebase alongside Mozilla’s security team. The AI model successfully identified 22 vulnerabilities, 14 of which were rated as high-severity. The speed of these findings is significant because Firefox is a mature, open-source project with decades of security audits behind it.

The AI-driven security audit:

  • Immediate Discovery: Claude took only 20 minutes to flag its first security flaw after being granted access to the codebase.

  • Massive Coverage: Anthropic filed 112 reports across roughly 6,000 files, with Claude racking up 50 potential flags before the first was even confirmed by humans.

  • Impact on Patches: The 14 high-severity flaws identified by the AI accounted for nearly 20 percent of Firefox’s most serious security patches for the entire year.

  • Limited Weaponization: While the model excelled at finding bugs, it only generated two working exploits out of hundreds of attempts, both of which required the sandbox to be removed.

Anthropic noted that while Claude is currently better at defense than offense, the gap between finding and weaponizing exploits is expected to close quickly. This suggests an urgent need for companies to use AI-driven security tools to lock down codebases before malicious actors do the same.

AutoResearch: Let AI Run Overnight ML Experiments

Andrej Karpathy has released autoresearch, an open-source repository designed to let AI agents autonomously run and iterate on LLM training experiments in a loop. This project strips down the nanochat LLM training core to a single-GPU, one-file version of roughly 630 lines of code.

Core Functions:

  • Autonomous Iteration: Humans edit the program.md instructions, while AI agents autonomously modify the train.py code to find better neural architectures and hyperparameters.

  • Fixed Time Budgeting: Every experiment runs for exactly 5 minutes, ensuring approximately 12 experiments per hour that are directly comparable regardless of model size or complexity.

  • Metric-Driven Success: Agents optimize for val_bpb (validation bits per byte), a vocab-independent metric that allows for fair comparison across different architectural changes.
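For readers unfamiliar with the metric, val_bpb is just validation cross-entropy re-expressed per byte of raw text instead of per token, which is what makes it vocab-independent. A minimal sketch of the standard conversion (the repo's exact bookkeeping may differ):

```python
import math

def val_bpb(total_nats: float, total_bytes: int) -> float:
    """Convert summed validation cross-entropy (in nats) into
    bits per byte: divide by bytes, and by ln(2) to go nats -> bits.
    Lower is better, and the value is comparable across tokenizers
    because the denominator is raw bytes, not tokens."""
    return total_nats / (total_bytes * math.log(2))

# Example: 1000 tokens at 2.0 nats/token over 4000 bytes of text.
bpb = val_bpb(total_nats=1000 * 2.0, total_bytes=4000)
```

Because two models with different vocabularies see different token counts for the same text, per-token loss would not be comparable; per-byte bits are.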

Try this yourself:

To start your own autonomous research org, clone the repo and install dependencies using uv. Run prepare.py to set up your data, then point a coding agent (like Claude or Codex) at program.md to begin the iterative loop.
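The loop the repo describes is essentially metric-driven hill-climbing: propose an edit, run it under the fixed budget, keep it only if val_bpb improves. A toy sketch of that control flow (all names here are illustrative, not the repo's actual API):

```python
import math

def iterate(candidates, evaluate):
    """Evaluate each candidate edit under the same fixed budget and
    keep whichever one achieves the lowest val_bpb (lower is better)."""
    best, best_bpb, log = None, math.inf, []
    for cfg in candidates:
        bpb = evaluate(cfg)        # stands in for one 5-minute training run
        log.append((cfg, bpb))
        if bpb < best_bpb:         # keep the edit only if it improves the metric
            best, best_bpb = cfg, bpb
    return best, best_bpb, log

# Stand-in evaluator: pretend these scores came from real runs.
scores = {"baseline": 1.42, "wider-mlp": 1.38, "rope-tweak": 1.45}
best, bpb, history = iterate(scores, scores.get)
```

The fixed 5-minute budget is what makes this comparison fair: every candidate gets identical compute, so differences in val_bpb reflect the edit, not the runtime.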

AI Alone Can’t Run Revenue

Finance doesn’t run on “mostly right.” It runs on math.

In The Architecture Behind AI-Native Revenue Automation, Tabs’s CTO breaks down why LLMs alone aren’t enough—and what it actually takes to build audit-ready, AI-driven contract-to-cash systems for modern B2B teams.
