Welcome back! Google just dropped a compression algorithm that could fundamentally change the economics of running large language models. The AI industry's favourite AGI benchmark got a brutal reset, Cursor got caught being less than transparent about where its new model came from, and Reddit is trying to figure out how to keep humans in charge of its platform.

In today’s Generative AI Newsletter:

  • Google: Can TurboQuant really cut LLM costs overnight?

  • ARC Prize: What happens when every frontier model scores below 1%?

  • Cursor: Why is nobody talking about the Kimi connection?

  • Reddit: Is the dead internet theory already here?

Latest Developments

Google's TurboQuant shrinks LLM memory by 6x with zero accuracy loss

Google Research has published TurboQuant, a compression algorithm that shrinks an LLM's KV cache memory by more than 6x and delivers up to 8x faster processing on Nvidia H100 chips. The crazy part: zero accuracy loss.

  • What it does: Large language models keep a running record of the current conversation, known as the KV cache. As chats get longer, that cache balloons, slowing responses and driving up costs. TurboQuant compresses it without retraining the model.

  • Performance: Scored perfectly on needle-in-a-haystack tests, where a key detail is buried in a massive block of text. Also topped rival methods in vector search.

  • Market impact: AI memory stocks dropped 3-5% on the news. The paper is set to be presented at ICLR 2026 in April.
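The paper's exact method isn't described above, but the general technique it belongs to, quantizing the KV cache down to a few bits per value, can be sketched in a few lines. Everything below (the 4-bit per-row affine scheme, the function names) is an illustrative assumption, not TurboQuant itself:

```python
import numpy as np

def quantize_kv(block: np.ndarray, bits: int = 4):
    """Quantize a float32 KV-cache block to `bits` bits per value, per row."""
    levels = 2**bits - 1
    lo = block.min(axis=-1, keepdims=True)
    hi = block.max(axis=-1, keepdims=True)
    # per-row step size; guard against constant rows to avoid divide-by-zero
    scale = np.where(hi > lo, (hi - lo) / levels, 1.0).astype(np.float32)
    q = np.round((block - lo) / scale).astype(np.uint8)  # codes in [0, 15]
    # pack two 4-bit codes per byte: ~7x smaller than float32 incl. scales
    packed = (q[..., 0::2] << 4) | q[..., 1::2]
    return packed, scale, lo

def dequantize_kv(packed, scale, lo):
    """Unpack the 4-bit codes and map them back to approximate floats."""
    q = np.empty(packed.shape[:-1] + (packed.shape[-1] * 2,), dtype=np.float32)
    q[..., 0::2] = packed >> 4
    q[..., 1::2] = packed & 0x0F
    return q * scale + lo

kv = np.random.randn(8, 128).astype(np.float32)  # toy cache block
packed, scale, lo = quantize_kv(kv)
ratio = kv.nbytes / (packed.nbytes + scale.nbytes + lo.nbytes)
approx = dequantize_kv(packed, scale, lo)        # ratio > 6x on this block
```

The trade-off this sketch makes visible: each value can be off by up to half a quantization step, which is exactly why the needle-in-a-haystack tests matter as evidence that accuracy survives the compression.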

One compression paper won't crater memory demand overnight. But the fact that Wall Street reacted this fast tells you something. The industry is starting to price in a future where smarter software eats into the premium that AI memory hardware currently commands.

Special highlight from our network

AI can now make decisions and actually get things done. And that makes the data behind it more important than ever.

Meanwhile, companies are hitting the same wall: poor data quality (59%) and lack of real-time data (~50%). In fast-changing environments, AI breaks without fresh inputs. That’s what Bright Data’s latest report reveals.

If you want to understand where AI is heading and what gaps could cost you, check out the full report.

Special highlight from our network

Every day, new AI tools come out that can write, design, code and automate. But for most people, it still feels overwhelming. You see what’s possible, but don’t quite know how to use it in your day-to-day life.

That’s exactly the gap this 2-day live AI Mastermind (March 28–29) is trying to close.

You’ll go beyond theory and learn how to actually use AI tools:

➤ build simple automations
➤ create your own AI agents
➤ speed up everyday tasks
➤ explore ways people are using AI to earn

It’s designed for non-technical people who are looking for practical skills.

You’ll also get ready-to-use resources like prompt templates, toolkits and practical setups you can apply immediately.

If you’ve been wondering “how do people actually use AI like this?” – this is where you find out!

ARC-AGI-3 resets the frontier AI scoreboard

François Chollet's ARC Prize Foundation just released ARC-AGI-3, the latest version of its reasoning benchmark. Humans solve 100% of tasks on the first try. The best AI model in the world scored 0.37%.

  • The scores: Gemini Pro led at 0.37%, followed by GPT 5.4 High (0.26%), Opus 4.6 (0.25%) and Grok-4.20 at a flat 0%.

  • The format: Agents face game-like scenarios with no instructions. They have to discover rules, form goals and plan strategies entirely from scratch.

  • The pattern: Labs spent millions training on earlier versions, pushing ARC-AGI-2 from 3% to around 50% in under a year. A $1M prize backs V3, and cofounder Mike Knoop says frontier labs are paying far more attention this time.

Every time a new ARC-AGI version drops, the reset is jarring. Scores will climb as labs adapt, and the big question is whether they climb through genuine reasoning or more expensive brute-forcing. That's exactly what Chollet built V3 to find out.

Cursor launched Composer 2 but forgot to mention where it came from

Cursor released Composer 2, its latest in-house coding model. There's just one problem: it's a tuned version of Kimi's 2.5 open-source model… and Cursor didn't disclose that.

  • The controversy: The model's origins only surfaced after launch. Cursor marketed it as their own without crediting Kimi's open-source base.

  • The benchmarking: Cursor tested Composer 2 on CursorBench, their own benchmark, and only compared scores against Claude Code and Codex, leaving out other harnesses that outperform them. That's an odd choice for a company that is itself a harness.

  • Also new: Cursor released Glass, a new interface following the three-column layout that's becoming standard across coding tools.

Building on open-source models is completely normal. Not disclosing it is a different thing entirely. Developers care about provenance, and the selective benchmarking only makes the transparency gap harder to ignore.

Reddit's plan to separate humans from bots

Reddit CEO Steve Huffman outlined a strategy to label and manage automated accounts across the platform without requiring mass ID verification.

  • The system: Automated accounts will carry an [App] label. Suspicious behaviour triggers verification through passkeys or World ID, with government IDs only as a last resort where laws require it.

  • AI content: Not banned. Huffman called it "annoying" but said communities can set their own rules on AI-generated posts.

  • The scale: Rival platform Digg collapsed after being overrun with bots. Cloudflare data shows automated traffic is on pace to surpass human traffic by 2027.

The dead internet theory stopped being theoretical a while ago. Every platform now needs a serious answer to the question of how you keep humans at the centre. Reddit's approach is more band-aid than solution, but at least they're naming the problem at scale.

Tool of the Day: Google Lyria 3 Pro

Google's upgraded AI music model generates custom tracks up to 3 minutes long with intros, verses and choruses. It's rolling out across Gemini, AI Studio, Vids and Vertex AI.

Try this yourself: Head to AI Studio or Gemini, prompt it with a genre, mood and structure, and generate a full track. Sit back and watch the Grammys roll in.
