Welcome back! The line between AI assistant and AI employee got a lot thinner this week. Anthropic gave Claude the ability to physically operate your Mac while you step away. OpenClaw kicked off the remote agent race, but the major labs are now shipping their own versions at speed. The question is no longer whether AI can do your tasks. It is whether you need to be in the room when it does.

In today’s Generative AI Newsletter:

  • Anthropic: What happens when Claude can click, type and navigate your desktop?

  • Luma Labs: Can an image model that reasons before it renders change creative AI?

  • Meta: Why is Zuckerberg building himself a personal CEO agent?

  • NVIDIA: Did Jensen Huang just declare AGI on the Lex Fridman podcast?

Latest Developments

Anthropic Ships Remote Computer Use for Claude

Claude can now take direct control of your Mac. Click, type, navigate apps, fill spreadsheets, draft documents. All of it.

  • What launched: A research preview called Cowork that gives Claude physical control of your desktop through the macOS app. A companion feature called Dispatch lets you fire off tasks from your phone while you are away from your computer.

  • How it works: The system checks for direct app integrations and browser access before resorting to screen control. It tries the least invasive route first.

  • Who gets it: macOS users on Pro or Max plans only. Available through Cowork and Claude Code. A Windows version is in the pipeline.

  • Backstory: Anthropic acquired computer-use startup Vercept in February. Just four weeks later, this is the team's first product launch at Anthropic.
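The "least invasive route first" behaviour described above amounts to a tiered fallback. Here is a minimal sketch of that idea in Python; the route names, ordering and `pick_route` helper are our assumptions for illustration, not Anthropic's actual implementation:

```python
# Hypothetical sketch of tiered tool selection: try direct app
# integrations, then the browser, and only then raw screen control.
from typing import Callable, Optional

# Ordered from least to most invasive (assumed ordering).
ROUTES = ["app_integration", "browser", "screen_control"]

def pick_route(task: str,
               available: dict[str, Callable[[str], bool]]) -> Optional[str]:
    """Return the first available route that claims it can handle the task."""
    for route in ROUTES:
        handler = available.get(route)
        if handler and handler(task):
            return route
    return None

# Example: a spreadsheet task where a direct integration exists.
available = {
    "app_integration": lambda t: "spreadsheet" in t,
    "browser": lambda t: True,
    "screen_control": lambda t: True,
}

print(pick_route("fill spreadsheet", available))                    # app_integration
print(pick_route("drag a window", {"screen_control": lambda t: True}))  # screen_control
```

The design point is simply that screen control is the last resort, which keeps the agent inside auditable, structured interfaces whenever possible.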

The announcement racked up over 30 million views in under 24 hours. Anthropic's Alex Albert framed the ambition plainly: the future where you never have to open your laptop to get work done is arriving fast. Whether users trust an AI with full desktop access is another matter. The research preview label is doing a lot of heavy lifting here.

Special highlight from our network

Every day, new AI tools come out that can write, design, code and automate. But for most people, it still feels overwhelming. You see what’s possible, but don’t quite know how to use it in your day-to-day life.

That’s exactly the gap this 2-day live AI Mastermind (March 28–29) is trying to close.

You’ll go beyond theory and learn how to actually use AI tools:

➤ build simple automations
➤ create your own AI agents
➤ speed up everyday tasks
➤ explore ways people are using AI to earn

It’s designed for non-technical people who are looking for practical skills.

You’ll also get ready-to-use resources like prompt templates, toolkits and practical setups you can apply immediately.

If you’ve been wondering, “how do people actually use AI like this?” – this is where you find out.

Special highlight from our network

For the last ~3 years, the AI industry has been racing to build bigger, smarter models like ChatGPT, Claude, Gemini and so on. The assumption was simple: whoever built the most powerful model wins.

But that race is already slowing down.

Now the industry is beginning to look beyond the model race and ask: how do companies turn AI into something that actually works and brings value?

In this LinkedIn Live, Ori Goshen (Co-Founder & Co-CEO of AI21 Labs) joins Steve Nouri to unpack what the Post-Model Era means for enterprise AI, from moving beyond the model race to building systems that are reliable, practical and ready for production.

Wondering where AI is heading next? This conversation will help put the direction into perspective.

Join LinkedIn Live to find the answers.

Luma Labs Launches Uni-1, a Reasoning-First Image Model

Luma AI has released Uni-1, an image model that thinks through what it is being asked to do before and while it creates.

  • Architecture: Uni-1 processes text and visuals through a single unified pipeline rather than relying on diffusion, the same approach as GPT Image 1.5 and Nano Banana Pro.

  • What it does well: Real-world understanding that enables creative decisions across infographics, manga and specific aesthetics. It topped human preference rankings for style, editing and reference-based work.

  • Pricing: Around $0.09 per image at 2K resolution, undercutting Nano Banana Pro's $0.134 rate by roughly a third. API is waitlist-only for now.
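The "roughly a third" figure checks out against the quoted prices. A quick calculation, using the per-image rates from the bullet above:

```python
# Quoted per-image prices at 2K resolution.
uni1_price = 0.09    # Luma Uni-1
nano_price = 0.134   # Nano Banana Pro

# Fractional saving relative to Nano Banana Pro.
discount = (nano_price - uni1_price) / nano_price
print(f"Uni-1 is {discount:.0%} cheaper")  # → Uni-1 is 33% cheaper
```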

Luma made its name in video, so an image model is a new direction. The company is teasing extensions into video, voice and interactive worlds. If Uni-1's reasoning architecture scales across modalities, it could become the foundation for a single model that handles all creative output. That is a big if, but the early benchmarks are strong.

OpenAI Hires Meta's Former Ad Chief to Build ChatGPT's Ad Business

OpenAI sees advertising as a core revenue stream, not an experiment. Evidence just landed in the form of a hire: OpenAI has tapped Dave Dugan, Meta's former head of ad sales, as VP of global ad solutions. His job is to turn ChatGPT advertising into a commercial product.

  • The hire: Dugan spent over a decade at Meta building its advertising machine. He now brings that playbook to OpenAI.

  • The rollout: Ads are currently going live across ChatGPT for free and Go-tier users.

  • The price tag: OpenAI is reportedly charging advertisers a $200,000 minimum spend to get in.

Hiring someone who helped build one of the most profitable ad businesses in history tells you the ambition. The $200K floor also suggests they are going after enterprise budgets, not self-serve small business spend. For developers and builders in the AI space, this raises a straightforward question: how long before ads start shaping what ChatGPT recommends?

Jensen Huang Says "We've Achieved AGI" on Lex Fridman

NVIDIA CEO Jensen Huang appeared on the Lex Fridman Podcast and declared that AGI has already arrived. Then he immediately walked it back.

  • The claim: Lex Fridman defined AGI as an AI system capable of building and running a billion-dollar tech company. Huang responded directly: "I think it's now. I think we've achieved AGI."

  • The caveat: Huang clarified he meant short-lived commercial success, not enduring capability. He compared it to dot-com era companies that were real but brief.

  • The hard limit: "The odds of 100,000 of those agents building Nvidia is zero percent."

  • What else dropped: The two-hour conversation covered AI scaling laws, TSMC and Taiwan, data centres in space, whether NVIDIA will be worth $10 trillion and Huang's view on consciousness and mortality.

It is a masterclass in reframing. By collapsing AGI to "an AI that briefly makes a billion dollars," Huang gets the headline while acknowledging the ceiling. The takeaway may be less about whether AGI is here and more about who gets to define what AGI means. Right now, the answer is whoever has the biggest microphone.

Tool of the Day: ElevenCreative

ElevenLabs just launched ElevenCreative, a tool that lets you generate, edit and localise premium audio and video in minutes. Upload a video, swap the voiceover into another language with lip sync, or generate narration from scratch.

Try this yourself: Head to ElevenLabs and test ElevenCreative on a short video.

  • Try localising a piece of content into a second language

  • See how the lip sync holds up

  • If you produce content for an international audience, this could cut your localisation pipeline from days to minutes.
