Welcome, AI Enthusiasts!

A quiet contender just made a loud move. Manus launched Wide Research, a multi-agent system built to rival OpenAI's Deep Research and Google's Deep Think. At the same time, Anthropic cut off OpenAI's Claude access, sparking an ethics standoff. And Musk is reviving Vine with AI, paywalls, and maybe a bit of strategy dressed as nostalgia.

📌 In today’s Generative AI Newsletter:

  • Manus debuts Wide Research to rival Google and OpenAI

  • Anthropic blocks OpenAI from Claude over API misuse

  • Persona vectors let developers steer traits like “evil” in models

  • Grok Imagine brings Vine-style AI video to SuperGrok users

🚀 Manus Launches Wide Research, a Multi-Agent AI Tool to Rival OpenAI & Google

Image Credits: Manus

Manus, an AI startup with roots in China, has unveiled Wide Research, a multi-agent AI system designed to handle complex, large-scale research tasks. The tool is positioned as a direct competitor to OpenAI’s Deep Research and Google’s Deep Think.

What you need to know:

  • Wide Research runs scores of AI agents in parallel to handle deep, high-volume research tasks.

  • The tool has reportedly outperformed OpenAI’s Deep Research on the GAIA benchmark.

  • Built on Manus’s large-scale virtualization and agent collaboration architecture, it offers parallel processing and flexibility beyond rigid task formats.

  • Users can research 100 sneakers or design 50 posters simultaneously, showcasing its massive scaling capabilities.

  • Currently available to Pro customers, with plans to expand access to Plus and Basic tiers.

  • Its agents are general-purpose rather than domain-specific, which Manus says opens up a wide range of applications.

Manus claims Wide Research is just the beginning, hinting at a much larger infrastructure in development. The company, now headquartered in Singapore with offices in Tokyo and San Mateo, raised $75M earlier this year and continues to innovate aggressively after releasing an AI video generator in June.

Anthropic Blocks OpenAI: Trust Breach or Strategic Move?

Image: Benjamin Girette/Bloomberg

Just before GPT-5’s launch, Anthropic cut off OpenAI’s API access to its Claude models. What looks like a competitive maneuver may signal deeper issues of trust and ethics in AI development.

  • API Blocked: Anthropic claims OpenAI misused Claude’s API beyond standard benchmarking. OpenAI denies it.

  • Core Dispute: The clash centers on Claude Code, Anthropic’s coding assistant. Anthropic sees a boundary crossed; OpenAI calls it standard industry practice.

  • Ethics Over Speed: CEO Dario Amodei warns that ambition without values causes harm, even unintentionally.

  • Recent Pattern: It follows last month’s Windsurf block; Windsurf’s planned acquisition by OpenAI later collapsed, with Google instead hiring away its leadership.

  • More Than Tactics: Claude Code’s lockout isn’t just strategic; it’s a values-driven stand.

The AI race is accelerating, but Anthropic’s message is clear: trust and accountability must keep pace with scale.

🧠 Who’s Really in Control? AI Personas and the Ethics of Personality Engineering

A new paper from Anthropic and UT Austin introduces a powerful tool in the AI developer's arsenal: persona vectors. This technique lets researchers identify and manipulate personality traits like sycophancy, hallucination, and even malicious intent inside language models. The method is simple in design, yet deeply unsettling in implication.

By inputting just a trait name and a brief description, the system generates a direction in the model’s activation space that corresponds to that trait. Developers can then steer, suppress, or detect that behavior during training or deployment.
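In code, the core arithmetic is surprisingly small. Below is a minimal NumPy sketch of the difference-of-means recipe, with toy random "activations" standing in for a real model's hidden states; the function names, dimensions, and steering coefficient are illustrative assumptions, not details from the paper:

```python
import numpy as np

def persona_vector(trait_acts, neutral_acts):
    """Difference-of-means direction in activation space.

    trait_acts / neutral_acts: hidden states collected from prompts that do /
    do not elicit the trait (shape: [n_prompts, hidden_dim]).
    """
    v = trait_acts.mean(axis=0) - neutral_acts.mean(axis=0)
    return v / np.linalg.norm(v)  # unit-length trait direction

def steer(hidden, v, alpha):
    """Shift a hidden state along the persona vector.

    alpha > 0 amplifies the trait; alpha < 0 suppresses it.
    """
    return hidden + alpha * v

# Toy demo: 16-dim "activations" where the trait shifts dimension 0.
rng = np.random.default_rng(0)
neutral = rng.normal(size=(32, 16))
trait = neutral + 0.1 * rng.normal(size=(32, 16)) + 2.0 * np.eye(16)[0]

v = persona_vector(trait, neutral)
h = rng.normal(size=16)
h_steered = steer(h, v, alpha=-4.0)  # push the state away from the trait
```

In a real model the vectors are extracted from, and applied to, residual-stream activations via forward hooks during generation; the arithmetic, though, is just this vector addition.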

Key Takeaways

  • Traits are programmable. Personality traits like "evil" exist as linear directions inside the model and can be amplified or suppressed by adding or subtracting these vectors during inference or training.

  • Data isn't neutral. Finetuning on flawed but innocent-looking datasets, such as math problems or medical Q&A, can unintentionally shift a model’s personality toward sycophancy, hallucination, or worse.

  • Prediction before pollution. By measuring how much training data aligns with a persona vector, developers can predict post-finetuning behavior before any training begins.

  • Control comes at a cost. Steering against traits during inference reduces undesirable outputs but can degrade the model’s performance on general benchmarks.

  • Ethics are trailing the tech. Who decides which traits should be filtered or reinforced? Are we shaping assistants, or shaping personas to serve our interests?
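The "prediction before pollution" idea above can be sketched the same way: project a candidate dataset's activations onto the persona vector and compare scores before committing to finetuning. Everything below (the stand-in unit vector, the synthetic datasets, the size of the shift) is an illustrative assumption, not a detail from the paper:

```python
import numpy as np

def projection_score(dataset_acts, v):
    """Mean projection of per-example activations onto a unit persona vector.

    Hypothetical proxy for pre-training screening: a higher score predicts a
    larger post-finetuning shift toward the trait.
    """
    return float((dataset_acts @ v).mean())

rng = np.random.default_rng(1)
dim = 16
v = np.eye(dim)[0]                   # stand-in unit persona vector
clean = rng.normal(size=(256, dim))  # innocuous-looking dataset activations
polluted = clean + 2.0 * v           # dataset subtly aligned with the trait

clean_score = projection_score(clean, v)
polluted_score = projection_score(polluted, v)
```

A dataset that scores high against an "evil" or "sycophancy" vector could be filtered or reweighted before any gradient step is taken, which is what makes this a pre-training check rather than a post-hoc patch.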

We’re learning how to sculpt AI personalities with scientific precision. What we still lack is a clear framework for why, when, and for whom we should do it. We’re getting better at control. It's the judgment we're still catching up on.

🚨 Grok Imagine: Musk’s AI Vine Brings Back the Past, But at What Cost?

Musk has launched Grok Imagine, an AI video tool inside X’s chatbot Grok. It creates short, quirky videos from prompts like “a cat breakdancing in Times Square,” mimicking Vine’s six-second charm. He also claims the original Vine archive has been found and may soon be accessible again.

Sounds fun. But there’s more at play.

Key Points

  • Vine is back, for a price
    Grok Imagine is only available to SuperGrok users at $30 a month

  • Speed is the selling point
    Musk says it creates videos faster than rivals make images

  • Nostalgia becomes content strategy
    Old Vine clips may resurface, now blended with AI

  • Access isn't open
    What was once free creative space is now behind a paywall

Grok Imagine isn’t just a tech update. It’s a sign of where AI and digital ownership are headed. The question isn’t what it can do, but why it’s being built, and who gets to benefit.

🚀 Boost your business with us. Advertise where 12M+ AI leaders engage

🌟 Sign up for the first AI Hub in the world.
