
Happy Friday! Meta just built an AI that can predict how your neurons fire before you even step inside a brain scanner. Apple is opening Siri's front door to every AI lab that wants in. And Wikipedia's volunteer editors just drew a hard line against AI-written content in a near-unanimous vote.
In today’s Generative AI Newsletter:
Apple Siri: Is Apple about to turn your voice assistant into an AI marketplace?
Meta TRIBE v2: Can synthetic brain scans replace the real thing?
Wikipedia: What happens when the internet's largest encyclopedia says no to AI?
Gemini 3.1 Flash Live: Is Google finally cracking natural AI voice?
Latest Developments
Apple Is Opening Siri To Rival AI Assistants

Apple plans to let third-party AI models plug directly into Siri starting with iOS 27, according to Bloomberg. The move ends ChatGPT's exclusive integration and lets users choose which AI handles their queries directly from the assistant.
The details:
User Choice: A new extensions menu in settings will let users route Siri questions to any compatible model, replacing the current ChatGPT-only setup.
Minimal Uptake: ChatGPT has been integrated into Siri since 2024, but usage has reportedly been minimal, suggesting the walled-garden approach was not working.
Revenue Play: AI chatbots distributed through the App Store could become a new subscription revenue stream, with Apple taking a cut of purchases made across its devices.
Gemini Under The Hood: Apple is expected to unveil a broader Siri overhaul powered by Google's Gemini at WWDC in early June.
Apple's logic here is straightforward. Rather than trying to build the best AI model itself, it is turning a billion iPhones into the distribution layer for everyone else's. The hardware moat does the work. The model war becomes someone else's problem.
Special highlight from our network
Arango Contextual Data Platform 4.0 provides a unified architecture for AI, combining graph, vector, document, key-value, and search into one contextual data layer so AI can retrieve, reason, and act on connected data.
With 20+ services like AutoGraph and AutoRAG, it automates modeling, ingestion, retrieval, and workflows, reducing complexity and speeding deployment. Teams move from prototype to production faster with transparent, governed data flows and real-time, enterprise-scale AI.
Special highlight from our network
You don’t need to start from scratch.
Every insight, decision, and detail is already in your call. Most AI tools make you rebuild that context manually.
Supernormal skips that step.
It captures your meetings automatically and turns them into deliverables right after the call:
Slide decks you can present
Follow-ups ready to send
Strategy docs you can share
Spreadsheets you can use
Turn your next meeting into a presentation.
Meta's Brain AI Outperforms Real Brain Scans

Meta open-sourced TRIBE v2, a foundation model trained on brain scan data from over 700 people that simulates neural activity across vision, hearing and language. The headline result: its synthetic predictions matched population-level brain activity better than most real fMRI recordings.
The details:
Scale Jump: The original TRIBE used data from 4 volunteers and covered 1,000 brain regions. V2 jumps to 700+ subjects and 70,000 regions, trained on over 1,000 hours of brain data.
Better Than Real: Real fMRI scans are clouded by heartbeats, movement and noise. TRIBE v2's synthetic predictions outperformed those noisy recordings at the population level.
Decades In Software: The team replicated established neuroscience findings entirely in software, correctly identifying brain regions responsible for processing faces, speech and text with zero new scans.
Fully Open: Meta released the model, weights, codebase and a live demo for any researcher to use.
Neuroscience has been bottlenecked by expensive scanners and slow, one-study-at-a-time progress for decades. TRIBE v2 could compress months of scanning into seconds of compute. The comparison to AlphaFold's impact on protein structure research is not a stretch.
Wikipedia Bans AI-Generated Articles

Wikipedia's English-language editors voted 40-2 to ban the use of AI for writing articles. The policy's author described it as a pushback against the forced adoption of AI across platforms.
The details:
Narrow Scope: The ban covers writing or rewriting articles with LLMs. Editors can still use AI for grammar fixes and translations as long as a human reviews the output.
Three Violations: The decision came down to LLM-generated text consistently failing three of Wikipedia's core content policies: neutrality, verifiability, and attribution to reliable sources.
Broader Pattern: Stack Overflow and German Wikipedia have enacted similar bans. Spanish Wikipedia went further and banned AI use entirely, including for editing.
Grokipedia: The move comes as Elon Musk pushes an AI-generated alternative to Wikipedia through Grok, pulling in the exact opposite direction.
AI-generated text already reportedly surpassed human-written output for the first time back in 2025. Wikipedia is betting that human editorial standards still matter. How long that line holds against the volume of AI content flooding the web is another question.
Google Ships Gemini 3.1 Flash Live For Real-Time Voice

Google released Gemini 3.1 Flash Live, a new voice model built for faster and more natural audio interactions. The model now powers Gemini Live across Search and the Gemini app.
The details:
Conversation Length: Sessions can run 2x longer than previous versions before timing out.
Reduced Latency: Response times are faster with fewer awkward pauses mid-conversation.
Tone Awareness: The model adjusts its tone depending on context, moving between casual and informational registers.
API Access: Flash Live is also available through Google's API for developers building real-time voice agents.
Google's voice AI push is accelerating. Flash Live sits alongside Mistral's new Voxtral TTS (which clones any voice from a 3-second clip across 9 languages) in what is becoming a very crowded week for voice model releases.
Tool of the Day: Omma

Omma lets you generate fully interactive 3D websites and apps from a text prompt. Type a description, get a working 3D landing page in minutes and edit it with follow-up prompts.
Try this yourself:
Go to Omma and sign up.
Enter a prompt like "Create a bold 3D landing page for a food delivery service with menu previews."
Within minutes you will have a working 3D page you can preview, edit and publish.
You can also remix creations from the community for inspiration.
Light Bytes
OpenAI shelves erotic chatbot mode: The planned feature has been put on hold indefinitely following pushback from staff and investors.
Novo Nordisk deploys AI agents in clinical trials: The pharma giant says the technology is trimming drug approval timelines and reducing contractor costs.
Cohere Transcribe tops Hugging Face: The free, open-source speech recognition model hit the number one accuracy spot across 14 languages.
Suno v5.5 launches: New AI music generator adds voice cloning, custom model tuning and personalised style learning for Pro subscribers.
Mistral ships Voxtral TTS: A lightweight voice AI that clones any speaker from a 3-second audio clip and generates speech across 9 languages.
