
Welcome back! China’s open models are spreading through classrooms and small labs faster than the United States can regulate its own frontier stack. Apple is rebuilding Siri from the inside after years of stalled progress. NVIDIA is pushing reasoning into the physical world through open autonomous tools. And small agent models are now clicking through the web with enough competence to replace whole categories of routine work.
In today’s Generative AI Newsletter:
• MIT and Hugging Face reveal China’s rising dominance in open model adoption
• Apple reshapes its AI leadership as Siri’s overhaul slips again
• NVIDIA releases Alpamayo R1 to advance reasoning for autonomous systems
• Microsoft launches Fara 7B for real browser and workflow automation
Latest Developments

A paper, “Economies of Open Intelligence,” from MIT and Hugging Face found that Chinese open models now account for 17% of global downloads, edging out the 15.8% share held by US models. In 2025, downloads of open-weight models also surpassed fully open-source ones. The study suggests the open-model ecosystem’s center of gravity is shifting toward China while the US stays preoccupied with regulating new technology and protecting immediate profits.
Here’s how this change unfolded:
Strategy: US giants keep frontier systems behind paid APIs, while Chinese labs ship open weights tuned for cheaper hardware.
Provenance: DeepSeek and Qwen power many of these models, but their origins are masked by community repos that quantize and rebrand the work.
Cost: DeepSeek’s reported $6 million training bill hints at extreme efficiency and undercuts labs burning hundreds of millions.
Reach: In parts of Africa and the Global South, the free and fast model becomes the primary teacher.
This shift amounts to an infrastructure swap. While US giants protect closed, high-margin systems, Chinese models are becoming the defaults that students, enthusiasts, and small businesses reach for first. China is wiring itself into the back rooms of dev stacks, ports, factories, and small startups, which raises the question of whose views and values ship with the defaults people choose.
Special highlight from our network
Planning isn’t the problem. Timing is.
By the time a strategy’s locked in, the world’s already moved on.
Join GigaSpaces on December 9 for a live eRAG session.
See how teams are syncing GenAI with live data and crowdsourced forecasts to plan in real time.
Get the tools to react faster, decide smarter, and stay ahead.
📅 LinkedIn Live · Dec 9 · (10 AM ET)

John Giannandrea, Apple’s AI chief and an ex-Google hire, is stepping down and moving into an advisory role until spring 2026. Apple has hired Microsoft’s Amar Subramanya as its new VP of AI. Apple calls it a new chapter, but the timing raises questions: it comes right after Siri’s big upgrade was pushed to 2026.
Here’s how it lines up:
What he built: Giannandrea upgraded Apple’s AI, but Siri’s revamp slipped to 2026 after internal testing reportedly reached only about 80% success.
First crack: Earlier this year, Tim Cook reportedly lost confidence, handing Siri to Vision Pro lead Mike Rockwell.
New bet: Subramanya brings Gemini and Microsoft experience to run Apple’s models and AI safety.
Leak: This follows months of Apple searching for a replacement and losing AI talent to rivals while pushing privacy-first, on-device systems.
Apple’s leadership shuffle reflects its bid to improve its AI capabilities without abandoning its slower, privacy-first, on-device approach, even as rivals sprint ahead with giant cloud models and rising public expectations.

NVIDIA used the NeurIPS stage to release a new set of open models and tools built for robots and self-driving systems that need to navigate the real world with something closer to human judgment. The centerpiece is Alpamayo R1, a reasoning vision language model designed specifically for autonomous driving research and built on top of the company’s Cosmos Reason family.
What NVIDIA released:
A driving-focused vision-language-action (VLA) model: Alpamayo R1 combines images and text to interpret road scenes and make stepwise decisions, bringing reasoning to tasks that usually rely on raw perception.
A path toward Level 4 autonomy: The model tackles the “common sense” layer of driving that current systems still struggle with, helping vehicles handle ambiguous moments that require judgment.
Full open access for researchers: The model is live on GitHub and Hugging Face, opening the door for labs working on next-generation autonomous systems.
A full Cosmos Cookbook: NVIDIA added data curation guides, synthetic data workflows, and evaluation resources so teams can tailor Cosmos models to their own robotics or AV projects.
NVIDIA is betting that the next leap in AI will happen off-screen, in machines that move through physical environments. The company has been saying this for more than a year, and the release of Alpamayo R1 gives researchers a clearer look at the kind of reasoning engines that could sit inside future self-driving cars and autonomous robots.

Fara-7B is Microsoft’s small model designed for computer tasks. It interacts with your browser, processes information on the screen, and performs actions like clicking, typing, scrolling, and opening URLs at your command. Instead of walking through every form or dashboard yourself, you give it a clear goal and it handles the steps.
The core functions and how to use them:
• Form workflows: Open a site, log in, and fill out forms with your details.
• Research runs: Browse a few links and summarize what matters.
• Price checks: Compare two or three products and drop their prices and key specs into a note.
• Product UX checks: Walk through signup or checkout and flag broken steps.
• Daily browser errands: Grab one number, check tracking, or confirm a booking for you.
Try this yourself:
Set up a Fara-7B computer-use demo. Ask it to 'Open Amazon, search for a wireless mouse under $30, select one with at least 1,000 reviews, and paste the name and price into a designated document or text area.' If that works, swap Amazon for your own signup flow or a simple competitor price check.
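Under the hood, computer-use agents like Fara-7B run a loop: the model proposes one action at a time (click, type, open a URL), a harness executes it in the browser, and a fresh screenshot goes back to the model. The sketch below illustrates that dispatch step with a hypothetical JSON action schema and a stubbed executor; Fara-7B’s actual output format and tool names may differ.

```python
import json

def dispatch(action: dict) -> str:
    """Translate one agent-proposed action into a harness call (stubbed here).

    In a real setup each branch would drive a browser automation library
    instead of returning a description string.
    """
    kind = action.get("action")
    if kind == "open_url":
        return f"navigate to {action['url']}"
    if kind == "click":
        return f"click at ({action['x']}, {action['y']})"
    if kind == "type":
        return f"type {action['text']!r}"
    if kind == "scroll":
        return f"scroll by {action['dy']} px"
    return "no-op"  # ignore anything the harness doesn't recognize

# A short, hand-written trace of the price-check task above,
# standing in for what the model would emit step by step.
trace = [
    '{"action": "open_url", "url": "https://www.amazon.com"}',
    '{"action": "type", "text": "wireless mouse"}',
    '{"action": "click", "x": 512, "y": 88}',
]

for step in trace:
    print(dispatch(json.loads(step)))
```

The point of the loop design is that the model never touches the browser directly: every proposed action passes through the harness, which is where you would add guardrails such as domain allowlists or confirmation prompts before sensitive clicks.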
Special highlight from our network
Sinapsis is the only all-in-one AI platform built for real-world use. From data ingestion to model training and deployment, everything runs in one drag-and-drop interface.
- Use pre-built templates or start with full agents.
- Connect your models and logic with ease.
- Deploy to edge or cloud without custom builds.
Large-scale AI rollouts now take days instead of months.
Lights, Camera, AI!
Create your first AI-powered short film from start to finish.
Lights, Camera, AI - Intro to AI Powered Video Production is a live course where you’ll create short films using modern AI workflows, learning alongside a community of 13M+ learners.
What you’ll learn:
• Character building and scriptwriting
• Voiceover and animation generation
• Music composition and full video assembly
Explore AI video creation with Moses Ukoh in a three-day, beginner-friendly course for storytellers.
8-10 December 2025
10–11 AM Pacific Time
Seats are limited so sign up now!
