
Welcome back! The verdict on 2025 is in, and it is a paradox. Depending on who you ask, we just lived through the year of the "Architect" or the year of "Slop." While the culture war heats up, the platform wars are getting strange. A sci-fi dream of universal translation is finally hitting consumer hardware, and the world’s most dominant chipmaker is suddenly giving away open software to protect its moat. The industry is moving from "move fast" to "lock them in."
In today’s Generative AI Newsletter:
Merriam-Webster and Time split on whether AI is genius or junk.
Google brings real-time translation to Android headphones.
NVIDIA fights cheap Chinese AI with new open models.
Chunks finds the viral moments hidden in your long videos.
Latest Developments

Two institutions looked at artificial intelligence in 2025 and reached very different conclusions. Merriam-Webster named “slop” its word of the year, defining it as low-quality digital content produced at scale by AI. Time magazine crowned the “Architects of AI” as Person of the Year, spotlighting the executives and researchers who built the systems now embedded across media, work, and politics. Between them sits the real legacy of 2025: a technology that scaled faster than the stories we tell about who actually built it.
Here Is What Shaped the Year
Merriam-Webster: “Slop” captured public exhaustion with the deluge of AI-generated filler flooding feeds, art, and media.
Unprecedented Scale: Studies estimate that nearly 75% of new web content in 2025 was produced with AI tools.
The Pushback: Platforms like YouTube, Wikipedia, and Spotify implemented strict measures to curb the proliferation of low-quality synthetic material.
Time Magazine: The publication honored eight leaders as “Architects,” framing the technology as an unavoidable, transformative force.
The pairing tells a sharper story than either honor alone. AI reached the mainstream in 2025 through volume, visibility, and power. The same systems that generated disposable content also generated wealth, influence, and political presence. While Merriam-Webster heard the public mocking the mess, Time visualized the hierarchy. The cover inverted the original "Lunch Atop a Skyscraper" photo, which honored the workers holding a city aloft with muscle. Here, pioneer Fei-Fei Li was pushed to the margins while Mark Zuckerberg, despite publicly cooling on AI, got the visually dominant spot. Between them sits the unresolved question of the year: who gets remembered for what AI made, and who is left sorting through what it left behind?
Special highlight from our network
Join our fast, live, hands-on workshops to learn powerful AI tools and unlock exclusive discounts and free credits to boost your projects.
Thu, Dec 18 · 9–10 am PT
Creating Impactful Slide Decks With AI
Turn ideas into clean, high-craft decks using modern AI presentation tools.
Featuring: Google Slides, Nano Banana & Chronicle
Every session includes exclusive discounts and a giveaway of 100 free AI tool access passes.

Google has integrated Gemini 2.5 Flash Native Audio into two everyday tools: Search Live and Translate. The initial release, starting in the US, lets you speak a query and hear a spoken response. Meanwhile, Translate began experimenting with real-time speech translation through headphones on Android devices in the US, Mexico, and India. Google presents it as a more natural way to converse with AI, and users often forget they are talking to a machine within a minute of a search turning into a conversation.
Here's what we know so far:
Rollout: Search Live adds instant voice responses powered by Gemini 2.5 Flash Native Audio.
Beta: Translate tests live speech translation with headphones, with iOS planned later.
Evidence: UWM says its voice system processed 14,000+ loans after integrating the model.
Benchmarks: Google reports 71.5% on an audio function-calling benchmark with 90% instruction adherence.
Everyone wants the microphone because voice captures data and makes a product feel human. The benefits are clear: fewer menus, quicker responses, smoother navigation. The drawbacks are that always-on voice raises privacy concerns, and a friendly, human-like tone can hide bias and errors. Before voice becomes the standard interface, companies will need to tackle these problems to attract and keep users.
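The function-calling benchmark mentioned above measures whether a voice model can turn a spoken request into a structured call the app can execute. A minimal sketch of that dispatch pattern, where the tool names and the JSON format are illustrative assumptions, not Gemini's actual API:

```python
import json

# Hypothetical tool registry -- names and signatures are illustrative,
# not Gemini's real function-calling schema.
TOOLS = {
    "set_timer": lambda minutes: f"Timer set for {minutes} minutes",
    "translate": lambda text, target: f"[{target}] {text}",
}

def dispatch(model_output: str) -> str:
    """Parse a structured function call emitted by the model and run it.

    In a real voice assistant the model emits JSON such as
    {"name": "set_timer", "args": {"minutes": 5}}; the app executes the
    call and speaks the result back to the user.
    """
    call = json.loads(model_output)
    fn = TOOLS.get(call["name"])
    if fn is None:
        # Instruction adherence also means refusing unknown tools.
        return "Sorry, I can't do that."
    return fn(**call["args"])

print(dispatch('{"name": "set_timer", "args": {"minutes": 5}}'))
```

A benchmark like the one Google cites scores how often the model produces a call the dispatcher can execute correctly, which is why the adherence number matters as much as the accuracy number.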

NVIDIA has launched Nemotron 3, a new open model family that it claims outperforms China's low-cost AI models, and acquired SchedMD, the company behind Slurm, the scheduler that runs many of the world's biggest GPU clusters and keeps them from melting down. NVIDIA calls it open innovation, but if one company sets both the model roadmap and the cluster rules, it gains leverage over how teams build and run AI.
Here's what comparing the announcements shows:
Model: The Nemotron 3 Nano model has ~30B parameters, 3B active per token, and a 1M-token context window.
Roadmap: 2026 will bring Nemotron Super and Ultra, with 10B and 50B parameters.
Cost: NVIDIA claims 4x higher efficiency and 60% fewer reasoning tokens.
Control: NVIDIA says Slurm stays open source. Slurm runs more than half of the TOP500 systems.
This looks like the new open-model playbook, similar to Meta's Llama and China's fast-moving open releases. The upside for teams is cheaper, auditable models they can customize instead of betting everything on one closed API. The risk is that one vendor controlling the model, the optimization stack, and the scheduler can steer you into a single ecosystem. The long-term success of this strategy will depend on how companies balance the benefits of open models against the risks of overreliance on a single provider.
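The "~30B parameters, 3B active per token" spec describes a mixture-of-experts design: a router picks a small subset of expert networks for each token, so only a fraction of the weights do work at any moment. A toy sketch of top-k routing, purely illustrative of the general technique and not NVIDIA's implementation (sizes and k are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
N_EXPERTS, TOP_K, DIM = 10, 1, 16  # 1 of 10 experts active per token,
                                   # roughly the "3B of 30B" ratio

# Each "expert" is a small feed-forward weight matrix.
experts = [rng.standard_normal((DIM, DIM)) for _ in range(N_EXPERTS)]
router_w = rng.standard_normal((DIM, N_EXPERTS))  # scores experts per token

def moe_layer(token: np.ndarray) -> np.ndarray:
    """Route a token through only its top-k experts.

    Total parameters: N_EXPERTS weight matrices. Active per token: TOP_K.
    This is how a 30B-parameter model can cost ~3B parameters per token.
    """
    scores = token @ router_w                     # one score per expert
    top = np.argsort(scores)[-TOP_K:]             # indices of chosen experts
    weights = np.exp(scores[top])
    gates = weights / weights.sum()               # softmax over chosen experts
    return sum(g * (token @ experts[i]) for g, i in zip(gates, top))

out = moe_layer(rng.standard_normal(DIM))
print(out.shape)  # (16,)
```

The efficiency claims in the list above follow from this structure: inference cost scales with the active parameters, not the total, which is why sparse models can undercut dense ones of the same headline size.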

Chunks is built for anyone sitting on hours of raw footage and no patience to scrub timelines. You upload your videos, describe what you want in plain language, and Chunks finds the moments worth sharing. It is less about fancy editing and more about speed, clarity, and not wasting good footage.
Instead of hunting for highlights, you tell it what you are looking for and let the system surface it.
Core functions (and how to use them):
• Automatic moment detection: Upload raw footage and let Chunks scan for faces, speech, reactions, and high energy moments. Use this when you do not know where the good parts are yet.
• Search inside videos: Type something like “when the speaker laughs” or “key takeaway about pricing” and jump straight to the exact timestamp across all files.
• Prompt based shorts: Describe the clip you want such as “make a 30 second highlight about the product launch” and get an instant first cut.
• Facial detection and naming: Label people once and quickly find every moment they appear. Useful for interviews, panels, and podcasts.
• Instant exports: Generate social ready shorts without watermarks, then tweak or export immediately.
Try this yourself:
Upload one long video. Ask Chunks to find three moments where something important or emotional happens. Generate a short from one of them. Even if you do not post it, you will see how much usable content was hiding in plain sight.
ToneUp centralizes your workflow: strategy, drafts, posts, voice matching, visuals, and editing, all inside one workspace designed for speed and consistency.
What once powered our own content engine is becoming a product for everyone ready to scale their presence.
Support GenAI Works as we introduce ToneUp to the world.
Reg CF offering via DealMaker Securities. Investing is risky.





