
Welcome back! The assistant war has officially entered your inbox. One tech giant is betting you will trade your private data for a perfect morning briefing, while a competitor is trying to prove its models can survive a PhD program. But the users might be ahead of the labs. New data suggests people are done using AI for digital chores and are quietly moving to heavy cognitive lifting. The novelty is fading, and the real work is beginning.
In today’s Generative AI Newsletter:
Google launches a morning agent built on your personal files.
OpenAI creates a benchmark to see if AI can replace scientists.
Perplexity finds users want thinking partners, not travel agents.
Meta's SAM Audio isolates specific sounds from a plain-language description.
Latest Developments

Google has launched CC, a new AI assistant designed to deliver a personalized morning briefing built from your emails, calendar, files, and the wider web. The tool sends users a daily summary called “Your Day Ahead,” pulling meetings, priorities, and suggested actions into a single digest. CC is built on Google’s Gemini model and is rolling out in early access to paid users over eighteen in the United States and Canada, with a waitlist opening more broadly.
What Google Launched:
• Personal Data Access: CC connects to Gmail, Google Calendar, and Google Drive to shape daily briefings.
• Daily Briefing: The assistant delivers a morning summary of appointments, tasks, and key updates.
• User Training: People can reply to CC or email it directly to teach preferences and memory.
• Product Context: CC joins Google’s growing lineup of agents for coding, shopping, and browsing in Chrome.
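To make the briefing pattern concrete, here is a minimal sketch of the idea, not Google's implementation: assembling a "Your Day Ahead" digest from already-fetched calendar and inbox items. The data structures and the `your_day_ahead` function are illustrative assumptions; a real assistant would pull this data from the Calendar and Gmail APIs.

```python
from dataclasses import dataclass

@dataclass
class Event:
    start: str   # e.g. "09:00"
    title: str

@dataclass
class Task:
    title: str
    priority: int  # lower number = higher priority

def your_day_ahead(events: list[Event], tasks: list[Task]) -> str:
    # Hypothetical digest builder: sort meetings by start time,
    # surface the top three priorities, and render one plain-text summary.
    lines = ["Your Day Ahead"]
    for ev in sorted(events, key=lambda e: e.start):
        lines.append(f"  {ev.start}  {ev.title}")
    lines.append("Top priorities:")
    for t in sorted(tasks, key=lambda t: t.priority)[:3]:
        lines.append(f"  - {t.title}")
    return "\n".join(lines)

digest = your_day_ahead(
    [Event("14:00", "1:1 with Sam"), Event("09:30", "Design review")],
    [Task("Reply to legal", 1), Task("Book flights", 2)],
)
print(digest)
```

The interesting product work is obviously in the data access and ranking, not the rendering; the sketch only shows the shape of the output CC is competing to own.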
The ambition behind CC lies in its timing. Mornings are one of the last uncontested rituals on the internet, ruled by inboxes, feeds, and half-focused scrolling. Google is proposing a cleaner alternative: one voice that speaks first and frames the day. The idea is efficient, and it is also highly invasive. An assistant that knows your schedule also learns your priorities, your stress points, and your habits of attention. OpenAI made a similar bet with ChatGPT Pulse, which Sam Altman called his favorite launch.
Special highlight from our network
Join our live, hands-on workshops to explore powerful AI tools and get exclusive perks to boost your next project.
Coming up:
Thursday, Dec 18 · 9–10 am PT
Creating impactful slide decks with AI
Learn how to turn rough ideas into polished decks using modern AI tools.
Featured tools: Google Slides, Nano Banana, Chronicle
Every session includes:
Exclusive tool discounts
100 free AI access passes (giveaway)

OpenAI just launched FrontierScience, a benchmark designed to see whether neural networks can graduate from basic assistants to genuine laboratory collaborators. Tech leaders like Sam Altman and Dario Amodei have long promised that AI will eventually function as a country of geniuses in a data center, but proving that capability requires more than polished prose. The new evaluation uses complex problems in physics, chemistry, and biology to pinpoint where reasoning ends and hallucination begins. The goal is not to answer simple questions but to reproduce the demanding, weeks-long mathematical work that characterizes contemporary research.
The State of the Lab:
• Performance Metrics: The new GPT-5.2 model scored 77.1% on student-level Olympiad questions but dropped to 25.3% on professional research tasks.
• Reasoning Speed: Experts confirmed the model solved plasma wave derivations in minutes that typically require three weeks of manual human calculation.
• Evaluation Scarcity: Finding specialized experts to grade these models has become so difficult that OpenAI now leans on a $10 billion data annotation industry.
• Academic Friction: Journals report a doubling of submissions as the ease of LLM generation creates a flood of unreliable synthetic papers.
The gap between a brilliant calculator and a true scientist remains defined by physical intuition and the ability to design real-world experiments. While tools like AlphaFold have already cataloged millions of protein structures, general-purpose models still struggle with the electronic properties and visual data that define a laboratory. We are entering an era where AI can solve a math problem that stumped professors for years, yet still lacks the judgment to know if the result actually matters. Science is finally moving at the speed of silicon, but the burden of proof still rests entirely on human shoulders.

A new study by Harvard researchers and Perplexity has upended the popular "digital concierge" narrative. While companies often pitch AI agents as tools for booking flights or ordering groceries, the analysis of hundreds of millions of interactions in Perplexity’s Comet browser shows a decisive shift toward high-level cognitive work. The study reveals that users are increasingly viewing AI as a thinking partner for complex research rather than a simple automation bot.
Key Findings from the Field:
• Cognitive Dominance: Over 57% of all agentic queries focused on Productivity & Workflow or Learning & Research, far outpacing personal lifestyle automation.
• The Knowledge Class: Adoption is driven by "high-value" professionals: digital technologists (30% of volume), finance experts, and academics lead the usage.
• Industry Specificity: Users leverage agents to solve niche friction points; finance pros dedicate 47% of tasks to productivity, while students focus 43% on research.
• The Capability Trap: As queries become more specialized, the cost and difficulty of verifying AI "experts" have skyrocketed, fueling a $10 billion annotation industry.
The data highlights a "gravitational pull" toward complexity: users typically start with low-stakes trivia or travel planning but quickly migrate toward debugging code or synthesizing financial reports. This mirrors the early evolution of the PC, which transitioned from a hobbyist's game machine to an essential engine for spreadsheets and word processing. We are seeing the birth of a hybrid intelligence economy where the most expensive human assets are the ones most likely to offload their heavy lifting to silicon.

Most audio tools assume you think in waveforms and sliders. You usually do not. You think in moments: the bark that ruined a sentence, a melody lost beneath the applause, the telltale mic tap you only caught on export.
SAM Audio starts from that reality. It lets you isolate sounds by describing them, pointing at them, or marking when they happen. No audio engineering mindset required.
Core functions (and how to use them):
• Text-based isolation: Type what you want to remove or extract, like “dog barking,” “traffic,” or “clapping.” SAM Audio scans the entire clip and separates that sound. This is the fastest way to clean recordings when the problem is obvious.
• Visual selection: If your audio comes from video, click the person or object making the sound. SAM Audio isolates only the audio tied to that visual source. This is especially useful for interviews, panels, and live events.
• Time-span control: Highlight the exact moment a sound occurs and isolate just that segment. Ideal for removing coughs, mic bumps, or brief distractions without touching the rest of the clip.
• Selective cleanup: Use combinations of prompts to reduce noise, extract instruments, or separate dialogue from background ambience.
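At its simplest, time-span control reduces to index math on the waveform. Here is a minimal plain-Python sketch of that idea, not SAM Audio's actual API (Meta has not published one in this form): silencing a marked segment while leaving the rest of the clip untouched. The `mute_span` name and list-of-samples representation are illustrative assumptions.

```python
def mute_span(samples, sample_rate, start_sec, end_sec):
    """Return a copy of `samples` with the span [start_sec, end_sec) silenced.

    `samples` is a list of amplitude values; a real pipeline would operate
    on a NumPy array decoded from a WAV or video file.
    """
    start = int(start_sec * sample_rate)
    end = int(end_sec * sample_rate)
    out = list(samples)  # copy so the original clip is preserved
    for i in range(max(0, start), min(len(out), end)):
        out[i] = 0
    return out

# One second of fake audio at 8 samples/sec; mute the middle 0.25 s.
clip = [1, 1, 1, 1, 1, 1, 1, 1]
cleaned = mute_span(clip, sample_rate=8, start_sec=0.25, end_sec=0.5)
print(cleaned)  # [1, 1, 0, 0, 1, 1, 1, 1]
```

SAM Audio's value is that it finds the span and the source for you; the mechanical edit itself is the easy part, as the sketch shows.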
Try this yourself:
Upload a messy recording. First isolate the unwanted sound. Then isolate the main voice. Hearing the layers separated makes it clear why this approach feels different from traditional audio editing.




