
Welcome back! Today, we're looking at how AI is becoming the invisible helper in our lives: your usual voice assistant is moving to your browser, a chatbot is acting as an overnight medical guide, a car is now explaining its driving choices, and a new report shows you what people's AI helpers are saying about your company. Essentially, software isn't just answering questions anymore; it's actively helping us make decisions, whether we're stuck in traffic, in a hospital, or just searching online.
In today’s Generative AI Newsletter:
Amazon brings Alexa+ assistant into web chat.
OpenAI logs 40M daily health advice chats.
NVIDIA demos Alpamayo driving kit with explanations.
Azoma tracks how AI mentions your brand.
Latest Developments

Amazon has launched Alexa.com, a browser-based interface that brings its new Alexa+ assistant to the web. Until now, Alexa largely lived on speakers, displays, and phones. This move puts Amazon directly into the same conversational space as ChatGPT, Gemini, Claude, and Grok. It also signals that Amazon no longer sees voice alone as the future of assistants. The company is repositioning Alexa as something people type to, not just talk to.
How has Alexa changed with this rollout?
Web Access: Alexa+ is now available through Alexa.com for Early Access users.
Agent Partnerships: Expedia, Yelp, Angi, and Square join Uber and OpenTable for bookings and services.
Usage Growth: Amazon says shopping and cooking activity is up 3 to 5 times since Alexa+ launched.
App Redesign: The Alexa mobile app is shifting to a chatbot first layout.
Amazon has invested billions in Anthropic, the maker of Claude, while now pushing Alexa into a similar conversational role. That leaves Amazon straddling two assistants without a clear boundary. Still, Alexa has something most chatbots do not: it is already embedded in daily routines. Alexa’s advantage is reach, not novelty. Whether that reach translates into sustained use outside the home remains an open question.
Special highlight from our network
GenAI Academy presents Vibe Sessions: live, practical workshops led by Samuel Cummings. They're beginner-friendly and focused on real results.
Jan 12: Design Vibes (image and video)
Jan 14: Biz Vibes (business apps)
Jan 16: Web Vibes (web apps)
Join one or all.
Live from 9 to 10 AM Pacific.

OpenAI says more than 40M people now use ChatGPT every day for health-related questions, but the company’s latest report makes one thing clear: usage is no longer the bottleneck. Regulation is. Health topics now account for over 5% of all ChatGPT messages, and OpenAI is using that scale to argue that regulators need to move faster if AI is going to play a formal role in healthcare rather than remain in a legal gray zone.
What people are actually asking:
Symptom Checks: Many users ask about physical signs before deciding whether to seek care.
Medical Language: ChatGPT is used to translate diagnoses and test results into plain terms.
Insurance Help: Around 1.6 to 1.9M weekly questions focus on billing disputes and plan comparisons.
Access Gaps: About 600K weekly messages come from rural areas far from hospitals.
About 70% of health chats happen outside clinic hours. OpenAI is urging the Food and Drug Administration to create clearer approval paths for AI medical tools, arguing that current rules were not built for systems that update continuously. OpenAI acknowledges the risk, noting that errors in healthcare can cause real harm and that past incidents show chatbots can give unsafe advice. The open question is whether this push is about patient safety or about securing regulatory cover for tools that are already deeply embedded.

During CES 2026, NVIDIA showcased a Mercedes CLA on San Francisco's public roads in a 40-minute demonstration, using it as the centerpiece of a larger presentation. NVIDIA has introduced Alpamayo, an openly designed self-driving kit built around a model that generates driving actions along with reasoning traces, letting teams understand the rationale behind the car's decisions. That matters for debugging: a single problematic scenario can jeopardize a launch, and failures that can be explained are far easier to defend.
Here is what the demo reveals:
Model: Alpamayo 1 is a 10B parameter Vision Language Action model (VLA).
Data: 1,727 hours across 25 countries and 2,500+ cities, split into 310,895 clips of 20 seconds each.
Hardware: DRIVE Hyperion uses two AGX Thor chips and claims 2,000+ FP4 TFLOPS for 360-degree sensor fusion.
Customer: Mercedes targets U.S. “Level 2++” production in 2026 with bigger autonomy goals teased for 2027.
NVIDIA's strategy revolves around selling tools and training data rather than offering a robotaxi service, pushing the industry toward systems that are debuggable and testable. Yet a system that explains itself may confidently offer inaccurate explanations. The situation recalls Tesla's Full Self-Driving (FSD) software, which has produced impressive demonstrations over the years but has been slow to earn broad trust. If Alpamayo proves reliable, NVIDIA is likely to become the top supplier for autonomy teams that need verification.

Azoma is a tool that tracks how frequently AI assistants mention your brand, what those mentions say, and which sources they cite. You can use it to spot instances where competitors are mentioned more often than your brand, then close the gap by updating your content with FAQs, comparisons, and listing information.
Core functions (and how to use them):
AI share of voice: Compare how often ChatGPT, Gemini, Perplexity, and shopping assistants mention your brand versus 2–3 competitors for a typical customer inquiry.
Citation tracking: Find out which pages AI uses as “proof” to target the sources that shape category answers.
Gap detection: Find prompts where your brand is missing and fill them with content clarifications or price comparison tables.
Content outputs: Create useful resources you can ship today: product-page FAQs, “X vs Y” comparisons, recipe/how-to pages, and category titles for listings.
Measure after changes: Try the same prompt set after edits to see if mentions, wording and source citations improved.
Try this yourself:
Write 8–10 buyer questions about one product (budget, calories, “vs” comparisons, “best for” use cases).
Check your AI visibility view to see where you're missing and what source the AI used.
Make one change today: add a tight FAQ section on your product page covering the five most common queries (pricing, ingredients, who it's for, how it compares to a known brand, and “is it good for X?”).
Review the same questions in 3–7 days for improvements in mentions and phrasing.
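If you want a rough before/after measure without any tooling, the mention-counting step above can be sketched in a few lines of Python. This is a minimal illustration, not Azoma's method: it assumes you have already collected one assistant response per buyer question (for example, by pasting each prompt into ChatGPT or Perplexity and saving the answers), and the brand names and responses below are made up.

```python
# Hypothetical share-of-voice sketch: given saved assistant responses,
# count what fraction of them mention each brand (case-insensitive).
# No real Azoma API is used here; responses are collected manually.

def mention_share(responses, brands):
    """Return {brand: fraction of responses mentioning it}."""
    counts = {b: 0 for b in brands}
    for text in responses:
        low = text.lower()
        for b in brands:
            if b.lower() in low:
                counts[b] += 1
    total = len(responses)
    return {b: counts[b] / total for b in brands}

# Example with invented responses for two invented brands.
responses = [
    "For budget hiking boots, TrailCo and PeakWear both rate well.",
    "PeakWear's model is lighter, which matters for day hikes.",
    "Most reviewers recommend PeakWear for beginners.",
    "TrailCo offers better ankle support at this price.",
]
share = mention_share(responses, ["TrailCo", "PeakWear"])
print(share)  # {'TrailCo': 0.5, 'PeakWear': 0.75}
```

Run the same prompt set again after your content edits and compare the two dictionaries; a rising fraction for your brand is the signal the steps above are after.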




