
Welcome back! Sam Altman is officially pushing back on claims about the environmental cost of AI by comparing the energy used by data centers to human biology. It is a wild rhetorical move that changes how we talk about infrastructure.
Meanwhile, Google is delivering a harsh reality check to founders. The company is stating clearly that the era of the simple AI wrapper is completely over, and the market is out of patience for borrowed intelligence. We also have details on Apple's plan to put cameras in your ears and NVIDIA's new way to teach robots about physics.
In today’s Generative AI Newsletter:
Sam Altman compares AI energy consumption to human upbringing.
Google warns founders that thin LLM wrappers are losing investment.
Apple develops camera-equipped AirPods and smart glasses for Siri.
NVIDIA releases DreamDojo to simulate robot physics using video.
Latest Developments
Sam Altman: AI Energy Use Is a Bargain Compared to Raising a Human

Speaking at the India AI Impact Summit in New Delhi this week, Sam Altman aggressively pushed back against what he called "totally fake" claims regarding the environmental toll of ChatGPT. He specifically targeted viral reports suggesting a single query consumes 17 gallons of water or the equivalent of 1.5 iPhone battery charges. Altman argued that while AI's total global footprint is a legitimate concern, the individual metrics circulating online are based on outdated data center cooling methods that OpenAI no longer uses.
Altman’s logic:
Cooling myth: Altman labeled claims that ChatGPT uses 17 gallons of water per query as a fake narrative based on outdated evaporative systems.
Caloric comparison: The OpenAI chief argued that raising a child for 20 years consumes more total energy than the one-time training of a model.
Grid demand: Total power consumption remains a legitimate worry that requires the world to move toward nuclear or solar solutions immediately.
Terrestrial focus: He dismissed the idea of putting data centers in space as a ridiculous distraction from fixing hardware on the ground.
This rhetorical move pushes the discussion from corporate responsibility toward a strange new biological balance sheet. By treating human upbringing as an inefficient training run, Altman is attempting to justify the massive infrastructure costs of modern models as a natural bargain. The conversation has moved beyond simple water usage into the territory of planetary accounting. We are now witnessing the moment where the 100 billion people who lived before us are rebranded as the R&D department for a more efficient spreadsheet.
Special highlight from our network
By 2026, the question won’t be “should we use AI?” It’ll be “why are we still slower than everyone else?”
On Feb 25 at 7 PM GMT, join Dimi (Brizy) and Vito (Atarim) for a 45-minute live fireside chat on how SaaS and agency teams are actually reinventing themselves with AI.
• When to build AI tools and when not to
• How creative teams are using AI to move faster
• Common mistakes when “adding AI” to workflows
Google VP: LLM Wrappers And AI Aggregators May Not Survive

Darren Mowry, the lead for Google’s global startup group, delivered a blunt message to founders. He warned that the market is losing patience with companies that provide nothing more than a cosmetic layer over existing models. As the initial craze for generative tools settles into a more mature phase, the check engine light is flashing for businesses built on borrowed intelligence. Google is now distancing itself from the flood of thin wrappers that populated the landscape during the 2024 boom.
Google identifies dying business models
Wrapper Fatigue: Startups that white-label models like Gemini or GPT-5 without adding unique intellectual property are failing to attract long-term investment.
Aggregator Risks: The business of routing queries across multiple models is losing ground as users demand internal routing logic rather than simple access.
Infrastructure Mirror: Mowry compared the current market to the early cloud era when resellers were crushed as Amazon built its own management tools.
Vertical Moats: Success is now limited to companies like Cursor or Harvey that build deep features for specific professions like coding or law.
The era of the "AI for everything" startup is giving way to a period of brutal vertical specialization. Founders can no longer rely on the novelty of a chat interface to secure $10M in funding or maintain a growing customer base. Google is moving its focus toward biotech and climate tech, where massive datasets provide the kind of natural defense a simple API call cannot match. The platform owners are reclaiming the value they originally outsourced to the first wave of experimental developers.
Apple Visual Intelligence: AirPods, AI Pin, and Smart Glasses

Apple is reportedly developing several "eyes-on" devices to feed visual data directly into Apple Intelligence. This builds on computer vision research previously utilized for the Apple Car and Vision Pro projects. Visual Intelligence aims to make human-computer interaction more intuitive. Bloomberg's Mark Gurman suggests applications could range from identifying ingredients on a dinner plate to landmark-based navigation, where Siri provides directions like "turn left at the red signpost" instead of just using distance measurements.
Key Devices in Development:
AirPods with Cameras: Expected later this year, these earbuds will reportedly feature low-resolution or infrared cameras designed for environmental awareness rather than photography.
Smart Glasses: Targeted for a late 2026 release, the initial version of Apple Glass will likely focus on audio and camera capabilities before evolving into a full augmented reality (AR) platform.
AI Pin or Pendant: Rumored as a wearable clip or necklace, this device would use a camera to help users interact with their physical surroundings via Siri.
Apple's push into visual wearables marks a move toward "pervasive AI," where the company's 2.5-billion-strong device base becomes a massive distributed sensory network. By bringing visual models on-device, Apple can pitch these "creepy" cameras as privacy-first alternatives to cloud-based solutions, even as they gather more intimate data about a user's physical world than ever before. The transition from external services like OpenAI to in-house tech is a classic Apple move to own the full stack.
DreamDojo: NVIDIA’s Open Source World Model for Robotics

DreamDojo is an open-source robotics world model from NVIDIA’s GEAR Team. It is designed as a new type of simulator that predicts physical interactions using AI instead of a traditional physics engine. Rather than relying on 3D meshes or hand-written rules, the model learns how the world behaves by watching video.
DreamDojo was pre-trained on 44,000 hours of first-person human video.
Core functions (and how to use them):
Video-based simulation: Input an image and robot motor commands, and the model generates future frames showing what would likely happen next.
Robot-specific fine-tuning: Use released checkpoints or fine-tune the model to match a specific robot’s control system.
Real time interaction: Runs at roughly 10 frames per second, enabling live teleoperation in a simulated environment.
Policy evaluation: Test a robot’s decision making system thousands of times without risking hardware damage.
Model-based planning: Simulate multiple possible futures and select the safest or most effective outcome.
Try this yourself:
Clone the repository from NVIDIA’s GitHub, download the 2B or 14B checkpoints from Hugging Face, and run inference on a supported robot setup. Provide a scene and action command to generate a predicted video outcome.
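To make the "image plus motor commands in, predicted frames out" loop concrete, here is a minimal Python sketch of that interface combined with model-based planning. Everything here is a hypothetical stand-in, not NVIDIA's actual API: the class name, the `rollout` method, and the toy dynamics (which just shift pixels) are invented so the sketch stays self-contained and runnable.

```python
import numpy as np

class StubWorldModel:
    """Hypothetical stand-in for a video world model. Toy dynamics:
    each motor command (dx, dy) translates the scene by that many pixels.
    A real world model would run a learned video predictor instead."""

    def rollout(self, frame: np.ndarray, actions: list) -> list:
        """Given a starting image and a sequence of motor commands,
        return the predicted future frames, one per action."""
        frames = []
        current = frame
        for dx, dy in actions:
            current = np.roll(current, shift=(dy, dx), axis=(0, 1))
            frames.append(current)
        return frames

def plan(model, frame, candidate_plans, score):
    """Model-based planning: simulate every candidate action sequence
    and keep the one whose final predicted frame scores best."""
    return max(candidate_plans,
               key=lambda acts: score(model.rollout(frame, acts)[-1]))

# Usage: pick the command sequence that moves the bright pixel toward
# the center of an 8x8 scene.
scene = np.zeros((8, 8))
scene[0, 0] = 1.0
plans = [[(1, 1)] * 3, [(0, 1)] * 3]  # two candidate motor-command sequences

def center_score(f):
    y, x = np.unravel_index(np.argmax(f), f.shape)
    return -abs(y - 4) - abs(x - 4)  # higher is closer to center

best = plan(StubWorldModel(), scene, plans, center_score)
print(best)  # the diagonal plan ends nearer the center
```

The same pattern scales to the real thing: swap the stub for a learned checkpoint, and the planner evaluates thousands of rollouts without ever touching hardware.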




