
Welcome back! The ground under white-collar work is starting to tilt. AI is no longer nibbling at the edges. It is replacing the parts of early careers once treated as training grounds, the slow repetitive tasks that taught people how an industry works. New systems are now absorbing that work at speeds that feel structural rather than cyclical. Research cycles are collapsing, group conversations are absorbing AI as a participant, and even grief is becoming something people can automate.
In today’s Generative AI Newsletter:
• Anthropic warns early-career roles face rapid displacement
• Edison Scientific debuts Kosmos for end-to-end research automation
• OpenAI pilots group conversations with ChatGPT
• 2wai launches grief avatars that spark ethical backlash
• Google DeepMind upgrades forecasting with WeatherNext 2
Latest Developments

Anthropic CEO Dario Amodei is once again warning that AI could remove a huge layer of white-collar work within the next five years. He told CBS News that entry-level consultants, paralegals, financial analysts, and similar roles could face 10 to 20 percent unemployment as models grow sharper and companies adopt them faster than people can adapt. His worry is not abstract. At Anthropic, Claude already produces most of the company's internal code.
What Dario is saying:
Up to 50 percent of junior roles are at risk as AI handles tasks that once taught people how to enter a field.
Ten to twenty percent unemployment could appear within one to five years if adoption continues on its current path.
Ninety percent of Anthropic’s internal code is generated by Claude, showing how fast the shift is happening.
Professionals in law, consulting, and finance are highlighted as the first groups likely to feel the impact.
The warning lands awkwardly in a year when tech leaders trade optimism and anxiety like weather reports. Amodei says the danger is speed, not mystery, and his comments revive an uncomfortable idea. If AI takes the ladder of early-career work, the next generation will climb without rungs. Whether this becomes a correction or a crisis depends on how seriously policymakers and companies treat the transition, because the clock he is describing moves very quickly.

A new system from Edison Scientific is pushing automated research into unfamiliar territory by coordinating full scientific workflows end to end. Kosmos reads literature, analyzes datasets, tests hypotheses, and documents every step with full traceability. Early testers report that work requiring months of reading and coding can finish in roughly twelve hours, with every claim anchored to either a source paper or a runnable notebook.
What it can do
Runs full research cycles by processing about fifteen hundred papers and tens of thousands of lines of code in one session
Shows its work through citation links that trace every conclusion to source material or executed code
Replicates unpublished results while generating new findings across neuroscience, materials science, and other fields
Enters commercial rollout as Edison Scientific moves to serve pharma and other high-analysis industries
Kosmos raises a real question about the future of scientific pace. It hands researchers a system that stays coherent across long investigations, keeps a perfect memory, and works at a speed that blurs the usual timelines for discovery. It challenges old assumptions about what a single team can attempt, and hints at a world where scientific ambition expands faster than human bandwidth would normally allow.

OpenAI is testing a new feature that lets people bring ChatGPT directly into shared conversations. The pilot is live in Japan, New Zealand, South Korea, and Taiwan, and it allows up to twenty people to work together in a single thread, with ChatGPT joining in when asked. The update also addresses the long-running punctuation complaint by making the model stop using em dashes when told to.
What’s rolling out:
Group conversations: Users can add friends or coworkers into a shared chat where ChatGPT responds only when mentioned.
Full tool access: Search, image upload, file upload, code help, and reactions all work inside group threads.
Separate spaces: Group chats do not pull from personal memories and do not add new ones.
Custom behavior: Each group can set its own tone instructions and manage who joins, leaves, or shares the invite link.
ChatGPT entering group chats marks a shift in how people coordinate online. Conversations now include a participant that can plan trips, resolve debates, or manage information without slowing the thread down. The pilot will show how groups handle an always-ready extra mind, and whether shared spaces start to evolve once an AI can join in as naturally as anyone else.

A new AI app called 2wai, launched by Disney Channel actor Calum Worthy, promises to let people talk with digital replicas of deceased relatives. A promo video showed an AI grandmother visiting her family across decades without aging a day, which lit X on fire with reactions that ranged from horror to full panic. The app is real, the avatars talk, and the internet is wondering if grief tech has finally gone too far.
Here is what the app does:
Generates “HoloAvatars” using a short video clip to create a talking digital likeness of a late family member.
Runs a free public beta on iOS while preparing a paid subscription model and an upcoming Android release.
Builds a “living archive” in Worthy’s words, aiming to preserve stories and personalities in interactive form.
Faces heavy backlash after users called it unsettling, exploitative, and emotionally unsafe for people grieving.
In early reactions, many compared the demo to Black Mirror, and others questioned what it means to recreate a relative without consent. The idea has a sci-fi charm until you imagine your dead grandmother showing up in your phone to weigh in on your life choices. Critics called the concept a grief trap dressed up as comfort. The technology is new, the ethics are ancient, and the line between remembrance and puppetry is about to get very blurry.

WeatherNext 2 is Google DeepMind’s newest AI weather model built with Google Research. It produces global forecasts in under one minute on a single TPU and delivers higher accuracy across nearly every weather variable. The model powers Google’s weather results in Search, Gemini, Pixel Weather, and Google Maps Platform’s Weather API.
Core functions:
• High accuracy: Predicts wind, precipitation, pressure, and extreme weather with improved reliability across 0 to 15 day lead times.
• Fast forecasting: Generates global predictions about eight times faster than its predecessor, making it possible to simulate hundreds of scenarios from a single data input.
• Fine resolution: Produces detailed, hour-by-hour forecasts four times per day for more precise planning.
• Broad availability: Accessible through Google Cloud Vertex AI, BigQuery, and Earth Engine for custom modeling and geospatial work.
Where it helps:
• Monitoring storm formation and intensity for early preparation.
• Running rapid scenario testing for rare or low-probability events.
• Supporting energy planning, logistics, and climate research.
• Improving local forecasts through Google’s core weather features.
Try this yourself:
Use WeatherNext 2 through Vertex AI or Weather Lab. Select a region, load a recent weather pattern, and run multiple forecast scenarios. Compare how the model reacts to shifts in pressure or wind patterns and how early it surfaces potential extreme events.



