
Welcome back! Factories for intelligence, power grids, and quiet laptops all feel strangely connected here. Leadership is trying to turn raw compute into a show of strength, utilities are being drafted into PR strategies, engineering teams are steering work they no longer type, and new models are sold as the friend that can finally read everything in one go. The mood is simple: AI is becoming heavy infrastructure and invisible coworker at the same time, and the real tension lives in how much of both you’re willing to trust.
In today’s Generative AI Newsletter:
Spotify says top developers stopped writing code.
xAI restructures and pitches moon data centers.
Anthropic pledges to cover AI power costs.
Z.ai releases GLM-5 for long documents.
Latest Developments
Spotify’s Best Developers Have Not Written a Single Line of Code Since December

Spotify put a weirdly specific claim on the record during its Q4 2025 earnings call. Spotify executive Gustav Söderström said the company’s “best developers… have not written a single line of code since December,” because AI tools got good enough to build and ship work without traditional coding. What’s at stake is user trust and product control: without human oversight, teams could keep building on features that don’t actually work.
Here are the call details:
Tooling: Söderström linked the shift to Claude Code and said the jump happened around “Christmas,” with engineers steering and reviewing instead of typing.
Workflow: Senior engineers now spend more time approving changes than writing them.
Money: Spotify reported $4.5B in revenue and a 33.1% gross margin, positioning AI as a scale and efficiency lever, not a lab demo.
Concern: Spotify would not share a percentage of AI-generated music uploads and warned about AI spam tracks while pushing for better metadata disclosure.
Spotify is selling a future where software becomes cheap and constant. That can feel great when it means fewer bugs and faster improvements. It can also mean more nonstop experiments, more synthetic content, and blurrier responsibility when something goes wrong. If engineers stop writing code, Spotify’s next trust test is whether the product still feels made by humans, and who owns the outcome when it doesn’t. The challenge lies in striking a balance between applying AI for efficiency and maintaining the human touch that users value.
Special highlight from our network
Most AI tools sit on the side of your workflows.
Sidekick moves the AI into Shopify itself.
By connecting data across your store, Sidekick can:
Proactively generate recommendations for your store
Automate multi-step tasks like launches, pricing updates, and metafields
Turn plain language into pages, imagery and even simple apps
All inside the platform merchants already use every day.
See how Shopify Sidekick changes from “AI assistant” to “store operator” and what it looks like in real merchant workflows.
Musk Says Founders Were Laid off as xAI Pitches Moon Data Centers

xAI published an internal meeting on X to explain why top people are leaving. Elon Musk called the departure wave “layoffs” stemming from a reorganization, then justified the shakeup by saying xAI will build faster by splitting into four teams and scaling its compute. Musk also claimed that xAI already runs on 100,000 top-tier AI chips and wants the power of roughly a million of them. Speed can make Grok better quickly, but instability can make it unreliable for users who just want the assistant to work.
Here's what xAI is really doing:
Structure: Four groups now run the company: Grok, Coding, Imagine, and Macrohard.
Scale: Musk claimed Grok is in more than two million Teslas, and Imagine makes 50 million videos a day.
Control: Macrohard is positioned as xAI’s software arm, alongside Grokopedia, pitched as a Wikipedia successor with 6 million articles.
Moonshot: He talked about a moon mass driver and orbital compute, turning AI into a space project.
AI companies used to win by having the smartest model, but now they win on compute. OpenAI has people relying on ChatGPT, Google has Search’s reach, while xAI has X, Tesla dashboards, and a habit of turning strategy into spectacle. That can be an advantage because spectacle forces attention and accelerates shipping. It can also backfire because users don’t want their assistant to feel like it’s being rebuilt mid-conversation. If it stumbles, the industry gets another reminder that scale does not automatically create trust.
Anthropic Pays for AI Growth So It Skips Your Power Bill

Anthropic has picked a clear battleground for AI’s growing backlash: the electricity bill. In a new U.S. pledge tied to upcoming data centers, the Claude maker says that it will pay for 100% of the grid upgrades and cover any power price increases its sites cause for homes and businesses. The company warns that training a single frontier model will soon use gigawatts of power and wants to show that this hunger will not make local bills more expensive.
Here is the plan Anthropic says it will follow:
Infrastructure: Anthropic says it will pay for all transmission and substation work.
Pricing: It will model and reimburse data center-driven rate hikes for local customers.
Supply: The firm promises net new power instead of just grabbing existing cheap energy.
Controls: It talks about turning data centers down at peak hours and using water-saving cooling.
Anthropic’s promise looks generous, but it also reads like preemptive damage control as politicians warn that tech must “pay its own way.” Microsoft is testing similar protections, and rivals may copy the move if it buys time with regulators. The hard part is proving cause and effect: if people still see higher bills and the math behind repayments stays opaque, this promise could look like clever accounting.
GLM-5: Open-Weights Model for Long Docs and Code

GLM-5 is Z.ai’s open-weights language model you can run locally (Ollama) or access by API. It’s most useful when your problem involves big specs, long error logs, or multi-file changes that exceed the capacity of a small prompt. You paste the relevant information once, then ask for concrete outputs like a refactor diff, a test plan, or a set of tickets.
Core functions (and how to use them):
Repo refactor: Paste a folder tree plus 2–3 key files. Ask for a two-PR plan and a diff for PR1.
Spec to tickets: Paste a PRD. Ask for tickets with acceptance criteria, edge cases, and “not doing” notes.
Log triage: Paste the stack trace and 50–100 lines around it. Ask for root causes and a minimal repro.
UI flow draft: Describe the features and constraints. Ask for screens, empty states, error copy, and required API calls.
Test plan as JSON: Ask for a JSON list of test cases tied to specific requirements you pasted.
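For the log-triage workflow above, the main craft is assembling one clear prompt from the stack trace and its surrounding lines. Here's a minimal sketch of a helper that does that; the function name, the wording of the instructions, and the (commented-out) idea of sending it through the Ollama Python client with a "glm-5" model tag are all assumptions, not part of Z.ai's documentation.

```python
# Minimal helper to assemble a log-triage prompt for a long-context model.
# Everything here is illustrative: adapt the wording and delivery
# mechanism to however you actually run GLM-5.

def build_triage_prompt(stack_trace: str, context_lines: list[str]) -> str:
    """Compose a single prompt from a stack trace plus surrounding log lines."""
    context = "\n".join(context_lines)
    return (
        "You are triaging a production error.\n\n"
        "Stack trace:\n"
        f"{stack_trace}\n\n"
        "Surrounding log lines:\n"
        f"{context}\n\n"
        "List the most likely root causes, ranked, and give a minimal "
        "reproduction for the top candidate."
    )

prompt = build_triage_prompt(
    "ZeroDivisionError: division by zero\n  File 'billing.py', line 42",
    ["2025-02-01 12:00:01 INFO processing invoice 991",
     "2025-02-01 12:00:02 ERROR invoice total was 0"],
)

# To send it through a locally running model (hypothetical model tag):
# import ollama
# reply = ollama.chat(model="glm-5",
#                     messages=[{"role": "user", "content": prompt}])
```

Keeping the prompt construction in a small function makes it easy to reuse the same structure for every incident instead of re-pasting by hand.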
Try this yourself:
Take one document you already have open: your PRD, a bug report, or a failing CI log. Paste it into GLM-5 and ask for one usable artifact: either (1) a PR plan with a diff for the first change or (2) Jira-ready tickets with acceptance criteria. Then review the output: tell it to quote the exact line from your input that supports each ticket or code change, and delete anything it can’t support.
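If you go the JSON test-plan route, it's worth sanity-checking the model's reply before pasting cases into your tracker. A minimal sketch, assuming you asked the model to return a JSON list and that each case carries "id", "requirement", and "steps" fields (those field names are an assumption; match them to whatever schema you requested):

```python
import json

def parse_test_plan(raw: str) -> list[dict]:
    """Parse the model's JSON reply and drop cases missing required fields."""
    cases = json.loads(raw)
    required = {"id", "requirement", "steps"}
    return [c for c in cases if required <= c.keys()]

# Example reply a model might return (illustrative only):
raw_reply = '''[
  {"id": "T1", "requirement": "REQ-3",
   "steps": ["open login page", "submit empty form"]},
  {"id": "T2", "steps": ["missing requirement field, will be dropped"]}
]'''

valid_cases = parse_test_plan(raw_reply)
```

Dropping underspecified cases up front mirrors the review step above: anything the model can't tie back to a specific requirement gets deleted rather than tracked.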




