Welcome back! The push this week comes from money, rules, and real workloads. Large cloud providers are spending heavily to keep AI running, while new regional players are promoting their own “home turf” clouds. Defense officials are arguing over how much to let algorithms shape national security, and long-context assistants are quietly positioning themselves as the coworker for anything that can’t fit in a short chat. The strain running through all of it is simple: AI is becoming infrastructure. The real questions are who will pay for it, who will run it, and what you’re willing to let it touch.

In today’s Generative AI Newsletter:

  • Amazon plans $200B AI buildout, unnerves investors.

  • Abu Dhabi backs $1B Vietnam AI cloud.

  • 35 countries sign AI military use pledge.

  • Anthropic launches Claude Opus 4.6 for long projects.

Latest Developments

Amazon’s $200B AI Investment Could Mean More Ads Everywhere

Amazon startled the market by revealing plans to spend around $200 billion in 2026 on data centers, chips, and AWS capacity for AI. Investors questioned whether the payoff would show up fast enough to justify that bill. Amazon’s stock fell about 9%, and the drop helped pull other large tech names down with it. If the spending lands now but the returns arrive later, shareholders bear the difference in between.

Here’s the evidence:

  • Trigger: Amazon indicated about $200B in 2026 capex, meaning money for servers, power, and buildings that run AI.

  • Results: Amazon still reported a $21.2B profit and AWS revenue of $35.6B, up 24%, but Wall Street stayed focused on the next bill.

  • Impact: Software and services names sold off on the fear that AI tools would replace paid workflows, not just boost them.

  • Stress: Credit markets blinked too, with $17.7B of tech loans flagged as trading at distressed levels in a widely cited index.

Every boom starts with investing for the future, and every bust starts when the future sends an invoice. AI can genuinely cut costs and speed up repetitive work like drafting and basic review. The drawback is just as significant: the buildout forces companies to spend like utilities while they are still priced like high-margin software companies. If the next few quarters can’t show clear revenue attached to this compute buildout, investors will keep treating AI as a cost center wearing a growth costume.

Special highlight from our network

Agentic systems need live data, but enterprises need governance, auditability, and control. MCP solves connectivity, not trust. Join us to see how Simba Intelligence adds the trust layer MCP needs, with a live demo of agentic systems querying live data while maintaining access controls and audit trails.

When?

📅 Feb 19, 1:30 EST

Abu Dhabi’s $1B Deal Could Make AI Cheaper in Vietnam

Abu Dhabi’s G42 has struck a framework deal with Vietnam’s Financing and Promoting Technology (FPT) and Viet Thai Group to build AI cloud infrastructure in Vietnam. The deal covers up to $1B and three data center sites, pitched as “sovereign AI” so data can stay under local control. In deals like this, the missing details, such as a clear timeline and investment split, often decide whether it becomes a real platform or a press release with good lighting.

Here’s what the deal shows:

  • Build: Three sites will deliver significant cloud capacity for government and business use.

  • Pitch: G42’s Ali Al Amine sells data sovereignty and digital independence as the main benefit.

  • Gaps: The partners still need to agree on the workload split, get approvals for public cloud use, and begin site development.

  • Trust: G42 brings scale, but it also carries past scrutiny over China links, which can complicate a deal built on trust.

This story fits the current AI arms race: countries want powerful AI without handing their data to someone else. Vietnam could gain local jobs and faster access to modern AI tools while keeping sensitive data closer to home and building skills locally. The risk is that data centers burn power, need permits, and attract politics. Sovereign sounds appealing on paper, but the project will live or die on electricity and rules.

US and China Stay Out as 35 Countries Sign the AI Military Declaration

At the REAIM summit in Spain, governments tried to agree on basic safety rules for military uses of AI. Eighty-five countries attended, but only 35, roughly a third, signed a non-binding declaration of 20 principles, and the US and China stayed out. When the biggest militaries refuse even a voluntary pledge, every smaller country has to decide whether safety rules protect them or just slow them down.

Here are the key clues:

  • Split: Small states signed anyway, hoping norms restrain allies as much as rivals.

  • Enforcement: The text lacks audits, penalties, or inspections, so compliance stays voluntary.

  • Scope: It targets military use, not labs selling models that governments later adapt.

  • Pressure: Reports cite alliance tension and fear of falling behind rivals moving faster.

Companies publish safety principles while racing to ship stronger models, and the public has to trust that the guardrails mean something. The upside is that shared rules can prevent mistakes and reduce panicked decisions, but rules without the biggest players risk becoming a feel-good label without real brakes. If consumer AI already struggles with reliability, military AI will face a far tougher test. The open question is who owns the decision when the software sounds confident and the outcome affects national security.

Claude Opus 4.6: Long-Context Agent for Coding and Docs

Claude Opus 4.6 is Anthropic’s newest Claude model for work that doesn’t fit in a short chat. It can handle very large inputs (up to a 1M-token window in beta), make a plan before it changes anything, and finish multi-step tasks like cleaning spreadsheets, reviewing code changes, and summarizing long documents into clear next steps.

Core functions (and how to use them):

  • Big file refactor: Paste a full file (or a few files) and ask for a step-by-step refactor plan, then ask for the updated code.

  • Code change review: Paste a git diff and ask it to point out bugs, risky changes, and missing tests, then request exact fixes.

  • Spreadsheet cleanup: Paste a CSV and ask it to rename messy columns, remove duplicates, and give you the exact Excel/Sheets formulas to apply.

  • Long document summary: Paste a contract/spec and ask for the main points, hidden risks, and a short “what we should do next” list.

  • Keep rules consistent: Tell it to keep a “rules + decisions” note (your constraints, naming rules and what’s settled) and update it each round.
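The spreadsheet-cleanup bullet above is the easiest one to make concrete. A good cleanup plan from the model should translate into a few lines of pandas, roughly like this sketch (the column names and rows here are invented for illustration):

```python
import pandas as pd
from io import StringIO

# Messy sample data standing in for a real CSV (hypothetical columns).
raw = StringIO(
    "Cust Name , EMAIL,signup dt\n"
    "Ada,ada@example.com,2024-01-05\n"
    "Ada,ada@example.com,2024-01-05\n"
    "Bob,bob@example.com,2024-02-10\n"
)
df = pd.read_csv(raw)

# Step 1: normalize headers (trim spaces, lowercase, snake_case).
df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_")

# Step 2: drop exact duplicate rows.
df = df.drop_duplicates().reset_index(drop=True)

print(list(df.columns))  # cleaned header names
print(len(df))           # row count after de-duplication
```

Asking the model to list its steps first, as in the prompt below, makes it easy to sanity-check the plan against code like this before touching the real file.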

Try this yourself:
Open a messy CSV you care about. Copy 30–100 rows and paste them into Claude. Say, “First, list the cleanup steps you will do. Then give me (1) the cleaned header names, (2) the exact Excel/Google Sheets formulas to apply, and (3) a quick checklist to confirm nothing important got lost.”
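If you end up running this regularly, the prompt above can be assembled in a small script that samples rows from the CSV for you before pasting (the helper name and row limit are placeholders, not part of any official tooling):

```python
import csv
import io

def build_cleanup_prompt(csv_text: str, max_rows: int = 100) -> str:
    """Take the header plus up to max_rows rows of a CSV and wrap them
    in the cleanup prompt from the newsletter."""
    rows = list(csv.reader(io.StringIO(csv_text)))[: max_rows + 1]
    sample = "\n".join(",".join(r) for r in rows)
    return (
        "First, list the cleanup steps you will do. Then give me "
        "(1) the cleaned header names, (2) the exact Excel/Google Sheets "
        "formulas to apply, and (3) a quick checklist to confirm nothing "
        "important got lost.\n\n" + sample
    )

prompt = build_cleanup_prompt("Cust Name ,EMAIL\nAda,ada@example.com\n")
```

The same wrapper works for the other bullets too: swap the instruction text and paste a diff or a contract instead of CSV rows.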
