
Welcome back! The free era of AI is getting a price tag. OpenAI is officially testing ads inside ChatGPT, which means your personal advice now shares screen time with sponsored links. Meanwhile, on Wall Street, Goldman Sachs is effectively automating the junior banker role. We also have a look at Meta's massive spending plans for a mysterious new model named Avocado. It is a week of hard business decisions.
In today’s Generative AI Newsletter:
OpenAI tests sponsored links inside ChatGPT conversations.
Goldman Sachs uses Claude to automate junior back-office tasks.
Meta pre-trains "Avocado," a new model backed by $135B in spending.
GPT-5.3-Codex acts as a recursive agent for long-running code fixes.
Latest Developments
OpenAI Breaches The Fourth Wall With ChatGPT Ads

OpenAI is finally admitting that burning billions of dollars on compute requires a real business model. The company started sticking ads into ChatGPT for anyone on the Free and Go tiers in the U.S. It is a pivot from the "pure" research tool we started with to a classic attention-based platform. Anthropic even spent Super Bowl Sunday mocking the move, leading Sam Altman to fire back by calling his rivals "authoritarian" in a testy public exchange.
What is the cost of free intelligence?
Tiered Sellout: Ads are now the tax for using the free tier or the new $8 monthly Go plan, while the $20 Plus subscribers remain the only ones not being sold to.
Contextual Bidding: The system scans your current chat for intent signals and serves ads to match, like pitching grocery apps when you ask for a recipe.
Data Sandboxing: OpenAI promises that while they use your chat for targeting, advertisers only see aggregated clicks rather than the actual log of your conversation.
The Super Bowl Beef: Anthropic’s ads portrayed AI commercials as a dystopian nightmare.
The transition from a reasoning tool to an attention economy engine feels like an inevitable surrender to the reality of digital business models. OpenAI is gambling that the utility of GPT-5.2 is high enough for users to ignore a grocery link appearing in their private research. By turning every prompt into a potential ad trigger, the company has effectively built a high-resolution version of a search engine that knows your intent before you even finish typing.
Special highlight from our network
DIY RAG sounds tempting, until you are sourcing data from everywhere, managing security rules, fixing broken pipelines, etc. It’s like building a kitchen instead of serving meals.
Progress Agentic RAG-as-a-Service changes that.
Approved data. Built-in governance. Consistent results. No messy middle, just answers you can trust.
Instead of managing infrastructure, you can focus on leveraging the power of AI.
Skip building the kitchen. Start serving AI. Fast.
Try it now
Special highlight from our network
Claude Cowork was built by another AI in just 10 days. Not just a prototype but a fully working system.
While Gemini handles emails and calendars like a pro, AI is now powering both strategy and operations.
If you’re not using it, you’re falling behind.
Outskill’s free 2-day AI Mastermind shows you how to stay ahead:
✔️ 16 hours of real use cases and workflows
✔️ Saturday & Sunday, 10AM–7PM EST
✔️ Includes $5,000+ in AI bonuses (Prompt Bible, Monetization Guide, AI Toolkit)
✔️ Plus: the 2026 AI Survival Handbook
Goldman Trains Claude on Client Checks, Trimming the Need for Extra Junior Staff

Goldman Sachs is training Claude to work like a junior employee in its back offices. After six months of collaboration with Anthropic engineers, the bank is rolling out AI agents for tasks that typically fall to junior staff: trade accounting, compliance checks, and client onboarding. Executives call them digital co-workers that add capacity, and CIO Marco Argenti says it is premature to discuss job reductions. The key question is whether these agents will support thousands of roles or eventually replace them.
Here’s how Claude fits into Goldman’s workflows:
Accounting: Claude reads trade data, matches records, and drafts journal entries for humans to approve.
Compliance: Tasks include applying regulatory rules to transactions, identifying unusual patterns, and generating reports for regulators.
Onboarding: Agents pull details from PDFs and systems to finish 'Know Your Customer' (KYC) checks much faster than before.
Oversight: Managers review samples and exceptions, so fewer people supervise more work.
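To make the accounting step above concrete, here is a toy sketch of trade reconciliation: match records from two systems by trade ID and flag discrepancies for human review. This is an illustration only, not Goldman's actual pipeline; the field names, data, and tolerance are all assumptions.

```python
# Toy reconciliation sketch: match ledger records against a broker feed
# by trade ID, flagging missing records and amount mismatches as
# exceptions for a human to approve. All field names are hypothetical.
def reconcile(ledger: list[dict], broker: list[dict], tolerance: float = 0.01):
    broker_by_id = {rec["trade_id"]: rec for rec in broker}
    matched, exceptions = [], []
    for rec in ledger:
        other = broker_by_id.get(rec["trade_id"])
        if other is None:
            exceptions.append((rec["trade_id"], "missing in broker feed"))
        elif abs(rec["amount"] - other["amount"]) > tolerance:
            exceptions.append((rec["trade_id"], "amount mismatch"))
        else:
            matched.append(rec["trade_id"])
    return matched, exceptions

ledger = [{"trade_id": "T1", "amount": 100.00}, {"trade_id": "T2", "amount": 250.00}]
broker = [{"trade_id": "T1", "amount": 100.00}, {"trade_id": "T2", "amount": 255.00}]
matched, exceptions = reconcile(ledger, broker)
```

The point of the "humans approve" design in Goldman's setup is visible even here: the agent's job is to shrink the pile down to the exceptions worth a person's time.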
Goldman portrays Claude as added capacity, not a tool for handing out pink slips; hiring choices, however, come down to spreadsheet math. When one agent handles the workload of several junior staffers, revenue can grow without adding operations headcount, which helps margins but hurts people starting out in back-office roles. After Anthropic’s legal AI briefly erased hundreds of billions in software value, this trial signals that routine office tasks are squarely in automation’s sights.
Meta Pre-Trains Avocado, a New Model Backed by $135B in Spending

A leaked internal memo describes Avocado as Meta’s most capable pre-trained core model to date, built inside Meta Superintelligence Labs. At the same time, Meta is signaling a huge AI buildout, with $115B to $135B in planned 2026 capital expenditure, up from $72.22B in 2025. That makes Avocado a significant milestone rather than just a lab experiment: a test of whether Meta can turn huge spending into a model people actually feel is better, not just hear is better.
Here are the details:
Stage: Avocado has finished pre-training but not post-training, making it, in effect, Meta’s “best draft” so far.
Signals: Meta AI code reportedly lists “Avocado” and “Avocado Thinking” as separate model options, tied to agent features.
Skills: Reported strengths include multilingual ability and visual perception, not just text.
Protocol: Reports mention MCP support, a standard that helps tools and models communicate cleanly.
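For context on that last bullet: MCP (Model Context Protocol) is an open standard built on JSON-RPC 2.0, in which a model asks a server what tools it offers and then calls them with structured arguments. A minimal `tools/list` exchange looks roughly like this (the `search_docs` tool here is a made-up example, not anything from Meta):

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
```

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "search_docs",
        "description": "Search internal documentation",
        "inputSchema": {
          "type": "object",
          "properties": {"query": {"type": "string"}},
          "required": ["query"]
        }
      }
    ]
  }
}
```

Because every tool self-describes with a JSON Schema, any MCP-speaking model can discover and use it without custom integration code, which is why the protocol keeps showing up in agent-feature reports.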
Avocado could mark a real success if the model stays strong after post-training and ships broadly, reducing the cost per task. If it weakens, Meta risks selling a draft as a finished product and then paying to fix it publicly. The focus on AI agents can make tools feel magical, but it also lets mistakes travel faster. Meta now has to prove this is more than a memo, especially when the spending looks like a small country’s budget.
GPT-5.3-Codex: An Agent for Long-Running Coding Work

GPT-5.3-Codex is OpenAI's agent model for multi-step coding tasks such as bug hunting, folder cleanup, and small feature completion. It is also notably self-referential: OpenAI notes that early versions helped debug, analyze, and ship later models, making it instrumental in its own creation. Users provide a goal, the model works through the steps, and they can redirect it mid-process without a full restart.
Core functions (and how to use them):
Bug report fix: Paste the error, logs, and the file path. Ask for the likely cause, the smallest code change, and one test that proves it’s fixed.
Clean up a folder: Point to a directory. Ask it to remove duplication, rename confusing functions, and update imports without changing how the app behaves.
Unblock a failing build: Paste your CI/build output. Ask it to identify the exact failing step and produce a minimal patch that makes the run pass.
Build a UI flow: List the screens and what happens on each one. Ask it to generate the pages/components plus loading, empty, and error states.
Create rollout docs: Ask for a step-by-step migration plan, a checklist for launch, and a table of test cases you can drop into a spreadsheet.
Try this yourself:
Pick one small problem in your repo (one failing test, one broken page, or one messy module). Paste the error and say, “Write a 6-step plan. Each step must include the file name, the exact change, and how I’ll confirm it worked.” Approve the plan, then say, “Now write the patch, file by file.”
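The plan-then-patch workflow above can be sketched as a tiny control loop. This is a structural illustration only: `ask_model` here is a hypothetical stub standing in for a real call to the coding agent, and the canned responses are fabricated for demonstration.

```python
# Sketch of the plan-then-patch loop: first request a reviewable plan,
# then, after approval, request the patch. ask_model is a hypothetical
# stub; a real version would call the coding agent's API instead.
def ask_model(prompt: str) -> str:
    canned = {
        "plan": "1. Fix off-by-one in parser.py; verify: run test_parser",
        "patch": "--- parser.py\n+++ parser.py\n-  range(n)\n+  range(n + 1)",
    }
    # Route on the request type embedded in the prompt.
    key = "patch" if "write the patch" in prompt.lower() else "plan"
    return canned[key]

def plan_then_patch(error_log: str) -> dict:
    plan = ask_model(
        f"{error_log}\nWrite a 6-step plan. Each step must include the "
        "file name, the exact change, and how I'll confirm it worked."
    )
    # In real use, a human reviews and approves the plan at this point;
    # this is also where you could redirect the agent mid-process.
    patch = ask_model(f"Plan approved:\n{plan}\nNow write the patch, file by file.")
    return {"plan": plan, "patch": patch}

result = plan_then_patch("IndexError: list index out of range in parser.py")
```

The two-phase shape is the point: forcing a plan out first gives you a cheap checkpoint to correct course before any code is written.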





