Welcome back! Trust, control, and shortcuts are all on display this week. One group aims to make “no ads” a promise worth promoting on a grand scale. Another is offering a single control room for every AI coworker in your company. A third relies on models to evaluate people in just a few clicks. Meanwhile, a discreet tool promises to recreate your research figures automatically. The thread connecting all of these efforts is clear: more of your decisions, processes, and results are being negotiated between what you do on your own and what you allow an AI system to manage quietly.

In today’s Generative AI Newsletter:

  • Anthropic attacks ChatGPT Super Bowl ad trust.

  • OpenAI launches Frontier dashboard for company agents.

  • UC researchers test 3-minute AI addiction screen.

  • PaperBanana auto-builds research diagrams from text.

Latest Developments

Anthropic Criticizes ChatGPT Ads at the Super Bowl as Altman Calls It ‘Dishonest’

Anthropic is selling trust at the Super Bowl by promising that Claude will stay ad-free. It is also taking a clear swipe at OpenAI’s plan to add ads to ChatGPT, framing itself as the safer choice. Ads are portrayed as incentives that subtly influence advice. Anthropic is betting that this fear lands with regular users who just want an AI chatbot that answers their questions without selling their attention or their data.

Here is how the reporting lines up:

  • Promise: Anthropic says no sponsored links and no product placements you did not ask for.

  • Proof: The campaign includes four versions online, plus a 30-second in-game cut and a 60-second pregame AI therapist spot.

  • Reaction: Sam Altman calls the framing “clearly dishonest” and says OpenAI would “never run ads in the way Anthropic depicts them.”

  • Limits: OpenAI says ads would stay outside answers and claims, “Ads do not influence the answers ChatGPT gives you” and will remain “separate and clearly labeled.”

Both companies are fighting over trust because trust has become the primary product here. An ad-free promise is also a pricing signal, nudging you toward paid tiers. Even with clear labels, sponsored ads can influence user choices, especially in time-sensitive situations. Either way, the user pays, in cash or in trust. The Super Bowl makes one thing obvious: the next big AI upgrade is a business model you can trust.

Special highlight from our network

Big companies want AI to work like a helpful assistant. Fast, reliable and safe. But AI only works well with clean data, clear rules and humans guiding it.

ElevenLabs helps enterprises build AI that actually works. They partner with your team, design secure and natural-sounding systems, set clear goals and keep improving performance over time.

The result is AI you can trust, not experiment with.

OpenAI Launches Frontier, Targeting 1M Businesses With One AI Dashboard

OpenAI has launched Frontier, a new enterprise platform for building and running AI agents that can handle work inside company systems: files, apps, and internal tools. OpenAI is selling it as the bridge between small bot experiments and running agents the way you run employees, with onboarding, access rules, and performance checks. OpenAI points to early results like a workflow cut from 'six weeks to one day' and a team freeing up 90% more time for sales work.

Here's what Frontier really changes:

  • Users: OpenAI names Intuit, State Farm, Thermo Fisher and Uber as early customers.

  • Memory: It aims to give agents shared context so they stop acting like a new hire who forgets everything overnight.

  • Rules: Each agent can run with identity, permissions and boundaries, which helps security teams sleep.

  • Price: OpenAI still will not say what it costs. One report says it declined to disclose pricing in a briefing.

Frontier also creates a new centralization problem: teams rush in with scattered experiments, then buy the one system that manages them all. The risk is that a single platform becomes a bottleneck, a recurring bill, and a vendor that is hard to replace. The good news is fewer chaotic bots and more safety rails. If this works, it pushes agents from a novelty into a budget line item, and that is where the real fight with rivals like Anthropic and Microsoft will happen.

Can an AI Model Spot Substance Addiction in 3 Minutes?

University of Cincinnati (UC) researchers have developed a method to detect substance addiction using AI without the need for extensive interviews or confessions. In this study, participants spend 3 to 4 minutes rating pictures. An AI model then examines the ratings for addiction-related patterns and assesses severity. The team trained and tested the AI model on 3,476 adults aged between 18 and 70, and they reported promising results for assessment.

Here is what the study shows:

  • Test: People rated 48 pictures on a like/dislike scale, and the model learned patterns of judgment from those ratings.

  • Strength: The model successfully identified key addiction behavior groups with an accuracy of 81–83%, minimizing false alarms.

  • Weakness: The accuracy of alcohol predictions dropped to 61%, with a sensitivity of 42%, missing several cases.

  • Motive: The authors say they plan a provisional patent, which hints this could move from research into a paid tool.

Although this type of AI does not substitute for a clinician, it functions as a quick filter to identify individuals who might otherwise go unnoticed. However, it may also incorrectly flag individuals without a disorder; the paper reports low positive predictive value for some groups. Using tools like this in hospitals raises concerns about how a brief test could shape decisions for both patients and clinicians.
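To see why low positive predictive value matters for a screen like this, here is a minimal sketch of the underlying arithmetic. The sensitivity/specificity values are chosen to be roughly in the reported 81–83% range, and the 10% prevalence figure is an assumption for illustration, not a number from the study.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' rule: P(has the disorder | test is positive)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Assumed values for illustration: 82% sensitivity and specificity,
# 10% prevalence in the screened population.
ppv = positive_predictive_value(0.82, 0.82, 0.10)
print(f"PPV at 10% prevalence: {ppv:.0%}")  # about 34%
```

In other words, even a screen that is right about 82% of the time on each class can produce mostly false alarms when the condition is uncommon, which is why a quick filter like this still needs clinician follow-up.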

PaperBanana: Auto-Made Research Figures From Your Text

Since its release by Peking University and Google Cloud AI Research, PaperBanana has quickly gained popularity as a go-to tool for researchers looking to skip the manual labor of diagram design. It’s for anyone writing technical reports who does not want to drag boxes around in PowerPoint. You feed it text plus a few reference figures, and it drafts a paper-style figure with a built-in layout check.

Core functions (and how to use them):

  • Caption to diagram: Paste your method paragraph and caption. Ask for a box and arrow diagram that only includes steps mentioned in the text.

  • Reference style: Add 1 to 2 figures you like. Tell it to copy spacing, label length, and clean academic styling.

  • Plot as code: If you need charts, ask for Python plotting code first, then an image. This makes updates easy when numbers change.

  • Cleanup pass: Upload your messy diagram. Ask it to align boxes, standardize fonts, and shorten labels to 3 to 5 words.

  • Link checking: Ask it to list every arrow and quote the sentence that supports it. Remove any link that has no text support.
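The “plot as code” step above is worth a concrete sketch. The idea is to keep each chart as a small script rather than a static image, so updating the figure is just re-running the code. This is a hypothetical example using matplotlib; the method names, numbers, and filename are illustrative, not output from PaperBanana itself.

```python
import matplotlib
matplotlib.use("Agg")  # headless rendering, no display needed
import matplotlib.pyplot as plt

# Placeholder results; swap in your real numbers on each revision
# and re-run the script to regenerate the figure.
methods = ["Baseline", "Ours"]
accuracy = [61.0, 82.0]

fig, ax = plt.subplots(figsize=(4, 3))
ax.bar(methods, accuracy, color=["#999999", "#4477aa"])
ax.set_ylabel("Accuracy (%)")
ax.set_ylim(0, 100)
ax.set_title("Main results")  # keep labels short, 3-5 words
fig.tight_layout()
fig.savefig("results_figure.png", dpi=300)
```

When the numbers change in revision, you edit two lines and re-export, instead of redrawing the chart by hand.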

Try this yourself:
Take one figure you already need for a doc or report. Write a 2–3 sentence caption that explains what the figure should show. Then paste your method text and that caption into your AI tool and ask for a first draft figure layout. After you get the draft, do one cleanup pass: tell it to shorten every label to 3–5 words and remove any arrow it cannot justify from the text.
