Welcome back! The breakthroughs are spectacular, but the real power now sits in places that look far less glamorous. University labs turning proteins into software. Messaging platforms deciding who gets to speak. Consultants slipping AI errors into policy. Regulators in small countries racing to understand systems that arrive faster than they can audit them. This is the truth about AI that rarely makes the headlines.

In today’s Generative AI Newsletter:

AlphaFold fuels major biology breakthroughs across Asia Pacific
WhatsApp blocks outside AI bots and reshapes global access
Deloitte faces questions over AI-linked citation errors
UNESCO confronts infrastructure gaps in its AI readiness work
Tool of the Day: FLUX.2 delivers high-control image generation

Latest Developments

WhatsApp is rewriting its rules on January 15, 2026, and the update means ChatGPT, Microsoft Copilot, and every other third-party AI chatbot will be forced to leave the platform. The new policy blocks companies from using WhatsApp’s Business API as a delivery channel for their own AI assistants. OpenAI announced its exit first, Microsoft followed, and several others are preparing to pull out next.

What the new rules do:

  • Blocks outside AI bots: WhatsApp will only allow simple customer support bots from businesses, not full AI chatbots made by other companies.

  • Confirmed exits: OpenAI and Microsoft say their bots will stay live until January and then shut down.

  • Chat history gap: ChatGPT users can link accounts to save their chats, while Copilot users do not get that option.

  • Only Meta’s bot remains: The update means Perplexity and other AI tools will likely disappear too, leaving Meta’s own AI as the only built-in option.

WhatsApp has become one of the most powerful channels for AI adoption, especially in regions where messaging apps serve as people’s default interface for work, shopping, and school. Removing outside bots turns WhatsApp into a closed AI garden, with Meta AI as the only voice left in the room. It is a revealing moment for the future of AI access, because it shows how quickly the biggest platforms can reshape the field not through technology, but through a single terms-of-service update.

Special highlight from our network

How teams are using AI to cut costs and boost productivity

Pecan AI makes predictive analytics actually accessible. Business teams build trustworthy models in hours. No data scientist required. Pecan handles the messy parts: data prep, feature engineering, and optimization, while predictions flow straight into your CRM or marketing tools. Predict churn, forecast demand, score leads, project ROAS, and more. From eCommerce to fintech, teams are cutting costs and driving real impact with real ROI in weeks.

AlphaFold turned five this year, and the system has gradually become one of the most productive scientific tools in the region. Researchers across Malaysia, Singapore, Korea, Taiwan, and Japan are using it to chase diseases, decode molecular oddities, and confirm structures that were once only theory. More than three million scientists now rely on it worldwide, and APAC accounts for over a third of that work. 

What researchers are doing with AlphaFold:

  • Malaysia: Scientists studying melioidosis, a disease that kills nearly 90,000 people each year, used AlphaFold to map the bacterium’s survival proteins and accelerate drug discovery.

  • Singapore: Teams at A*STAR and NNI generated a 3D model of a Parkinson’s-linked protein and found how the immune system disrupts it, giving researchers a clearer path toward earlier diagnostics.

  • Korea: Work at KAIST revealed a hidden interaction site inside a DNA-organizing protein, offering a new explanation for how disruptions in cell identity can lead to cancer.

  • Taiwan and Japan: Taiwanese researchers confirmed a protein fold with a predicted 7₁ knot, while Japanese teams used AlphaFold to classify unusual hot-spring viruses and map a previously unknown family of life.

The region has now produced 13,000 research papers citing AlphaFold, and the work is no longer theoretical. It is becoming a standard part of the scientific bench, the same way microscopes and sequencers once were. The APAC projects hint at a new kind of biology where ideas move straight from a model’s prediction into a real experiment, and the speed of that loop is starting to rewrite the tempo of discovery.

A $1.6 million Deloitte report meant to help Newfoundland and Labrador tackle staffing issues in its healthcare system has become a case study in how AI hallucinations can slip into serious policy work. The government paid the firm for a detailed 526-page report to guide decisions about doctors, nurses, staff retention, and future funding. When local reporters and academics began checking the citations, they found hospitals described in ways their own staff didn’t recognize, along with references to research that does not exist. That set off a deeper review of how much of this high-priced advice was actually checked by humans.

Here is what that review has uncovered so far:

  • AI’s role: The company says AI did not write the report but helped with a small number of research citations.

  • Damage control: The company agrees that citation corrections are required but claims the findings remain accurate and unchanged.

  • Government response: In an effort to keep the key findings intact, the health department of Newfoundland and Labrador has asked Deloitte to fix the mistakes instead of pulling the report.

  • Emerging pattern: An earlier Deloitte study in Australia used AI that produced fake legal references, which led to a partial refund.

Seen against how consulting firms are quietly weaving AI into research and drafting, this does not look like a one-off mistake. AI can help clean messy data, speed up analysis, and surface patterns, but it can also invent sources and fabricate experts to sound more authoritative. When AI-shaped reports influence funding and staffing decisions, that becomes a governance risk. These incidents raise an uncomfortable question: who is responsible when the systems produce false information that threatens decisions and safety?

In 2025, while chatbots grab headlines, a slower story is unfolding in Ecuador. UNESCO and local officials ran an AI “readiness check” and found three hard numbers: nine data centers for the country, internet in about half of rural homes, and under 100 university courses on tech ethics. Ecuador wants trustworthy AI on paper, but in practice the pipes, people, and rules are still being built. In a series of follow-up meetings in Quito, officials tried to answer whether they are building public rules for AI or just making it easier for vendors to sell them cloud products.

Four moves stand out:

  • The X-ray: UNESCO's audit reveals weak infrastructure, significant rural gaps, and limited formal AI ethics training.

  • The first test case: The competition authority introduces Ecuador’s first public-sector AI Code of Ethics.

  • The bootcamps: Lawyers, judges, and officials complete 20-hour courses on how to question and control AI tools.

  • The regional script: Latin American lawmakers receive nine templates, ranging from light guidelines to strict penalties.

UN bodies write AI ethics, but vendors move faster. In the hopeful version, Latin America uses this moment to demand systems that explain their decisions and hold themselves accountable. In the bleak version, ethics become mere decoration, referenced in presentations while real power remains with those who control the chips, data, and deployment teams. AI will be used in hospitals, schools, and tax offices. When it arrives, power will rest less with codes of ethics and more with the terms hidden in service contracts.

FLUX.2 is Black Forest Labs’ new image model suite built for creators who want precision. It supports multi-reference inputs, JSON prompts, exact hex colors, and 4MP outputs. The system is designed for product photography, ads, infographics, UI layouts, and scenes where consistency matters.

Core functions (and how to use them):

  • Multi-reference: Upload several images of the same person or product. Tell FLUX.2 which parts to keep consistent. This creates reliable character identity, fashion looks, and product angles.

  • JSON prompting: Use a structured JSON block to control subjects, camera angle, lighting, composition, and color palette. This is the fastest way to produce repeatable, professional shots.

  • Hex colors: Insert brand hex codes like “#02EB3C” directly in the prompt to match real product colors, packaging, or UI themes.

  • Infographic layouts: Use the “type: infographic” JSON template to generate clean charts, icons, and text blocks without manual layout work.

  • UI and product shots: Combine JSON structure with 4MP output for ads, hero images, mockups, and magazine-style compositions.

Try this yourself:

Create a structured JSON prompt for a product you own. Add a “subjects” block describing the item, a “color_palette” with its real hex values, and a “camera” section defining angle and depth of field. Then add a second reference image to test character or product consistency. Compare how FLUX.2 preserves shape, color, and style across variations.
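A prompt along those lines can be sketched in Python. This is a minimal illustration, not the official schema: the field names (`subjects`, `color_palette`, `camera`, `lighting`) follow the description above and should be checked against Black Forest Labs’ FLUX.2 documentation before use.

```python
import json

# Hypothetical FLUX.2-style structured prompt; field names follow the
# article's description and may differ from the official schema.
prompt = {
    "type": "product_shot",
    "subjects": [
        {
            "item": "ceramic coffee mug",
            "description": "matte finish, brand logo centered on the front",
        }
    ],
    "color_palette": ["#02EB3C", "#FFFFFF"],  # real brand hex values
    "camera": {
        "angle": "three-quarter view",
        "depth_of_field": "shallow",
    },
    "lighting": "soft studio light from the left",
}

# Serialize to a JSON block you can paste into the prompt field.
prompt_json = json.dumps(prompt, indent=2)
print(prompt_json)
```

To test consistency, keep this block fixed, attach a second reference image, and change only one field (say, the camera angle) between runs, so any drift in shape or color is easy to attribute.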
