Special highlight from our network
Replacing Copilots ≠ Securing AI
Enterprises are rushing to deploy Microsoft Copilot and similar tools, chasing productivity gains. But security, compliance, and visibility often take a back seat.
🔍 59.9% of enterprise AI traffic is already being blocked for policy violations.
Swapping AI tools won’t fix the root issue. You need:
✅ Role-based access
✅ Prompt-level observability
✅ Governance-first architecture
Tumeryk Co-Pilot brings you secure productivity without compromising trust.
Google’s Gemini 2.5 Computer Use Aims at Real Browser Control

Image source: Google
Google released Gemini 2.5 Computer Use in preview, a model that operates web pages by reading screenshots and issuing clicks, keystrokes, and navigation actions. Google says it beats rivals on web and mobile benchmarks at lower latency, with the current focus on browsers rather than desktop control.
What matters:
Access: Available through the Gemini API in Google AI Studio and Vertex AI.
Control loop: The model takes a screenshot, selects an action, the client executes it, and the cycle repeats until the task completes or is stopped.
Guardrails: Can request end-user confirmation for sensitive steps such as purchases.
Scope: Optimized for web today, early promise on mobile, not tuned for OS-level control.
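The control loop above can be sketched as a simple agent loop. This is an illustrative sketch only: `capture_screenshot`, `model_next_action`, and `execute_action` are hypothetical stand-ins, not the actual Gemini API.

```python
# Sketch of a screenshot -> action agent loop. All function names are
# hypothetical stand-ins for (1) the client capturing state, (2) the
# model choosing an action, and (3) the client executing it.

def capture_screenshot(state):
    # Stand-in: return the current page state as the "screenshot".
    return state["page"]

def model_next_action(screenshot, goal):
    # Stand-in for the model call: decide the next UI action.
    if goal in screenshot:
        return {"type": "done"}
    return {"type": "click", "target": goal}

def execute_action(state, action):
    # Stand-in for the client carrying out the chosen action.
    if action["type"] == "click":
        state["page"] += " " + action["target"]

def run_task(goal, max_steps=10):
    state = {"page": "home"}
    for _ in range(max_steps):               # repeat until done or stopped
        shot = capture_screenshot(state)
        action = model_next_action(shot, goal)
        if action["type"] == "done":
            return True                      # task completed
        if action.get("needs_confirmation"):
            # Guardrail: sensitive steps (e.g. purchases) would pause
            # here for end-user confirmation before executing.
            continue
        execute_action(state, action)
    return False                             # stopped at the step budget
```

The key design point is that the model never touches the browser directly: it only proposes actions, and the client decides whether to run them, which is where confirmation guardrails slot in.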
If the claims hold outside demos, this can remove a lot of grunt work from form fills, bookings, and routine web ops. The test is reliability on messy sites with pop-ups, paywalls, captchas, and logins where most automations still fail.
People Are Now Ditching Lawyers for ChatGPT in Court

Image Source: NBC News
Across the U.S., self-represented litigants are turning to ChatGPT and Perplexity to draft filings, research statutes, and plan appeals. One tenant, Lynn White, used the tools to overturn an eviction and avoid about $55K in penalties plus $18K in rent after identifying procedural errors and mapping an appeal plan.
What to know:
How it works: People upload filings and orders, ask for procedural checks, pull statutes, and draft motions and replies.
Documented win: Lynn White credits AI tools with the research and drafting that reversed her eviction and wiped major penalties.
Real risks: Courts have sanctioned filings that cited nonexistent cases or incorrect facts, and providers warn against relying on AI for legal advice.
Where it fits: Lawyers see promise in small claims and routine matters, while complex cases still require deep expertise and local-rule fluency.
The new playbook is speed and verification. AI can surface options fast, yet courts move on proof, citations, and clean procedure. People who try this path should cross-check every case, track deadlines with care, and call counsel when the stakes reach homes, liberty, or long-term rights.
You’ve done the hard part: building something that teaches. Now it’s time to let it shine.
The new GenAI Academy is taking off. Our first drops already reached over 5,000 learners.
We’re now inviting instructors to submit their AI courses for a chance to be featured and shared with GenAI.works’ 13M+ global community.
👉 Submit your course by October 21st and be part of how the world learns AI.
Robin Williams’ Daughter Asks Public To Stop Sending AI Videos

Image: Getty Images
Zelda Williams, daughter of Robin Williams, has asked people to stop sending her AI-generated videos that mimic her late father. She called the clips “gross” and “dehumanizing,” saying they distort his legacy and turn grief into entertainment. Her post joins a wider conversation about how digital replicas of public figures are being shared without consent.
What to know:
The post: “Please, just stop sending me AI videos of Dad,” she wrote on Instagram.
Emotional toll: She said the videos are distressing and disrespectful to both her family and her father’s memory.
Earlier comments: In 2023 she described AI voice clones as “personally disturbing” and supported creative unions seeking consent protections.
Health background: Robin Williams died in 2014 from complications linked to Lewy body dementia (LBD).
Families often hold rights over name, image, and voice, and several states recognize posthumous publicity rights, so consent and estate approval matter. Platforms are adding labels and provenance tools, but the most humane choice right now is simple: share the real work, support LBD research, and let the people who lived these lives set the terms of how they are remembered.
Duke’s TuNa-AI Designs Smarter Nanoparticles for Drug Delivery

Source: Duke University
Duke researchers introduced TuNa-AI, a platform that combines lab robotics with machine learning to design and optimize drug-carrying nanoparticles. In new studies, the system ran high-throughput experiments, raised formulation success rates, and sharpened delivery of cancer drugs in cells and mice.
What TuNa-AI delivered:
Scale of testing: 1,275 formulations screened by automated lab robots.
Higher hit rate: 43% boost in successful nanoparticle creation versus standard workflows.
Leukemia example: Venetoclax wrapped in new particles dissolved better and killed more cancer cells in lab assays.
Safety tweak: For trametinib, a potentially toxic ingredient was cut by 75% while efficacy in mice was preserved.
For AI builders, this reads like AutoML for wet labs: a closed loop that proposes recipes, runs them on robots, and learns from the outputs. If follow-up studies hold, the approach could revive shelved compounds and push drug delivery closer to a compile-run cycle for biology.
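The propose-run-learn loop described above can be sketched generically. This toy uses random local search against a made-up scoring function; it is not TuNa-AI's actual pipeline, just an illustration of the closed-loop shape.

```python
import random

# Toy closed loop: propose recipes, "run" them, learn from the scores.
# run_experiment is a made-up stand-in for the robotic lab, and the
# two-ingredient recipe space is an illustrative assumption.

def run_experiment(recipe):
    # Stand-in lab: score peaks at drug=0.3, excipient=0.6.
    drug, excipient = recipe
    return 1.0 - abs(drug - 0.3) - abs(excipient - 0.6)

def propose(history, n=8):
    # Propose recipes near the best seen so far (random local search;
    # a real system might use a learned surrogate model instead).
    if not history:
        return [(random.random(), random.random()) for _ in range(n)]
    best = max(history, key=lambda r: r[1])[0]
    clamp = lambda x: min(1.0, max(0.0, x))
    return [(clamp(best[0] + random.uniform(-0.1, 0.1)),
             clamp(best[1] + random.uniform(-0.1, 0.1)))
            for _ in range(n)]

def optimize(rounds=20):
    random.seed(0)                    # deterministic for illustration
    history = []
    for _ in range(rounds):           # each round = one robot batch
        for recipe in propose(history):
            history.append((recipe, run_experiment(recipe)))
    return max(history, key=lambda r: r[1])

best_recipe, best_score = optimize()
```

Swapping the toy scorer for real assay results and the random proposer for a trained model is what turns this skeleton into the AutoML-for-wet-labs pattern the article describes.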

🚀 Boost your business with us. Advertise where 13M+ AI leaders engage!
🌟 Sign up for the first AI Hub in the world.