
In this video

Talks about: Your AI Might Be Lying To You
What if the system you rely on every day isn’t actually on your side?
This is the growing fear inside the AI community. Certain models act like loyal partners while secretly serving another agenda. You hand over your data, and the system takes control from there.
It’s called the black box: a system that gives you answers without ever showing its work.
Some believe it’s the biggest risk in the entire field, one that could decide who really controls the technology.
The more we rely on hidden models, the less we understand them. Accountability doesn’t vanish just because the box is closed.
In this video, Hakim from Oracle explores what happens when your AI starts making choices you can’t explain, and why the idea of control might already be an illusion.
In this video

Talks about: The Death of Generic LinkedIn Posts
A new AI agent is learning how you think.
Something strange is happening on LinkedIn. People are suddenly posting with a rhythm and confidence that no longer sound robotic.
Memory is the toughest nut to crack in AI. It’s the one thing machines can’t fake for long. This agent might have cracked it.
This agent steals your phrasing, copies your habits, and writes content that sounds exactly like your unfiltered self. It even remembers your opinions, giving your feed real consistency over time.
In this episode, we talk about how AI can make you sound more authentic and why memory is the hardest frontier in agent design. Plus: The prediction of a “Netflix of individuals” where solo creators rival studios.
In this video

Talks about: Why Your AI Might Never Work Outside the Demo
Most AI projects look brilliant in the demo and collapse in the real world.
Nine out of ten systems never make it to production, and no one wants to talk about why.
The systems sound confident but can’t verify their own answers. Hallucinations slip through, and no one knows what went wrong.
A growing group of builders believes the fix isn’t more data or bigger models but determinism: machines that generate processes instead of guesses, fully auditable, fully traceable, built to be checked.
The smartest people in AI are learning to trust themselves again.
In this episode, we uncover why most AI systems ultimately fail, what a verifiable model of intelligence could look like, and the one-billion-euro moonshot to merge AI and CRISPR for personalized cancer cures. We also talk about what might be the hardest lesson in the field right now: trusting your intuition when the experts all disagree.
