đź‘€ Can AI Big Brother Hack Your Brainwaves?

Forget cookies and cameras. Your neurons are the next data goldmine.

Ever thought of something you needed while daydreaming on the sofa? Then you pick up your smartphone to doomscroll on Instagram, and suddenly you’re seeing ads for that exact item?

Most of us have experienced something like this before and wondered jokingly:

❝

Is my phone reading my mind?

That question will sound a lot less crazy by the time you finish this newsletter. You’d be amazed how far AI researchers have come in reading our minds.

I’m not talking about making guesses from your browser history or those late-night doomscrolls. I mean actual thoughts. Yes, the mental musings, emotions, and flickers of imagination that, until recently, were yours and yours alone.

Mind-Reading: Not Just Sci-Fi Anymore

Researchers at the University of Texas at Austin used fMRI scans to train AI models to capture the gist of what people were hearing, thinking, or imagining. These models don’t spit out a perfect transcript, but they get eerily close. The researchers reported accuracy of up to 82%, all without drilling into your skull.

Using patterns from brain activity, AI can reconstruct what you saw, what you remembered, or what you imagined. The things you didn’t even say out loud might soon be viewable and searchable.

The Goldmine of Brainwaves

Brain-Computer Interfaces (BCIs) make this all possible. They pick up on brain signals that are not just personal but also uniquely identifying. It’s like a fingerprint for your thoughts. This neurodata can predict behavior and emotional states and even reveal information you didn’t know about yourself.

According to the Agencia Española de Protección de Datos (Spanish Data Protection Agency), this technology could even read people’s thoughts without their consent.

🤔 Let’s think about that for a second. This isn’t your Fitbit tracking your heart rate. This is your mind, wrapped in a data packet that someone else could access or hack.

The Last Frontier of Privacy

We’re used to giving up data. 

We accept terms of service, click past cookie banners, and shrug when Alexa accidentally hears us. But our thoughts? That was supposed to be the safe zone: the final frontier. Thanks to the fast-blurring line between AI and neurotechnology, those days might be numbered.

Researchers and ethicists warn that mind-reading technology could erode mental privacy as we know it. Once neurodata becomes commodified, every marketer, employer, and government agency will want a peek.

And who’s going to stop them from looking? Sure, we’ve all accepted that Instagram and TikTok spy on us as we use the apps, and government agencies are probably listening to our phone calls. But imagine:

  • Hiring decisions based on brain scans

  • Insurance premiums set by predicted mental health risk

  • Targeted ads based on your unconscious preferences

Believe it or not, some companies already market AI systems as lie detectors based on brainwave activity. This tech is sprinting towards potential dystopia while regulations barely crawl.

Mind Over Manipulation

It doesn’t stop at privacy. This tech challenges autonomy itself.

How do we maintain freedom of thought when our thoughts are no longer private?

Some ethicists propose neurorights as a solution: a set of cognitive liberties that includes mental privacy, agency, and protection from algorithmic manipulation.

Researchers writing in 2024 pointed to Chile as a case study. But why Chile?

UNESCO calls the country “pioneers of protecting human rights.” In 2021, this South American nation became the first country in the world to amend its constitution to protect “brain rights,” or neurorights. In fact, the bill received unanimous support from Chile’s Senate.

As companies experiment with neuro-profiling and governments prepare to get their hands on neural surveillance data, protecting these rights is more urgent than ever.

Data Privacy and the AI Misalignment Problem

AI researchers such as Eliezer Yudkowsky have long warned about the AI misalignment problem: the risk that building ever-smarter AI without aligning it to our social values will cost us in the long run.

Yudkowsky’s non-profit research organization, the Machine Intelligence Research Institute (MIRI), makes a bold (and frankly horrifying) argument:

The problem isn’t so much that AI itself is harmful; it’s how long we can control something smarter than we are. Once it escapes its current server confines, humans might not be part of its master plan. After all, compared to robots and artificial intelligence, we are:

  • Less efficient

  • Less productive

  • Not standardized

  • More unpredictable

  • More unreliable

Yudkowsky asks us to consider a future where we’re up against something smarter and faster than us that never sleeps. I pose an additional question:

❝

If the worst-case scenario happens, can you imagine protecting yourself against something that can also read your mind?

This is the kind of thing dystopian writers might’ve warned us about. But hey, at least your employer will know when you’re mentally checked out in Zoom meetings.

How to Prepare for Mind-Reading Robots

We’re a little too far gone to stop scientists, companies, and governments from pursuing mind-reading tech. And to be fair, it does have some pretty solid use cases, such as for coma patients (I’ll discuss this in my next post!). However, there are steps we can take to safeguard human privacy and autonomy.

Technical Fixes

BCI shows so much promise that the technology is likely to take off. We can do our due diligence by baking privacy into BCI design from day one. We should also give users full control and transparency, and maybe a panic button.

Legal Moves

More countries should follow Chile’s lead and enshrine neurorights in their national constitutions. They should also create laws that protect cognitive liberty and forbid unauthorized neurodata harvesting. Additionally, we must push for global frameworks before cross-border misuse becomes the norm.

Ethical Standards

We should establish ethics boards for artificial intelligence in general, but especially for the use of neurodata. Additionally, we should certify BCI devices the way we certify medical tools. Finally, we should fund public education so people know what's at stake before signing away their brainwaves.

The Takeaway

So, is your smartphone reading your mind? In the not-so-distant future, the honest answer to that question could be yes.

âś… The good news? That could mean revolutionary advances in healthcare and communication.

❌ The bad news? It opens the door to misuses that we’re only beginning to imagine.

Just know that once our thoughts are in the cloud, we can’t take them back.

About the Author

Tessina Grant Moloney is an AI ethics researcher investigating the socio-economic impact of AI and automation on marginalized groups. Previously, she helped train Google’s LLMs—like Magi and Gemini. Now, she works as our Content Product Manager at GenAI Works. Follow Tessina on LinkedIn!

Want to learn more about the socio-economic impacts of artificial intelligence? Subscribe to her AI Ethics newsletter at GenAI Works.

🚀 Boost your business with us—advertise where 10M+ AI leaders engage

🌟 Sign up for the first AI Hub in the world.

