Your Deepest Thoughts in the Open: The Danger of Brain-Computer Interfaces

Imagine if you could connect your brain to the Internet, find everything you need instantaneously, and download knowledge directly into your mind. You wouldn’t even need to type anything: just think it and get it.

Want to learn a new language? Piano? Kung Fu? You got it.

(1999) The Matrix

Brain-computer interfaces (BCIs) would make that possible. It would be the biggest hardware upgrade we’ve ever had; in fact, our brains haven’t had an upgrade in some 50 million years. It could enhance the abilities of the entire human race and finally let us keep pace with our own technology.

I know it sounds crazy, but the day before something is a breakthrough, it’s a crazy idea. I love the idea of having a BCI. However, everything has a downside.

For decades, governments and companies have tried to understand how the mind works so they can control the masses. That isn’t new. What we’ve failed to see is how technology is letting them pursue those ambitions faster and further than ever before.

This is leading us into an emotional puppet show; in many ways, we’re already in one. But what will happen when corporations and governments can manipulate us so thoroughly that we can’t tell, or worse, when we believe we’re the authors of the thoughts they put into our minds?

What will happen when they have access to our minds, when they can read our deepest thoughts?

Why is this happening? And what does this have to do with privacy?

Let’s go back a few decades to understand the big picture.

What can we learn from war?

One of the few good things about growing up in a small village, as I did (2,000 people to this day), is that you get to meet a lot of elderly people. Sometimes they talked too much, but more often than not I learned something about the past, especially about wartime. Most of them had lived through the Spanish Civil War, which began in 1936, and a couple of them even through the First World War (!)

1940s WW2 British propaganda

Early on, I discovered the discipline they had to learn the hard way: if they weren’t obedient and didn’t follow the rules, they could be killed. And you know what else could get them killed? Saying something in public they shouldn’t have. They knew the value of their privacy and of keeping their own thoughts under lock and key. Those who didn’t weren’t around for long.

They valued privacy not just out of self-interest (legitimate as that was), but also for the good of their people. They knew there were spies everywhere and that they had to be careful with what they said, to stay safe not only from the other side but also from their own.

That’s something we seem to have forgotten: the value of our own privacy and the collective good.

Shifting gears, another thing I learned from them (maybe not explicitly; I pieced it together rather than being told) is how TV started to change the opinion of the masses.

Consider the Vietnam War and how the US government used TV to sell the need to go to war. In the end, it was so blatant that antiwar protesters burned themselves alive in front of the Pentagon. I’ve actually been to Ho Chi Minh City in Vietnam, and seeing the evidence of the massacres there left me completely shocked.

The thing is, in the 1960s they didn’t know much about how the brain works or how to influence the masses. In fact, it’s incredible how much they got away with, given so little knowledge.

Fast forward to the 21st century: the US got away with the Iraq war and the killing it has carried out in those countries for years. The difference after 9/11 was that they understood how the mind works and knew how to “change the conversation” pretty well.

Recent events prove this still happens. But a lot has changed since the Civil War in Spain and the Vietnam War, and two things are worth remarking on:

  1. They know how to control the masses better than ever before. There’s more knowledge, and more tools to put that knowledge into practice.
  2. We think we’re in “peacetime”, so we no longer care what we say in public. In other words, we don’t care about our digital footprint. But everything has its consequences, now or in the future.

This has been happening for years, but it’s the scope of their reach that scares the heck out of me.

How the mind works

It has always surprised me how companies and governments can walk away unscathed after a shitstorm. The one that surprised me most was the 2007 financial crisis… How is it possible that these people get away with it? How do they keep their power and stay right where they are?


People’s automatic response is “they’ve got money!” And that’s true. But the real reason they get away with it is that they control the communication process. If you know how to communicate with and persuade the human brain, you can get away with it.

Before we get into brain-computer interfaces and how they could be the best or worst thing that ever happens to us, we need to understand a few things first.

We’re not rational machines — we’re not even close

Being human is painfully tough. We’re just emotional machines, doing irrational things all the time. The truth is we don’t actually control our decision-making process; we only believe we do.

In 2011, Daniel Kahneman (who was awarded the Nobel Prize in Economics) published his book Thinking, Fast and Slow, where he shares his central thesis: thought divides into two modes, System 1 and System 2.

System 1 is the automatic response of the brain. It’s our instinctive reaction to things. It’s fast and emotional.

System 2 is the rational one. It’s slow (because it runs on the neocortex, the rational brain), but it’s the one we’re aware of. System 2 is the voice in our heads.

In the end, we believe System 2 is the one in control, but it’s System 1 that’s behind the steering wheel.

How do we make decisions?

When I studied consumer neuroscience, I noticed how vulnerable we are and how little say we have in our own decision-making. That was when I knew for sure that we’re nowhere near controlling our own decisions. We think we are, but we’re not even close.

Don’t judge me yet. First, let’s consider this experiment on pricing to see if we’re actually in control of our own decisions.

Dan Ariely ran an experiment to analyze the pricing structure of The Economist.

There were three options:

  1. Web subscription: $59.00
  2. Print subscription: $125.00
  3. Print + web subscription: $125.00

Ariely ran a study with 100 MIT students: 16% of them wanted the digital version and 84% wanted the combo deal; nobody chose the middle option. Apparently, the middle option was useless.

So he ran another study with another 100 MIT students, but this time he removed the “useless” middle option. Guess what: this time 68% chose the web version for $59.00 and only 32% chose the combo. It turns out the useless option wasn’t so useless after all. (And these were MIT students!)
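
To see what that “useless” option was worth to The Economist, here’s a quick back-of-envelope calculation using the numbers above (a sketch of my own; the revenue framing is mine, not Ariely’s):

```python
# Expected revenue per 100 subscribers, using the choice shares
# from Ariely's two runs of the experiment.

WEB, COMBO = 59.00, 125.00

# Run 1: decoy (print-only) present -> 16% web, 84% combo
with_decoy = 16 * WEB + 84 * COMBO

# Run 2: decoy removed -> 68% web, 32% combo
without_decoy = 68 * WEB + 32 * COMBO

print(f"With decoy:    ${with_decoy:,.2f}")     # $11,444.00
print(f"Without decoy: ${without_decoy:,.2f}")  # $8,012.00
print(f"Lift from the decoy: {with_decoy / without_decoy - 1:.0%}")  # 43%
```

An option nobody picks can still move roughly 40% more revenue, just by reframing how the other options look.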

So the question that pops out right away is: Do we actually have free will? Can we actually make decisions by ourselves?

Is there such a thing as free will?

It turns out that your brain knows things but doesn’t tell you right away, and sometimes not at all. There’s a delay between when your brain makes a decision and when you become aware of it.

Moran Cerf, in a talk at Google, shared one of his experiments, in which he and his colleagues had patients play a game of making simple choices. They gave each patient a wooden box with two buttons and, for 20 minutes, asked them to press the left or right button in random order.

They also told the patients that each choice had to be saved: while a choice was being processed, a red light would turn on, and they couldn’t touch either button while the light was on or it wouldn’t work. (The light was fake, by the way.)

After a few trials, the researchers could decode each decision well before the patient was about to act on it. It took about four seconds from the moment the decision was made to the moment the patient became aware of the thought. Here’s where it gets funny. They would wait 3.9 seconds and then turn on the red light. Every time the patient reached out to push a button, the light was already red. Imagine the patient’s experience: wanting something to happen that has already happened.
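
Here’s a tiny sketch of the timing trick, just to make the sequence concrete (the 4.0 s and 3.9 s figures come from the talk; everything else is my own illustration):

```python
# Timeline of one trial in Cerf's experiment (illustrative only).
T_DECODE_TO_AWARENESS = 4.0  # s: decision decodable ~4s before awareness
T_LIGHT_DELAY = 3.9          # s: researchers wait this long, then light up

t_decision = 0.0                                  # brain commits to a press
t_light_on = t_decision + T_LIGHT_DELAY           # fake "saving" light: 3.9s
t_awareness = t_decision + T_DECODE_TO_AWARENESS  # patient "decides": 4.0s

# The light always beats the patient's awareness by 0.1s, so every
# reach for the button finds the light already red.
print(f"Light on at {t_light_on}s, awareness at {t_awareness}s")
```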

We call this the illusion of free will.

It’s the gap between the moment you would say something happened and the moment it actually happened.

We believe we’re in control of things, but we’ve got no idea how or when a decision takes place in our brain. And throughout this part of the experiment, every patient believed they were the ones deciding which button to push.

Sam Harris, the American neuroscientist and author, states in his book Free Will that free will is an illusion: “Free will doesn’t correspond to any subjective fact about us. It doesn’t come from a conscious point in our minds.”

“I cannot decide what I will next think or intend”, Harris says, “until a thought or intention arises. What will my next mental state be? I do not know — it just happens. Where is the freedom in that?”

Whether we like it or not, we don’t have the freedom we think we have. Corporations and governments know this by heart. That’s why they can get away with almost anything they want; they just have to play us. And while this has always happened, today it reaches a whole new level, because they have access to more data than ever before, data that didn’t exist 10 years ago.

Digital footprints: Is my data that important?

The thing playing against us here is that, regardless of all the evidence, we still believe we’re the ones controlling our minds. We deny our reality. It’s hard for us to admit we’re not the authors of our own thoughts. I get it, it’s painful.

(2018) Do You Trust This Computer?

Corporations and governments hold a huge advantage thanks to our natural naivety. And even though people hear about data all the time, most of them don’t know what happens to it.

Jerry Kaplan, a professor at Stanford, explains in the documentary Do You Trust This Computer? what happens behind the scenes:

“People don’t realize they’re constantly being negotiated with, by machines. Whether that’s the price of products in your Amazon cart. Whether you can get on a particular flight. Whether you can reserve a room at a particular hotel. What you’re experiencing are machine learning algorithms that determine that a person like you is willing to pay two cents more, and it’s changing the price.”
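
What Kaplan describes is per-user price adjustment. A minimal sketch of the mechanics (entirely hypothetical; the function, markup, and score are illustrative, not any real retailer’s logic):

```python
BASE_PRICE = 9.99

def personalized_price(willingness_score: float) -> float:
    """willingness_score in [0, 1], as predicted by a model trained
    on a user's browsing and purchase history (model not shown)."""
    markup = 0.50 * willingness_score  # up to 50 cents of headroom
    return round(BASE_PRICE + markup, 2)

print(personalized_price(0.04))  # 10.01 -> "two cents more"
print(personalized_price(0.90))  # 10.44
```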

Michal Kosinski, also a professor at Stanford, explains in the same documentary what these algorithms are capable of:

“A computer looks at millions of people simultaneously, for very subtle patterns. You can take seemingly innocent digital footprints, such as someone’s playlist on Spotify or stuff they bought on Amazon, and then use algorithms to translate this into a very detailed and accurate intimate profile.”
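
Mechanically, this kind of profiling can be as simple as a supervised classifier trained on footprints. A toy sketch with scikit-learn (the data is synthetic and the trait is made up; nothing here reproduces Kosinski’s actual models):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: one person's "innocent" footprints, e.g. counts of
# Spotify genres played and Amazon categories bought from.
rng = np.random.default_rng(0)
X = rng.poisson(lam=3, size=(1000, 8)).astype(float)  # 8 footprint features

# Hidden trait to infer: a synthetic binary label that happens
# to correlate with two of the footprint features.
y = (X[:, 0] + X[:, 3] + rng.normal(0, 1, 1000) > 6).astype(int)

model = LogisticRegression(max_iter=1000).fit(X[:800], y[:800])
print(f"Held-out accuracy: {model.score(X[800:], y[800:]):.0%}")
```

Scale that from 8 features to thousands, and from 1,000 rows to millions, and you get the “detailed and accurate intimate profile” Kosinski is talking about.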

We have reached a point where these corporations know more about you than you know about yourself.

Most people just think in terms of ads and the fact that companies want you to spend more money. But this information has other uses too.

When I deleted my Facebook account, people thought I was being too drastic: how could Facebook even be dangerous? But what we’re discovering, the hard way, is that the digital footprints you leave on social media, the filter bubbles, and the greed of corporations and governments can have an impact on you and on your society that you’re not even aware of. An impact we can’t even imagine.

In the end, social media and other platforms have turned into tools for controlling the masses. They don’t just connect the world; they control it too. Because once you have that kind of information on people, you can play with their illusion of free will and produce outcomes they would never accept consciously.

The 2016 election caught us by surprise

The 2016 election showed us how vulnerable we are. Cambridge Analytica had access to detailed information from tens of millions of profiles (up to 87 million, by Facebook’s own estimate), and what it did with that data was build “the ultimate algorithm”, one that could predict human behavior fairly accurately. It ended up being a tool of mass control.

After all, we’re emotional machines; they know that if they push our emotional buttons, we become their puppets. It’s no coincidence that Brexit, which also caught us by surprise, happened that same year.

They can predict what you’re going to do before you’re even aware of it (if you’re aware at all). And that’s without an electroencephalogram on your head. What will happen when they have access to your deepest thoughts? (More on this in a second.)

We’re being handed the truth on a platter, but we’re too soft to act, because we don’t understand that we’re prostituting ourselves. People don’t care, though. And if at some point people do care, corporations and governments will have a plan B ready so they can “change the conversation”.

One way they can get away with it if things turn nasty

There are breadcrumbs leading us down the path of selling our data. Lots of startups jump on the blockchain hype and try to create alternatives. But that’s not a valid solution, and some people are starting to call it out, like Jennifer Lyn Morone.

Jennifer Lyn Morone is an artist who, in order to highlight our state of data slavery, turned herself into a registered company: Jennifer Lyn Morone™ Inc.

“My data is a resource for me to exploit,” says Morone in her video. “I want to make me, my life and my experience my business.”

So she created a portfolio of products where she sells her data in packages.

“There’s a whole rule book that other people are playing with and we don’t understand,” says Morone. “I’m not saying this shouldn’t be done — I’m saying it is being done already.”

The whole project is, of course, satirical, but she’s showing us how we’re sliding into a state of extreme capitalism without even being aware of it. And people actually buy the packages.

Too often the argument gets turned around, and people settle for the most obvious answer: well, if they’re going to use my data, they should pay me for it. And that’s exactly what they’ll do if regulation gets tough on them. They’ll do it until people see that selling their data is just a pain reliever, not a cure.

The solution is not to make a few bucks from selling your data. That’s far from the point. It is, in fact, damaging, because the act of selling data helps build the story that “selling your data is okay”.

Again and again, we see how they play with our minds, proving we don’t actually have any free will. Drip by drip, they push a change into society, a change most people don’t understand. It goes from “hey, it’s free” to “I’m making a few dollars with this thing that seems useless”.

Certainly, Facebook and Google wouldn’t want to buy our data; after all, they’re getting it basically for free. They won’t give up their targeted-advertising business model so easily. In 2017 alone, the two of them pulled in some $135 billion in ad dollars. That’s a lot of dough to give up.

Nevertheless, Google and Facebook aren’t like other corporations. What differentiates them from other dinosaurs is that they adapt. But unless they see the guillotine around the corner, they won’t.

The Economist recently said:

“If 2018 will be remembered as the year of Cambridge Analytica — a British data-mining company that allegedly influenced both British and American elections by targeting voters using personal data — it will also be remembered as the year that privacy law finally started catching up to the Internet.”

I’m not a pessimist, but this situation requires us to think as if we were in wartime. As they say in the army: hope for the best and prepare for the worst. So, what if 2018 is instead remembered as the year privacy never actually became a public concern?

(1987) Full Metal Jacket

I worry that, just like with the 2007 financial crisis, and as with almost every war, the countries playing the lead role will shift the conversation and distract us from what really matters. They’ve done it a hundred times; why would it be different now?

Where do brain interfaces enter the scene?

Nobody talks about neural interfaces but everybody’s working on them.

Facebook is working on one, and you can bet the other tech giants won’t be left behind: Google, Apple, Amazon, Microsoft, go down the list, all of them are working on neural interfaces.

The most prominent ones right now are Neuralink (founded by Elon Musk) and Kernel (founded by Bryan Johnson).

Baidu (the Chinese tech giant) is also working on brain-inspired neural chips under China’s state-run umbrella. Governments are in the race too: mainly China and the US, but you can bet Israel, Russia, and a bunch of others are involved as well.

So, the question that immediately arises is:

What would happen if we continue with our current privacy standards and our thoughts are out there in the open?

Here’s what happens. We’ve got a long track record of making humans irrelevant. And that track record shows the real intentions of corporations and governments: money and power.

So, the real question arising here is: if you have a brain interface that reads your thoughts and uploads them to the cloud, who owns that data? And regardless of ownership, who can access and use it?

This is the million dollar question.

Bryan Johnson, founder of Kernel, wrote in his newsletter:

“Imagine you had a brain interface that could read all of your thoughts, conscious and subconscious. Who would own that data? Who would you give access to? Who could make money on it?

“Given that we’re building brain interfacing technology at Kernel, and others are also building this technology, this is an important thought exercise that has been weighing heavily on my mind.

“Our lives are captured digitally; search history, what we read/write, where we go, what we do, how fast we walk, what we buy, where we live, habits, preferences, religion, politics, and thousands more intimate details. Facebook, Google and others have been mining, monetizing, and profiting from this information. It’s unsettling to think that our raw brain data could be treated the same way.

“Today, your thoughts are your private domain. You are the only person with access to your brain. This is the only data that you still control. But unless we make some big changes, that will soon no longer be the case.

“The implications are more serious than I hear anyone talking about.”

The fact that somebody who is actually building a brain interface says that is scary.


It’s no coincidence that the two countries with a shot at ruling AI, China and the US (China has the better odds in this race), both have brain initiatives.

As you can see across my articles, my examples usually involve China, but it’s not the only country testing new technology in this field.

DARPA, the Defense Advanced Research Projects Agency, is responsible for developing new technologies for the US military. Back in 2016, it created the Neuro Function, Activity, Structure, and Technology (Neuro-FAST) program, a program that uses neurotechnology to help soldiers “perform better” under circumstances of extraordinary stress.

That’s what they say, at least. It’s not hard to imagine that “Men Against Fire”, the episode from the science fiction series Black Mirror, may not be so science fiction after all.

(2016) Black Mirror, Men Against Fire

Remember, too, that Barack Obama introduced the BRAIN Initiative in 2013:

“There’s this enormous mystery waiting to be unlocked, and the BRAIN Initiative will change that by giving scientists the tools they need to get a dynamic picture of the brain in action and better understand how we think and how we learn and how we remember.” (Obama, 2013)

The BRAIN Initiative is a six-billion-dollar fund whose goal is to learn how to map the activity of an entire brain and to build methods for reading and writing neural activity. Eventually, it could help patients with neurodegenerative diseases, and of course, at that point, a lot of other doors will unlock.

There’s a lot of interest in getting to know the human brain. But how far are we? You might be thinking that we’ve tried for decades to understand how the brain works and never gotten anywhere close to a decent result…

That’s about to change though.

How far are we?

In 2017, Professor Rafael Yuste and his team recorded the activity of every neuron in Hydra, a tiny freshwater relative of the jellyfish. It might not seem like a big deal, and of course the human brain is far more complex than Hydra’s, but this is a huge step forward. Because once you know how to read the neural code, you can write it too.

“In a way you could argue that we’re trying to read the Hydra’s mind, because we can measure the activity of every neuron in Hydra while the Hydra is behaving,” Rafael Yuste said in an interview with The Economist. “Can we input thoughts into a Hydra? Can we write patterns of activity and change the behavior of the animal? We’re trying to do this in Hydra and we’re trying to do this in mice. We can imagine that you could do this with humans in the future.”
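
To make “reading the neural code” concrete: at its simplest, it means learning a mapping from a matrix of neural activity to behavior. Here is a toy decoder on synthetic data (my own illustration; nothing here comes from Yuste’s actual pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic "whole-organism recording": 300 neurons x 2000 time bins,
# plus a behavior label per bin (0 = resting, 1 = contracting).
rng = np.random.default_rng(1)
activity = rng.normal(size=(2000, 300))
# Toy ground truth: behavior driven by the first 20 neurons.
behavior = (activity[:, :20].mean(axis=1) > 0).astype(int)

# "Reading the code" = predicting behavior from activity.
decoder = LogisticRegression(max_iter=1000).fit(activity[:1500], behavior[:1500])
print(f"Decoding accuracy: {decoder.score(activity[1500:], behavior[1500:]):.0%}")
```

“Writing the code” is the inverse problem: choosing a pattern of activity to impose so as to produce a chosen behavior.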


It’s just a matter of time until we get there. And we will get there. History has shown us that if we put enough money on a problem, we can find a way. The Apollo program is proof of that: the US government spent billions of dollars to prove it could get to the moon before the Russians, and it did. (There’s a great talk called The Secret History of Silicon Valley, in which Steve Blank explains why the US government had to get to the moon first.)

Whether we want it or not, this is a timely issue. It’s coming faster than we think. And we’re already halfway there, on both hardware and software.

On one hand, the technology is here: there are already ways to read and analyze brain activity. The challenge is not whether we’ll come up with the right technology; it’s how to come up with an interface that’s user-friendly, because so far the options are quite intrusive (like drilling a hole in your head).

One of the biggest discoveries of this century is that everything is an algorithm; we ourselves are the result of one. There are several companies working on how to hack the human code, and others storing digital data on DNA. They say that 1 kg of DNA could store all the world’s data.
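
That claim is easy to sanity-check. A rough back-of-envelope, assuming the commonly cited theoretical ceiling of two bits per nucleotide and an average nucleotide mass of about 5.5e-22 g (real encodings add redundancy and fall well below this):

```python
# Back-of-envelope: how much data fits in 1 kg of DNA, in theory?
BITS_PER_NUCLEOTIDE = 2       # A, C, G, T each encode 2 bits
NUCLEOTIDE_MASS_G = 5.5e-22   # approximate average mass of one nucleotide

bytes_per_gram = BITS_PER_NUCLEOTIDE / NUCLEOTIDE_MASS_G / 8
bytes_per_kg = bytes_per_gram * 1000

ZETTABYTE = 1e21  # bytes
print(f"~{bytes_per_kg / ZETTABYTE:.0f} ZB per kg")  # ~455 ZB
# For scale: IDC put the 2018 global datasphere at roughly 33 ZB,
# so a theoretical kilogram of DNA could hold it many times over.
```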

So coming up with the interface is the hard part; reading and writing code for the brain can be done, and it’s only a matter of time.

On the other hand, what feeds the technology behind digital intelligence is our data. And whether we give it away for free or for a few bucks, we’re still summoning the demon. In the end, history repeats itself, and you can sense what this race to hack the human brain is about.

So we can do two things here: (1) do nothing and hope for the best, or (2) prepare for the worst and make sure we find a way to recognize data as our personal property, so that companies can’t trick us into exchanging it for a few cheap dopamine hits.

And when I say data, I don’t just mean some digital footprint you leave on the Internet. Your personal data is everything about you: knowledge, skills, your decision-making process, your own biases, everything. All of it gives you value as a human being. And right now, things are starting to get out of hand with biometric data.

The stage is set

Brain interfaces are coming. But are we ready?

If we unlock this door, we’ll reach a whole new level, and the privacy problems we know today will seem minor compared with what could come. Because, in the end, what’s at stake are the deepest contents of your mind.

This could be the best thing that ever happens to us, or the worst. There’s no middle ground here. So it’s critical that we establish privacy as a basic human right and build everything else on top of that foundation.

There’s no version of the future worth living in where we don’t own our data.

That’s the first step. It doesn’t matter if corporations and governments have good intentions (our track record says otherwise). To avoid the path to irrelevance, we have to own our data. We have to own our value as human beings.

For decades, corporations and governments have tried to understand how the mind works so they can manage us however they want. The trouble is that each time, it gets harder to get out of their puppet show. And if we follow this path, eventually we won’t even know what’s real and what’s not.

This is an essential and timely issue, and we have to make sure we get it right. We have to stop the intrusion into our privacy and the consequences that come with it, because in the end, the future of the human race is at stake.

It’s worth emphasizing this again: there’s no version of the future worth living in where we don’t own our data.

Objects in mirror are closer than they appear

We seem to forget how fast technology arrives. It wasn’t that long ago that cell phones were a clunky and expensive “new technology” (around $7,000–$8,000). And believe me, these changes are coming. Fast. The thing about technology is that it advances exponentially, not linearly.

(1987) Wall Street

A bad scenario isn’t imminent, but given all the advances and our track record, we must work on this right now, before it’s too late. As an architect would say, it’s only in the early stages that you get to shape the structure of a building; once it’s set, you can’t change it. So we’re better off getting this right.

What worries me most, however, isn’t the technology, but how the people in power use it and the potential harm it can do to us. We can’t leave something as important as this to chance. So let’s hope for the best and prepare for the worst.

Join The PrivateID Mission.