When Elon Musk gave the world a demo in August of his latest endeavor, the brain-computer interface (BCI) Neuralink, he reminded us that the lines between brain and machine are blurring quickly.
Though Neuralink and BCIs like it are likely still many years away from widespread adoption, their potential benefits and use cases are tantalizing. That is especially true as the technology evolves from stage 1 applications, such as helping people with spinal cord injuries, to more complex ones, such as controlling multiple devices.
It bears remembering, however, that Neuralink is, at its core, a computer — and as with all computing advancements in human history, the more complex and smart computers become, the more attractive targets they become for hackers.
To be sure, the consequences of high-level hacking today are severe, but we’ve never before had computers linked to our brains, which seems a hacker’s ultimate prey.
Our brains hold information computers don’t have. A brain linked directly to a computer, as with a BCI, removes that barrier, potentially allowing hackers to rush in and cause problems we can’t even fathom today. Might hacking humans via BCI be the next major evolution in hacking, carried out through a dangerous combination of past hacking methods?
To better understand how hacking the brain could happen, let’s first examine how the relationship between humans, computers and hacking has evolved over time.
1980s To Mid-1990s: Hacking Tech To Get Human Data
Though hacking has been around since the 1960s, the modern age started in the 1980s when personal computers — and then hackers — made their way into homes.
Hacking took advantage of new and emerging technology that was easily manipulated. Hackers’ treasure during this time was mainly personal and financial information, such as credit card details, and they leveraged technology to get it.
The 1992 film Sneakers — about a black box capable of breaking any encryption code, ensuring there were “no more secrets” — helped popularize and reveal some of the hacking techniques used at the time, such as infiltration, physical intrusion and backdoor access. During this time, computers were the conduit to human data.
Mid-1990s To Today: Hacking Tech Via Humans
As technology became more accessible, humans began storing more of their private, sensitive information within technology, which now held the keys to hackers’ treasure.
While the core plot of Sneakers revolved around using a black box to break cryptographic systems, the characters relied heavily on social engineering to gain access to the box. That tactic has since grown exponentially as hackers have shifted their approach: instead of breaking into the technology itself, they prey on the vulnerabilities of human behavior (the weakest link) to get into the tech we rely on to store our vital information.
This period has been dominated by phishing and all forms of social engineering — hackers’ psychological manipulation of humans to persuade them into doing the hackers’ bidding. During this period, humans have been the conduit to technology.
The Future: Hacking Humans Via Tech
Previous eras were defined by obstacles between hackers and their targets, obstacles that existed because of the inherent physical disconnect between humans and technology. But what happens when that disconnect disappears? When humans and tech are essentially one and the same?
This is a top security concern of BCI tech like Neuralink. The technology’s core promise — enabling the brain to communicate directly with computers — might also turn out to be its biggest security flaw. There would no longer be a separation between humans and computers that requires some form of authentication and judgment.
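One way to restore that missing layer of authentication, at least in principle, is to require every command sent to an implanted device to carry a cryptographic tag that proves it came from a trusted controller. The following is a minimal sketch of that idea using HMAC, assuming a symmetric key provisioned securely when the device is paired; the function names and the protocol itself are hypothetical illustrations, not a description of how Neuralink actually works.

```python
import hashlib
import hmac
import os

# Hypothetical: a secret key shared between the implant and its
# authorized controller, provisioned at pairing time.
SECRET_KEY = os.urandom(32)

TAG_LEN = 32  # HMAC-SHA256 produces a 32-byte tag


def sign_command(command: bytes, key: bytes = SECRET_KEY) -> bytes:
    """Append an HMAC-SHA256 tag so the device can verify the sender."""
    tag = hmac.new(key, command, hashlib.sha256).digest()
    return command + tag


def verify_command(message: bytes, key: bytes = SECRET_KEY):
    """Return the command if its tag checks out, otherwise None."""
    command, tag = message[:-TAG_LEN], message[-TAG_LEN:]
    expected = hmac.new(key, command, hashlib.sha256).digest()
    # compare_digest avoids leaking information via comparison timing.
    return command if hmac.compare_digest(tag, expected) else None
```

In this sketch, a command forged or tampered with by an attacker who lacks the key fails verification and is rejected; real devices would also need replay protection, key rotation, and hardware safeguards, which are beyond the scope of this illustration.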
Should a computing device connected directly to the brain, as Neuralink is designed to be, become hacked, the consequences could be catastrophic, giving hackers ultimate control over someone.
If Neuralink penetrates deep into the human brain with high fidelity, what might hacking a human look like? Following traditional patterns, hackers would likely target individuals with high net worths and perhaps attempt to manipulate them into wiring millions of dollars to a hacker’s offshore bank account. Executives in boardrooms could be hacked into making decisions, resulting in significant financial consequences.
In a more alarming scenario, should a hacker take control of a large population of people, they could manipulate them to vote for a certain candidate, party or issue, covertly toppling governments and entire state infrastructures. And in the most severe scenario, hacking a Neuralink-like device could turn “hosts” into programmable drone armies capable of doing anything their “master” wanted. Autopilot software features in cars have already resulted in deaths; imagine what a hacked army of sentient beings could do.
Though the above scenarios are far-fetched, and Neuralink may still be far off, it’s never too early to examine how the inevitable hacking attempts could play out. Some experts believe the singularity, the point at which artificial intelligence matches or surpasses human intelligence, will arrive by 2045. And, as cybersecurity professionals know all too well, hackers are usually one step ahead of security protocols, so it’s not a matter of “if” but “when” they will attack a Neuralink-type device.
To be clear, technological progress is fundamental to human progress. It always has been and always will be. BCIs hold tremendous potential for good. However, technological progress must be pursued thoughtfully, keeping in mind one critical aspect of the “human element” of security: ethics. I’m reminded of one of Sun Tzu’s strategic tenets, “悬权而动,” which advises that you should “think deep and carefully deliberate” before making your strategic move. Now is the time to develop a robust set of ethical frameworks and governance for big data and AI that companies must follow when developing intrusive technologies like BCIs.
Finally, for those aspiring to venture into the BCI space, I would like to leave you with some powerful words from chess grandmaster Garry Kasparov, whose career was famously shaped by contests against machines and AI: “We have free will, our machines do not. … We have to have human accountability, human ethics, built in from the start.”