Benjamin Powers

Brain-computer interface industry gathers pace, directly connecting machines and minds


On the 10th floor of a nondescript building at Columbia University, test subjects with electrodes attached to their heads watch, through a virtual reality headset, a driver’s view of a car going down a street. All the while, images of pianos and sailboats pop up to the left and right of each subject’s field of vision, drawing their attention.

The experiment, headed by Paul Sajda, a biomedical engineer and the director of Columbia’s Laboratory for Intelligent Imaging and Neural Computing, monitors the subjects’ brain activity through electroencephalography (EEG), while the VR headset tracks their eye movements to see where they’re looking — a setup in which a computer interacts directly with brain waves, called a brain-computer interface (BCI). The goal of the Columbia experiment is to use this information from the brain to train the artificial intelligence in self-driving cars, so the cars can monitor when, or if, drivers are paying attention.
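How such a system might work is easiest to see in miniature. Below is a rough, hypothetical Python sketch of a pipeline in the spirit of the one described above: it fuses a crude EEG “attention” proxy (the ratio of beta-band to alpha-band power) with a gaze measure to decide whether a driver is paying attention. The sampling rate, thresholds, and function names are all illustrative assumptions, not the Columbia lab’s actual method.

```python
# Hypothetical sketch only: a toy attention monitor fusing EEG band power
# with eye-tracking data. Thresholds and heuristics are illustrative.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed EEG sampling rate, in Hz


def band_power(eeg, fs, lo, hi):
    """Average spectral power in the [lo, hi) Hz band, via Welch's method."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * fs))
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()


def driver_attentive(eeg_window, gaze_on_road_fraction):
    """Toy fusion rule: 'engaged' EEG spectrum AND eyes mostly on the road."""
    alpha = band_power(eeg_window, FS, 8, 13)   # stronger when disengaged
    beta = band_power(eeg_window, FS, 13, 30)   # stronger during focused tasks
    spectrally_engaged = beta / alpha > 0.8     # illustrative threshold
    return spectrally_engaged and gaze_on_road_fraction > 0.7


# Demo on synthetic data: two seconds of a 20 Hz rhythm buried in noise,
# with the driver's gaze on the road 90 percent of the time.
rng = np.random.default_rng(0)
t = np.arange(2 * FS) / FS
eeg = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)
print(driver_attentive(eeg, gaze_on_road_fraction=0.9))  # True
```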

BCIs are popping up in a range of fields, from soldiers piloting a swarm of drones at the Defense Advanced Research Projects Agency (DARPA) to a Chinese school monitoring students’ attention. The devices are also used in medicine, including versions that let people who have been paralyzed operate a tablet with their minds or that give epileptic patients advance warning of a seizure. And in July 2019, Elon Musk, the CEO and founder of Tesla and other technology companies, showed off the work of his venture Neuralink, which could implant BCIs in people’s brains to achieve “a symbiosis with artificial intelligence.”

But some experts are wary of the fledgling industry, citing a lack of clarity as to which devices qualify as BCIs, a regulatory gray area for consumer versions, and hype that outpaces the science.

Particularly in question are unsubstantiated company claims and how the devices may affect users — issues that have some academics contemplating a new legal code to help protect people against a future where the data their brains generate might fuel a lucrative data market.

Every step in producing BCIs involves “ethical, legal, and social issues,” says David Winickoff, a senior policy analyst researching converging technologies at the Organization for Economic Co-operation and Development in Paris. Some BCIs are cause for “obvious concerns with privacy,” he adds. “There is also the potential for the alteration of one’s sense of self, or identity, which raises questions of autonomy, or the capacity for self-direction of one’s life.”

FOR EVERY THOUGHT, feeling, and movement, our brains emit electrical signals. BCIs are designed to read these signals — sometimes through EEGs, which generally entail placing electrodes along the scalp, and sometimes by implanting electrodes, often about 16 to 64 (some models have up to 256), in strips along the surface of the brain.

Neuralink plans to take this even further by inserting a chip connected to as many as 3,072 electrodes, distributed across thin, flexible threads, into people’s brains. However a BCI reads brain signals, it uses the information to make a computer or machine do something — for instance, type a sentence or make a drone alter its flight.

Still, while some BCIs work in limited settings, Sajda is dubious of others, because there are still too many questions about how the human brain works. “The hype” for the devices, he says, “is orders of magnitude beyond where the science is.”

Visual: Columbia University researcher Jennifer Cummings explains the findings from a recent paper out of Paul Sajda’s lab evaluating the performance of people using a BCI.
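Whatever the recording hardware, the loop described above is the same: acquire a window of signal, decode it into an intent, and turn that intent into a machine command. The hypothetical Python sketch below shows that loop in its simplest form, with a toy energy-threshold decoder standing in for the trained decoders real systems use; the class and function names are assumptions for illustration.

```python
# Hypothetical sketch of the generic BCI control loop: acquire, decode, act.
# The energy-threshold "decoder" is a toy stand-in for a trained classifier.
from enum import Enum

import numpy as np


class Intent(Enum):
    REST = 0
    SELECT = 1  # e.g., pick the highlighted key on an on-screen keyboard


def decode(window: np.ndarray) -> Intent:
    """Toy decoder: treat a high-energy signal window as a deliberate act."""
    energy = float(np.mean(window ** 2))
    return Intent.SELECT if energy > 1.0 else Intent.REST


def control_loop(signal_windows):
    """Translate each decoded intent into an action on the connected device."""
    for window in signal_windows:
        if decode(window) is Intent.SELECT:
            print("device: select highlighted key")
        # Intent.REST: do nothing and wait for the next window


# Demo: three synthetic windows; only the high-amplitude one triggers a command.
rng = np.random.default_rng(1)
control_loop(rng.standard_normal(256) * scale for scale in (0.5, 2.0, 0.4))
```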

Scientists have been working on BCIs for medical purposes since the 1970s. But starting in 2007, the technology moved into the consumer sector with increasing fanfare. With advances in BCI technology and the advent of smartphones, consumer versions of BCIs and other technologies that interact with minds in different ways are becoming more common. For instance, NeuroSky, a San Jose, California-based company that sells consumer wearables and mobile device apps, is marketing brain sensors that let people feed relevant data into smartphone apps.

Thync, another California-based wearables company, suggested its BCI app could change users’ moods, encourage sleep, and even treat the skin condition psoriasis. And Emotiv, a bioinformatics company in San Francisco, sells a $700 portable EEG headset that allegedly monitors a user’s attention and can collect brain data for experiments.

Many such products “often fall into a regulatory gap,” says Winickoff. In the U.S., the Food and Drug Administration regulates medical devices — including some BCIs, which the agency approves on a case-by-case basis.

But many BCIs aren’t technically classified as medical devices and instead are marketed directly to consumers. In 2015, for instance, the FDA declined to regulate Thync’s products as medical devices. “From the consumer side, it’s the wild west,” Sajda says. “There is no direct regulation.”

When reached for comment, the FDA pointed Undark to a February 2019 statement, which updated guidelines for developers on what human and nonhuman tests could be used to develop implantable BCI devices for patients with paralysis or amputation. But the statement doesn’t provide clarity for consumer BCIs.

As consumer BCIs grow in number and scope, Sajda worries about the regulatory gaps. One example he finds particularly troubling is a treatment that stimulates specific parts of the brain to improve specific functions, including memory, language, and attention. This type of BCI uses a technique called transcranial direct current stimulation, which could help boost physical movement and reaction time. But the scientific literature on the benefits of transcranial direct current stimulation is mixed. Despite this, groups as diverse as the U.S. Navy’s SEAL Team Six and the NBA’s Golden State Warriors are testing transcranial direct current stimulation to improve performance, and home enthusiasts are even making DIY versions to try to improve memory and reaction times, or even treat depression, sometimes burning their scalps in the process.

There are even murkier claims.

Musk, for instance, suggests BCIs may be combined with AI, although Neuralink has not released specifics on how this would work in practice. Marcello Ienca, a senior research fellow at the Health Ethics & Policy Lab of ETH Zurich in Switzerland, warned that although clinical trials must be approved by ethics review committees, which assess the risks and benefits before participants can enroll, hyperbolic company claims about the benefits of invasive procedures could still persuade people to enroll with unrealistic expectations.

AS SOME SCIENTISTS and companies work to develop BCIs, ethicists and others are trying to figure out legal frameworks to help prevent the exploitation of brain data. Monitoring brain activity with BCIs produces a lot of information — a prime target for everything from advertising to political campaigns.

In Sajda’s experiment at Columbia, for instance, his team is trying to learn where subjects focus their attention in the environment, which could have applications in advertising. Meanwhile, other BCI research could be used to optimize internet ads or TV spots.

To help address these concerns, Ienca proposes a legal approach that he calls a “jurisprudence of the mind.” The framework applies human rights to ensure that the data BCIs collect, and the ways the devices might fundamentally alter humans, are understood and protected.

Visual: A subject using a brain-computer interface. As they fly a simulated aircraft in virtual reality, a machine monitors their physiology and gives feedback to improve performance. Laboratory for Intelligent Imaging and Neural Computing

In a 2017 paper published in Life Sciences, Society and Policy, Ienca identified “four new rights that may become of great relevance in the coming decades: the right to cognitive liberty, the right to mental privacy, the right to mental integrity, and the right to psychological continuity.” Codifying these rights, he says, would give BCI users an explicit basis for exerting legal control over their brain data, as well as protection against misuse.

Already, technology companies are able to access droves of sensitive data thanks, in part, to opaque terms of service, which users have to accept to use a product. Brain data could heighten these issues. “Most consumer BCI applications simply rely on the acceptance of the terms of service by individual users,” says Ienca. Such terms, he adds, are “rarely even read and understood.”

The need for legal clarity will increase as BCIs improve. The technology may eventually change how a person’s brain functions — or even, as some research suggests, how users perceive their own identity. For instance, a small 2019 study by Frederic Gilbert, a senior lecturer in ethics at the University of Tasmania, looked at how six people with epilepsy reacted to BCIs that were surgically implanted to help warn them when a seizure was about to start. Some users found that the BCI made them feel empowered, but others said it reinforced and emphasized the fact that they had a disability.

One reported an adverse reaction to the implant, noting that it “made me feel like I was just sick all the time… It also made me feel like I was different to everyone else.”

That last subject experienced what Gilbert described as a new identity. “She became a new person after implantation,” he says. “So in terms of human rights, in terms of the legality of becoming a new person, the question is: What are the moral obligations that a company or regulator have to protect — to maintain this new personal life of a subject or maintain this new person?”

So far, the answer is unclear. In general, technology outpaces U.S. regulators and legislators. There are signs that the FDA is considering BCI regulation — the agency’s most recent update was a 2019 white paper on medical devices that recognized a need for greater regulation of software such as AI. But so far, it’s not clear how, or whether, the FDA or federal representatives are looking at the issues raised by Gilbert and others.

Regulation and legislation are also slow-moving beyond the U.S., although some neuroethicists are working to fill the legal gaps.

For instance, Rafael Yuste, a neurobiologist at Columbia, helped the Chilean government back an agenda to protect brain data as a human right, called NeuroProtection, one of the first examples of such legislation in the world.

Approaches to other technologies could help guide additional policies. Ienca points to times when the United Nations has adapted to developments in science and technology, as in 1998, when the organization adopted the Universal Declaration on the Human Genome and Human Rights to stop genetic information from being misused.

In 2003, those stipulations were expanded through the International Declaration on Human Genetic Data, which laid out more specific guidelines on the collection and manipulation of human biological samples and genetic information.

Ienca and others suggest BCIs could follow a similar approach. Yuste, for instance, wants to address the technology by amending the Universal Declaration of Human Rights: “We’re pushing for new human rights that we call neurorights.”
