Will Brain-computer Interfaces Make Knowledge Streamable?

With a "language chip" implanted in our brains, will we be able to "stream" language, or just download it directly into our brains, on demand, 24/7? LeoSad/Shutterstock

For years, researchers have been working to develop technology that would enable the human brain to connect to a computer and transmit electrical impulses, often via a brain implant, that can be translated into language. Brain-computer interfaces, or BCIs, offer the promise of improving life for people with injuries or neurological disorders that prevent them from speaking or typing, as this November 2022 article in health and medicine news publication Stat describes. Several companies, including Elon Musk's six-year-old startup Neuralink, have been working to develop such devices, according to the Washington Post.

But once communication via brain implants becomes a practical reality, it raises the possibility of giving implants not just to people with disabilities, but also to fully abled people, enabling them to communicate with computers and enhance their performance.


The History of Brain-computer Interfaces

As this 2022 U.S. government report details, some BCIs are built into wearable devices, but others are surgically implanted directly into brain tissue. Subjects who receive BCIs often undergo a training process, in which they learn to produce signals that the BCI will recognize. The BCI, in turn, uses machine learning, a form of artificial intelligence, to translate the signals.
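To make that training-and-translation loop concrete, here is a minimal sketch of the idea in Python. The data, feature dimensions and command labels are all invented for illustration; real BCI decoders work on far richer signals and far more sophisticated models.

```python
# Hypothetical sketch: a subject produces labeled brain-signal recordings,
# and a classifier learns to map signal features to intended commands.
# All data here is synthetic; shapes and labels are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend data: 200 training trials, each a 64-channel feature vector
# (e.g., band power per electrode), labeled with 1 of 4 intended commands.
X = rng.normal(size=(200, 64))
y = rng.integers(0, 4, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Training": the decoder learns each subject's unique signal patterns.
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Translation": new signals are mapped back to intended commands.
print("held-out accuracy:", decoder.score(X_test, y_test))
```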

BCIs have been around for decades, though they remain largely experimental. Researchers first tested a wearable BCI in the early 1970s, and surgically implanted the first device in a human in the late 1990s. Since then, fewer than 40 people around the world have received neural implants, according to the report.


"One of the main obstacles to BCI development is that each person generates unique brain signals," the government report notes. "Another is the difficulty of measuring those signals."

In an October 2022 article for engineering publication IEEE Spectrum, Dr. Edward Chang, chair of neurological surgery at the University of California, San Francisco, describes an experiment that enabled a patient who had not spoken in 15 years to communicate simple messages containing entire words. First, a thin, flexible array of electrodes was draped over the surface of the patient's brain, but didn't actually penetrate it. The array consisted of several hundred electrodes, each of which could record signals from thousands of neurons. The array sent those signals to a device that decoded them and translated the signals into the words that the patient wanted to say.
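In rough outline, that pipeline (electrode signals in, words out) looks something like the sketch below. The vocabulary, array sizes and stand-in linear decoder are assumptions for illustration; the real system relies on trained neural networks.

```python
# Toy sketch of the decoding pipeline Chang describes: a window of
# electrode recordings goes in, a probability over candidate words comes
# out. The weights are random stand-ins for a trained decoder.
import numpy as np

VOCAB = ["water", "yes", "no", "hello", "hungry"]  # illustrative words
N_ELECTRODES = 128  # the real array has "several hundred" electrodes
WINDOW = 50         # time samples per decoding window

rng = np.random.default_rng(1)
W = rng.normal(size=(N_ELECTRODES * WINDOW, len(VOCAB)))

def decode_window(signals: np.ndarray) -> str:
    """Map one window of electrode signals to the most probable word."""
    logits = signals.reshape(-1) @ W
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()  # softmax over the candidate vocabulary
    return VOCAB[int(np.argmax(probs))]

window = rng.normal(size=(N_ELECTRODES, WINDOW))  # simulated recording
print(decode_window(window))
```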

To capture impulses related to speech, the researchers are focusing on parts of the brain's motor cortex that tell the muscles of the face, throat, mouth and tongue how to move to make sounds, according to the IEEE Spectrum article. In studies with volunteers, specific sounds and words were recorded and the neural patterns were matched with the movements of the volunteers' tongues and mouths. Advances in AI have helped to identify the neural activity connected to speech.
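One way to picture that matching step is as a regression from neural activity to articulator movement. The sketch below uses entirely synthetic data and an ordinary ridge regression; it is an analogy for the approach, not the researchers' actual method.

```python
# Hedged sketch of the articulator-matching idea: regress neural features
# onto measured mouth/tongue positions so speech-related activity can be
# identified. Synthetic data throughout.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
neural = rng.normal(size=(500, 128))      # neural features per time step
true_map = rng.normal(size=(128, 6))      # hidden linear relationship (toy)
movement = neural @ true_map              # 6 articulator coordinates
movement += 0.1 * rng.normal(size=movement.shape)  # measurement noise

model = Ridge().fit(neural, movement)
print("fit R^2:", round(model.score(neural, movement), 3))
```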

While advances in neural implants hold great promise for helping people unable to speak, some worry that neurotechnology also brings possible perils.

In a December 2022 article for The Conversation, Nancy S. Jecker, a professor of Bioethics and Humanities at the University of Washington School of Medicine, and UW associate professor of neurological surgery Dr. Andrew Ko described a future scenario in which soldiers have tiny computing devices injected into their bloodstreams and guided to their brains. Implants could enable soldiers to control weapons systems that are thousands of miles away just by thinking, they wrote. But such technology also could, in theory, communicate messages back into the soldiers' brains, enabling the military to suppress their fear and anxiety, or manipulate their behavior by anticipating what they might do in certain situations.


Ethical Considerations and a Neurological Bill of Rights

We spoke to Jecker, who says she's also concerned about how BCIs might be used to steal information from people's brains, or to suppress people's emotions and control their behavior.

"I think it's really imperative to think now in advance about the ethical implications of neurotechnology," she says.


Jecker advocates establishing the equivalent of a neurological bill of rights, which would guarantee people "cognitive liberty," including a right to mental privacy and a ban on unreasonable interference with their mental state. Protecting the right to have "a coherent sense of our identity and who we are" is another must, she argues.

A World in Which Language Isn't Learned, but Streamed

Another expert already is envisioning a world in which people still use their mouths to speak but are assisted — or controlled — by technology.

Vyv Evans, a former linguistics professor at Bangor University and other institutions in the U.K., is an expert on the evolution of digital communication and a columnist for Psychology Today. In an upcoming science fiction novel, "The Babel Apocalypse," Evans depicts a future in which most people no longer learn language, but instead use neural implants to stream their vocabulary and grammar from the cloud. That is, until a massive cyberattack causes a catastrophic global language outage.


"Think of it this way," Evans says via email. "Today, we stream anything from movies, to books, to music, to our 'smart' devices, and consume that content. Smart devices use streaming signals — data encoded in IP data packets — encoded and distributed via wi-fi internet. Language streaming would work, in principle, in the same way. With a 'language chip' implanted in our brains, we will be able to 'stream' language from internet-in-space on demand, 24/7, direct to our heads. And based on an individual's level of subscription to a language streaming provider, they would be able to stream any language they chose, with any level of lexical complexity."

In Evans' fictional future, being able to stream language has rendered the study of different languages obsolete. "Rather than having to learn a new language, the individual would just draw upon the words and grammar they need, to function in the language, by syncing to a language database, stored on a server in space," he explains. "And call it up, over the internet, in real time, as they think and talk." As a result, "adding a new language to one's subscription would allow a resident of the U.S. or the U.K. to instantly understand and produce say, Japanese, and work in Tokyo." Similarly, the author imagines lawyers, rocket scientists and brain surgeons subscribing to cloud databases and downloading the specialized terms needed in their professions.
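For flavor, here is a toy rendering of the novel's subscription model in Python. Every name, word and tier here is invented to illustrate the premise; it describes no real product or service.

```python
# Playful sketch of Evans' fictional premise: a client "streams" words
# from a remote lexicon, gated by subscription tier. All invented.
from dataclasses import dataclass

LEXICON = {
    "basic":   {"water": "a clear drinkable liquid", "walk": "move on foot"},
    "premium": {"jurisprudence": "the theory and philosophy of law"},
}

@dataclass
class Subscription:
    tier: str  # "basic" or "premium"

def stream_word(word: str, sub: Subscription) -> str:
    """Look the word up in every tier this subscriber can access."""
    tiers = ["basic"] + (["premium"] if sub.tier == "premium" else [])
    for t in tiers:
        if word in LEXICON[t]:
            return LEXICON[t][word]
    raise PermissionError(f"'{word}' requires a higher subscription tier")

print(stream_word("water", Subscription("basic")))
try:
    stream_word("jurisprudence", Subscription("basic"))
except PermissionError as err:
    print(err)  # specialized terms sit behind a pricier tier
```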

In Evans' novel, to make this all work, people have an assortment of devices implanted in their bodies, including a wi-fi receiver in their ear that would connect to a global network of satellites, and in turn also communicate with another chip implanted in their brains.

Such technology could also pick up and relay nonverbal communication, such as pictures or sounds, or physical gestures.

How long will it be until we all have chips implanted in our brains for learning and language, and what are the ethical implications of that?
agsandrew/Shutterstock


The Downsides Could Be Significant

If you're addicted to electronic gadgetry, this all might sound pretty cool. But there would be some significant downsides. For example, in Evans' speculative future, the number of languages used worldwide would shrink, as the tech companies that owned language servers began to drop tongues that weren't used as much as, say, English or Chinese. Poorer people might be forced to become monolingual.

Additionally, "regional accents and dialects, being non-standard, would require more expensive streaming subscriptions — this would mean that regional accents would become status symbols," Evans says. "The working classes would be, in effect, priced out of their own local language varieties. The range and variety of human language would be erased at a stroke. This has implications for identity, ethnicity, etc."


Streaming language of the sort envisioned by Evans might also pose a threat to freedom of speech, since big tech companies and governments literally could control what words you use and your ability to express ideas.

"Individuals become constrained by decisions made by big tech and governments, in terms of words and lexical choice," Evans explains. "As one example, imagine a particular state that outlaws abortion under all circumstances. Such a government might then proscribe the word "abortion" itself. Hence, say in the U.S., someone might stream English and not be able to describe the concept, using the word, which in effect outlaws the concept itself."

"There would then be the Kafkaesque situation whereby in another English-speaking territory, where abortion remains legal, language streaming providers censor the word in one state, but not in another," he continues. "This leads to a situation where autocratic regimes can abuse the technology for their own ends, controlling thought itself, by limiting freedom of expression in language."

Hopefully, that's a scenario that won't come to pass. If civil libertarians succeed in enacting sensible restraints on neurotechnology, they can prevent abuses while still enabling the technology to be used in ways that benefit people.
