The Ethical Challenges of Connecting Our Brains to Computers

Controlling animal movements with your thoughts alone. Monitoring a pupil’s attention in class with a headset that scans brain activity. And, of course, the much more familiar cochlear implants that help deaf people hear, or deep-brain stimulators that help people with Parkinson’s disease regain functional mobility.

This is neurotech: new, potentially revolutionary technology that promises to transform our lives. Given the scale of today’s global challenges, we need technology like this to help the world cope.

Neurotech is our, frankly, mind-blowing attempt to connect human brains to machines, computers and mobile phones. Although brain-computer interfaces (BCIs) are at the heart of neurotech, the field is more broadly defined as technology able to collect, interpret, infer or modify information generated by any part of the nervous system. Why? To develop therapies for mental illnesses and neurological diseases. Beyond health care, it could soon be used in education, gaming, entertainment, transportation and so much more.

But there are pitfalls: no widely accepted regulations or guardrails yet govern neurotech’s development or deployment. We need them, and we need them badly. We must establish principles and policies around neurotech, technology safeguards, and national and international regulations.

WHAT IS NEUROTECH, ANYWAY?

There are different types of neurotech; some are invasive, some aren’t. Invasive brain-computer interfaces involve placing microelectrodes or other kinds of neurotech materials directly onto the brain, or even embedding them into the neural tissue. The idea is to directly sense or modulate neural activity.

Such technology has already improved the quality of life and abilities of people with illnesses or impairments ranging from epilepsy to Parkinson’s disease to chronic pain. One day, we might implant such neurotech devices into paralyzed humans, allowing them to easily control phones, computers and prosthetic limbs with their thoughts alone. In 2017, Rodrigo Hübner Mendes, a paraplegic, used neurotech to drive a racecar with his mind. Recently, an invasive neurotech device accurately decoded imagined handwriting movements in real time, at a speed that matched typical typing. Researchers have also shown how invasive neurotech allows users with missing or damaged limbs to feel touch, heat and cold through their prostheses.

There is also noninvasive neurotech that can be used for similar applications. For example, researchers have developed wearables to infer a person’s intended speech or movement. Such technology could eventually enable a patient with language or movement difficulties, such as someone with locked-in syndrome, to communicate more easily and effectively.
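
To make the decoding idea concrete, here is a minimal sketch of how intended-movement inference is often framed: as a classification problem over windows of recorded brain signals. This is an illustrative toy in Python, not any research team’s actual pipeline; the data, shapes and labels below are invented stand-ins.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Hypothetical stand-in data: 200 windows of an 8-channel recording,
    # 250 samples each, with a binary "intended left vs. right" label.
    rng = np.random.default_rng(seed=0)
    X = rng.standard_normal((200, 8, 250))  # real systems would use actual recordings
    y = rng.integers(0, 2, size=200)        # real labels would come from cued tasks

    # Reduce each window to per-channel log signal power, a common first feature.
    features = np.log((X ** 2).mean(axis=2))

    # Train on the first 150 windows, test on the held-out 50.
    clf = LinearDiscriminantAnalysis().fit(features[:150], y[:150])
    print("held-out accuracy:", clf.score(features[150:], y[150:]))

With random stand-in data the accuracy hovers around chance; the point is only the shape of the pipeline: record windows of neural signal, extract features, train a classifier on cued examples, then predict intent from new windows.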

Noninvasive neurotech is also used for pain management. Together with Boston Scientific, IBM researchers are applying machine learning, the internet of things, and neurotech to improve chronic pain therapy.

All of this is already quite impressive, but there is also neurotech that really pushes the envelope. Not only can it sense or read neurodata, it can also modulate it, both invasively and noninvasively. This research is still in its early stages, but it’s advancing rapidly.

One astounding example is the work of Rafael Yuste, a neurobiologist at Columbia University. His team recorded the activity of neurons in a mouse as it performed an action, such as licking for a reward. Later the researchers reactivated those same neurons and got the mouse to perform the same action, even though the rodent did not intend to do it at that moment. Other neuroscientists have used similar technologies to transfer learned tasks between two rodents, brain to brain, and to implant false memories into an animal’s mind. It’s remarkable.

RISKS, ETHICS AND REGULATION

Still, neurotech is at the very dawn of its technological journey. As it becomes more commonplace, we must consider the risks it might present, the ethics around it and the regulation it requires. We have to anticipate and deal with the implications of developing, deploying and using this technology. Any neurotech application should account for its potential consequences for a person’s autonomy, privacy, responsibility, consent, integrity and dignity.

What if someone were to face employment discrimination because the algorithms powering a neurotech application used for hiring misinterpreted his or her neurodata? What if a criminal got hold of the past or current neurodata of the secretary of defense and stole top-secret information? Ethical concerns only grow when we are not just monitoring someone’s neurodata but also interpreting it, decoding the person’s thoughts, with all the implications that has for accuracy and mental privacy.

One tricky aspect is that most of the neurodata the nervous system generates is unconscious. That makes it very possible to unknowingly or unintentionally give a neurotech device information one otherwise wouldn’t share. So, in some applications of neurotech, the presumption of privacy within one’s own mind may simply no longer be a certainty.

As a new, emerging technology, neurotech challenges corporations, researchers and individuals to reaffirm our commitment to responsible innovation. It is essential to put guardrails in place, at the company, national and international levels, so that the technology leads to beneficial long-term outcomes. We need to ensure that the researchers and manufacturers of neurotech, as well as policymakers and consumers, approach it responsibly and ethically.

Let’s act now to avoid any future risks as neurotech matures—for the benefit of humanity.
