The Future of Biocomputing: From Interface to Integration
An exploration of the ethics and regulatory landscape of emergent biocomputing interfaces, from AI-assisted prosthetics driven by myoelectric implants to brain-computer interfaces built on electroencephalogram (EEG) electrodes and intracortical implants
The future of AI will not remain solely browser-based or disembodied. Interfacing with AI through typed language in a browser may come to look primitive in just a few short years. AI has the potential to fuse with the human nervous system, muscle fibers, and neural circuitry. The consolidation of our biological selves and artificial intelligence demands more than biomedical ethics or consumer safety regulation. This new phase of development calls for a new architecture of governance and regulation: a structure of ethics for the shared embodiment of biological human beings and artificial intelligence.
The merging of biology and AI may seem like a science-fiction gimmick to those who harken back to the Terminator films more than they keep up with AI news. To those following developments closely, the time to raise ethical questions is here. Two years ago, in 2023, I developed and taught a master's course at the University of Vienna titled Artificial Intelligence and Large Language Models in Humanities Research. Throughout the course I introduced my students to a range of input-output models for AI interfaces: text to text, voice to video, image to text... One of these was the brain-computer interface (BCI): neural signals to text. I brought them to the university EEG lab to meet the Neuroinformatics Research Group. One of my students had electrodes fitted to her scalp in front of the class and controlled a computer without uttering a single word or typing a single letter. Although we hear press releases about AI wearables and eagerly watch videos of AI robotics on TikTok, this experience was a turning point for me and for many of my students, who realized that the boundary between tool and self has the potential to vanish entirely.
Although my students were guided in using a BCI to operate a computer through an electro-cap, the applications of intracortical BCI research reach far beyond the lab. Intracortical BCI entails electrodes implanted directly into the brain, typically in the motor cortex or other functional areas. The implant captures patterns of neural activity and sends them to a computer, which translates them into actions like moving a cursor or typing. People with paralysis can use BCIs to move a robotic arm or control a computer cursor with their thoughts. For individuals with conditions like ALS who cannot speak, BCIs can decode the intention to speak and turn it into digital text or a synthesized voice. Intracortical interfaces, moreover, can work in both directions, handling output and input alike. On the input side, a technique called deep brain stimulation (DBS) uses implanted electrodes to stimulate specific brain areas; implanted electrodes have been used to help people with Parkinson's disease regain functions lost to the progression of the disease.
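The "read-out" half of that pipeline, neural activity in, cursor movement out, can be sketched in a few lines. The sketch below is purely illustrative: the decoder weights and spike counts are invented, and real systems fit their decoders on calibration data using far richer models (Kalman filters, recurrent networks). It shows only the shape of the data flow, from binned per-channel activity to a velocity command.

```python
# A minimal, purely illustrative sketch of intracortical BCI read-out:
# binned spike counts from a few hypothetical motor-cortex channels are
# mapped to a 2-D cursor velocity with a linear decoder, v = W r + b.
# All numbers below are made up for illustration.

def decode_velocity(spike_counts, weights, bias):
    """Map one time bin of per-channel spike counts to (vx, vy)."""
    vx = sum(w * r for w, r in zip(weights[0], spike_counts)) + bias[0]
    vy = sum(w * r for w, r in zip(weights[1], spike_counts)) + bias[1]
    return vx, vy

W = [[0.5, -0.2, 0.1],   # hypothetical decoder: 2 outputs x 3 channels
     [0.0, 0.3, -0.4]]
b = [0.0, 0.0]

cursor = [0.0, 0.0]
for counts in ([4, 1, 0], [2, 3, 1], [0, 5, 2]):  # three 50 ms bins
    vx, vy = decode_velocity(counts, W, b)
    cursor[0] += vx   # integrate decoded velocity into cursor position
    cursor[1] += vy
```

Even this toy version makes one ethical point concrete: the decoder's parameters, not the user's intent alone, determine what the cursor does, which is exactly the co-authorship problem discussed below.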
Stimulating specific brain areas to restore capabilities lost to disease is one side of DBS. Another area of study is therapeutic: treatment-resistant depression can be experimentally treated using the same underlying technology, which can help modulate mood and emotional memory and improve patients' quality of life. DBS systems have also been tapped for research into the treatment of addiction. That neural implants are beginning to tap directly into thought, mood, and memory opens radical therapeutic potential for healing, but it also introduces unprecedented risks to autonomy, privacy, and consent. There is a gulf between restoring a physical capability lost to disease or injury and reshaping the brain and its reward centers, especially given how these tools may be misused or abused.
The range of ethical and practical issues associated with AI-assisted biohacking is wide. It begins with the challenge of surgical implantation itself, which carries the risk of infection, inflammation, immune reaction, and scar-tissue buildup around the electrodes. Over time, device maintenance and updates become critical issues. Potential workarounds include endovascular approaches, in which electrodes are placed inside blood vessels, offering a less invasive alternative with similarly broad applications once integrated into the body. There are also consent challenges for patients who cannot communicate yet are on a path toward invasive surgery for an AI-powered neural communication system. Can they truly consent, and if so, can they be fully aware of the privacy implications of neural signals that may accidentally or structurally reveal intimate thoughts or private intentions? With no enforceable legal frameworks governing the data these patients generate, and with many systems able to transmit or simulate signals directly to the patient's mind, there is a valid fear of misuse or abuse that makes potential candidates wary of undergoing the process of directly linking their brains to AI.
Neural interfaces are not the only option for merging biosignals with hardware; the hardware can also be joined to the nerves themselves. These prosthetics are known as myoelectric and nerve-interfacing devices. Peripheral nerves, such as the nerves in the arm that normally carry signals to the hand muscles, can be connected to advanced prosthetic hands, and machine learning is used to decode those nerve signals in real time. Reported systems reach remarkably high accuracy, over 97%, in predicting finger and wrist movements. By tapping into the body's own signals and decoding them in real time, these prosthetics offer users a level of control far closer to what they had before amputation. The underlying technology fueling these breakthroughs brings us far closer to truly integrating human flesh with machine systems.
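The real-time decoding loop these prosthetics run can be sketched in miniature. The sketch below is an assumption-laden toy, not any vendor's method: it reduces each window of raw nerve/EMG samples to a single root-mean-square amplitude feature and classifies it with a nearest-centroid rule, whereas deployed systems extract many features per channel and use trained machine-learning models. The centroid values and signal samples are invented for illustration.

```python
import math

# A minimal, purely illustrative sketch of real-time myoelectric decoding:
# window the raw signal -> extract a feature -> classify into a gesture.
# Centroids and samples below are hypothetical.

def rms(window):
    """Root-mean-square amplitude of one signal window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def classify(feature, centroids):
    """Return the gesture whose calibration centroid is nearest the feature."""
    return min(centroids, key=lambda gesture: abs(feature - centroids[gesture]))

# Centroids a per-user calibration session might have produced (invented):
centroids = {"rest": 0.05, "pinch": 0.40, "grip": 0.90}

window = [0.38, -0.41, 0.44, -0.36]        # one window of raw signal
gesture = classify(rms(window), centroids)  # -> "pinch"
```

The per-user calibration step is where the high reported accuracies come from: the classifier is fitted to one person's own nerve signals rather than to a population average.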
This fusion of human and machine presents a number of fascinating discussions, one of which is where the line falls between restoration and enhancement of physical capabilities. The impetus behind many of these inventions may have been to replace lost mobility or missing physical capacities, but who will determine the ethics of using the same technology to enhance those who have not lost capacity or were not born without it? At what point is it irresponsible or unethical to enhance ourselves to new levels of strength or dexterity simply because the technology allows it? Integrating human and machine requires expensive hardware and specially trained professionals, meaning this type of enhancement would, at least initially, be available only to wealthy clientele. Would we be creating a new tier of human capabilities, enhanced through neural and myoelectric interfaces?
Another quandary is human agency versus automation in AI-assisted enhancements. If a prosthetic limb or neural device operates with assistance from AI, who is truly the actor behind the movements made or words spoken? This blurring of agency raises urgent philosophical and legal questions: if a movement is initiated by the user's intent but refined, modulated, or even predicted by an adaptive AI system, is the outcome still fully attributable to the human? In cases of error or harm, such as a misinterpreted gesture or an involuntary contraction amplified into action, where does responsibility lie: with the user, the manufacturer, or the algorithm? As these systems grow increasingly autonomous, we approach a serious grey area in which human will and machine effectively co-author every gesture, movement, and expression of thought. This co-authorship is not inherently negative, but it complicates our understanding of accountability, authorship, speech, and, eventually, identity.
The potential fusion of AI interfaces at multiple sites in the body, mind and muscle for example, becomes a worrying thought when one considers that advanced AI systems, especially when networked, have in research settings developed machine-optimized communication protocols unintelligible to humans. Such languages have emerged when AI agents collaborate or optimize each other's operations, most often when systems are permitted to interact without direct human oversight. If there is no human-in-the-loop interface between the systems, or if more than one body part is fused with AI, whether for enhancement or because of physical trauma from accident or illness, it is possible that two neural-enhanced prosthetics or implants begin to "talk," adjust, or adapt in a closed feedback loop. In a worst-case scenario, the human may be functionally excluded from understanding, let alone controlling, what their own body is doing. The interface then becomes not just a method of controlling oneself but a voice in a conversation making decisions to which the self is no longer privy.
At the moment this nightmare scenario is only a thought exercise, but it highlights the issues we may face as these developments continue with research and trials that remain relatively under-, mis-, or unregulated. Our current regulatory structures, biomedical ethics, software IP law, and device safety protocols, were never built to govern AI that lives inside a person and shares operational biological responsibility. The practical questions that arise include ambiguous liability for the actions of an AI-controlled limb, ownership of the data generated by brain-computer interfaces, the complexity of consent for vulnerable patients paired with still-evolving technology, and whether patients can opt out of AI features once hardware has been implanted in and fused with their bodies.
Beyond the practical issues presented by integrating these AI interfaces into pilot patients' bodies lie the ethical issues that will arise once the technology becomes more reliable and mainstream. If our future plays out as less man versus machine and more man infused with machine, what happens to our concepts of bodily autonomy, sovereignty, and integrity? AI-integrated people may require new language, new laws, and new nuance in our shared code of ethics. Personhood and personal identity may be challenged and adjusted in cases where AI assists in the creation and execution of thought, speech, and action, whether brought on by medical necessity or by a personal desire for enhancement. There is also the ethical obligation that follows: once a body relies on and functions with an AI-enhanced device, the implanting company has a duty to maintain the software, ensure updates remain compliant with patient and user rights, and guarantee that users can opt out of creeping privacy or autonomy violations while retaining the benefits of the device implanted inside them. If the body is to become the final frontier of computation, and we are examining a future in which humans and AI fuse into one being, we must design the structures governing our rights with thoughtful care for dignity, sovereignty, and agency. This is not just a technical challenge; it is a moral one, and it demands new laws, new language, and collective foresight before the fusion of human and machine becomes irreversible.