UNESCO has adopted a set of international standards for the ethical use of neurotechnology, a field often described as "a bit of the Wild West." The move is part of a growing global push to set boundaries in a rapidly evolving sector: technology that draws on data from the brain and nervous system.
“We cannot control it,” stated Daphna Feinholz, UNESCO’s chief bioethics officer. “It is essential to educate people about the risks, potential advantages, and available alternatives so they can choose whether to proceed or not.”
Feinholz noted that the new guidelines were prompted by two significant trends in neurotechnology. One is artificial intelligence (AI), which presents immense potential for interpreting brain data, and the other is the rise in consumer neurotechnology products, like earphones and glasses that claim to monitor brain activity and track eye movements.
The standards introduce a new data category termed “neural data,” proposing guidelines for its safeguarding. A comprehensive list of over 100 recommendations addresses rights-based issues and even scenarios that currently seem to belong to the realm of science fiction, such as companies potentially using neurotechnology to target subconscious marketing in dreams.
"While neurotechnology could herald a new era of human advancement, it carries inherent risks," remarked UNESCO Director-General Audrey Azoulay, adding that the new standards will "enshrine the inviolability of the human mind."
Billions of dollars have been invested in neurotechnology ventures, from Sam Altman's backing in August of Merge Labs, a rival to Elon Musk's Neuralink, to Meta's recent foray into the field: a wristband that lets users operate their smartphones and AI Ray-Bans by interpreting wrist muscle movements.
Such investments have fueled growing demand for regulation. A report released by the World Economic Forum last month called for a privacy-centered framework, and in September U.S. Sen. Chuck Schumer introduced the MIND Act, which follows similar legislation in four states that, beginning in 2024, moved to protect "neural data."
Advocates for neurotechnology regulation stress the critical importance of safeguarding personal information. UNESCO’s standards highlight the necessity of “mental privacy” and “freedom of thought.”
Nonetheless, some critics argue that legislative measures often stem from dystopian anxieties, potentially hindering meaningful medical progress.
“This bill is fueled by fear. People are concerned about the possibilities this technology brings. The notion of using neurotechnology to read minds is alarming,” commented Kristen Matthews, a mental privacy attorney at Cooley in the U.S.
Neurotechnology itself is hardly new: brain waves were first recorded via EEG in 1924, and brain-computer interfaces emerged in the 1970s. Yet the latest surge in investment is likely propelled by advances in AI that make it possible to interpret vast quantities of data, including brain waves.
“The integration of AI is what has sparked privacy concerns surrounding this technology,” Matthews explained.
Certain AI-driven neurotechnology innovations could significantly transform medicine, aiding in the treatment of conditions from Parkinson’s disease to amyotrophic lateral sclerosis (ALS).
A study published this summer in *Nature* describes an AI-enabled brain-computer interface capable of decoding attempted speech in paralyzed patients. Other research suggests such systems might one day "read" your thoughts, or at least reconstruct images of what you are looking at.
The excitement surrounding some of these developments often generates fear that may not align with the actual risks involved, Matthews argued. For example, the MIND Act claims that “corporate vertical integration” of AI and neurotechnology could foster “cognitive manipulation” and undermine “individual autonomy.”
“I’m not aware of any companies engaging in such actions. It’s unlikely to happen, certainly not within the next 20 years,” she stated.
The cutting edge of neurotechnology today lies in brain-computer interfaces. As consumer devices proliferate, Matthews acknowledged, they could raise the privacy issues at the heart of UNESCO's standards. But she contends that defining a broad category of "neural data" is too blunt an approach to addressing those concerns.
“This is the type of issue we wish to tackle: monetization, behavioral advertising, and the application of neural data. Yet the existing laws fail to grasp what we’re truly worried about. They’re too vague.”
Source: www.theguardian.com