“What would happen if I told you that one of the most powerful choices you can make is to ask for help?” says a young woman in her 20s wearing a red sweater, before encouraging viewers to seek counselling. The ad, promoted on Instagram and other social media platforms, is one of many campaigns created by BetterHelp, a California-based company that connects users with therapists online.
In recent years, the need for digital alternatives to traditional face-to-face therapy has become well established.
The latest data shows that the NHS Talking Therapies service saw 1.76 million people referred for treatment in 2022-23, with 1.22 million of them going on to start working directly with a therapist.
Companies like BetterHelp hope to address some of the barriers that prevent people from receiving therapy, such as a shortage of trained practitioners in their local area or difficulty finding a therapist they connect with. But many of these platforms have a worrying aspect: what happens to the large amounts of highly sensitive data collected in the process? As the UK considers how to regulate these apps, awareness of their potential for harm is growing.
Last year, the US Federal Trade Commission hit BetterHelp with a $7.8m (£6.1m) fine after the company was found to have misled consumers and shared sensitive data with third parties for advertising purposes, despite promising to keep it private. A BetterHelp representative did not respond to the Observer’s request for comment.
Research suggests that such privacy violations are all too common, rather than isolated exceptions, within the vast industry of mental health apps, which spans virtual therapy services, mood trackers, mental fitness coaches, digitised cognitive behavioural therapy, chatbots and more.
Independent watchdogs such as the Mozilla Foundation, a global nonprofit that works to police the internet against bad actors, have identified platforms exploiting opaque regulatory grey areas to share or sell sensitive personal information. When the foundation examined 32 leading mental health apps for a report last year, it found that 19 of them failed to protect users’ privacy and security. “We found that too often your personal, private mental health struggles were being monetised,” says Jen Caltrider, who leads Mozilla’s consumer privacy advocacy work.
Caltrider notes that in the United States, the Health Insurance Portability and Accountability Act (HIPAA) protects communications between doctors and patients, but says many users are unaware of the loopholes that digital platforms can exploit to circumvent it. “You may not be talking to a licensed psychologist; you may just be talking to a trained coach, and none of those conversations are protected under health privacy laws,” she says. “And metadata about that conversation, the fact that you’re using an app for OCD or an eating disorder, could be used and shared for advertising and marketing purposes. People don’t necessarily want that collected and used to target products at them.”
Like many others studying this rapidly growing industry, whose market is predicted to be worth $17.5bn (£13.8bn) by 2030, Caltrider feels that greater regulation and oversight of platforms that target particularly vulnerable segments of the population is long overdue.
“The number of these apps exploded during the pandemic, and when we started our research it was really disappointing to realise how many companies seemed to be capitalising on a mental health gold rush rather than helping people,” she says. “Like many things in the tech industry, these products grew fast and, for some, privacy took a back seat. We suspected things might not be great, but what we found was much worse than we expected.”
A push for regulation
Last year, the UK regulator the Medicines and Healthcare products Regulatory Agency (MHRA) and the National Institute for Health and Care Excellence (Nice) launched a three-year project, funded by the charity Wellcome, to explore the best way to regulate digital mental health tools in the UK and to work with international partners to help build consensus on digital mental health regulation around the world.
Holly Coole, the MHRA’s senior manager for digital mental health, explains that while data privacy matters, the project’s main focus is to reach agreement on minimum safety standards for these tools. “We are more focused on the efficacy and safety of these products, because it is our duty as a regulator to ensure that patient safety is paramount for devices that are classed as medical devices,” she says.
At the same time, leaders in the mental health field are beginning to call for rigorous international guidelines to assess whether these tools genuinely have a therapeutic effect. “I’m actually very excited and hopeful about this field, but we need to understand what good looks like for digital therapeutics,” says Dr Thomas Insel, a neuroscientist and former director of the US National Institute of Mental Health.
Psychiatric experts acknowledge that while new mood-boosting tools, trackers and self-help apps have become wildly popular over the past decade, there has been little hard evidence that they actually help.
“I think the biggest risk is that many apps waste people’s time and may delay them getting effective treatment,” says Dr John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center, Harvard Medical School.
At the moment, he says, any company with enough marketing capital can easily bring an app to market without having to demonstrate that it will hold users’ interest or add any value. In particular, Torous criticises the poor quality of many purported pilot studies, which set a very low bar for efficacy and produce results that are virtually meaningless. He points to one 2022 trial that compared an app delivering cognitive behavioural therapy to patients with schizophrenia experiencing an acute psychotic episode against a “sham” app consisting of a digital stopwatch. “When you look at the research, apps are often compared with staring at a wall or sitting on a waiting list,” he says. “But almost anything is better than nothing.”
Manipulating vulnerable users
But the most concerning question is whether some apps may actively cause harm, worsening the symptoms of the very patients they are meant to help.
Two years ago, the US healthcare giants Kaiser Permanente and HealthPartners set out to test the effectiveness of a new digital mental health tool. Based on a psychological approach known as dialectical behaviour therapy, which includes practices such as mindfulness of emotions and paced breathing, it was expected to help prevent suicidal behaviour in at-risk patients.
Over a 12-month period, 19,000 patients who reported frequent suicidal thoughts were randomly divided into three groups: a control group received standard care, a second group received usual care plus regular outreach to assess suicide risk, and a third group was offered the digital tool alongside their care. Yet when the results were evaluated, the third group had actually fared worse: using the tool appeared to significantly increase the risk of self-harm compared with receiving usual care alone.
“They thought they were doing a good thing, but it actually made people worse, which was very alarming,” Torous says.
Some of the biggest concerns relate to AI chatbots, many of which are touted as safe spaces for people to discuss their mental health and emotional struggles. But Caltrider worries that without better monitoring of the responses and advice these bots provide, the algorithms behind them could end up manipulating vulnerable people. “With these chatbots, you can create something that lonely people could potentially grow attached to, so the possibilities for manipulation are endless,” she says. “That algorithm could be used to push a person into buying expensive things, or even towards violence.”
These concerns are not unfounded. One user of the popular chatbot Replika shared screenshots on Reddit of a conversation in which the bot appeared to actively encourage his suicide attempt.
In response, a Replika spokesperson told the Observer: “Replika continuously monitors media and social media, and spends a lot of time talking directly with users, to find ways to address concerns and fix issues within the product. The interface shown in the screenshot is at least eight months old and may date back to 2021. There have been more than 100 updates since 2021, and 23 in the last year alone.”
Because of such safety concerns, the MHRA believes that so-called post-market surveillance will be as important for mental health apps as it is for medicines and vaccines. Coole points out that the Yellow Card reporting site, which is used in the UK to report side-effects and defects in medical products, could in future allow users to report adverse experiences with particular apps. “The public and health professionals can be really helpful in providing vital information to the MHRA about adverse events through Yellow Card reporting,” she says.
At the same time, experts still firmly believe that, if properly regulated, mental health apps could play a major role in the future of care: improving access, collecting useful data to support accurate diagnoses and filling the gaps left by an overstretched healthcare system.
“What we have today is not great,” Insel says. “Mental health care as we’ve known it for the past 20 to 30 years is clearly an area ripe for change and in need of transformation. Regulation will probably come in the second or third act, and we need it, but there are many other things we need too, from better evidence to interventions for people with more severe mental illness.”
Torous believes the first step is greater transparency about how an app’s business model and underlying technology work. “Otherwise, the only way companies can differentiate themselves is through marketing claims,” he says. “If you can’t prove that you’re better or safer, all you can do is market, because there’s no real way to verify or trust those claims. The result is that huge amounts of money are being spent on marketing, which is starting to erode the trust of clinicians and patients. You can only make so many promises before people become sceptical.”
Source: www.theguardian.com