How AI-Powered Mental Health Apps Are Handling Personal Information

[Illustration: a person conversing with an AI-powered therapy chatbot on their phone. Source: inspiring.team/Shutterstock.com]
Aug. 13, 2024

In recent years, the COVID-19 pandemic has fueled a surge in demand for mental health services, putting immense pressure on the existing system. With demand for treatment far outpacing the supply of psychologists, people in need have turned to less traditional options for care.

Among these alternatives are mental health applications that use chatbots powered by artificial intelligence (AI). Initially limited to mood tracking and basic symptom-management advice, these apps now leverage advanced AI to simulate patient-therapist interactions and bridge gaps in underfunded settings, such as low-income areas and schools. The chatbots in these apps are trained to understand behavior and respond to individual users, who can discuss sensitive issues with them, including suicidal thoughts and self-harm.

However, there are concerns about how the data that users share with these chatbots is handled. Some apps share information with third parties, such as health insurance companies, a practice that can affect coverage decisions for people who use these chatbot services. They are able to do this because regulations under the Health Insurance Portability and Accountability Act (HIPAA) don't fully apply to third-party mental health apps. Unlike more traditional healthcare providers, these apps can operate with varying levels of transparency and protection for sensitive patient data.

So, what exactly happens to the user data collected by mental health apps? A deep dive into apps like Elomia, Wysa, Mindspa, and Nuna provides a troubling answer: It depends on the app. While many mental health applications perform similar essential functions, their approaches to data collection and secure storage can differ significantly. Because these differences can affect patient data security, it's crucial that companies incorporating AI into mental health services are well-versed in existing privacy policies and adhere to best practices for safeguarding user data.

How Mental Health Apps Collect, Use, and Store Our Data

Apps collect two types of important user details: personal information and sensitive information. Personal information, such as one’s birthday, is “used to distinguish or trace an individual’s identity.” Sensitive information, on the other hand, includes data—such as a formal diagnosis—that, if mishandled, could compromise an individual’s privacy rights.

The Elomia app, for instance, makes no contextual distinctions in its privacy policy, failing to differentiate between ordinary and crisis-related sensitivities. In contrast, Wysa clearly delineates its protection measures, separating personal from sensitive data and applying particular care to health-related information.

Applications collect personal and sensitive information through account creation and everyday use. Some apps, such as Mindspa, prevent users from deleting certain information, like name, gender, or age, unless they deactivate their account. The app notes "name" and "email address" as required data fields for account access. It may also request physical and mental health data, which users can refuse to provide, though doing so limits the app's functionality.

AI-powered mental health apps use collected data in various ways, and some are more upfront about that use than others. Elomia features an AI chatbot reportedly trained on therapist consultation data, but it is not transparent about how this information is used in practice. And while its privacy policy guarantees non-disclosure of personal information to third parties, Elomia's Apple App Store listing notes that individuals' data may be used for advertising purposes.

Limbic, on the other hand, primarily relies on human therapists, but it encourages them to save time by using its AI-based referral assistant to collect demographic information and assess current risks and harms to users before a conversation begins. The company is also expanding to incorporate an AI-based therapy assistant, Limbic Access, and encourages therapists to turn to this chatbot first. The key difference is that while the current Limbic treats AI as a secondary measure, Limbic Access prioritizes an AI-first approach.

Data retention policies also differ significantly among platforms, with some apps retaining data for as little as 15 days and others for as long as 10 years. Some, including Wysa, establish clear retention periods, whereas others, like Nuna, retain data without specifying timelines for deletion. Mindspa allows users to request data deletion but does not explicitly guarantee that those requests will be fulfilled.

Pushing Forward for Stronger Privacy Policies

Companies looking to improve their standards and prioritize consumer privacy can take specific actions. It's crucial for platforms to clearly differentiate between personal and sensitive information and to be transparent about how each is handled. In the United States, companies can adopt stricter measures that prioritize user well-being, creating a standard that protects all data types to the same high degree. Improving privacy practices means defining sensitive health information transparently and maximizing data protection measures.

Mozilla's evaluations of mental health applications, including those centered on AI, highlight companies already on the right track. Similar audits can encourage mental health apps to further fortify their privacy policies, safeguarding consumer interests. It is also essential for industry stakeholders to come together and establish standardized data transparency akin to the nutrition-label approach OTI proposed in 2009, detailing what data health platforms collect and how it's used.

If you're considering an AI-powered mental health app but have concerns about data privacy, consult user reviews and assessments from privacy watchdogs like Mozilla. Platforms like the Google Play Store present privacy information clearly, detailing each app's data collection and usage policies in a format consumers can easily digest. As trivial and time-consuming as this research may seem, it's crucial for making well-informed decisions about where to entrust your data.
