
AI and the growing no-code movement are changing industries fast, offering efficiency and accessibility that wasn’t possible before. But their quick adoption in sensitive areas like mental healthcare brings cybersecurity risks that are often overlooked. For security professionals, understanding these new vulnerabilities matters- a lot. The convenience of rapid deployment too often overshadows the real security complexities in a sector that handles some of the most valuable data for malicious actors.
The appeal of quickly deploying mental health applications is obvious. However, this convenience frequently masks underlying security challenges. Healthcare data-especially psychological and behavioral information-is one of the most prized targets for cybercriminals. When compromised, it can trigger severe ethical, legal, and reputational damage. Unfortunately, many of these “easy-to-build” applications are developed or used by people who lack basic cybersecurity expertise, creating dangerous blind spots.
These risks aren’t hypothetical. Consider this:
Case in Point: FTC vs. Cerebral (2025)
In February 2025, the U.S. Federal Trade Commission (FTC) proposed a landmark order against Cerebral, a major online mental health service. The company unlawfully disclosed sensitive health information- including mental health conditions and treatment details-to third-party advertisers without patient consent. The order bans Cerebral from using or disclosing sensitive data for most advertising purposes and requires the company to pay over $7 million in penalties and refunds to affected consumers. This case shows the real-world consequences of inadequate privacy and compliance in mental health tech.
The no-code security paradox: building without a secure foundation
No-code platforms empower non-developers, democratizing application creation. But this democratization bypasses the rigorous Secure Development Lifecycle (SDL) and Threat Modeling processes-steps that are essential for any system handling sensitive data. When mental health professionals, or even general tech enthusiasts, build applications for emotional tracking, journaling, or AI-powered support, they create systems that collect, store, and transmit Protected Health Information (PHI). Such data falls under strict regulatory frameworks, including HIPAA in the U.S., GDPR in Europe, and numerous other national privacy laws.
Without specialized cybersecurity knowledge, crucial security elements get neglected:
• Data residency and jurisdiction: Where exactly is the PHI physically hosted? Are cloud providers actually compliant with local data sovereignty laws?
• Encryption at rest and in transit: Is all sensitive data properly encrypted with strong algorithms and proper key management, both when stored and when moving across networks?
• Logging and audit trails: Is comprehensive logging enabled for all data access and system events, allowing for forensic analysis should a breach occur?
• Backup and deletion policies: Are secure, immutable backup strategies in place, and do data deletion policies comply with “right to be forgotten” principles and data retention regulations?
• Access control and least privilege: Are granular access controls implemented to ensure only authorized personnel or systems can access specific data segments? Is the principle of least privilege actually enforced?
An application, even one built with good intentions, can quickly become a data exfiltration vector if not properly secured. Scenarios like leaked therapy session transcripts, exposed patient identities, or unauthorized third-party access to sensitive content aren’t theoretical anymore. They represent tangible, high-impact breach risks rooted in poor security practices.
Illustrative Breach: Yale New Haven Health System (April 2025)
A ransomware attack compromised sensitive data of 5.5 million individuals, including names, dates of birth, Social Security numbers, and medical record numbers. Hackers copied the data during the breach, though patient care wasn’t disrupted. Affected individuals received credit monitoring, and the incident underscores the severe impact of ransomware on healthcare organizations. This incident reminds us what can happen when security-by-design principles are ignored.
AI agents in sensitive contexts: unforeseen attack vectors
Integrating AI-powered conversational agents (“chatbots”) into therapeutic contexts introduces new security concerns beyond traditional data storage. These agents process user input in real time, often containing highly personal and vulnerable information. This creates novel attack vectors and pathways for data leakage:
Prompt injection and model inversion attacks: Malicious actors can exploit AI models through sophisticated prompt injection techniques. They can force the AI to reveal sensitive training data, expose internal system configurations, or even manipulate responses to elicit confidential user information.
Recent AI Privacy and Security Incidents: Artificial Intelligence Index Report 2025 (Stanford University)
AI privacy and security incidents surged by 56.4% in 2024, with 233 reported cases ranging from data breaches to algorithmic failures and prompt injection attacks. The report specifically highlights a growing number of prompt injection and model inversion attacks, where adversaries extract sensitive information from AI systems, including those used in healthcare and mental health applications. This spike shows that vulnerabilities in AI-powered mental health tools aren’t just theoretical-they’re being actively exploited.
Unintended data exposure: Beyond malicious attacks, well-meaning professionals can inadvertently expose sensitive data through AI tools simply due to lack of cybersecurity awareness. I once encountered a psychologist who, trying to protect patient privacy, built a GPT agent to anonymize therapeutic information. She then shared it with colleagues, completely unaware that feeding raw, sensitive data directly into the AI system fundamentally compromised privacy and data security.
Data inference attacks and regulatory scrutiny: Even if direct data leakage is prevented, advanced attackers could use inference attacks to deduce sensitive attributes about users based on AI interactions or aggregated data.
Recent GDPR Enforcement: Replika AI Companion GDPR Fine (May 2025)
Italy’s data protection authority fined Luka, Inc. (developer of the Replika AI chatbot) €5 million for GDPR violations. The chatbot, which simulates emotional relationships and is used for mental health support, was found to process sensitive behavioral data without proper legal basis, lacked transparency, and failed to protect minors from inappropriate content. This enforcement action reflects growing regulatory scrutiny of AI models that handle sensitive personal and behavioral data.
Backdoor vulnerabilities: Should AI models or their underlying infrastructure be compromised, attackers could establish backdoors to gain persistent access to conversational data or manipulate therapeutic outputs.
Lack of emergency protocols: From an operational security (OpSec) standpoint, the absence of automated mechanisms for identifying high-risk user input (e.g., suicidal ideation) and securely triggering human intervention or emergency protocols represents a critical failure point.
Regulatory gaps vs. undeniable due diligence
While security failures abound, the regulatory landscape surrounding AI in healthcare remains fluid, with specific mandates for AI tools used by therapists often nascent or absent. However, this regulatory vacuum doesn’t absolve developers, organizations, or practitioners of responsibility. On the contrary, it demands an even higher degree of due diligence and proactive security measures.
Any entity developing or deploying AI-driven tools that handle PHI faces significant legal exposure in the event of a data breach or patient harm. This includes potential class-action lawsuits, substantial regulatory fines (like GDPR fines reaching up to 4% of global annual revenue or millions of euros), and severe reputational damage. In March 2023, the U.S. Federal Trade Commission fined BetterHelp $7.8 million for sharing users’ sensitive mental health data with advertisers. This clearly demonstrates that even without AI-specific legislation, liability still applies. The absence of specific regulation doesn’t mean absence of liability. Professional bodies will also scrutinize whether security-by-design and privacy-by-design principles were actually integrated.
Mitigating the risk: a cybersecurity roadmap for AI in mental health
AI’s potential in mental health is undeniable, but realizing it requires a strong security posture. Here’s a roadmap:
Mandatory security assessment: Prioritize comprehensive security assessments (penetration testing, vulnerability scanning, code reviews) by independent cybersecurity firms before any AI application or no-code tool goes live. This includes third-party vendor risk management for any AI models or platforms used.
Strong data governance: Implement strict data governance policies for PHI handling, covering data collection, storage, processing, transfer, and deletion. This also means data minimization-collecting only what’s absolutely necessary.
End-to-end encryption: Ensure strong end-to-end encryption for all data, both in transit (TLS 1.2+) and at rest (AES-256 or higher), with proper key management procedures in place.
Identity and access management (IAM): Deploy solid IAM solutions featuring multi-factor authentication (MFA) and role-based access control (RBAC) to enforce the principle of least privilege.
Audit logging and monitoring: Implement centralized, immutable logging for all system interactions and data access attempts. Utilize Security Information and Event Management (SIEM) solutions for continuous monitoring and alert generation.
Incident response planning: Develop and regularly test a comprehensive incident response plan specifically tailored for data breaches involving sensitive health information.
Secure AI development practices: For AI model developers, integrate AI security best practices such as adversarial robustness testing, model interpretability (explainable AI), and careful safeguarding against data poisoning and prompt injection attacks.
Vendor security vetting: Conduct thorough security vetting of all third-party AI and no-code platform vendors. Ensure their compliance with relevant security standards and their willingness to sign Business Associate Agreements (BAAs) if handling PHI.
Conclusion: securing the future of AI-driven mental healthcare
AI offers significant opportunities to enhance mental healthcare delivery, but this innovation comes with serious cybersecurity responsibilities. Developers, security professionals, and healthcare organizations must work closely together. Their goal: to ensure that the rapid adoption of AI never compromises patient privacy or data integrity. By embedding security-by-design and prioritizing strong data governance and continuous threat monitoring, we can harness AI’s power while safeguarding our most sensitive information: the human mind. The message is clear: just because a platform makes it easy to build, doesn’t mean it makes it secure to deploy.
_______
Shayell Aharon is a cybersecurity consultant and security researcher at Knostic, an AI cybersecurity startup focused on securing emerging technologies. With a background as a clinical psychologist, she brings a unique perspective to safeguarding sensitive data in complex environments like mental healthcare.
Join our LinkedIn group Information Security Community!
















