
Xiandai · May 14, 2026 · Updated 07:58 AM UTC
Cybersecurity

Custom AI-built medical app exposed sensitive patient data to open internet

A medical professional used AI coding agents to build a patient management system that left unencrypted medical records and voice recordings accessible to anyone online.

Ryan Torres

2 min read

[Image: Concept of medical data exposure and cybersecurity]

A medical professional recently deployed a custom-built patient management system that left sensitive health records and audio recordings completely exposed to the public internet. The incident, documented by researcher Tobru, revealed that the application lacked any form of encryption or access control.

Drawn by the ease of modern AI coding tools, the practitioner had an AI coding agent build the application from scratch. The system included features to record patient appointments and transmit the audio to US-based AI services for automatic summarization.

Within thirty minutes of inspecting the application, the researcher had full read and write access to all patient data. The backend database service had no access control whatsoever, so anyone could retrieve the records with a single command.

Critical security failures

The application's architecture relied entirely on client-side JavaScript for its security logic, rendering it useless against even basic attacks: because that code runs in the attacker's own browser, any check it performs can simply be skipped. All patient information and voice recordings were stored on a US-based server without a Data Processing Agreement.
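To illustrate why client-side-only checks offer no protection, here is a minimal sketch. The function and field names are hypothetical, not taken from the actual application; the point is only that an attacker controls everything that runs in their own browser.

```javascript
// Hypothetical sketch: a "security" check that runs only in the
// user's browser. Nothing on the server re-verifies the result.
function canViewRecords(user) {
  // The attacker fully controls both this code and this object.
  return user.role === "doctor";
}

// A legitimate UI might hide the records list from non-doctors...
const visitor = { name: "anon", role: "guest" };
console.log(canViewRecords(visitor)); // false: the UI hides the data

// ...but an attacker simply edits the object (or bypasses the check
// entirely and queries the unauthenticated backend directly).
const tampered = { ...visitor, role: "doctor" };
console.log(canViewRecords(tampered)); // true: the gate is bypassed
```

Without a server-side check tied to real authentication, every record behind such a gate is effectively public.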

"The data wasn't just wide open: it was stored on a US server without a Data Processing Agreement, voice recordings were being sent to major US-based AI companies," the researcher stated, adding that the setup likely violated Switzerland's new Federal Act on Data Protection (nDSG) and potentially professional secrecy laws.

When notified of the vulnerability, the developer responded with an entirely AI-generated message. The response thanked the researcher and claimed that basic authentication had been implemented and access keys rotated.

Despite these patches, the researcher concluded that the developer did not understand the underlying software architecture. The incident highlights the risks of "vibe coding," where users rely on AI to generate working code without grasping the security implications of the resulting system.
