COSMICBYTEZLABS

© 2026 CosmicBytez Labs. All rights reserved.

NEWS

AI Chat App Exposes 300 Million Private Messages from 25 Million Users

A misconfigured Google Firebase backend in the Chat & Ask AI app exposed 300 million private chatbot conversations from 25 million users, including...

Dylan H.

News Desk

February 13, 2026
4 min read

Massive AI Conversation Leak

Security researchers have discovered a misconfigured Google Firebase backend in the popular Chat & Ask AI app (50M+ downloads), exposing 300 million private chatbot conversations from approximately 25 million users. The exposed data includes conversations across ChatGPT, Claude, and Gemini models — representing one of the largest AI-related data exposures to date.


What Was Exposed

Data Type             | Volume                        | Risk
----------------------|-------------------------------|---------
Private conversations | 300 million messages          | Critical
User accounts         | 25 million users              | High
Timestamps            | Per-message timing data       | Medium
Model settings        | Temperature, system prompts   | Medium
Chatbot names         | Custom bot configurations     | Medium
AI model identifiers  | ChatGPT, Claude, Gemini usage | Medium

The Root Cause: Firebase Misconfiguration

The breach was caused by Firebase Security Rules set to public, allowing anyone to:

  • Read all stored conversation data without authentication
  • Modify existing records
  • Delete data from the database

This is a common but critical misconfiguration in Firebase-backed applications. Firebase Security Rules default to restrictive access, but developers must explicitly configure them — and in this case, the rules were set to allow unrestricted public access.

// Vulnerable Firebase rules (what was likely configured)
{
  "rules": {
    ".read": true,    // Anyone can read ALL data
    ".write": true    // Anyone can modify ALL data
  }
}
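By contrast, a hardened ruleset denies access by default and scopes reads and writes to the authenticated owner. The sketch below is illustrative only: the `/users/$uid` data layout is an assumption, since the app's actual database structure was not disclosed.

```
// Hardened rules (illustrative sketch, assuming data lives under /users/<uid>)
{
  "rules": {
    ".read": false,    // Deny everything by default
    ".write": false,
    "users": {
      "$uid": {
        // Only the signed-in owner of this subtree may read or write it
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```

Because Firebase rules cascade, the top-level `false` defaults mean any path not explicitly granted below stays locked.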

Why AI Conversation Data Is Uniquely Sensitive

What People Tell AI Chatbots

Users often share highly sensitive information with AI assistants that they wouldn't share elsewhere:

  • Medical symptoms and health concerns — Seeking health advice
  • Legal questions — Describing legal situations in detail
  • Financial information — Asking for tax, investment, or debt advice
  • Personal relationships — Discussing private matters
  • Business strategies — Sharing confidential business plans
  • Code and credentials — Pasting API keys, passwords, and proprietary code
  • Mental health — Discussing anxiety, depression, and personal struggles

Third-Party App Risk

This incident highlights the risk of using third-party AI wrapper apps instead of official platforms:

  • Thousands of apps proxy ChatGPT, Claude, and Gemini APIs
  • Security varies wildly between developers
  • Users trust the AI brand but security depends on the app developer
  • No standardized security requirements exist for third-party AI apps

Impact

For Affected Users

  1. Privacy violation — Personal conversations exposed to potential bad actors
  2. Social engineering — Conversation content can be used for targeted phishing
  3. Credential exposure — Any API keys, passwords, or tokens shared in conversations are compromised
  4. Reputational risk — Sensitive or embarrassing conversations could be leaked publicly
  5. Corporate espionage — Business-related AI conversations may contain trade secrets

Regulatory Implications

  • GDPR — Conversations belonging to EU users among the 300 million exposed trigger significant compliance obligations
  • CCPA/CPRA — California residents' AI conversations are protected data
  • AI-specific regulations — EU AI Act and emerging frameworks may apply

Recommendations

For Chat & Ask AI Users

  1. Stop using the app immediately until security is confirmed
  2. Review your conversations — Consider what sensitive information you shared
  3. Change related passwords — If you discussed or pasted credentials in chats
  4. Monitor accounts — Watch for targeted phishing or social engineering
  5. Use official AI apps — Access ChatGPT, Claude, and Gemini through their official applications

For All AI Users

  1. Be cautious what you share — Treat AI conversations as potentially public
  2. Use official platforms — Official apps from OpenAI, Anthropic, and Google have stronger security
  3. Avoid sharing credentials — Never paste passwords or API keys into AI chats
  4. Review app permissions — Understand what data third-party AI apps collect
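As a rough illustration of reviewing your own conversation history, a short script can flag obvious credential patterns in an exported chat transcript. The three patterns below (an OpenAI-style `sk-` key, an AWS access key ID, and a `password:` assignment) are common examples chosen for this sketch, not an exhaustive or official rule set:

```python
import re

# Illustrative patterns only; real secret scanners ship far larger rule sets.
PATTERNS = {
    "openai_style_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "password_assignment": re.compile(r"(?i)password\s*[:=]\s*\S+"),
}


def find_possible_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in a chat export."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits


if __name__ == "__main__":
    sample = "here is my key sk-abcdefghijklmnopqrst and password: hunter2"
    for name, value in find_possible_secrets(sample):
        print(f"{name}: {value}")
```

Any hit is a signal to rotate that credential now, on the assumption that the exposed conversation was readable by anyone.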

Sources

  • Malwarebytes — AI Chat App Exposes 300 Million Conversations
  • 404 Media — Massive AI Chat App Leaked Millions of Conversations
  • CyberSecurityNews — AI Chat App Data Exposure

Tags: Data Breach, AI, Privacy, Firebase, ChatGPT, Claude
