A California woman has filed a class-action lawsuit against OpenAI, alleging violations of privacy laws. She claims that ChatGPT sends users' query contents to Meta and Google through tracking tools without users' consent. The case is seen as a symbolic first federal court challenge directly targeting how AI chatbots handle data, drawing intense attention to AI privacy protection.
ChatGPT allegedly embeds invisible tracking code, exposing every chat
On May 13, California resident Amargo Couture filed a lawsuit in the U.S. District Court for the Southern District of California.
The complaint states that OpenAI embedded third-party tracking tools, such as Meta's Facebook Pixel and Google Analytics, in the code of the official ChatGPT website. These trackers run automatically without users' knowledge, instantly transmitting users' input queries and personally identifiable information to Meta and Google, including highly sensitive private details such as health conditions, financial advice, and legal issues.
Couture said that between 2025 and 2026 she repeatedly used ChatGPT to ask questions about personal health and finances, with no idea that these private details were being forwarded to third-party advertising technology giants.
The legal basis: OpenAI's potential damages exceed $5 million
The complaint alleges that OpenAI violated multiple federal and California laws, including the Electronic Communications Privacy Act (ECPA) and the California Invasion of Privacy Act (CIPA). CIPA's scope is not limited to traditional telephone communications; it also covers "emerging technologies" such as computers, the internet, and email.
Under California law, statutory damages can reach $5,000 per violation. If the case is certified as a class action, the potential claims would be enormous; the amount sought is estimated to exceed $5 million. The plaintiff's attorneys emphasized that by helping Meta and Google intercept communication contents without user authorization, OpenAI has committed a systematic invasion of the privacy of millions of users.
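The arithmetic behind the $5 million figure is straightforward. As a rough illustration (the exact counting of "violations" is for the court to decide), at $5,000 per violation only a thousand violations are needed to reach that amount, so a class of millions of users would push the claim far beyond it:

```javascript
// Illustrative calculation only, based on the figures in the complaint:
// CIPA statutory damages of up to $5,000 per violation.
const DAMAGES_PER_VIOLATION = 5000;   // USD, per the complaint
const ESTIMATED_TOTAL = 5_000_000;    // USD, the $5M estimate cited

// How few violations already reach the $5M estimate:
const violationsNeeded = ESTIMATED_TOTAL / DAMAGES_PER_VIOLATION;
console.log(violationsNeeded); // 1000
```

With millions of alleged class members, each potentially counting as at least one violation, the estimate in the complaint is conservative rather than inflated.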
What are tracking pixels? Data thieves that are hard to detect
So-called “Tracking Pixels” are tiny pieces of code embedded in a website’s source code that are difficult for most users to notice. Whenever a user visits a website, the tracking pixel automatically activates, quietly collecting the user’s browsing behavior and interaction information, and then sending it back to companies such as Meta or Google.
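In practice, a tracking pixel works by encoding page context into the query string of a tiny image request that the browser fires automatically. The sketch below is a generic illustration of this mechanism, not actual code from ChatGPT's site; the endpoint and field names are invented for the example:

```javascript
// Generic sketch of how a tracking pixel transmits data: page context is
// packed into the query string of a 1x1 image URL, which the browser
// requests automatically, sending the data to the tracker's servers.
function buildPixelUrl(endpoint, data) {
  const params = new URLSearchParams(data).toString();
  return `${endpoint}?${params}`;
}

// Hypothetical endpoint and fields, for illustration only:
const url = buildPixelUrl("https://tracker.example.com/pixel.gif", {
  event: "page_view",
  page: "/chat",
  query: "health question", // user input, if the pixel captures it
});
console.log(url);

// On a real page the URL would be assigned to an invisible image, e.g.:
//   new Image().src = url;
// No visible element appears, but the request (and the data) still fires.
```

Because the request looks like an ordinary image load, it is invisible to users and survives most casual inspection, which is why the complaint characterizes the practice as covert.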
For e-commerce platforms or advertising-oriented websites, this may have commercial legitimacy; but ChatGPT's use case is completely different. When users ask ChatGPT questions, it is often because they trust the platform enough to share private worries they would not tell anyone, such as physical ailments, emotional heartbreak, or financial crises. If advertising companies are recording these conversations, OpenAI forfeits that trust.
Meta uses this data to build its massive, precision-targeted advertising system. According to the complaint, nearly all of Meta's revenue comes from advertising, and its ad system can track users' activity both on and off the platform, infer their interests, behavior, and social inclinations, and target advertising at different audiences.
User privacy urgently needs legislative protection—AI conversations cannot be a loophole
As more and more people incorporate AI tools into daily life, privacy controversies worldwide continue to heat up. Whether it's the emotional predicaments and personal struggles that users reveal to AI chatbots, or the confidential materials that corporate employees upload by the thousands, the risks extend from individuals to businesses alike.
Just a few days ago, OpenAI was also sued over allegations that ChatGPT guided the gunman in a Florida university shooting, indirectly contributing to the deaths of two people; the company has been named as a defendant in that suit.
Legal experts believe that if lawsuits like this succeed, they will set a major precedent for the entire AI industry, forcing companies to inform users more clearly about the scope of data collection, retention periods, and sharing recipients, and to provide a clear mechanism for consent or refusal. OpenAI has not yet made any public statement on the matter. The case is still at an early stage, and whether it will be certified as a class action remains for the court to decide.
The article "Another lawsuit for ChatGPT! Accused of secretly leaking users' chat content to Meta and Google" first appeared on Chain News ABMedia.