Microsoft and OpenAI sued for $3 billion over privacy concerns
Summary
Microsoft and OpenAI are facing a lawsuit filed by a group of anonymous individuals, alleging that the companies' AI products, including ChatGPT, collected and utilized personal data without adequate consent.
The lawsuit claims that Microsoft and OpenAI have embedded millions of individuals' personal data into their AI products, potentially putting many at risk of having their personal information disclosed.
The lawsuit seeks class-action status and is demanding damages of $3 billion. The complaint alleges that Microsoft and OpenAI have violated several laws, including America's Electronic Communications Privacy Act and the California Invasion of Privacy Act.
This lawsuit follows another ongoing case from last November where Microsoft, its GitHub subsidiary, and OpenAI were sued for allegedly violating licensing requirements by reproducing the code of millions of software developers through the Copilot service, which is based on an OpenAI model.
What is happening:
Microsoft and OpenAI have been hit with a lawsuit by a group of anonymous individuals alleging serious privacy violations. The lawsuit, filed in a federal court in San Francisco, accuses the tech giants of using personal data without sufficient consent to develop their artificial intelligence (AI) products, including the widely known ChatGPT.
The plaintiffs level strong allegations against the tech firms, accusing them of sidestepping legal pathways to acquire data and resorting to unauthorized methods. They assert that the companies systematically scraped over 300 billion words from the Internet, a process that included personal information obtained without express permission. The complaint also alleges that OpenAI undertook these activities covertly, failing to register as a data broker as required by law.
The lawsuit further claims that Microsoft and OpenAI, through their AI products, are engaged in the collection, storage, tracking, sharing, and disclosure of personal data from millions of people. This data allegedly encompasses a broad range of information, including account details, names, contact information, login credentials, emails, payment information, browser data, social media information, chat logs, and more.
According to the lawsuit, the personal information of millions, reflecting interests, religious beliefs, political views, and other data gleaned from online interactions, has been embedded into Microsoft and OpenAI's AI products. Notably, OpenAI has created a suite of text-generating large language models, including GPT-2 and GPT-4, along with the ChatGPT service built on them, all of which have been heavily promoted by Microsoft and integrated across its various platforms, from Windows to Azure.
The plaintiffs argue that the defendants failed to adequately filter personally identifiable information out of their training data, putting millions at risk of having their information disclosed to strangers worldwide. The complaint, however, cites no specific instances of harm. The 16 plaintiffs say they expected that their digital interactions on platforms like Reddit and ChatGPT would not be incorporated into an AI model.
The lawsuit is currently seeking class-action certification and damages of $3 billion, though this figure is likely a placeholder, with actual damages to be determined if the plaintiffs prevail. The complaint alleges that Microsoft and OpenAI have violated multiple laws, including America's Electronic Communications Privacy Act, the California Invasion of Privacy Act, and the Illinois Biometric Information Privacy Act, among others.
Neither Microsoft nor OpenAI has commented on the lawsuit. It follows another ongoing case in which Microsoft, its GitHub subsidiary, and OpenAI were sued last November for allegedly violating licensing requirements by reproducing the code of millions of software developers through the Copilot service, which is based on an OpenAI model.
With this case yet to be resolved, it remains to be seen how the allegations will play out in court and what impact, if any, the lawsuit will have on the broader AI industry. As the technology evolves, so too must the discussions and regulations around data privacy and AI ethics.