A class action lawsuit has been filed against Google by a victim of Jeffrey Epstein, asserting that the company's AI Mode feature improperly published personal information about Epstein's victims. The legal action follows the Department of Justice's (DOJ) release of more than three million pages of evidence in the Epstein case, during which sensitive information was disclosed. While some names of predators were redacted, the identities of several survivors were exposed, raising significant privacy concerns.
In the lawsuit filed in the U.S. District Court for the Northern District of California, the plaintiff states, "The United States, acting through the DOJ, made a deliberate policy choice to prioritize rapid, large-volume disclosure over the protection of Epstein survivors’ privacy." As a result, survivors not only faced the trauma of reliving their experiences but also became targets of harassment after their information was made public.
Although the DOJ rectified these errors by removing the sensitive information, the lawsuit claims that Google’s AI search function, known as AI Mode, continued to display this information online. The lawsuit contends, "Even after the government acknowledged the disclosure violated the rights of survivors and withdrew the information, online entities like Google continuously republish it, refusing victims' pleas to take it down." The plaintiff, referred to as "Jane Doe," indicated that upon searching for her name and those of other victims involved in the lawsuit, Google's AI Mode revealed their full names, contact details, cities of residence, and their connections to Epstein. In one instance, the AI generated a hyperlink that allowed anyone to send emails directly to the plaintiff with one click.
The lawsuit claims the plaintiff contacted Google multiple times over the past two months regarding this issue but received no response. "Despite receiving actual notice of the violations, the substantial harm caused by its continued dissemination, and the status of many Class members as sexual abuse survivors entitled to heightened privacy protections under the law, Google has failed and refuses to remove, de-index, or block access to the offending materials," the lawsuit asserts. It also notes that other AI tools, including ChatGPT and Claude, did not produce any victim-related information during similar tests.
The lawsuit argues that AI Mode is not a neutral search index but an active recommender and content generator, conduct it characterizes as "actionable doxxing." The legal action comes during a week in which tech companies' liability for online content has faced renewed scrutiny. Recent court rulings found Meta and Google liable in separate cases related to social media addiction and child safety, respectively, marking significant challenges to the protections offered by Section 230 of the Communications Decency Act.
Section 230 currently shields large tech companies from liability for content created by third parties. These recent cases, however, may set precedents that alter the legal landscape governing online speech and the responsibilities of tech companies. Whether Section 230 applies to AI-generated content remains contested; Senator Ron Wyden, a co-author of the law, has said AI chatbots should not be protected under its provisions.
The DOJ and Google have not yet responded to requests for comments regarding this lawsuit.
Source: Gizmodo News