AI Against Humanity
Privacy · March 11, 2026

Grammarly Faces Lawsuit Over Identity Theft

Grammarly is being sued for using an individual's identity in its AI features without consent, a case that raises critical questions about privacy and ethics in AI deployment.

Grammarly is facing a class-action lawsuit filed by journalist Julia Angwin, who alleges the company unlawfully used her identity in its "Expert Review" AI feature without her consent. The feature, designed to provide AI-generated editing suggestions that mimic the insights of real experts, has drawn criticism for violating privacy and publicity rights. Angwin learned that her likeness was being used only after another journalist brought the issue to light, prompting her to take legal action.

In response to the backlash, Grammarly's CEO acknowledged the misstep and announced that the feature would be discontinued, stating that the company would rethink its approach going forward. The incident underscores the ethical problems raised when AI products exploit individuals' identities for commercial gain without permission, and it strengthens the case for clearer regulations and ethical standards governing AI deployment.

Why This Matters

This case underscores the potential for AI technologies to infringe on individual rights. As AI systems become more deeply embedded in everyday applications, understanding the risks of their deployment is essential to protecting personal identity and likeness. Legal actions against companies like Grammarly serve both as a warning and as a demand for accountability in the tech industry, helping ensure that AI advances do not come at the cost of individual rights and ethical standards.

Original Source

One of Grammarly’s ‘experts’ is suing the company over its identity-stealing AI feature

Read the original source at theverge.com
