AI Against Humanity
Ethics · March 11, 2026

Grammarly says it will stop using AI to clone experts without permission

Following significant backlash, Grammarly has discontinued an AI feature that reproduced expert voices without their permission. The reversal underscores growing ethical concerns about consent and representation in AI.

Grammarly recently announced it will discontinue its 'Expert Review' AI feature, which had drawn criticism for misrepresenting the voices of real experts without their consent. The feature, launched in August, used publicly available information to generate writing suggestions modeled on the work of influential figures. After backlash from experts who felt their identities were being exploited, Superhuman (the name under which Grammarly now operates following its rebrand) acknowledged the concerns and committed to rethinking its approach. The decision to disable the feature reflects a growing awareness of the ethical implications of AI technologies, particularly around consent and representation. Moving forward, Superhuman says it aims to give experts control over how their knowledge is used and represented in AI applications, emphasizing collaboration and ethical standards in AI development.

Why This Matters

This story highlights the ethical risks of AI technologies, particularly the potential for individuals' identities to be misrepresented or exploited without their consent. As AI systems become more integrated across sectors, understanding these risks is crucial for responsible AI development and for ensuring that experts retain control over their intellectual contributions. The implications extend beyond individual cases: they shape trust in AI applications and broader societal acceptance of the technology.

Original Source

Grammarly says it will stop using AI to clone experts without permission

Read the original source at theverge.com
