Grammarly Scraps AI Feature That Let Users Get Writing Advice From Stephen King After Lawsuit

Just weeks ago, if you were struggling with a work email, you could theoretically ask for editing help from the likes of Stephen King or the late Carl Sagan. It sounded like a writer’s dream or a bizarre parlor trick. But for the hundreds of journalists, authors, and academics whose names and styles were used without permission, it was a nightmare that has now culminated in a class-action lawsuit and the feature’s swift demise.


Grammarly’s parent company, Superhuman, has officially pulled the plug on its “Expert Review” function following a firestorm of criticism and legal action. The tool, which launched in August 2025, allowed paying users to receive AI-generated feedback “inspired by” the published works of famous writers and thinkers. The backlash was so severe that CEO Shishir Mehrotra took to LinkedIn not just to announce the feature’s removal, but to issue a direct apology to the very people the company had tried to digitize.

The Lawsuit That Broke the Algorithm

The tipping point arrived in the form of a federal lawsuit filed by investigative journalist Julia Angwin. She is leading a class-action complaint against Superhuman and Grammarly, alleging that the company misappropriated her name and identity for commercial gain. The lawsuit, which seeks damages exceeding $5 million, argues that it is “unlawful to appropriate people’s names and identities for commercial purposes,” especially when those individuals never gave the advice being attributed to them.


“I’m suing Grammarly over its paid AI feature that presented editing suggestions as if they came from me and many other writers and journalists without consent,” Angwin wrote on social media. The legal filing paints a picture of a company that allegedly scraped the internet for names and writing styles, building a roster of “experts” that included not only megastars like Stephen King and Neil deGrasse Tyson, but also working journalists from outlets like The Verge, Bloomberg, and various academic institutions.


For the journalists who discovered their own names attached to advice they never wrote, the reaction was visceral. Wes Fenlon, a gaming journalist whose persona was used in the tool, called the company’s initial remedy, an opt-out email, a “laughably inadequate recourse for selling a product that verges on impersonation.”

From ‘Opt-Out’ to ‘Shut Down’

Initially, Superhuman’s response seemed typical of a tech company caught off guard: they opened an email inbox for writers to request removal. But the sheer volume of anger, combined with the specific legal teeth of publicity rights laws in states like New York and California, forced a rapid reassessment.


“After careful consideration, we have decided to disable Expert Review as we reimagine the feature,” Ailian Gan, Superhuman’s director of product management, said in a statement. It was a significant retreat from the company’s earlier stance, acknowledging that the opt-out solution didn’t go far enough.


Mehrotra expanded on the decision in his LinkedIn post, admitting, “We hear the feedback and recognize we fell short on this. I want to apologize and acknowledge that we’ll rethink our approach going forward.” He explained that the feature was originally designed to “help users discover influential perspectives,” but conceded that the execution had crossed a line, effectively misrepresenting the voices of living, breathing experts.

Why This Hit a Nerve

To understand the fury, one has to look at how the feature functioned. Grammarly wasn’t just offering generic style options like “formal” or “creative.” It was attaching specific names to specific suggestions. A user writing a science paper might get feedback “in the style of” Carl Sagan. A student drafting an essay might see suggestions attributed to a living professor who had no idea their name was being used to validate a machine’s output.


Academics, in particular, felt violated. John Kaag, a philosophy professor at the University of Massachusetts Lowell, contrasted Grammarly’s approach with ethical AI usage. He pointed to platforms like Rebind, where he spent 30 hours recording his actual thoughts to train a chatbot that attributes its responses directly to his contracted commentary. “Grammarly is using named scholarly authority as a design choice to make the AI feedback feel more credible,” Kaag told Times Higher Education. “This is just garbage.”


The inclusion of deceased authors like Carl Sagan and historian David Abulafia, who passed away in January 2026, added a layer of existential unease. As one scholar noted, the feature seemed to confirm “a profound distrust of AI” within the humanities, demonstrating a technology that views a lifetime of work as mere data to be remixed without consent.


A Rare Admission in the Middle of a Fight

From a public relations perspective, Grammarly’s move is unusual. Companies rarely admit fault so clearly while actively facing a lawsuit. Mehrotra’s apology did not blame a bug or a misunderstanding; he acknowledged the feature was rolled out in a way that was harmful to the very people whose credibility the company sought to borrow.


However, the apology landed in a complex landscape. The company had rebranded its parent entity to Superhuman in October 2025, yet kept the familiar Grammarly name for its service. Critics argue the rebrand was an attempt to pivot toward AI dominance, but the Expert Review fiasco has exposed the ethical gap between what AI can do and what it should do.

What Happens Now?

With the feature disabled, Superhuman is left to “reimagine” what expert interaction looks like. Mehrotra has hinted at a future model where “experts choose to participate, shape how their knowledge is represented, and control their business model.” This sounds like a pivot toward a marketplace or platform where writers can license their styles, similar to how musicians license their voices for AI synthesis.


But for the writers who saw their life’s work scraped and fed into a machine without a heads-up, let alone a paycheck, the damage is done. The class-action lawsuit will now determine whether the company’s apology is just the first chapter in a much more expensive lesson about consent in the age of AI.


With universities like Stanford and Boston University among Grammarly’s clients, the fallout will be worth watching. The case sends a clear message to the tech industry: a writing style is not public domain, and the dead cannot consent.
