Grammarly Is Facing a Class Action Lawsuit Over Its AI 'Expert Review' Feature
Source: Wired
Superhuman, the tech company behind the writing software Grammarly, is facing a class action lawsuit over an AI tool that presented editing suggestions as if they came from established authors and academics, none of whom consented to have their names appear within the product.
Julia Angwin, an award-winning investigative journalist who founded The Markup, a nonprofit news organization that covers the impact of technology on society, is the only named plaintiff in the suit, which does not call for a specific amount in damages but argues that damages across the plaintiff class are in excess of $5 million. She was among the many individuals, alongside Stephen King and Neil deGrasse Tyson, offered up via Grammarly's Expert Review tool as a kind of virtual editor for users.
The federal suit, filed Wednesday afternoon in the Southern District of New York, states that Angwin, on behalf of herself and others similarly situated, "challenges Grammarly's misappropriation of the names and identities of hundreds of journalists, authors, writers, and editors to earn profits for Grammarly and its owner, Superhuman."
The complaint comes as Superhuman has already decided to discontinue the feature amid significant public backlash. "After careful consideration, we have decided to disable Expert Review as we reimagine the feature to make it more useful for users, while giving experts real control over how they want to be represented, or not represented at all," said Ailian Gan, Superhuman's director for product management, in a statement to WIRED shortly before the claim was filed. "We built the agent to help users tap into the insights of thought leaders and experts and to give experts new ways to share their knowledge and reach new audiences. Based on the feedback we've received, we clearly missed the mark. We are sorry and will do things differently going forward."
-snip-
Read more: https://www.wired.com/story/grammarly-is-facing-a-class-action-lawsuit-over-its-ai-expert-review-feature/
Earlier threads about what Grammarly was doing:
Grammarly Is Offering 'Expert' AI Reviews From Your Favorite Authors--Dead or Alive (Wired & Verge staff were also used)
https://www.democraticunderground.com/100221078055
Grammarly ripped off famous writers using their names for AI-generated Expert Reviews. They're now allowing opt-outs
https://www.democraticunderground.com/100221086007
The lawsuit filed today says that "Contrary to the apparent belief of some tech companies, it is unlawful to appropriate people's names and identities for commercial purposes, whether those people are famous or not."
erronis
(23,590 posts)
They're pouring money into this snake oil and need to show profits.
highplainsdem
(61,596 posts)
of them. I just wish a lot of them would serve prison sentences, with their built-on-theft tools destroyed and the companies having to start over with what's in the public domain plus what they can afford to pay for after the lawsuits. At which point it would be obvious to everyone that almost the entire value of genAI was in the stolen IP.
AZJonnie
(3,599 posts)
This is why I often say that the biggest fuckup about this whole thing is there should have been hard, clear regulations, and even criminal laws in some cases, about things like this sort of infringement on people's intellectual output, and indeed even their names/reputations, put in place at minimum before anyone could incorporate AI into their commercial products. And we needed a serious, permanent Bureau of AI Oversight created, in as independent a way as possible (like the CFPB was meant to be, for example), and Congressional committees for it too.
I know you are also deeply troubled by the theft aspect of building the models, and of course I think that is also problematic, but from a practical perspective it may have been tough sledding legislatively to regulate. At minimum, though, when it comes to actual commercial products doing shit like what Grammarly is doing here, THIS should have been a much easier piece to legislate and come to agreement on
and should've been done a LONG time ago!!!
Regulating how the "stolen" (to your thinking, and mine to a slightly lesser extent) information can be used in a for-profit setting should be a no-brainer and easier to reach consensus on vs. the review of materials to make the AI "smart-like" in the first place. I don't LIKE that part, but I do see 2 arguments about it. These Grammarly shenanigans, I do NOT