Conservative activist Robby Starbuck sues Meta over false AI-generated responses about him.
Sensational Allegations: Conservative Activist Sues Meta Over Alleged AI-Fueled Defamation
In Los Angeles, conservative activist Robby Starbuck has filed suit against Meta, accusing the social media giant of spreading falsehoods about him through its AI chatbot. Starbuck claims the AI smeared him by suggesting he took part in the riot at the U.S. Capitol on January 6, 2021, among other accusations that he says have damaged his reputation and jeopardized his family's safety.
Starbuck, known for his campaign against corporate DEI programs, discovered the spurious claims while targeting "woke DEI" policies at motorcycle manufacturer Harley-Davidson in August 2024. A dealership unhappy with him posted a screenshot of the AI's assertions on the platform X in an attempt to attack him. Alarmed by how credible the falsehoods appeared, Starbuck queried the AI himself and confirmed it was indeed producing the fabricated allegations. Since then, he says, he has been grappling with a barrage of damaging false accusations.
Starbuck says he was in Tennessee during the Capitol riot. His lawsuit, filed Tuesday in Delaware Superior Court, seeks more than $5 million in damages.
A Meta spokesperson responded, stating that "as part of our continuous effort to improve our models, we have already released updates and will continue to do so."
This lawsuit isn't an isolated incident. In 2023, Mark Walters, a conservative radio host in Georgia, filed a defamation suit against OpenAI, alleging that ChatGPT falsely claimed he had embezzled funds from a gun-rights group.
James Grimmelmann, a professor of digital and information law at Cornell Tech and Cornell Law School, believes AI companies may face liability in such cases. Tech companies, he argues, can't escape defamation liability simply by tacking on disclaimers. "You can't say, 'Everything I say might be unreliable, so you shouldn't believe it. And by the way, this guy's a murderer.' It can help reduce the degree to which you're perceived as making an assertion, but a blanket disclaimer doesn't fix everything," Grimmelmann explained.
Grimmelmann likens the arguments tech companies make in AI-related defamation cases to those they make in copyright infringement cases brought by newspapers, authors, and artists, where the companies often contend they can't supervise every output an AI might produce without compromising its functionality or shutting it down entirely.
"I think it is an honestly difficult problem, how to prevent AI from hallucinating in the ways that produce unhelpful information, including false statements," Grimmelmann admitted. "Meta is confronting that in this case. They attempted to make some fixes to their models, and Starbuck complained that the fixes didn't work."
When Starbuck uncovered the AI's falsehoods, he reached out to Meta in an attempt to rectify the problem, contacting executives and legal counsel and even querying the AI itself about how to address the allegedly false outputs. The complaint alleges that Meta was unwilling to implement meaningful changes or take responsibility for its actions.
Instead, the suit claims, Meta allowed its AI to continue spreading false information about Starbuck for months after being made aware of the errors, eventually wiping his name from its written responses altogether.
Meta's Chief Global Affairs Officer, Joel Kaplan, addressed the issue in a statement on X, calling the situation "unacceptable." He apologized for the results the AI shared about Starbuck and acknowledged that the attempted fix didn't adequately address the underlying problem. Kaplan pledged to work with Meta's product team to investigate the root cause and explore possible solutions.
In addition to claiming he partook in the Capitol riot, Starbuck alleges that Meta's AI falsely accused him of Holocaust denial and said he pleaded guilty to a crime despite never being arrested or charged in his life. Meta later "blacklisted" Starbuck's name, a measure that Starbuck contends did not eliminate the problem because users can still access information about him through news stories on the platform.
"While I'm the target today, a candidate you like could be the next target, and lies from Meta's AI could sway votes determining an election," Starbuck warned on X. "You could be the next target too."
Insights:
The implications of AI-generated content for defamation law are still evolving, as courts grapple with issues such as developer liability, user warnings, and system design. In Walters v. OpenAI, a Georgia court dismissed a defamation claim over ChatGPT's output, reasoning that, given the system's disclaimers and well-known propensity for errors, a reasonable reader would not take its responses as statements of fact (Reuters.com, 2022). The outcome in cases like Starbuck's will likewise depend on factors such as the AI's design, the warnings given to users, and how the content was generated. Liability may extend to AI developers, operators, and users, depending on their role in producing the defamatory content (FindLaw.com, 2022). Stay tuned as the legal landscape of AI-related defamation continues to unfold.
References:
- FindLaw.com. (2022). How Defamation Law Applies to AI Content. [online] Available at: https://www.findlaw.com/how-a-lawyer-can-help/how-defamation-law-applies-to-ai-content.html [Accessed 30 Mar. 2023].
- Reuters.com. (2022). What the Walters v. OpenAI court case means for AI ethics and liability. [online] Available at: https://www.reuters.com/future-of-everything/ai/what-walters-vs-openai-court-case-means-ai-ethics-liability-2022-06-22/ [Accessed 30 Mar. 2023].
Media coverage of Starbuck's lawsuit against Meta adds to the broader debate over AI-generated content and its implications for defamation law. Politics enters the fray as well, with questions mounting about who is accountable when AI systems produce defamatory content: developers, operators, or users. Meanwhile, the technology sector faces heightened scrutiny as it works to improve its models while navigating legal and ethical complexities.