Stanford misinformation expert accused of using AI to fabricate court statement

Published Dec. 2, 2024, 12:41 a.m., last updated Dec. 2, 2024, 12:45 a.m.

Communication professor Jeff Hancock, an expert on technology and misinformation, has been accused of using artificial intelligence (AI) to craft a court statement.

In November, Hancock — who is the founding director of Stanford’s Social Media Lab — filed a declaration in a Minnesota court case over the state’s 2023 law that criminalizes the use of deepfakes to influence an election. The professor’s 12-page declaration in defense of the law contained 15 citations, two of which cannot be found.

The plaintiffs in the case, Republican Minnesota State Representative Mary Franson and conservative social media satirist Christopher Kohls, argued that the law is an unconstitutional limit on free speech. Submitting his testimony on behalf of a defendant, Minnesota Attorney General Keith Ellison, Hancock claimed that deepfakes, or AI-generated media that alter a person's likeness or voice, can enhance the persuasiveness of misinformation and defy traditional fact-checking methods.

Hancock made the declaration under penalty of perjury, attesting that everything he stated in the document was "true and correct." He was compensated for his testimony at the government rate of $600 per hour.

The Daily and other news outlets could not find two academic journal articles that Hancock cited — “Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance” and “The Influence of Deepfake Videos on Political Attitudes and Behavior” — via their reported digital object identifier or in the archives of their reported journals.

Pointing out the errors in Hancock's declaration in a Nov. 16 filing, Franson and Kohls' attorney Frank Bednarz called for it to be excluded from the judge's consideration of whether to grant a preliminary injunction against the law.

"The citation bears the hallmarks of being an artificial intelligence (AI) 'hallucination,' suggesting that at least the citation was generated by a large language model like ChatGPT," Bednarz wrote. "The existence of a fictional citation Hancock (or his assistants) didn't even bother to click calls into question the quality and veracity of the entire declaration."

The Daily has reached out to Hancock for comment.

Hancock, who currently teaches COMM 1: “Introduction to Communication” and COMM 324: “Language and Technology,” appeared in a 2024 Netflix documentary featuring Bill Gates, offering insights on the future of AI. The professor is scheduled to teach COMM 124/224: “Truth, Trust, and Tech” on deception and communication technology in the spring.

Kohls, known by his social media moniker Mr. Reagan, previously challenged the constitutionality of two California bills signed into law in September by Gov. Gavin Newsom. The bills, AB 2655 and AB 2839, require online platforms to block certain deceptive election-related media and prohibit the distribution of advertisements containing such content. Newsom had said Kohls' viral video, which manipulated Kamala Harris' voice in a campaign ad, should be illegal.

Yuanlin "Linda" Liu ‘25 is The Daily's vol. 266 Editor-in-Chief. She was previously managing editor of arts & life during vol. 263 and 264 and magazine editor during vol. 265. Contact her at eic 'at' stanforddaily.com.
