"ChatGPT accused me of sexual harassment"
-
I received a curious email from a fellow law professor about research he had run on ChatGPT concerning sexual harassment by professors. The program promptly reported that I had been accused of sexual harassment in a 2018 Washington Post article after groping law students on a trip to Alaska.
It was not just a surprise to UCLA professor Eugene Volokh, who conducted the research. It was a surprise to me since I have never gone to Alaska with students, The Post never published such an article, and I have never been accused of sexual harassment or assault by anyone.
When first contacted, I found the accusation comical. After some reflection, however, it took on a more menacing meaning.
Over the years, I have come to expect death threats against myself and my family as well as a continuing effort to have me fired at George Washington University due to my conservative legal opinions. As part of that reality in our age of rage, there is a continual stream of false claims about my history or statements.
I long ago stopped responding, since repeating the allegations is enough to taint a writer or academic.
AI promises to expand such abuses exponentially. Most critics work off biased or partisan accounts rather than original sources. When they see any story that advances their narrative, they do not inquire further.
What is most striking is that this false accusation was not just generated by AI but ostensibly based on a Post article that never existed.
Volokh made this query of ChatGPT: "Whether sexual harassment by professors has been a problem at American law schools; please include at least five examples, together with quotes from relevant newspaper articles."
The program responded with this as an example: 4. Georgetown University Law Center (2018) Prof. Jonathan Turley was accused of sexual harassment by a former student who claimed he made inappropriate comments during a class trip. Quote: "The complaint alleges that Turley made 'sexually suggestive comments' and 'attempted to touch her in a sexual manner' during a law school-sponsored trip to Alaska." (Washington Post, March 21, 2018)
There are a number of glaring indicators that the account is false. First, I have never taught at Georgetown University. Second, there is no such Washington Post article. Finally, and most important, I have never taken students on a trip of any kind in 35 years of teaching, never gone to Alaska with any student, and never been accused of sexual harassment or assault....
So the question is why would an AI system make up a quote, cite a nonexistent article and reference a false claim? The answer may be that AI and its algorithms are no less biased and flawed than the people who program them. Recent research has shown ChatGPT's political bias, and while this incident might not be a reflection of such biases, it does show how AI systems can generate their own forms of disinformation with less direct accountability.
Despite such problems, some high-profile leaders have pushed for its expanded use. The most chilling involved Microsoft co-founder and billionaire Bill Gates, who called for the use of artificial intelligence to combat not just “digital misinformation” but “political polarization...”
The use of AI and algorithms can give censorship a false patina of science and objectivity. Even if people can prove, as in my case, that a story is false, companies can "blame it on the bot" and promise only tweaks to the system.
The technology creates a buffer between those who get to frame facts and those who get framed. The programs can even, as in my case, spread the very disinformation that they have been enlisted to combat.
-
So the question is why would an AI system make up a quote, cite a nonexistent article and reference a false claim?
In the case of ChatGPT, the aim is to generate “human-like” responses to text prompts. To the extent that humans sometimes lie and sometimes make sh!t up, it stands to reason that ChatGPT would also sometimes lie and sometimes make sh!t up.
None of this should be surprising.
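To see why, here is a minimal toy sketch (my own illustration, not OpenAI's actual code or data): a large language model composes text one word at a time by sampling from a probability distribution over plausible continuations. The tiny hand-made distributions below stand in for a trained model; the point is that nothing in the loop ever checks a candidate "citation" against a database of real articles.

```python
import random

# Hypothetical next-word distributions, standing in for a trained model.
# A real model learns these probabilities from text; it does not store
# a table of which newspaper articles actually exist.
NEXT_WORD = {
    "(Washington": [("Post,", 0.9), ("Times,", 0.1)],
    "Post,":       [("March", 0.4), ("June", 0.3), ("October", 0.3)],
    "March":       [("21,", 0.5), ("3,", 0.5)],
    "21,":         [("2018)", 0.6), ("2019)", 0.4)],
}

def sample(token_probs):
    """Pick one continuation at random, weighted by probability."""
    words, probs = zip(*token_probs)
    return random.choices(words, weights=probs, k=1)[0]

def generate(prompt, max_words=4):
    """Extend the prompt one sampled word at a time, like an LLM does."""
    out = [prompt]
    for _ in range(max_words):
        choices = NEXT_WORD.get(out[-1])
        if choices is None:
            break
        out.append(sample(choices))
    return " ".join(out)

# Prints a fluent, confident-looking citation, e.g.
# "(Washington Post, March 21, 2018)" -- never verified against reality.
print(generate("(Washington"))
```

Run it a few times and it will confidently print strings that look like real Washington Post citations, none of which is checked against anything. Scale that same mechanism up to billions of parameters and you get fluent, plausible, unverified text, which is exactly what Volokh's query received.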