ChatGPT, Bard, GPT-4, and the like are often pitched as ways to retrieve information. The problem is they'll "retrieve" whatever you ask for, whether or not it exists.
Tumblr user @indigofoxpaws sent me a few screenshots where they'd asked ChatGPT for an explanation of the nonexistent "Linoleum harvest" Tumblr meme, and the bot obligingly invented one.
Herman Cain did run for president in 2012 with the 9-9-9 plan, though.
https://en.wikipedia.org/wiki/Herman_Cain_2012_presidential_campaign
Right, and to my knowledge everything else the bot said about President Herman Cain is correct: Godfather's Pizza, the NRA, the sexual harassment allegations, and so on.
But notice: I kept claiming that Cain was President, and the bot never corrected me. It didn't just respond with true information; it allowed false information to stand unchallenged. What I've effectively demonstrated is AI's inability to handle a firehose of falsehood. Humans already struggle to deal with this kind of disinformation campaign; now imagine using AI to automate the generation and dissemination of misinformation.