“I once had ChatGPT insist that a particular composer wrote music for a game, even going so far as to list particular songs from the soundtrack that they were supposedly responsible for, and it helpfully provided hallucinatory citations when I asked for them (a broken link on the game publisher’s website and a link to Wikipedia, which did not in fact support its assertion either now or at any point in the article’s history). Nor could I find anywhere else on the internet where someone even mistakenly believed that that composer had worked on the game. ChatGPT lies not because it’s regurgitating falsehoods that it found on the internet - it lies because it invents new falsehoods on its own. It’s not just trained on stuff on the internet that’s wrong; it’s trained to be confidently wrong in general. It doesn’t know what facts are, it just knows how to produce things that are shaped like facts and shove them in fact-shaped holes. I personally wasted 30 minutes of my life fact-checking/“not believing everything it says”, when it confidently told me something surprising. My horizons were not broadened by exposing me to “different worldviews”. This was unequivocally a negative experience for me.”— comment on a MetaFilter post about AI: “My goal is to be helpful, harmless, and honest.”
I got someone to ask it about the difference between two grasses I know well, Festuca ovina and F. rubra, seeing as telling them apart was a vital part of my job, just to see what it was like. It took me a good deal of hard thinking to work out exactly how wrong it was, despite the fact that I’m literally an expert in telling the difference between the two. It wasn’t just missing out on freely available information on reliably differentiating them; it made up facts about one of the grasses that were simply incorrect. E.g. saying F. rubra leaves were wider than those of F. ovina (true), but then saying they were up to 6mm wide (extremely not true; they max out at around 1.5–2mm if we’re being generous). But it sounded so right that I had to spend several minutes picking apart what was incorrect, and it turned out to be most of it.
Stack Overflow similarly had issues when ChatGPT was first released, with people using it to generate answers that sounded like authoritative solutions but were, in fact, utter nonsense.
And that’s the issue: it can mimic the shape, but it’s not intelligent. It’s not good at being right; it’s good at sounding right.
(via natalieironside)


![@FemboyPhysics tweeted: To my non-USA followers, that [sic] this is how we do volume.](https://64.media.tumblr.com/ba4100310cbd6218a8b81da1a0d43c1b/ef936e07a1e6145d-57/s500x750/330f4375761660e8628fb31b2c0b98ee8f86cd6f.jpg)