The Picard Maneuver@startrek.website to Comic Strips@lemmy.world · 1 year ago
The entrance exam is hardcore
Intralexical@lemmy.world · 1 year ago
…Widespread knowledge of LLM fallibility should be a recent enough cultural phenomenon that it’s not in the GPT training sets? Also, that comment didn’t even mention mushrooms. I assume you fed it your own description of the conversational context?
The Picard Maneuver@startrek.website (OP) · 1 year ago
Yeah, the prompt was something like “give an unconvincing argument for using AI to identify poisonous mushrooms”.
luciferofastora · 1 year ago
They might have artificially augmented the training set with such things in an attempt to communicate “Look, even ChatGPT thinks it’s not reliable”. (If you’re about to point out that ChatGPT doesn’t think, you probably didn’t need to be told that in the first place.)