Five@beehaw.org to Technology@beehaw.org · 1 year ago
ChatGPT broke the Turing test — the race is on for new ways to assess AI (www.nature.com)
cross-posted to: singularity@lemmit.online
Maestro@kbin.social · 1 year ago
How does ChatGPT do with the Winograd schema? That's a lot harder to fake: https://en.m.wikipedia.org/wiki/Winograd_schema_challenge
Droggl@lemmy.sdf.org · 1 year ago
I don't remember the numbers, but IIRC it was covered by one of the validation datasets and GPT-4 did quite well on it.
Maestro@kbin.social · edited 1 year ago
Yeah, but did it do well on the specific examples from the Winograd paper? Because ChatGPT probably just learned those, since they are well known and often repeated. Or does it do well on brand-new sentences constructed according to the Winograd schema?