You know how Google’s new AI Overviews feature is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won’t slide off (pssst… please don’t do this).

Well, according to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of AI large language models (LLMs), which is what drives AI Overviews, and this feature “is still an unsolved problem.”

  • scarabic@lemmy.world · 7 months ago

    Humans aren’t great at reliably knowing truth from fiction either

    You’re exactly right. There is a similar debate about automated cars. A lot of people want them off the roads until they are perfect, but the bar should be “until they are safer than humans,” and human drivers are fucking awful.

    Perhaps for AI the standard should be “more reliable than social media for finding answers,” and we all know social media is fucking awful.

    • Excrubulent@slrpnk.net · edited · 7 months ago

      The problem with these hallucinated answers that makes them such a sensational story is that they are obviously wrong to virtually anyone. Your uncle on Facebook who thinks the earth is flat immediately knows not to put glue on pizza. It’s obvious. The same way it’s obvious when hands are wrong in an image or someone’s hair blends into the background foliage. We know why that’s wrong; the machine can’t know anything.

      Similarly, as “bad” as human drivers are, we don’t get flummoxed because someone put a traffic cone on the hood, and we don’t just drive into the sides of trucks because they have sky-blue liveries. We don’t plow through pedestrians because we decided the person clearly standing there just didn’t matter. Or at least, that’s a distinct aberration.

      Driving is a constant stream of judgement calls, and humans can make those calls because they understand that a human is more important than a traffic cone. An autonomous system cannot understand that distinction. This kind of problem crops up all the time, and it’s why there is currently no such thing as an unsupervised autonomous vehicle system. Even Waymo is just doing a trick with remote supervision.

      Despite the promises of “lower rates of crashes”, we haven’t actually seen that happen, and there’s no indication that they’re really getting better.

      Sorry, but if your takeaway from the idea that even humans aren’t great at this task is that AI is getting close, then I think you need to re-read some of the batshit insane things it’s saying. It is on an entirely different level of wrong.