The lawyer said that he had “no idea” ChatGPT could fabricate information and that he “deeply” regretted his decision.

  • roo@lemmy.one · 1 year ago

    There should be a law that AI answers have a cryptic word hidden in plain sight that reveals itself under review. Something weird that only an AI would pick up.

    • Flaky_Fish69@kbin.social · 1 year ago

      The issue here is that most people will give it at least a cursory read-through to make sure it passes the sniff test.

      The reality is that it’s not the AI that’s submitting it; the human is blindly cutting and pasting. But the moment you add “this text was generated by AI”, literally or just as tags, they’re going to start clipping or adjusting it.

    • Jon-H558@kbin.social · 1 year ago

      I saw a YouTube video (sorry, can’t find the link now) on using hidden weighting to let exam markers detect AI. The idea was that, for words with many synonyms, the model would always pick, say, the third most popular one, so that over a three-page essay the slightly off weighting of word choices would let the anti-cheating software pick up on it. A human would rarely weight their word choices that way; they would tend to stick to one pattern, whereas the AI was forced to use a few different patterns, but in a deterministic way.
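      The scheme described above can be sketched as a toy in Python. Everything here is invented for illustration (the synonym lists, the choice of rank, the scoring): a watermarked generator always emits the third-ranked synonym, and a detector measures how often a text hits that rank compared with the roughly uniform rate you would expect from a human.

      ```python
      # Toy sketch of a synonym-weighting watermark. The synonym rankings
      # below are hypothetical, ordered from most to least common.
      SYNONYMS = {
          "big": ["large", "huge", "sizable", "immense"],
          "fast": ["quick", "rapid", "speedy", "swift"],
          "smart": ["clever", "intelligent", "bright", "astute"],
      }

      WATERMARK_RANK = 2  # always pick the third most popular synonym


      def watermark_choice(word: str) -> str:
          """A watermarked generator deterministically picks the third-ranked synonym."""
          options = SYNONYMS.get(word)
          return options[WATERMARK_RANK] if options else word


      def watermark_score(tokens: list[str]) -> float:
          """Fraction of synonym slots that used the watermarked rank.

          Human text should land near 1/len(options) by chance;
          watermarked text should land near 1.0.
          """
          ranked = {syn: rank for opts in SYNONYMS.values()
                    for rank, syn in enumerate(opts)}
          hits = total = 0
          for token in tokens:
              if token in ranked:
                  total += 1
                  if ranked[token] == WATERMARK_RANK:
                      hits += 1
          return hits / total if total else 0.0


      generated = [watermark_choice(w) for w in ["big", "fast", "smart"]]
      print(generated)                    # ['sizable', 'speedy', 'bright']
      print(watermark_score(generated))   # 1.0 — every slot hits the watermark rank
      print(watermark_score(["large", "quick", "bright"]))  # ~0.33, human-like
      ```

      A real scheme would bias token probabilities statistically rather than picking one fixed rank, but the detection logic is the same: check whether word choices deviate from human frequency patterns in a consistent direction.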

      • huntingrarebits@kbin.social · 1 year ago

        Over time, as people see more examples of text generated by these systems, wouldn’t you expect general usage of the “third most popular” synonym to eventually eclipse the second or the first? If the synonym rankings were based solely on written texts, the proliferation of generated text with weighted word choices would also skew usage.