• Drewelite@sopuli.xyz

    So, if we put AI in an echo chamber it gets dumber? Wow it really does think like humans.

  • simple@lemmy.world

    Feels like AI creators can only get away with using pre-2022 data for so long. At some point the information will be outdated and they’ll have to train on newer data, and it’ll be interesting to see if this is a problem that can be solved without harming the dataset’s quality.

    My guess is they’d need to have an AI that tries to find blatantly AI generated data and take it out of the dataset. It won’t be 100% accurate, but it’ll be better than nothing.
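    A minimal sketch of that filtering idea, assuming nothing about how real detectors work — `looks_ai_generated` here is a made-up keyword heuristic standing in for an actual classifier:

```python
# Sketch: score each document with a detector and drop anything above a
# confidence threshold. The "detector" below is a toy stand-in that just
# flags stock phrases LLMs tend to overuse -- not a real classifier.

def looks_ai_generated(text: str) -> float:
    """Toy stand-in for an AI-text classifier; returns a 0-1 score."""
    stock_phrases = ["as an ai language model", "in conclusion,", "delve into"]
    hits = sum(phrase in text.lower() for phrase in stock_phrases)
    return min(1.0, hits / 2)

def filter_dataset(docs: list[str], threshold: float = 0.5) -> list[str]:
    """Keep only documents the detector scores below the threshold."""
    return [d for d in docs if looks_ai_generated(d) < threshold]

docs = [
    "As an AI language model, I cannot help with that. In conclusion, thanks.",
    "Went hiking yesterday, the trail was muddy but worth it.",
]
print(filter_dataset(docs))  # drops the first document, keeps the second
```

    As the comment says, a real detector won't be 100% accurate, so the threshold trades false positives (losing good human text) against false negatives (letting AI text through).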

    • AggressivelyPassive@feddit.de

      I’m surprised these models don’t have something like a “ground truth layer” by now.

      Given that ChatGPT, for example, is completely unspecialized, I would have expected there to be a way to hand-encode axiomatic knowledge - like specialized domain knowledge, or even just basic math. Even tiered data (i.e. more/less trusted sources) doesn’t seem to be part of the design.
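      One simple reading of that “tiered data” idea, sketched under assumed tier names and weights (all invented for illustration): give each training example a loss weight based on how trusted its source is.

```python
# Hedged sketch of "tiered data": weight each training example by source
# trust, so low-trust text contributes less to the training loss.
# Tier names and weight values are made up for illustration.

TIER_WEIGHTS = {
    "peer_reviewed": 1.0,   # axiomatic / vetted knowledge
    "encyclopedia": 0.8,
    "forum": 0.3,
    "unknown_web": 0.1,     # also the fallback for unrecognized tiers
}

def weighted_examples(examples):
    """Attach a loss weight to each (text, source_tier) pair."""
    return [(text, TIER_WEIGHTS.get(tier, 0.1)) for text, tier in examples]

batch = [("2 + 2 = 4", "peer_reviewed"), ("2 + 2 = 5", "unknown_web")]
print(weighted_examples(batch))  # [('2 + 2 = 4', 1.0), ('2 + 2 = 5', 0.1)]
```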

      • Splodge5@lemmy.world

        Because it’s not designed to be a knowledge base, it’s designed to imitate human communication. It’s the same reason ChatGPT can’t do maths - it doesn’t “know” anything, it just predicts the most likely word (or bit of a word) to come next. ChatGPT being as good as it is at, say, writing code given a natural language prompt is sort of a happy accident, but people now expect that to be its primary function.

      • Drewelite@sopuli.xyz

        I think this is easier said than done. Maybe at our current level, but as these AIs get more advanced… what is truth? Sure, mathematics seems like an easy target, until we consider that one of the best use cases for AI could be theory. An AI could have a fresh take on our interpretation of mathematics, where these base-level assumptions would actually be a hindrance.

        • AggressivelyPassive@feddit.de

          I mean, let’s be honest here: AI will not primarily be used to discover new truths about the universe, but to order butter at the right time. Or to write basic essays and code, and to explain known things.

          That kind of knowledge could easily be categorized.

  • sorrybookbroke@sh.itjust.works

    So what I’m hearing is that if we don’t like the direction AI is taking us, we should be littering the internet with as much AI text and art as we can while pretending it’s not AI.

    Separately, with how popular AI is obviously poised to become, does this mean we’ll stagnate culturally? AI makes the artist’s, the author’s, the creative’s job extremely difficult to monetize, since their work can always be replicated quicker, cheaper, and in higher quantity by the bot. So these things will become much less human-generated, and if AI cannot get past this, we’ll just be stuck here, with little cultural evolution.

    • Aaron@beehaw.org

      Throughout history, people have always been driven to create, and others have always sought out creative works. For that reason, I don’t think we’ll necessarily “stagnate culturally” in a broad sense.

      However, at least in the US, we’re already standing at the precipice of making creative work practically impossible. Our extremely weak (by peer nation standards) labor protection laws and social support systems tend to strip life of everything but the obligation to work.

      Our last bastion of hope for structural protection of creativity is the possibility that anyone could both create and profit from it. Copyright law was originally intended to amplify that potential.

      I usually point to stock photography as an area where people used to be able to make at least modest money, but nowadays you’d be lucky to make poverty wages. The market was flooded by cheap, high-quality cameras, and thus cheap, high-quality images. AI will do the same thing for many other mediums.

      What has me really concerned is that the majority of really cool makers and creators I watch on YouTube are Canadian. I’ve convinced myself that this is because someone living in Canada can take the very real risk of sinking their life’s energy into starting a YouTube channel because at least they know that if they get cancer, they have somewhere to go.

      Not so here in America. If you aren’t working for an established employer, or sitting on quite a bit of cash for independent health insurance, you’re taking a substantial risk by being unemployed for any length of time (assuming you have the choice). Even if you do “make it,” the costs of self-insurance for sole proprietors are no joke!

      So the only people taking their life in their own hands to create works of real cultural value are 1) the few percent who manage to get paid for it, 2) the independently wealthy and/or retired, and 3) the poor and desperate who would be just as precarious in either case.

      It’s not our finest hour here, if I do say so. I hope the rise of AI helps amplify this conversation. I am truly concerned about it.

  • Mothra@mander.xyz

    Interesting article. I find it hard to believe the major AIs of today will collapse for such a reason - this gives me “year 2000 collapse” vibes. I’m by no means trashing the article, just saying I’m skeptical we’ll ever reach such a point. The article itself already mentions awareness of the feedback loop among devs, as well as two possible ways to counteract it.

    • ilost7489@lemmy.ca

      Collapse is definitely a strong word for this. They will no doubt get worse, since this kind of training simply reaffirms biases and incorrect data, but the AIs won’t suddenly collapse.
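      The feedback loop the thread is debating can be seen in a toy simulation - a hedged illustration, not what the article measured: repeatedly fit a simple model (here a Gaussian) and retrain each generation only on the previous generation’s samples, and the fitted spread drifts toward zero as the tails of the original distribution are forgotten.

```python
# Toy model-collapse demo: fit a Gaussian to data, sample a new "dataset"
# from the fit, refit, and repeat. Training only on the previous
# generation's output makes the estimated spread shrink over generations.
import random
import statistics

def generational_stds(n=10, generations=100, seed=0):
    random.seed(seed)
    data = [random.gauss(0, 1) for _ in range(n)]  # "real" data, std ~ 1
    stds = []
    for _ in range(generations):
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)  # fit this generation's model
        stds.append(sigma)
        # Next generation trains only on samples from the fitted model.
        data = [random.gauss(mu, sigma) for _ in range(n)]
    return stds

stds = generational_stds()
print(f"fitted std: first generation {stds[0]:.3f}, last {stds[-1]:.6f}")
```

      With small per-generation datasets the shrinkage is fast; larger datasets slow it down but don’t change the direction of the drift.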

  • Communist@beehaw.org

    I’ve heard the opposite is happening: the AIs are training themselves, because they can generate decent content, and by selecting the better content that way they’re actually getting more intelligent. So, I dunno.