
  • gerikson@awful.systems · 16 points · 10 months ago

    Well, that sucks. I’ve always had a lingering respect for Mullenweg because I think Wordpress.com is a good service, I really like Simplenote, and I figured Automattic taking over Tumblr couldn’t make things worse after the service was gelded by the porn-haters.

    But his latest antics on moderation at Tumblr, and now this (though he’s hardly alone; Reddit is also going to sell its users’ content to the AI mills), have really shown his true face.

    (I’ll never forget laughing at him for losing tens of thousands worth of Leica gear in a lost/stolen luggage incident many years back)

  • rinze@infosec.pub · 11 points · 10 months ago

    We can’t have nice things.

    The full text describes clusterfuck after clusterfuck. It’s worth registering (it’s free to read) even just for this one.

  • dumpsterlid@lemmy.world · 11 points · 10 months ago (edited)

    If an AI is trained on some subset of human interactions and subjects, let’s call it set A, and someone uses the AI to learn about a subset of human interactions and subjects, let’s call it set B, then there must necessarily be some shared set C of subjects and interactions contained in both A and B. In some cases information may literally be mirrored, or it may simply be memes or ideas that pop up over and over again. Note that I am talking about perspectives on things, metadata if you will, about the things contained in sets A and B, just as much as about the specific things themselves.

    The relative size of set C can be thought of as a practical measure of the magnitude difference between pattern matching and knowledge in a given context. AI design seems to treat set C as always trivial in size compared to set A or set B, and does not seem concerned with the possible cross-talk effects that arise from set A and set B not truly being linearly independent. Even worse, the cross-talk that happens creates an invisible distortion that degrades the usefulness of the AI, but that cannot be fundamentally distinguished (through inspection of the AI alone and not the data sets) from the correctly functioning aspects of the AI.

    The larger set C becomes, the exponentially quicker the collective wisdom of human conversations online is strip mined and obscured behind machine generated fluff.
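    The A/B/C overlap above can be sketched as a toy snippet (the sets here are made-up examples for illustration, not anything from the article or thread):

```python
# Toy illustration of the overlap argument: A is what the model was
# trained on, B is what a user asks it about, C is their intersection.
A = {"tax advice", "rust lifetimes", "sourdough", "git rebase"}
B = {"git rebase", "rust lifetimes", "visa rules"}

C = A & B                      # the shared "cross-talk" set
overlap = len(C) / len(A | B)  # relative size of C (Jaccard index)

print(sorted(C))  # ['git rebase', 'rust lifetimes']
print(overlap)    # 0.4
```

    As C grows toward A and B, the index approaches 1 and the model is increasingly just echoing its own training material back at the asker.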

    Everyone wants to talk about AI from the angle of the genius computer programmer making an intelligent machine because that is sexy, but what these “AI” really represent are expressions of the power of good data sets and the priceless value of human beings who methodically contribute high quality content to those data sets. In other words, AI and LLMs are about humans intelligently structuring set A and set B so that set C isn’t a problem. AI is not some magic thing that only needs humans to be trained on quality data sets to get started, rather AI is an expression of how powerful our collective conversations and creations are when we create structures out of them that computers can interface with.

    Silicon Valley and the 1% are trying to convince us that the collective power of the crowd is actually something they own by slapping “AI” on it and calling it a day, but even with the basic argument I made above (setting aside the fact that AI also just hallucinates shit), there is no way that AI in its current form, alongside simultaneous divestment from and devaluing of the systems that created the high-quality training data sets in the first place, is sustainable long term. The sensemaking of AI HAS to collapse in on itself by any meaningful metric if things keep heading in this direction, and the whiplash that causes is going to hurt a lot of humans.

    [picture of an ouroboros, the serpent that eats itself]

    • froztbyte@awful.systems · 12 points · 10 months ago

      definitely one of the longest ways to say “they’re thieving little shits trying to sell things back to us after slapping a lick of paint on it” I’ve seen in a long while, unfortunately there’s no achievement badge for that tho

    • David Gerard@awful.systemsOPM · 8 points · 10 months ago

      this reads like taking the weird promises I’ve seen made for many different technologies and then extrapolating from those promises as if they were a description of reality

      (academic blockchain theorising has often been of this genre for example)

  • David Gerard@awful.systemsOPM · 9 points · 10 months ago

    no i don’t have an unpaywalled copy, but it’s 404 so i’ll assume the article delivers on the headline and intro paras

    • self@awful.systems · 12 points · 10 months ago

      404 is the first time I’ve ever felt like a news source (that isn’t a journalist I know personally) has been consistently high quality enough to pay for it

  • Soyweiser@awful.systems · 8 points · 10 months ago

    Imagine having to train your LLM on tumblr content, really scraping the bottom of the barrel in legibility (nothing against all those weird tumblr communities with their weird slang and stuff, but it’s prob just going to make the LLM spout nonsense, as the LLM doesn’t understand anything).

    • David Gerard@awful.systemsOPM · 8 points · 10 months ago

      just think: an AI trained on depressed social justice queers

      “you took a perfectly good pocket calculator and gave it anxiety”

    • self@awful.systems · 8 points · 10 months ago

      it still amazes me that nobody at OpenAI seemed to realize ChatGPT at release sounded exactly like a bottom-tier reddit poster, because of how much of Reddit’s corpus they had ingested. part of me can’t wait for gpt’s shitty neoliberal tumblr impression before they re-weight it to once again sound like the kind of English essay that gets “Apply Yourself!” written on it in red ink

  • Sailor Sega Saturn@awful.systems · 8 points · 10 months ago (edited)

    Wordpress and Docusign, of all businesses, are now on the AI training hype train.

    Because there’s nothing companies love more than torching their own good reputations for a bit of short term profit* at the expense of society!

    * probably not even

    • AcausalRobotGod@awful.systems · 7 points · 10 months ago

      I can understand Wordpress doing it, but DOCUSIGN??? I’m not a federal judge or anything, but I would immediately stop using their service for anything. Sending stuff through the service is already a little sketchy, but I’m not going to give away my precious, precious boilerplate.