I refuse to sit here and pretend that any of this matters. OpenAI and Anthropic are not innovators, and are antithetical to the spirit of Silicon Valley. They are management consultants dressed as founders, cynical con artists raising money for products that will never exist while peddling software that destroys our planet and diverts attention and capital away from things that might solve real problems.

I’m tired of the delusion. I’m tired of being forced to take these men seriously. I’m tired of being told by the media and investors that these men are building the future when the only things they build are mediocre and expensive. There is no joy here, no mystery, no magic, no problems solved, no lives saved, and very few lives changed other than new people added to Forbes’ Midas list.

None of this is powerful, or impressive, other than in how big a con it’s become. Look at the products and the actual outputs and tell me — does any of this actually feel like the future? Isn’t it kind of weird that the big, scary threats they’ve made about how AI will take our jobs never seem to translate to an actual product? Isn’t it strange that despite all of their money and power they’re yet to make anything truly useful?

My heart darkens, albeit briefly, when I think of how cynical all of this is: corporations building products that don’t really do much, sold on the idea that one day they might, peddled by reporters who want to believe their narratives and in some cases actively champion them. The damage will be tens of thousands of people fired, long-term environmental and infrastructural chaos, and a profound depression in Silicon Valley that I believe will dwarf the dot-com bust.

And when this all falls apart — and I believe it will — there will be a very public reckoning for the tech industry.

  • jrs100000@lemmy.world · 7 days ago

    It will. So did just about every other major technical development ever. Eventually those lost jobs should be replaced by even more jobs made possible by the new technology, but in the meantime it will suck.

    That’s how you know it’s not just a gimmick. How many jobs did blockchain replace? Just about zero. How many jobs did computers or the Internet or the mechanical loom or the freaking steam engine replace? Tons.

    • amino@lemmy.blahaj.zone · 7 days ago

      except genAI has no proven purpose. this is like saying “look at how many jobs bankers replaced! we just used to eat for free, now we have to work our entire lives for it or starve!”

        • amino@lemmy.blahaj.zone · 7 days ago

          great, now enjoy that 10 times worse when genAI is used to make skeleton crews an even bigger issue and increase worker exploitation

      • Greg Clarke@lemmy.ca · 7 days ago

        except genAI has no proven purpose

        Generative AI has spawned an awful lot of AI slop, and companies are forcing incomplete products on users. But don’t judge the technology by shitty implementations. There are loads of use cases where, used correctly, generative AI brings value; document discovery in legal proceedings, for example.
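
        As a rough illustration, here is a minimal sketch of that kind of first-pass relevance screen, assuming the official OpenAI Python SDK; the model name, matter description, prompt, and documents are all made up for illustration, not a production e-discovery pipeline:

            from openai import OpenAI  # assumes the official OpenAI Python SDK

            client = OpenAI()  # reads OPENAI_API_KEY from the environment

            # Hypothetical excerpts standing in for a real document set.
            documents = [
                "Q3 invoice for catering services rendered in September.",
                "Email: please delete the backup tapes before the audit.",
            ]

            def screen(doc: str) -> str:
                """Ask the model for a coarse relevance label on one document."""
                response = client.chat.completions.create(
                    model="gpt-4o-mini",  # illustrative model choice
                    messages=[
                        {"role": "system",
                         "content": "You label documents for legal discovery. "
                                    "Reply with RELEVANT or NOT_RELEVANT only."},
                        {"role": "user",
                         "content": "Matter: alleged destruction of records.\n\n" + doc},
                    ],
                )
                return response.choices[0].message.content.strip()

            for doc in documents:
                print(screen(doc), "|", doc)

        A human reviewer still checks whatever gets flagged; the point is only that one flexible natural-language instruction replaces a pile of hand-built keyword rules.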

        • sem@lemmy.blahaj.zone · 13 hours ago

          But is it worth the cost, and is it the best option? Everyone knows that the generative models are heavily subsidized by VC.

          You could have other kinds of language processing and machine learning do document discovery better.

          • Greg Clarke@lemmy.ca · 10 hours ago

            It is the best option for certain use cases. OpenAI, Anthropic, etc. sell tokens, so they have a clear incentive to promote LLM reasoning as an everything solution. For most use cases, LLM reasoning is an inefficient use of processor cycles. However, because it is so flexible, it is still the best option in many cases: the current alternatives are even more inefficient, whether measured in cycles or in human time.

            Identifying typos in a project update is a task that LLMs can efficiently solve.
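
            For instance, a sketch of that typo task, again assuming the official OpenAI Python SDK; the model name and the sample text are illustrative:

                from openai import OpenAI  # assumes the official OpenAI Python SDK

                client = OpenAI()
                update = "We shiped the new billing servce to all custmers on Tuesday."

                # One short call: ask only for the misspelled words, nothing else.
                reply = client.chat.completions.create(
                    model="gpt-4o-mini",  # illustrative model choice
                    messages=[{
                        "role": "user",
                        "content": "List the misspelled words in this project update, "
                                   "one per line:\n\n" + update,
                    }],
                )
                print(reply.choices[0].message.content)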

            • sem@lemmy.blahaj.zone · 8 hours ago

              Yes, I think it’s a good option for spell check, or for detecting when a word seems unlikely given its context.

              For things like generating text or categorizing things, it might be the easiest option, or currently the cheapest one. But I don’t think it’s the best option if you consider everyone involved.
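
              The “unlikely word given the context” idea doesn’t even need a generative model. A sketch using a masked language model via the Hugging Face transformers library (the model choice and sentence are illustrative):

                  from transformers import pipeline  # assumes the transformers library

                  # Masked-LM probabilities as a context-aware plausibility check.
                  fill = pipeline("fill-mask", model="bert-base-uncased")

                  sentence = "The server crashed because the disk was [MASK]."
                  for word in ("full", "purple"):
                      # Score how plausible each candidate word is in this slot.
                      score = fill(sentence, targets=[word])[0]["score"]
                      print(f"{word}: {score:.4f}")
                  # "full" should score far higher than "purple", so an implausible
                  # word in that slot can be flagged as a likely error.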

              • Greg Clarke@lemmy.ca · 8 hours ago

                But I don’t think it’s the best option if you consider everyone involved.

                Can you expand on this? Do you mean from an environmental perspective because of the resource usage, a social perspective because of job losses, and/or other groups being disadvantaged because of limited access to these tools?

                • sem@lemmy.blahaj.zone · 3 hours ago

                  Basically, LLMs may make people’s jobs easier (for instance, someone can get a meeting summary with less effort), but they produce worse results if you consider everyone affected by the work product, like whose views are underrepresented in the summary. Or, if you’re using one to categorize text, you can’t find out why it’s producing incorrect results and improve it the way you could with other machine learning techniques; there’s a sketch of what I mean below. I think Emily Bender does a better job explaining it than I can:

                  https://m.youtube.com/watch?v=3Ul_bGiUH4M&t=36m35s

                  Check out the part where she talks about the problems with relying on LLMs to generate meeting summaries and with using them to classify customer support calls as “resolved” or “not resolved”. I tried to get the link close to that second part since the video is long.
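
                  For contrast, a sketch of the kind of inspectable alternative, using scikit-learn on made-up call snippets; unlike an LLM’s label, a linear model’s learned weights can be read directly to see why a call was tagged resolved:

                      from sklearn.feature_extraction.text import TfidfVectorizer
                      from sklearn.linear_model import LogisticRegression

                      # Made-up snippets; a real system would train on labeled call logs.
                      calls = [
                          "thanks, that fixed it, have a great day",
                          "issue resolved after the password reset",
                          "still broken, escalating to tier two",
                          "customer hung up, problem not solved",
                      ]
                      labels = [1, 1, 0, 0]  # 1 = resolved, 0 = not resolved

                      vectorizer = TfidfVectorizer()
                      X = vectorizer.fit_transform(calls)
                      model = LogisticRegression().fit(X, labels)

                      # Each word's weight shows how it pushes the prediction, so a
                      # misclassification can be traced to specific features and fixed.
                      weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
                      for word in ("fixed", "resolved", "broken", "escalating"):
                          print(f"{word}: {weights[word]:+.3f}")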

                  • Greg Clarke@lemmy.ca · 3 hours ago

                    I agree, and I think this comes back to the execution of the technology as opposed to the technology itself. For context, I work as an ML engineer, and I was concerned about bias in AI long before ChatGPT. I’m interested in other folks’ perspectives on this technology. The hype and spin from tech companies is a frustrating distraction from the real benefits and risks of AI.