Ok, let me give a little context. I’ll turn 40 in a couple of months, and I’ve been a C++ software developer for more than 18 years. I enjoy coding, and I enjoy writing “good” code: readable and so on.

However, for the past few months I’ve become really afraid for the future of the job I love, given the progress of artificial intelligence. Very often I can’t sleep at night because of this.

I fear that my job, while not disappearing completely, will become a very boring one consisting of debugging automatically generated code, or that it will disappear altogether.

For now, I’m not using AI. A few colleagues do, but I don’t want to, because first, it removes the part of coding I like, and second, I have the feeling that using it is sawing off the branch I’m sitting on, if you see what I mean. I fear that in the near future, people who don’t use it will be fired because management sees them as less productive…

Am I the only one who feels this way? I get the impression that all tech people are enthusiastic about AI.

  • taladar@sh.itjust.works · 10 months ago

    “It’s not really certain when real AGI is going to start to become real, but it certainly seems possible that it’ll be real soon”

    What makes you say that? The entire field of AI has not made any progress towards AGI since its inception, and if anything, the pretty bad results from language models today suggest that it is a long way off.

    • mozz@mbin.grits.dev · 10 months ago

      You would describe “recognizing handwritten digits some of the time” -> “GPT-4 and Midjourney” as no progress in the direction of AGI?

      It hasn’t reached AGI or any reasonable facsimile yet, no. But up until a few years ago something like ChatGPT seemed completely impossible; then a few big key breakthroughs happened, and now the impossible is possible. It seems by no means out of the question that a few more big breakthroughs could happen with AGI, especially with as much attention and effort as is going into the field now.

      • jacksilver@lemmy.world · 10 months ago

        It’s not that machine learning isn’t making progress; it’s just that many people speculate AGI will require a different way of looking at AI. Deep learning, while powerful, doesn’t seem like it can be adapted into something that would resemble AGI.

        • mozz@mbin.grits.dev · 10 months ago · edited

          You mean, it would take some sort of breakthrough?

          (For what it’s worth, my guess about how it works generally agrees with yours in terms of real sentience – it’s just that I think (a) neither one of us really knows that for sure, and (b) AGI doesn’t require sentience; a sufficiently capable fakery which still has limitations can still upend the world quite a bit.)

          • jacksilver@lemmy.world · 10 months ago

            Yes, and most likely more of a paradigm shift. Deep learning models are, at bottom, static statistical models. The main issue isn’t the statistical side but the static nature: for AGI this is a significant hurdle, because as the world evolves, or as these models simply run into new circumstances, they will fail. The toy sketch below tries to illustrate the point.
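            (A minimal, hypothetical Python sketch of that “static” failure mode – not something from this thread, and the data and model here are invented purely for illustration: a model is fitted once, the data-generating process then changes, and the frozen model silently degrades.)

            ```python
            # Hypothetical illustration: a static statistical model fails
            # once the world it was fitted on changes underneath it.
            import numpy as np
            from sklearn.linear_model import LinearRegression

            rng = np.random.default_rng(0)

            # "Training world": y is a simple linear function of x.
            x_train = rng.uniform(0, 1, size=(1000, 1))
            y_train = 3.0 * x_train[:, 0] + rng.normal(0, 0.1, size=1000)
            model = LinearRegression().fit(x_train, y_train)

            # The world evolves: same inputs, but the relationship has changed.
            x_new = rng.uniform(0, 1, size=(1000, 1))
            y_new = np.sin(6.0 * x_new[:, 0]) + rng.normal(0, 0.1, size=1000)

            # The frozen model keeps applying its old parameters regardless.
            print("R^2 on the original distribution:", model.score(x_train, y_train))  # close to 1.0
            print("R^2 after the shift:", model.score(x_new, y_new))                   # poor, likely negative
            ```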

            This static nature is largely the reason why autonomous vehicles have sort of hit a standstill. It’s the last 1% (what if an intersection is out, what if the road is poorly maintained, etc.) that is so hard for these models, because those cases require “thought” and not just input/output.

            LLMs have shown that large quantities of data seem to approach some sort of generalized knowledge, but researchers don’t necessarily agree on that (https://arxiv.org/abs/2206.07682). So if we can’t get to more emergent abilities, it’s unlikely AGI is on the way. But as you said, combining and interweaving these systems may get us something close.

          • taladar@sh.itjust.works · 10 months ago

            “a sufficiently capable fakery which still has limitations can still upend the world quite a bit”

            Maybe, but we are essentially throwing petabyte-sized models and lots of compute power at it, and the results are at a level where a three-year-old would do better at not giving away that they don’t understand what they are talking about.

            Don’t get me wrong, LLMs and the other recent developments in generative AI are very impressive, but it is becoming increasingly clear that the approach is, at best, barely useful even when we throw about as many computing resources at it as we can afford, which severely limits its potential applications. And even at that level, the results are still so bad that you essentially can’t trust anything that falls out.

            This is very far from being sufficient to fake AGI and has absolutely nothing to do with real AGI.