The employee, Mrinank Sharma, had led the Claude chatbot maker’s Safeguards Research Team since it was formed early last year and has been at the company since 2023.

“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” Sharma said, claiming that employees “constantly face pressures to set aside what matters most.”

He also issued a cryptic warning about the global state of affairs.

“I continuously find myself reckoning with our situation. The world is in peril. And not just from AI, or bioweapons,” he wrote, “but from a whole series of interconnected crises unfolding in this very moment.”

  • hansolo@lemmy.today · 17 points · 2 days ago

    “but from a whole series of interconnected crises unfolding in this very moment.”

    Uh… Yeah. No kidding. Not exactly cryptic, Captain Obvious.

  • FinjaminPoach@lemmy.world · 5 points · 2 days ago (edited)

    “I continuously find myself reckoning with our situation. The world is in peril. And not just from AI, or bioweapons,” he wrote, “but from a whole series of interconnected crises unfolding in this very moment.”

    It sounds to me like he’s annoyed at how AI and botfarms have been used to remotely influence politics in other countries.


    Looking at some other parts of the article:

    “It kind of feels like I’m coming to work every day to put myself out of a job,” one staffer [at The Telegraph newspaper] said in an internal survey. “In the long term, I think AI will end up doing everything and make me and many others irrelevant,” another confided.

    I think I’m aligned with the prevailing view on lemmy when I say that, no, humans will not be made irrelevant by AI. In the case of The Telegraph, it’s more likely that their boss will try to replace every human with AI and then come crawling back when the organisation collapses after 1-4 weeks.

    Others leave quietly, such as former OpenAI economics researcher Tom Cunningham. Before quitting the company, he shared an internal message accusing OpenAI of turning his research team into a propaganda arm and discouraging publishing research critical of AI’s negative effects.

    I somehow doubt any company has ever permitted research critical of its own products or services. If you have the ability to say, ‘No, we’re not publishing that. YOU can publish it, but you’ll have to quit your job and do it without the company’s name attached,’ then you’re going to do that rather than slander your own product.

  • HP van Braam@mastodon.tmm.cx · 5 points · 2 days ago

    @jaredwhite the real problem is that it is essentially impossible to determine whether this person knows something we don’t, or has used too much AI and it convinced them that they know something we don’t…

    • Jared White ✌️ [HWC]@humansare.social (OP) · 6 points · 2 days ago

      That was my concern at first, wondering if they’d been turned into a wild-eyed doomer from drinking too much of the Kool-Aid on the negative side… but my own conclusion is that they sound reasonably level-headed and likely had an “Are We the Baddies?” awakening of some kind. I also would agree AI isn’t the only major “problem” facing the world; it’s merely part of a cluster of interconnected issues, and I appreciated his acknowledgment of that.

        • Rhaedas@fedia.io · 4 points · 2 days ago

        The negative side has Kool-Aid? I assume you just mean the fringe that makes outrageous claims, and not the “ordinary” doomers who realize that if we’ve thrown out safety for profit with “just” LLMs, we’re absolutely going to go full throttle with anything more.

        I haven’t run across anyone involved in the safety aspects of AI in any form who is very happy or comfortable right now. There is a reason for that.

        • Jared White ✌️ [HWC]@humansare.social (OP) · 7 points · 2 days ago

          Yeah, I meant the “old school” doomers who thought the AI would start to replicate and upgrade itself, basically turn into Skynet, and humans would be helpless to stop it.

          Now the likely doom is just Elon Musk running the planet and turning forests into data centers for 3D waifus. 🙃

          • Rhaedas@fedia.io · 4 points · 2 days ago

            The mindless paperclip scenario is far more likely, and if looked at the right way, it’s not only happening metaphorically; most humans are helping it.

          • FinjaminPoach@lemmy.world · 1 point · 2 days ago

            Elon Musk running the planet and turning forests into data centers for 3D waifus.

            He should probably get his porn addiction fixed. You’d think the richest guy on Earth would be in the best position to do so.