• Pohl@lemmy.world · ↑385 ↓26 · 1 year ago

    If you ever needed a lesson in the difference between power and authority, this is a good one.

    The leaders of this coup read the rules and saw that they could use the board to remove Altman, they had the authority to make the move and “win” the game.

    It seems that they, like many fools, mistook authority for power. The “rules” said they could do it! Alas, they did not have the power to execute the coup. All the rules in the world cannot make the organization follow you.

    Power comes from people who grant it to you. Authority comes from paper. Authority is the guideline for the use of power; without power, it is pointless.

    • FishFace@lemmy.world · ↑145 ↓6 · 1 year ago

      Well, surely it’s premature to be making grand statements like this until it actually causes a reversal?

        • Tyfud@lemmy.one · ↑81 ↓2 · 1 year ago

        Even if it doesn’t, the consequences of the board ignoring this are catastrophic for the company. One way or another, the workers will have a victory here.

          • Ullallulloo@civilloquy.com · ↑27 ↓4 · 1 year ago

          If the workers actually quit and jump to Microsoft, they would be in a much worse position than they are currently in.

        • Potatisen@lemmy.world · ↑11 ↓78 · 1 year ago

        Yeah, but he’s like 15 years old. All the moral/ethical fallout he’s ever seen has been in movies and TV shows. Let the kid dream.

          • Melt@lemm.ee · ↑65 ↓2 · edited · 1 year ago

          People don’t need to be old to make a good point

    • meco03211@lemmy.world · ↑91 · 1 year ago

      Supreme executive power derives from a mandate from the masses, not some farcical aquatic ceremony.

    • EmergMemeHologram@startrek.website · ↑37 ↓2 · 1 year ago

      We don’t yet know the cause of this power struggle, so it’s hard to say if they were trying to stage a coup or trying to prevent something else.

      But regardless it appears they dun goofed

    • serial_crusher@lemmy.basedcount.com · ↑40 ↓16 · edited · 1 year ago

      Nah. Microsoft engineered this whole thing to weaken the board’s power and ripen OpenAI up as a less expensive acquisition target.

    • Touching_Grass@lemmy.world · ↑4 ↓58 · 1 year ago

      The problem is the employees were paid too much. They have too much and aren’t desperate enough. Need to drop that pay going forward

      • Evie@lemmy.world · ↑29 ↓1 · 1 year ago

        No working 8-5/6 pm employee, making under 100k a year, is being paid too much

        • rambaroo@lemmy.world · ↑16 · edited · 1 year ago

          One engineer at a company like this produces literally millions of dollars in revenue and savings. Practically no one is paid “too much” and anyone who says they are doesn’t know what they’re talking about. Even if they make over 100k.

          I can assure you that even at 200k the company would still be extracting value from most employees. They pay as little as the market lets them get away with.

          The only people who are paid too much at these tech companies are the execs, especially the ones who have no clue what they’re doing and constantly fuck things up.

            • Evie@lemmy.world · ↑27 ↓1 · edited · 1 year ago

            No, I wouldn’t be. I work in HR as a Generalist, with a BS in business management and human resources management, AND I do payroll…

            I see first-hand who is disadvantaged and who is privileged. I see who deserves raises and who didn’t do squat for the amount they have been paid (executive teams and presidents/owners)… Your opinions on who is paid too much are misguided…

            Your username doesn’t check out as you clearly are not touching grass

  • ribboo@lemm.ee · ↑344 ↓16 · 1 year ago

    It’s rather interesting here that the board, consisting of a fairly strong scientific presence, and not so much a commercial one, is getting such hate.

    People are quick to jump on for profit companies that do everything in their power to earn a buck. Well, here you have a company that fires their CEO for going too much in the direction of earning money.

    Yet everyone is up in arms over it. We can’t have our cake and eat it too, folks.

    • TurtleJoe@lemmy.world · ↑114 ↓11 · 1 year ago

      It’s my opinion that every single person in the upper levels of this organization is a maniac. They are all a bunch of so-called “rationalist” tech-right AnCaps who justify their immense incomes through the lens of Effective Altruism, the same ideology that Sam Bankman-Fried used to justify his theft of billions from his customers.

      Anybody with the urge to pick a “side” here ought to think about taking a step back and reconsider; they are all bad people.

      • LeroyJenkins@lemmy.world · ↑38 ↓1 · 1 year ago

        even outside the upper tiers, high paid tech workers do mental gymnastics to rationalize the shittiness they do via their companies while calling themselves liberal. motherfuckers will union bust for their company for a larger TC next year then go on LinkedIn or Facebook and spin it like “I successfully destroyed a small town’s economy, killed a union forming in the division I manage, and absolutely threw my coworkers under the bus this year. My poor father swept countless floors until his hands bled so I can be here today and that’s why I support the small working man and will never forget where I came from #boss”

    • rookie@lemmy.world · ↑59 ↓1 · 1 year ago

      Well, here you have a company that fires their CEO for going too much in the direction of earning money.

      Yeah, honestly, that’s music to my ears. Imagine a world where organizations weren’t in the business of pursuing capital at any cost.

      • Echo Dot@feddit.uk · ↑4 ↓1 · 1 year ago

        I think what a lot of people object to is the speed and the complete disorganization with which this was done. Why did Microsoft only get a 60-second warning?

    • PersnickityPenguin@lemm.ee · ↑58 ↓2 · 1 year ago

      Sounds like the workers all want to end up with highly valued stocks when it goes IPO. Which is, and I’m just guessing here, the only reason anyone is doing AI right now.

    • theneverfox@pawb.social · ↑36 ↓1 · 1 year ago

      This was my first thought… But then why are the employees taking a stand against it?

      There’s got to be more to this story

      • gmtom@lemmy.world · ↑42 ↓4 · edited · 1 year ago

        Bandwagoning. The narrative is so easy to spin: “hey, the evil board of directors forced our beloved CEO to leave. If they do that to /US/ we need to do it back to /them/.”

        I think that would get most people with moral concerns on board, the rest are just tech bros and would fully support a money grubbing unethical CEO if they thought they might get a bigger bonus out of it.

        • Instigate@aussie.zone · ↑8 ↓1 · 1 year ago

          I mean, isn’t this just an attempt to instil democracy in their workplace? If the vast majority of employees want something, whether or not it is objectively in their best interest, shouldn’t leadership listen to them? Isn’t this just what unions do on the regular?

          I have no dog in this fight, I don’t know who’s a good person and who’s bad, but I believe in democracy even when it doesn’t produce the best result. I wish all companies acted upon the wishes of their employees rather than their shareholders, customers or consumers; that would make for far more cohesive and productive workplaces.

          • gmtom@lemmy.world · ↑7 ↓2 · edited · 1 year ago

            Democracy works best when the people voting are well informed. I’m saying it seems like people have been manipulated by a very easy “us vs them” narrative to get the lower employees on board with the wishes of the upper management. And if you posed the question as “what direction should the company take, to pursue ethical AI or to try and make profitable AI”, or something similar, you would probably get different results.

            Also, this isn’t really democracy in the workplace, just people attaching their names to a letter, which I’m betting most didn’t even read.

            • Instigate@aussie.zone · ↑3 ↓1 · 1 year ago

              Sure, but the answer to a lack of an informed public is not reverting away from democracy; it’s trying to inform the voters. Very many people vote against their best interests on a regular basis in a political sphere, and we shouldn’t revoke their right to vote as a result. Democracy, as a principle, should still prevail.

              I don’t think it’s fair to infantilise people you’ve never met in the way that you are. What evidence do you have that the people who signed on to this letter didn’t read it? What evidence do you have that they’re either naïve or easily manipulated? I think they’re unfair assumptions. They may be true, but I have no idea if that’s the case.

              • gmtom@lemmy.world · ↑1 · 1 year ago

                I’m working on the assumption that the people working at the company are a fairly typical example of the general population.

                So I’m applying my experiences of people in general to them. It would be like assuming they didn’t read a software licence, because most people don’t do that. And I know from previous experience that people would get an email asking them to put their name to this letter and would opt to do so based on their existing opinions, without taking the time to actually read it. Of course some people did, but I think it’s a safe assumption to say most didn’t.

                • Instigate@aussie.zone · ↑1 · 1 year ago

                  I’d argue that a group of new-tech employees is a specifically atypical example of the general population. They’re very likely tertiary qualified (minority), they’d all be earning more than six figures (minority), they’re likely on the lower end of the age bracket, and I doubt they’re representative with regards to gender and cultural background as that’s a known issue in tech. I’m not sure that cohort is in any way representative of the general population.

                  I’m not trying to take a stand here; I have no dog in this fight. I’m just trying to elucidate why making such an assumption might not be wise. As I’ve said before; it may be true, but I (and you) have no idea if that’s actually the case, so assuming it serves no real value.

            • Psychodelic@lemmy.world · ↑1 ↓1 · 1 year ago

              I too think all the people I disagree with are simply stupid and ill-informed, as that is truly the highest form of intellectual integrity

                • Psychodelic@lemmy.world · ↑1 · 1 year ago

                  Whatever, I’m betting you didn’t even fully read my comment. You’re obviously either not informed or being manipulated. Maybe if someone just explained it to you differently you’d understand what my comment says and support it.

    • justawittyusername@lemmy.world · ↑33 ↓1 · 1 year ago

      I immediately thought that the board was bad, then read the context…

      so are the employees backing Altman because it means more money for the company/them? Or is there another reason?

    • archomrade [he/him]@midwest.social · ↑29 ↓2 · 1 year ago

      Well, here you have a company that fires their CEO for going too much in the direction of earning money.

      I think this is very much in question by the people who are up in arms

      • ribboo@lemm.ee · ↑45 ↓8 · 1 year ago

        Altman went to Microsoft within 48 hours; does anything else really need to be said? Add to that the fact that basically every news outlet has reported, with different sources, that he was pushing in exactly that way. There’s very little to suggest that reality is different.

        • foofy@lemmy.world · ↑18 ↓3 · 1 year ago

          The board has given no real reasoning for why they fired him. Until they do, there’s no reason anyone should consider this anything other than an internal power struggle that resulted in a coup.

          And Sam didn’t have a job anymore. Why shouldn’t he go work for Microsoft? He was pushed out of OpenAI, is he contractually bound to never do something different?

        • archomrade [he/him]@midwest.social · ↑4 · 1 year ago

          I’m not the one contesting it, but there’s a strong contingent of people who believe Altman’s interest is in developing AGI and little else. To them, his taking that position could be explained as him positioning himself to exert broader influence.

          That’s not my personal interpretation, but it is at least a little surprising that the rift is between him and his BOD. Presumably they would all have the same financial incentive to monetize their project, not just Altman.

          Personally, I think people being quick to draw any conclusion from this are putting the cart before the horse. It’s not clear to me at all what the competing interests are, if it’s not just completely political posturing to begin with.

    • Obinice@lemmy.world · ↑15 · 1 year ago

      Is that actually the case? I’ve not seen any actual information yet about what happened or why they did what they did.

      If they’ve actually stated that the guy was fired because the company was going too far down the focus on money making route, that would be huge news I’d be really interested in hearing.

    • Rooskie91 · ↑8 ↓1 · 1 year ago

      I’m sure some amount of the negative press is propaganda from corporations who would like to profit from using AI and are somehow prevented from doing so by OpenAI’s model.

    • knotthatone@lemmy.one · ↑8 ↓3 · 1 year ago

      What we have here is a company that fired its CEO for vague and cryptic reasons, and a whole lot of speculation about what the real issue was. These are their own words:

      https://openai.com/blog/openai-announces-leadership-transition

      I’m not trying to defend Altman or the altruism of Microsoft. I would just like to understand why this firing happened and why it was done in such an abrupt and dramatic manner.

    • ByteJunk@lemmy.world · ↑7 ↓12 · 1 year ago

      From the outside, this story plays out like a bunch of snivelling family members of a lottery winner who plotted to steal all his money and throw him out, because he’s “not candid”.

      The rest of the family, who also lived with the guy, clearly don’t agree and are now demanding that the thieves turn themselves in.

      I mean, sure they may even have real reasons to kick him out, but man did they fuck this one up…

        • ByteJunk@lemmy.world · ↑4 ↓1 · 1 year ago

          The analogy isn’t about stealing money, it’s about kicking the dude from the house.

          I could believe that they thought they had good reasons for it, but they’ve done such a bad job of explaining them that even their own employees aren’t buying it, as per the original article.

          At least they got support in this thread, got to count for something.

  • conditional_soup@lemm.ee · ↑242 ↓1 · 1 year ago

    I’d like to know why exactly the board fired Altman before I pass judgment one way or the other, especially given the mad rush by the investor class to re-instate him. It makes me especially curious that the employees are sticking up for him. My initial intuition was that MSFT convinced Altman to cross bridges that he shouldn’t have (for $$$$), but I doubt that a little more now that the employees are sticking up for him. Something fucking weird is going on, and I’m dying to know what it is.

    • scarabic@lemmy.world · ↑84 ↓1 · edited · 1 year ago

      Wanting to know why is reasonable but it’s sus that we don’t already know. Why haven’t they made that clear? How did they think they could do this without a solid explanation? Why hasn’t one been delivered to set the rumors to rest?

      It stinks of incompetence, or petty personal drama. Otherwise we’d know by now the very good reason they had.

      • Tangent5280@lemmy.world · ↑44 ↓1 · 1 year ago

        If there was something illegal going on, then all parties involved would have incentive to keep it under wraps.

    • los_chill@programming.dev · ↑57 ↓6 · edited · 1 year ago

      Altman wanted profit. The board prioritized (rightfully, per their mission) responsible, non-profit stewardship of AI. Employees now side with Altman out of greed and view the board as denying them their mega payday. Microsoft is dangling jobs for employees wanting to jump ship and make as much money as possible. This whole thing seems pretty simple: greed (Altman, Microsoft, employees) vs the original non-profit mission (the board).

      Edit: spelling

      • CoderKat@lemm.ee · ↑10 ↓1 · 1 year ago

        That’s what I thought it was at first too. But regular employees aren’t usually all that interested in their company being profit driven. Especially AI researchers. Most of those that I know are extremely passionate about ethics in AI.

        But do they know things we don’t know? They certainly might. Or it might just be bandwagoning or the likes.

        • los_chill@programming.dev · ↑9 ↓1 · 1 year ago

          But regular employees aren’t usually all that interested in their company being profit driven. Especially AI researchers. Most of those that I know are extremely passionate about ethics in AI.

          I would have thought so too of the employees, but threatening a move to Microsoft kinda says the opposite. That or they are just all-in on Altman as a person.

    • Ullallulloo@civilloquy.com · ↑34 · 1 year ago

      The only explanation I can come up with is that the workers and Altman both agreed on monetizing AI as much as possible. They’re worried that if the board doesn’t resign, the company will remain a non-profit, more conservative in selling its products, and they won’t get their share of the money that could be made.

    • Blackmist@feddit.uk · ↑10 · 1 year ago

      Yeah, the speed at which MS snapped him up makes me think of Zampella and West from Infinity Ward.

      • Chocrates@lemmy.world · ↑5 ↓3 · 1 year ago

        Microsoft stock dropped 2% with the announcement; hiring him was just to stop the hemorrhaging while they figure out what to do.

        • Blackmist@feddit.uk · ↑3 · 1 year ago

          Isn’t that more because MS owns lots of OpenAI stock? But then 2% is neither here nor there anyway. More background noise than anything.

      • Melt@lemm.ee · ↑41 ↓3 · 1 year ago

        The tone of the blog post is so amateurish I feel like I’m reading a reddit post on r/Cryptocurrency

      • I_Clean_Here@lemmy.world · ↑26 ↓4 · 1 year ago

        Don’t get me wrong, this move from the board reeks of some grade A bullshit, but this article is absolute crap. Is this supposed to be serious journalism?

      • conditional_soup@lemm.ee · ↑18 ↓2 · 1 year ago

        Thanks for sharing. That is… Weird in ways I didn’t anticipate. “Weird cult of pseudointellectuals upending the biggest name in silicon valley” wasn’t on my bingo board.

        • FaceDeer@kbin.social · ↑14 · 1 year ago

          IMO there are some good reasons to be concerned about AI, but those reasons are along the lines of “it’s going to be massively disruptive to the economy and we need to prepare for that to ensure it’s a net positive”, not “it’s going to take over our minds and turn us into paperclips.”

          • diablexical@lemm.ee · ↑2 · 1 year ago

            The author did a poor job of explaining that. He’s referencing the thought experiment of a businessman instructing a super effective AI to make paperclips. Given a terse enough objective and an effective enough AI, one can imagine a scenario in which the businessman and the whole world in fact are turned into paperclips. This is obviously not the businessman’s goal, but it was the instruction he gave the AI. The implication of the thought experiment is that AI needs guardrails, perhaps even ethics, or else it can unintentionally result in a doomsday scenario.

      • Bal@lemm.ee · ↑8 ↓2 · 1 year ago

        I don’t know a lot about the background but this article feels super biased against one side.

      • Coasting0942@reddthat.com · ↑2 · 1 year ago

        Can somebody explain the following quote in the article for me please?

        Rationalists’ chronic inability to talk like regular humans may even explain the statement calling Altman a liar.

        • vanquesse@lemmy.blahaj.zone · ↑2 ↓1 · 1 year ago

          Imagine “Roko’s basilisk”, but extended into an entire philosophy. It’s the idea that “we” need to do anything and everything to create the inevitable ultimate super-AI, as fast as possible. Climate change, wars, exploitation, suffering? None of that matters compared to the benefits humanity stands to gain when the ultimate super-AI goes online.

      • roguetrick@kbin.social · ↑2 ↓1 · 1 year ago

        A duel between hucksters and the delusional makes sense. The delusional rely on the hucksters for funding whether they want to or not though. No heroes.

    • morrowind@lemmy.ml · ↑4 ↓3 · 1 year ago

      I don’t think MSFT convinced him with money, but rather opportunity. He clearly still wants to work with AI, and the second-best place for that after OpenAI is Microsoft.

      • SnipingNinja@slrpnk.net · ↑3 ↓2 · 1 year ago

        Second best would be Google, but for him it’s Microsoft, because he’s probably getting a sweetheart deal and stays in control of his destiny (not really, but at least for a short while).

        • morrowind@lemmy.ml · ↑2 ↓2 · 1 year ago

          Microsoft has access to a lot of OpenAI’s code, weights, etc., and he’s already been working with them. It would be much better for him than joining some other company he has no experience with.

          • SnipingNinja@slrpnk.net · ↑5 · 1 year ago

            He’s not the guy who writes code; he’s a VC or management guy. You might say he has good ideas, as the ChatGPT interface is attributed to him, but he didn’t build it.

  • Even_Adder@lemmy.dbzer0.com · ↑237 ↓10 · 1 year ago

    You’re not going to develop AI for the benefit of humanity at Microsoft. If they go there, we’ll know "Open"AI’s mission was all a lie.

    • Gork@lemm.ee · ↑142 ↓5 · 1 year ago

      Yeah Microsoft is definitely not going to be benevolent. But I saw this as a foregone conclusion since AI is so disruptive that heavy commercialization is inevitable.

      We likely won’t have free access like we do now; it will be enshittified like everything else, and we’ll need to pay yet another subscription to even access it.

      • MeatsOfRage@lemmy.world · ↑141 ↓3 · edited · 1 year ago

        “Hey Bing AI can I get a recipe that includes cinnamon”

        “Sure! Before we begin did you hear about the great Black Friday deals at Sephora”

        “Not interested”

        “No problem. You’re using query 9 of 20 this month. Do you want to proceed?”

        “Yes”

        “Before we begin, Bing Max+ has a one month trial starting at just $1 for your first month*. Want to give that a try?”

        “Not now”

        “No problem. With cinnamon you can make Cinnamon Rolls”

        “What else?”

        “Sure! You are using query 10 of 20 this month. Before I continue did you hear the McRib is back for a limited time at McDonald’s. (ba, da, ba, ba, ba) I’m lovin’ it.”

      • dustyData@lemmy.world · ↑54 ↓6 · 1 year ago

        You don’t have free access. The best models have always been safeguarded behind paywalls; you have access to parlor tricks and demo shows. This product was born enshittified. It’s crap that only has passable use for mega corporations.

        • Gork@lemm.ee · ↑10 · 1 year ago

          For a while we did, with ChatGPT 3.5 before 4.0 came out. I’m not sure what to make of Bing’s AI, since they have ulterior motives and it’s likely a demo for its ultimate form.

      • extant@lemmy.world · ↑15 · 1 year ago

        We only have free access now because it’s still in development and they’re using our interactions to train from, but when they are on more solid ground I fully expect enshittification.

      • banneryear1868@lemmy.world · ↑3 ↓2 · 1 year ago

        it will be enshittified like everything else now and we’ll need to pay yet another subscription to even access it.

        Yeah, this is why I’m so skeptical about the way it will presumably change the world. It will change things, but the economic relations that determine its ability to do so will overrule the technological capabilities, since it will be infeasible or not economically viable to deliver on a lot of the hype.

    • bioemerl@kbin.social · ↑31 ↓1 · 1 year ago

      Open AI has been a farce ever since they disabled access to GPT3 for the sake of security.

      • Even_Adder@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        21
        ·
        1 year ago

        The way I understand it, Microsoft gave OpenAI $10 billion, but they didn’t get any votes. They had no say in their matters.

        • Alto@kbin.social
          link
          fedilink
          arrow-up
          14
          arrow-down
          3
          ·
          1 year ago

          On paper, sure. They gave them $10B. They absolutely have some sort of voice here

      • ikidd@lemmy.world
        link
        fedilink
        English
        arrow-up
        19
        ·
        1 year ago

        MS owns 49% of the for profit subsidiary and has no votes on the non-profit overseeing body.

    • sab@kbin.social
      link
      fedilink
      arrow-up
      5
      arrow-down
      2
      ·
      1 year ago

      And if they don’t, we’re supposed to keep on believing all of this is somehow benefiting us?

  • just_change_it@lemmy.world
    link
    fedilink
    English
    arrow-up
    156
    arrow-down
    7
    ·
    1 year ago

    It’s supposed to be a nonprofit benefiting humanity, not a pay day for owners or workers. The board isn’t making money off of it.

    Giving Microsoft control is a bad idea. (duh?)

    Giving a single person control is a bad idea, per sam altman.

    • slaacaa@lemmy.world
      link
      fedilink
      English
      arrow-up
      97
      arrow-down
      2
      ·
      edit-2
      1 year ago

      My take on what happened (we are now at step 8):

      1. Sam wants to push for more & quicker profit with MS and VC backing, but the board resists; constant conflicts
      2. Sam aligns with MS; they hatch a plan to gut OpenAI for its know-how, ppl, and tech, leaving the non-profit part bleeding out in the gutter
      3. Sam & MS set a trap: Sam crosses some red lines, maybe taking commercial decisions without board approval. Potentially there was also some whispering in key ears (e.g., Ilya) by seemingly helpful advisors/VCs to push & pull at the same time on both sides
      4. The board has had enough after Sam doesn’t back down, fires him & the other co-founder guy
      5. MS and VCs go on full attack to discredit the board. After some info gathering, the board realizes it has been utterly fucked
      6. Some chaos, quick decisions appointing/replacing ppl, trying to manage the fire, even talking to Sam (btw this might have been a fallback option for MS: the board reinstates him with more control and guardrails, weakening the power of the non-profit)
      7. Sam joins MS, masks are off
      8. Employees on the sinking ship revolt; even Ilya realizes he was manipulated/fucked
      9. OpenAI dead, key ppl join MS, tech and rest of the company bought for scraps. Non-profit part dead. Capitalist victory

      Source: subjective interpretation/deduction based on the available info and my experience working as a management consultant for 10 years (dealing with a lot of exec politics, though nothing this serious)

      • TotalCasual@lemmy.world
        link
        fedilink
        English
        arrow-up
        21
        arrow-down
        2
        ·
        1 year ago

        You’re wrong on point #1. This isn’t being done by Sam Altman for commercial purposes. It’s being done by Microsoft in an attempt to remove the OpenAI board completely. Facebook recently shut down its AI Ethics division.

        All of this is happening in conjunction with each other. Large corporations are trying to privatize AI and using key personnel in the industry to make it seem like a good thing. This wasn’t just Sam Altman. Whoever drafted the letter demanding the board steps down is working with Microsoft to do this.

        More than likely, that group went around spreading doomsday to the other employees in an attempt to scare them into fleeing the company.

        Sam Altman is just a pawn.

        • people_are_cute@lemmy.sdf.org
          link
          fedilink
          English
          arrow-up
          8
          arrow-down
          2
          ·
          1 year ago

          Facebook recently shutdown its AI Ethics division.

          Meta is the only player that’s releasing its models to the public. Ironically, it is the one being the most ethical in the AI space right now.

          “AI Ethics” teams in Silicon Valley are nothing but rent-seeking doomer cults that leech off the effort of others and hold back progress with bullshit gatekeeping. There was not a single positive contribution Facebook’s AI “ethics” team ever made.

      • ikidd@lemmy.world
        link
        fedilink
        English
        arrow-up
        8
        ·
        1 year ago

        This is precisely the take I’ve been coming to on this. It fits all the fuckery going on. You can rest assured there is nothing in writing that can back this up, but one day there will be an unrelated lawsuit where it all comes out.

      • Eldritch@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        1 year ago

        You might very well be correct. The thing people need to remember is that just because something involves a conspiracy doesn’t mean it’s false; it’s usually the number of people required to be involved that makes a conspiracy implausible. I think it is very much within human nature, especially for programmers, who have traditionally been better treated and paid than most other workers, to side with the profit motive against actual altruism. It’s the tech bro thing to do. I’m going to wait and see what happens and not take any sides, even though I’m typically always for supporting the workers.

  • NounsAndWords@lemmy.world
    link
    fedilink
    English
    arrow-up
    150
    arrow-down
    3
    ·
    1 year ago

    You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

    You are God damned right that shutting everything down is one of the roles of a non-profit Board focused on AI safety.

  • Jolteon@lemmy.zip
    link
    fedilink
    English
    arrow-up
    101
    arrow-down
    2
    ·
    1 year ago

    Later: All 195 employees of OpenAI in support of board of directors.

    • Clbull@lemmy.world
      link
      fedilink
      English
      arrow-up
      49
      arrow-down
      1
      ·
      edit-2
      1 year ago

      So they paid Kenyan workers $2 an hour to sift through some of the darkest shit on the internet.

      Ugh.

      • Death_Equity@lemmy.world
        link
        fedilink
        English
        arrow-up
        23
        arrow-down
        1
        ·
        1 year ago

        They could have just given 4chan a $1 bounty per piece and they would have gleefully delivered until Lambo.

        • Flying Squid@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          2
          ·
          1 year ago

          “Above the median” should not be the standard for having to spend all day reading about racism and rape.

          • aidan@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            1 year ago

            I strongly disagree. I have read and seen a lot of messed-up things on the internet, and I much, much prefer that to the couple of weeks I spent helping out a friend at a part-time service job. (And I was doing that with good friends in a casual environment.)

            • Flying Squid@lemmy.world
              link
              fedilink
              English
              arrow-up
              4
              ·
              1 year ago

              You’re welcome to strongly disagree that this:

              One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.

              Is not worth high pay, but I would say psychologically damaging your employees and then not even giving them the counseling tools to help them is absolutely worth high pay. You should not have to endure things like that for an ‘above the median’ wage in a country where ‘the median’ is still being very poor. I see this as not much better than defending other corporations making poor people in Africa work in mines for a decent wage relative to others in their country but not giving them safety equipment. And they still die poor.

              • aidan@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                1 year ago

                I’d obviously prefer that people weren’t in poverty at all. But I have far more sympathy for the miner risking their life than for someone reading something disgusting/disturbing on the internet; it’s not anywhere near close.

                • Flying Squid@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  3
                  ·
                  1 year ago

                  You don’t understand how massive psychological damage can be as bad as seriously endangering someone’s physical health?

                  Just because a graphic description of a dog being raped while a child watches doesn’t bother you doesn’t mean it won’t bother anyone else. In fact, I would wager that it would be pretty disturbing for most people to read that, let alone read that sort of thing for hours every day.

                  And then there are the ones who are just as low-paid but have to look at images instead. Again, you may not be bothered by CSAM, but I would wager that most people would find looking at that all the time very hard to deal with and it could easily result in PTSD.

          • DudeDudenson@lemmings.world
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            1
            ·
            1 year ago

            What about spending all day being abused by people in a call center?

            I mean, sure, we’d all like to make enough money to live a full life with any job, but that’s sadly not a reality, and the point you’re missing is that economies don’t work the same as the US’s in every country.

            I live in Argentina, I make 25k a year as a software developer, and I’m in the top 1% of highest earners in the country.

            • Flying Squid@lemmy.world
              link
              fedilink
              English
              arrow-up
              2
              ·
              1 year ago

              What about it? It’s nowhere near the same as spending all day reading graphic rape and racist screeds, let alone looking at CSAM, which is what they’re paying them to do now. Did you miss the part where they are psychologically damaged from this work and the counseling they have been offered is insufficient? Call centers don’t usually result in that sort of thing.

              Also, maybe you shouldn’t expect and defend wages that low for being in the top 1%?

              • AutistoMephisto@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                edit-2
                1 year ago

                They’re in the top 1% for Argentina, not globally. I mean, it would be nice if every worker made US wages. It’s kinda fucked though that even the lowest paid workers in America can live like kings in the Philippines. I make $42k/yr as an electrical assembler at a plant that manufactures environmental test chambers. If I take my PTO and go to almost any other country, especially Argentina, I can live like royalty for a week.

      • SeaJ@lemm.ee
        link
        fedilink
        English
        arrow-up
        3
        ·
        1 year ago

        That’s actually about 3x what the average Kenyan makes, sadly.

    • GenesisJones@lemmy.world
      link
      fedilink
      English
      arrow-up
      18
      ·
      1 year ago

      This reminds me of an NPR podcast from 5 or 6 years ago about the people who get paid by Facebook to moderate the worst of the worst. They had a former employee giving an interview about the manual review of images that were CP and rape-related shit, iirc. Terrible stuff.

      • JonEFive@midwest.social
        link
        fedilink
        English
        arrow-up
        21
        ·
        1 year ago

        No, you’re right, you should be. We don’t want to normalize this shit, it should continue to shock and offend.

        These are the dark sides of modern technology. The kids working cobalt mines. The workers being paid pennies to categorize data so bad that it is traumatic to even read it. I can’t imagine how the people who have to look at pictures can do it.

        I feel like I could handle some dark text here or there, but if I had to do it for 40-50 hours a week? Hundreds of passages every day. That would warp me pretty quickly.

        • SacrificedBeans@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          ·
          1 year ago

          I’m sure there’s some loophole there, maybe between countries’ laws. And if there isn’t, Hey! We’ll make one!

        • Clbull@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          ·
          1 year ago

          Isn’t CSAM classed as images and videos which depict child sexual abuse? Last time I checked, written descriptions alone did not count, unless they were being forced to look at AI-generated image prompts of such acts?

          • Strawberry@lemmy.blahaj.zone
            link
            fedilink
            English
            arrow-up
            6
            arrow-down
            1
            ·
            1 year ago

            That month, Sama began pilot work for a separate project for OpenAI: collecting sexual and violent images—some of them illegal under U.S. law—to deliver to OpenAI. The work of labeling images appears to be unrelated to ChatGPT.

            This is the quote in question. They’re talking about images

        • Meowoem@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          4
          arrow-down
          1
          ·
          1 year ago

          They could be working with the governments of relevant countries to develop filters and detection systems.

        • aidan@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          1 year ago

          IIRC there are a few legitimate and legal reasons to seek CSAM, such as journalism, and definitely developing methods to prevent its spread.

        • smooth_tea@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          2
          ·
          1 year ago

          I really find this a bit alarmist and exaggerated. Consider the motive and the alternative. You really think companies like that have any other options than to deal with those things?

          • Floshie@lemmy.blahaj.zone
            link
            fedilink
            English
            arrow-up
            2
            ·
            1 year ago

            Consider the impact on human psychology. Not everyone has the guts to read or even look through these things, and even those who appear to handle it still carry the scars inside.

            Maybe there is no alternative for now, but don’t do that to people on such a low paycheck. Consider the background of the people who may take on these tasks, working not even to live, but to survive. I would have preferred to wait 10 years rather than inflict these horrifying tasks on those people.

            I’m sure there are lots of people in jail for creating, sharing, or even profiting off of this content. They could do that work? But then again, even though it bothers me less than forcing it on people who have no choice, that is still an idea I find ethically very questionable.

          • barsoap@lemm.ee
            link
            fedilink
            English
            arrow-up
            1
            ·
            1 year ago

            Very much yes, police authorities have CSAM databases. If what you want to do with that stuff really is above board and sensible, they’ll let you access it.

            I don’t doubt that anything OpenAI could do with that stuff can be above board, but sensible is another question: any model that can detect something can be used to train a model which can generate it. As such, those models are under lock and key just like their training sets, and the (social) media platforms which have a use for these things, and the resources to run them, operate under the watchful eye of the authorities. Think faceboogle. OpenAI could, in principle, try to get into the business of selling companies at that scale models it can, and has, trained itself, but I don’t really see that making sense from a business POV, either.

        • reksas@sopuli.xyz
          link
          fedilink
          English
          arrow-up
          5
          ·
          edit-2
          1 year ago

          This is actually extremely critical work if the results are going to be used by AIs that will be widely deployed. This essentially determines the “moral compass” of the AI.

          Imagine if some big corporation did the labeling and such, trained some huge AI with that data, and it became widely used. Then years pass, and eventually AI develops to such an extent that it can reliably replace entire upper management. Suddenly, becoming a slave to an “evil” AI overlord moves from beyond-crazy idea to plausible (years and years in the future, not now, obviously).

          • ColdFenix@discuss.tchncs.de
            link
            fedilink
            English
            arrow-up
            4
            ·
            1 year ago

            Extremely critical, but mostly done by underpaid workers in poor countries who have to look at the most horrific stuff imaginable and develop lifelong trauma, because it’s the only job available and otherwise they and their families might starve. (Source) This is one of the main reasons I have little hope that, if OpenAI actually manages to create an AGI, it will operate in an ethical way. How could it, if the people trying to instill morality into it are so lacking in it themselves?

            • reksas@sopuli.xyz
              link
              fedilink
              English
              arrow-up
              1
              ·
              1 year ago

              True. Though while it’s horrible for those people, they might be doing more important work than they, or we, even realize. I also kind of trust the moral judgement of the oppressed more than the oppressor’s (since they are the ones who do the work). Though I’m definitely not condoning the exploitation of those people.

              It’s quite awful that this seems to be the best we can hope for here. I doubt Google or Microsoft are going to give very positive guidance on whether it’s OK for people to suffer if it leads to more money for investors, when they do their own labeling.

  • InvaderDJ@lemmy.world
    link
    fedilink
    English
    arrow-up
    86
    arrow-down
    6
    ·
    1 year ago

    The biopic on this whole thing is going to be hilarious. The rumors are that the board didn’t like how fast the CEO was moving with AI and that they’re afraid of the consequences of possible AGI (which I don’t think these new LLMs are even close to), but that doesn’t fit how modern boards of directors behave, so I don’t trust it.

    It’s just baffling how this golden goose was halfway strangled in the nest.

    • chiliedogg@lemmy.world
      link
      fedilink
      English
      arrow-up
      32
      ·
      1 year ago

      Or this is essentially a hostile takeover by Microsoft. OpenAI is a non-profit with non-shareholders as its board. They don’t have a profit motive to develop AI quickly and without safety measures. But the tech they’ve developed has quickly become the hottest product on the planet.

      Microsoft was clearly prepared to take on all the employees the second this happened.

        • chiliedogg@lemmy.world
          link
          fedilink
          English
          arrow-up
          5
          arrow-down
          2
          ·
          1 year ago

          These will come at a premium. Not only are they high-demand jobs, but they’ll absolutely be sued by OpenAI if they hire away half the staff of a company with which they had a business relationship. Those legal fees alone will be 8 figures even if they win.

          • Phlogiston@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            1
            ·
            1 year ago

            I’m positive that lawyers will get super involved and a lot will depend on the various contracts which we don’t have any visibility into. But from an ethical standpoint, the openai board shat in the bathwater and can’t really complain if people get out and move over to a cleaner pool.

            • GreenM@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              1 year ago

              Maybe they are not doing it to move to cleaner water but maybe they were promised more fish if they do by certain fisherman conglomerate. But i could be wrong.

      • DragonTypeWyvern@literature.cafe
        link
        fedilink
        English
        arrow-up
        45
        arrow-down
        3
        ·
        edit-2
        1 year ago

        I find it interesting that you guys are assuming it is the board acting out of greed and not the employees.

        OpenAI was, shockingly, built as an open-source non-profit. Under the CEO it became closed-source and profit-driven, thanks in large part to the investment from Microsoft.

        You will note this letter says nothing about the “mission” of OpenAI. It does, however, talk a lot about reach and being in a “strong position.”

        Translation, $$$.

        The board’s letter does, however, mention its goal to serve humanity, and its role as a non-profit, while being extremely clear the board members have no equity in the company.

        I find it very, very interesting that the employee letter mentions nothing of any greater responsibility.

        • FaceDeer@kbin.social
          link
          fedilink
          arrow-up
          6
          arrow-down
          2
          ·
          edit-2
          1 year ago

          You will note this letter says nothing about the “mission” of OpenAI. It does, however, talk a lot about reach and being in a “strong position.”

          The letter explicitly mentions the “mission” of OpenAI. It’s in three of the five paragraphs.

              • DragonTypeWyvern@literature.cafe
                link
                fedilink
                English
                arrow-up
                2
                arrow-down
                1
                ·
                edit-2
                1 year ago

                That they will get what they want.

                Or that motives don’t matter, dealer’s choice, because I don’t believe either tbh.

                • the post of tom joad@sh.itjust.works
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  arrow-down
                  1
                  ·
                  1 year ago

                   So that didn’t take long. Would you care to discuss the reasons why I was right? It doesn’t take a Nostradamus; I just saw 500+ workers actually understanding their worth and showing their power.

                   And it took like a day. Though not every board member is leaving yet; if the workers demand it, they will.

                  Isn’t it great when the parasite class gets shown who’s in charge?

                   I have an inkling it’s not that you didn’t think they’d succeed, but that they shouldn’t have. Why?

        • logicbomb@lemmy.world
          link
          fedilink
          English
          arrow-up
          5
          arrow-down
          6
          ·
          edit-2
          1 year ago

          I find it interesting that you guys are assuming it is the board acting out of greed and not the employees.

          I find it interesting that I simply said that there was “corruption”, and the comment I responded to simply said the organization was a “shitshow”, and you interpreted that to mean that one or both of us were saying that the board was acting out of greed.

          The great thing about the comment I replied to is that it’s correct really regardless of the situation. My comment was building on that, suggesting that the power of their product led to this, without directly saying who is responsible.

          I think you can tell a lot about a person based on how they respond to ambiguity. Do they assume the person is agreeing with them, or do they assume that the person is disagreeing with them?

          Edit: You can also tell a lot about a person based on whether they respond to criticism or simply try to silence it with a downvote, for example.

          • NeoNachtwaechter@lemmy.world
            link
            fedilink
            English
            arrow-up
            3
            ·
            1 year ago

            I think you can tell a lot about a person based on how they respond to ambiguity.

            You got that precisely correct, but I’m afraid it was too much for many of the simple minds that climb around in the trees here :-)

  • nucleative@lemmy.world
    link
    fedilink
    English
    arrow-up
    73
    arrow-down
    3
    ·
    1 year ago

    I feel like this is Satya’s wet dream. He woke up on Friday like normal and went to bed on Sunday owning what, 85% of OpenAI’s top people? Acquisitions aren’t usually that easy.

    It seems obvious Sam would want to grow his company to infinity. That’s what VC people do. The board expecting otherwise is strange in hindsight. Now they can oversee the slow, measured adoption of much smaller business while the rest of the team shoots for the stars.

    Anyways, RIP y’all. Skynet launches next year.

  • CorneliusTalmadge@lemmy.world
    link
    fedilink
    English
    arrow-up
    52
    ·
    1 year ago

    Image Text:

    To the Board of Directors at OpenAI,

    OpenAI is the world’s leading AI company. We, the employees of OpenAI, have developed the best models and pushed the field to new frontiers. Our work on AI safety and governance shapes global norms. The products we built are used by millions of people around the world. Until now, the company we work for and cherish has never been in a stronger position.

    The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company. Your conduct has made it clear you did not have the competence to oversee OpenAI.

    When we all unexpectedly learned of your decision, the leadership team of OpenAI acted swiftly to stabilize the company. They carefully listened to your concerns and tried to cooperate with you on all grounds. Despite many requests for specific facts for your allegations, you have never provided any written evidence. They also increasingly realized you were not capable of carrying out your duties, and were negotiating in bad faith.

    The leadership team suggested that the most stabilizing path forward - the one that would best serve our mission, company, stakeholders, employees and the public - would be for you to resign and put in place a qualified board that could lead the company forward in stability. Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

    Your actions have made it obvious that you are incapable of overseeing OpenAI. We are unable to work for or with people that lack competence, judgement and care for our mission and employees. We, the undersigned, may choose to resign from OpenAI and join the newly announced Microsoft subsidiary run by Sam Altman and Greg Brockman. Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join. We will take this step imminently, unless all current board members resign, and the board appoints two new lead independent directors, such as Bret Taylor and Will Hurd, and reinstates Sam Altman and Greg Brockman.

    1. Mira Murati
    2. Brad Lightcap
    3. Jason Kwon
    4. Wojciech Zaremba
    5. Alec Radford
    6. Anna Makanju
    7. Bob McGrew
    8. Srinivas Narayanan
    9. Che Chang
    10. Lillian Weng
    11. Mark Chen
    12. Ilya Sutskever
  • redcalcium@lemmy.institute
    link
    fedilink
    English
    arrow-up
    36
    ·
    1 year ago

    We, the undersigned, may choose to resign from OpenAI and join the newly announced Microsoft subsidiary run by Sam Altman and Greg Brockman.

    Let’s have all OpenAI employees move to Microsoft. What could possibly go wrong?