Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week’s thread

(Semi-obligatory thanks to @dgerard for starting this)

  • self@awful.systems · 22 points · 3 months ago

    so mozilla decided to take the piss while begging for $10 donations:

    We know $10 USD may not seem like enough to reclaim the internet and take on irresponsible tech companies. But the truth is that as you read this email, hundreds of Mozilla supporters worldwide are making donations. And when each one of us contributes what we can, all those donations add up fast.

    With the rise of AI and continued threats to online privacy, the stakes of our movement have never been higher. And supporters like you are the reason why Mozilla is in a strong position to take on these challenges and transform the future of the internet.

    the rise of AI, you say! wow, that sounds awful. it’s a good thing Mozilla isn’t a company that very recently became notorious for pushing that exact thing on its users without their consent, alongside other privacy-violating changes. what a responsible tech company!

    • Sailor Sega Saturn@awful.systems · 9 points · 3 months ago

      We know $10 USD may not seem like enough to reclaim the internet with the browser we barely maintain and take on irresponsible tech companies that pay us vast sums of money. But the truth is that as you read this email, hundreds of Mozilla supporters worldwide haven’t realized we’re a charity racket dressed up as a browser that will spend all your money on AI and questionable browser plugins. And when each one of us contributes what we can, we can waste the money all the faster!

      With the rise of AI (you’re welcome, by the way, for the MDN AI assistant) and continued threats to online privacy, like integrating a Mr. Robot ad into Firefox without proper code review, the stakes of our movement have never been higher. And marks like you are the reason why Mozilla is in such a strong position to take on these challenges and transform the future of the internet in any way we know how, except by improving our browser, of course; that would be silly.

      (I’m feeling extra cynical today)

    • froztbyte@awful.systems · 9 points · 3 months ago

      upside of this: they’ll get told why they’re not getting many of those $10 donations

      downside of that (rejection): that could be exactly what one of the ghouls-in-chief there needs to push some bullshit or other

      • self@awful.systems · 7 points · 3 months ago

        the ability of Mozilla’s executives and PMs to ignore public outcry is incredible, but not exactly unexpected from a thoroughly corrupt non-profit

      • swlabr@awful.systems · 10 points · 3 months ago

        Gaslighting? What are you talking about? There’s no such thing as gaslighting. Maybe you’re going crazy

  • Sailor Sega Saturn@awful.systems · 22 points · 3 months ago

    Today in you can’t make this stuff up: SpaceX invades Cards Against Humanity’s crowdfunded southern border plot of land.

    Article (Ars Technica)
    Lawsuit with pictures (PDF)

    Reddit Comment with CAH’s email to backers

    The above Ars Technica article also led me to this broader article (Reuters) about SpaceX’s operations in Texas. I found these two sentences particularly unpleasant:

    County commissioners have sought to rechristen Boca Chica, the coastal village where Johnson remains a rare holdout, with the Musk-endorsed name of Starbase.

    At some point, former SpaceX employees and locals told Reuters, Starbase workers took down a Boca Chica sign identifying their village. They said workers also removed a statue of the Virgin of Guadalupe, an icon revered by the predominantly Mexican-American residents who long lived in the area.

    Reading all of this also somehow makes Elon Musk’s anti-immigrant tweets feel even worse to me than they already were.

    • swlabr@awful.systems · 8 points · 3 months ago

      Damn, 3 hours late to the party. Despite my disdain for their game, I can only recall enjoying CAH’s liberal antics.

      • o7___o7@awful.systems · 7 points · 3 months ago

        CAH is definitely a game you only play with people you’ve known your whole life, isn’t it?

        Once played with randoms at a hacker con and almost died of embarrassment.

    • Soyweiser@awful.systems · 6 points · 3 months ago

      Considering the style of humor they have, and the style Musk tries to show, I do wonder how hurt Musk is over all this. And it’s only a matter of time before his sycophants create ‘CAH is dying’ graphs and animal meme images with testicles.

    • Soyweiser@awful.systems · 15 points · 3 months ago

      Instead of improving LLMs, they are working backwards to prove that all other things are actually word-prediction tasks. It is so annoying and also quite dumb. No, chemistry isn’t like coding or Legos. The law isn’t invalidated by gold fringes on the flag or by saying the magic words.

    • swlabr@awful.systems · 14 points · 3 months ago

      None of these fucking goblins have learned that analogies aren’t equivalences!!! They break down!!! Auuuuuuugggggaaaaaaarghhhh!!!

    • YourNetworkIsHaunted@awful.systems · 13 points · 3 months ago

      The problem is that there could be any number of possible next words, and the available results suggest that the appropriate context isn’t covered in the statistical relationships between prior words for anything but the most trivial of tasks, i.e. automating the writing and parsing of emails that nobody ever wanted to read in the first place.

    • gerikson@awful.systems · 11 points · 3 months ago

      This is just standard promptfondler false equivalence: “when people (including me) speak, they just select the next most likely token, just like an LLM”

    • Soyweiser@awful.systems · 12 points · 3 months ago

      I look forward to the ‘but we often disagreed’ non-apologies. With an absolute lack of self-reflection on how this helped push Sailer/Unz into the positions they hold now. If we even get that.

      • swlabr@awful.systems · 15 points · 3 months ago

        Pinker: looking through my photo album where I’m with people like Krauss and Epstein, shaking my head the whole time so the people on the bus know I disagree with them

    • swlabr@awful.systems · 10 points · 3 months ago

      Who could have predicted that liberalism would lead into scientific racism and then everything else that follows (mostly fascism)???

      • barsquid@lemmy.world · 5 points · 3 months ago

        Surely “scientific” is giving them far too much credit? I recall previously sneering at some quotes about skull sizes, including something like “women keep bonking their heads”?

        • froztbyte@awful.systems · 8 points · 3 months ago

          I believe the term is not so much meant to confer the properties of science upon them as to describe the particular strain of racist shitbaggery (which dresses itself up in the appearance of science, much like what happens with scientism)

          • barsquid@lemmy.world · 7 points · 3 months ago

            Oh, definitely. For clarity, my intention was to riff off them and increase the level of disrespect towards racists. In hindsight, the question format doesn’t quite convey that.

    • ahopefullycuterrobot@awful.systems · 9 points · 3 months ago

      I’m mildly surprised at Krugman, since I never got a particularly racist vibe from him. (This is 100% an invitation to be corrected.) Annoyed that 1) I recognise so many names and 2) so many of the people involved are still influential.

      Interested in why Jonathan Marks is there, though. He’s been pretty anti-scientific-racism if memory serves. I think he’s even complained about how white supremacists stole the term human biodiversity. Now I’m curious about the deep history of this group. Marks published his book in 1995 and this is a list from 1999, so was the transformation of the term into a racist euphemism already complete by then? Or was this discussion group closer to the beginning of that shift?

      Similarly, curious how out some of these people were at the time. E.g. I know that Harpending was seen as a pretty respectable anthropologist up until recently, despite his virulent racism. But I’ve never been able to figure out how much his earlier racism was covert vs. how much 1970s anthropology accepted racism vs. how much this reflects his personal connections with key people in the early field of hunter-gatherer studies.

      Oh also, super amused that Pinker and MacDonald are in the group at the same time, since I’m pretty sure Pinker denounced MacDonald for anti-Semitism in quite harsh language (which I haven’t seen mirrored when it comes to anti-Black racism). MacDonald’s another weird one. He defended Irving when Irving was trying to silence Lipstadt, but in Evans’s account, while he disagrees with MacDonald, he doesn’t emphasise that MacDonald is a raging anti-Semite and white supremacist. So, once again, interested in how covert vs. overt MacDonald was at the time.

      • blakestacey@awful.systems (OP) · 9 points · 3 months ago

        Yeah, Krugman appearing on the roster surprised me too. While I haven’t pored over everything he’s blogged and microblogged, he hasn’t sent up red flags that I recall. E.g., here he is in 2009:

        Oh, Kay. Greg Mankiw looks at a graph showing that children of high-income families do better on tests, and suggests that it’s largely about inherited talent: smart people make lots of money, and also have smart kids.

        But, you know, there’s lots of evidence that there’s more to it than that. For example: students with low test scores from high-income families are slightly more likely to finish college than students with high test scores from low-income families.

        It’s comforting to think that we live in a meritocracy. But we don’t.

        And in 2014:

        There are many negative things you can say about Paul Ryan, chairman of the House Budget Committee and the G.O.P.’s de facto intellectual leader. But you have to admit that he’s a very articulate guy, an expert at sounding as if he knows what he’s talking about.

        So it’s comical, in a way, to see [Paul] Ryan trying to explain away some recent remarks in which he attributed persistent poverty to a “culture, in our inner cities in particular, of men not working and just generations of men not even thinking about working.” He was, he says, simply being “inarticulate.” How could anyone suggest that it was a racial dog-whistle? Why, he even cited the work of serious scholars — people like Charles Murray, most famous for arguing that blacks are genetically inferior to whites. Oh, wait.

        I suppose it’s possible that he was invited to an e-mail list in the late '90s and never bothered to unsubscribe, or something like that.

        • ahopefullycuterrobot@awful.systems · 5 points · 3 months ago

          The Wikipedia article on the Human Biodiversity Institute dates the term human biodiversity’s turn into a euphemism for racism to sometime in the late ’90s, and Marks’ book is from 1995, so there was apparently a pretty quick turnover. Which makes me wonder whether it was a hijacking or an independent invention. The article has a lot of sources, so I might mine them to see if there’s a detailed timeline.

  • Mii@awful.systems · 17 points · 3 months ago

    Follow up for this post from the other day.

    Our DSO now greenlit the stupid Copilot integration because “Microsoft said it’s okay” (of course they did), and he was also at some stupid AI convention yesterday, and whatever the fuck happened there, he’s become a complete AI bro and is now preaching the Gospel of Altman that everyone who’s not using AI will be obsolete in a few years and we need to ADAPT OR DIE. It’s the exact same shit CEO is spewing.

    He wants an AI that handles data security breaches by itself. He also now writes emails with ChatGPT even though just a week ago he was hating on people who did that. I sat with my fucking mouth open in that meeting and people asked me whether I’m okay (I’m not).

    I need to get another job ASAP or I will go clinically insane.

    • self@awful.systems · 11 points · 3 months ago

      I’m so sorry. the tech industry is shockingly good at finding people who are susceptible to conversion like your CEO and DSO and subjecting them to intense propaganda that unfortunately tends to work. for someone lower in the company like your DSO, that’s a conference where they’ll be subjected to induction techniques cribbed from cults and MLM schemes. I don’t know what they do to the executives — I imagine it involves a variety of expensive favors, high levels of intoxication, and a variant of the same techniques yud used — but it works instantly and produces someone who can’t be convinced they’ve been fed a lie until it ends up indisputably losing them a ton of money

      • Mii@awful.systems · 8 points · 3 months ago

        Yeah, I assume that’s exactly what happened when CEO went to Silicon Valley to talk to “important people”. Despite being on a cost-saving course before, he dumped tens of thousands into AI infrastructure which hasn’t delivered anything so far, and he is suddenly very happy to send people to AI workshops and conferences.

        But I’m only half-surprised. He’s somewhat known for making weird decisions after talking to people who want to sell him something. This time it’s gonna be totally different, of course.

        • YourNetworkIsHaunted@awful.systems · 9 points · 3 months ago

          The “important people” line is a huge part of how the grift works and makes tech media partially responsible. Legitimizing the grift rather than criticizing it makes it easy for sales folks to push “the next big thing.” And after all, don’t you want to be an important person?

    • Soyweiser@awful.systems · 11 points · 3 months ago

      He wants an AI that handles data security breaches by itself. He also now writes emails with ChatGPT

      He is the data security breach.

      E: Dropped a T. But hey, at least ChatGPT uses SSL to communicate, so the data breach is now constrained to the ChatGPT training data. So it isn’t that bad.

    • Sailor Sega Saturn@awful.systems · 8 points · 3 months ago

      It’s the exact same shit CEO is spewing.

      I have realized, working at a corporation, that a lot of employees will just mindlessly regurgitate the company message. And not in an “I guess this is what we have to work on” way, but as if it had replaced whatever worldview they had previously.

      Not quite sure what to make of this TBH.

  • khalid_salad@awful.systems · 17 points · 3 months ago

    Every few years there is some new CS fad that people try to trick me into doing research in — “algorithms” (my actual area), then quantum, then blockchain, then AI.

    Wish this bubble would just fucking pop already.

  • o7___o7@awful.systems · 16 points · 3 months ago

    Behind the Bastards is starting a series about Yarvin today. Always appreciate it when they wander into our bailiwick!

  • flavia@lemmy.blahaj.zone · 14 points · 3 months ago

    A lemmy-specific coiner today: https://awful.systems/post/2417754

    The dilema of charging the users and a solution by integrating blockchain to fediverse

    First, there will be a blockchain. There will be these cryptocurrencies:

    This guy is speaking like he is in Genesis 1

    I guess it would be better that only the instances can own instance-specific coins.

    You guess alright? You mean that you have no idea what you’re saying.

    if a user on lemmy.ee want to post on lemmy.world, then lemmy.ee have to pay 10 lemmy.world coin to lemmy.world

    What will this solve? If 2 people respond to each other’s comments, the instance with the most valuable coin will win. What does that have to do with who caused the interaction?

    • Sailor Sega Saturn@awful.systems · 19 points · 3 months ago

      Yes, crypto instances, please all implement this and “disallow” everyone else from interacting with you! I promise we’ll be sad and not secretly happy, and that you’ll make lots of money from people wanting to interact with you.

    • Soyweiser@awful.systems · 9 points · 3 months ago

      if a user on lemmy.ee want to post on lemmy.world, then lemmy.ee have to pay 10 lemmy.world coin to lemmy.world

      Note that you don’t need cryptocurrencies for this. I think Jaron Lanier talked about an idea like this ages ago, before people tried to put cryptocurrencies into everything.

    • skillissuer@discuss.tchncs.de · 7 points · 3 months ago

      1 post, 6 comments, joined 3 months ago; “i’m naive to crypto”; “I want to host an instance that serves as a competitive alternative to Facebook/Threads/X to the users in my country”

      yeah, he doesn’t even have to charge for interacting with him, i’ll avoid him without it

  • flizzo@awful.systems · 14 points · 3 months ago

    Orange site on pager bombs in Lebanon:

    If we try to do what we are best at here at HN, let’s focus the discussion on the technical aspects of it.

    It immediately reminded me of Stuxnet, which also from a technical perspective was quite interesting.

    • skillissuer@discuss.tchncs.de · 11 points · 3 months ago

      the technical aspect, for now, seems to be that israeli secret services intercepted and sabotaged thousands of pagers destined for hezbollah operatives, then blew them up all at once. the explosive charges look small, reportedly less than 20g each, but the orange-site accepted truth is that it was haxxorz blowing up lithium batteries. the israelis already did exactly this with a phone in a targeted assassination, and the actual volume of such a bomb would be tiny (about 10ml)

      • mirrorwitch@awful.systems · 11 points · 3 months ago

        Dunno, but why not, after NaNoWriMo claimed that opposing “AI” means you’re classist and ableist. Why not also make objecting sexist, racist, etc. I’m going to be ahead of the curve by predicting that being against ChatGPT will also be a red flag that you’re a narcissistic sociopath manipulator, because uhh because abused women need ChatGPT to communicate with their toxic exes /s

      • Soyweiser@awful.systems · 9 points · 3 months ago

        Considering how much the AI hype feels like the cryptocurrency hype, during which every joke you made had already been seriously used to make a coin that had been pumped and dumped, I wouldn’t be surprised at all.

      • Mii@awful.systems · 8 points · 3 months ago

        Oh, I wonder if they are referring to this shit, where someone came to r/lgbt fishing for compliments for the picture they’d asked Clippy for, and was completely clowned on by the entire community, which then led to another subreddit full of promptfans claiming that artists are transphobic because they didn’t like a generated image which had a trans flag in it.

        • David Gerard@awful.systems (mod) · 8 points · 3 months ago

          remembering the NFT grifter who loudly asserted that if you weren’t into NFTs then you must be a transphobe

          (it was Fucking Thorne)

          • sc_griffith@awful.systems · 6 points · 3 months ago

            fondly remembering replying to these types of people with screenshots from the wikipedia page on affinity fraud. they really hated that

    • o7___o7@awful.systems · 6 points · 3 months ago

      I suspect it’ll land somewhere above “halitosis” but below “wearing black socks with crocs”

  • Sailor Sega Saturn@awful.systems · 13 points · 3 months ago

    Meanwhile, over at the orange site they discuss a browser hack: https://news.ycombinator.com/item?id=41597250 As in, a hack that gave the attacker control over any user of this particular browser, even if they only ever visited innocent websites, needing only their user ID.

    This is what’s known in the biz as a company-destroying level of fuck-up. I’m not sure whether this is particularly sneerable, but I’m just agog at how a company that calls themselves “The Browser Company” can get the basic browser security model so incredibly wrong.

    • self@awful.systems · 12 points · 3 months ago

      from their Wikipedia page I’m starting to get why I’ve never previously heard of The Browser Company’s browser; it’s about a year old, it’s only for macOS, iOS, and Windows, and it’s just a Chromium fork with a Swift UI on top and extremely boring features you can get with plugins on Firefox without risking getting your entire life compromised (til Mozilla decides that’s profitable, I suppose)

      Arc is designed to be an “operating system for the web”, and integrates standard browsing with Arc’s own applications through the use of a sidebar. The browser is designed to be customisable and allows users to cosmetically change how they see specific websites.

      oh fuck off. so what makes something an operating system is:

      • the whole UI got condensed down into an awkward-looking sidebar that takes up more space instead of a top bar
      • you can re-style websites (which is the feature that enabled this hack, and which must be one of the most common browser plugins)
      • you can change the browser’s UI color
      • it can run “its own applications”? which sounds like a real security treat if they’re running in the UI context of the browser. though to be honest I don’t see why these wouldn’t just be ordinary web apps, in which case it’s just a PWA feature

    • antifuchs@awful.systems · 6 points · 3 months ago

      Hm, I don’t really see the sneer. They shipped a nasty bug, got notified, and had a patch out for it within 36 hours. The remediations look reasonable too: better privacy, less Firebase, actual security audits; even the bounty program is probably the right call (though bounty programs attract so many junk reports that it’s probably a wash).

      I gotta admit I’m kind of partial to them and their browser? It’s the non-Brave one that ships with an ad blocker by default, has a much nicer UI than the existing ones, and the sync thing isn’t half bad (as long as it doesn’t sync security badness to all your instances; if it does, ouch). Sure, they sound like a cult, but I guess that’s how browser dev has been funded since the 1990s.

      • Sailor Sega Saturn@awful.systems · 7 points · 3 months ago

        OK, I might have been a little too harsh, but the security requirements of a browser are higher than those of pretty much any other piece of software, except perhaps operating system code, email, or text messaging. For a serious player in the browser space, getting the basic security model / architecture right is not optional. This isn’t a matter of a bug slipping through (which can happen to anyone), but of the system being designed wrong. Hopefully this company has learned its lesson, treats security with the care it deserves going forward, and brings some diversity to the browser market.

        Anyway, that said, let’s look at why this was a colossal bug:

        1. The browser required a cloud-hosted account just to use it. This is a central point of failure, and the cloud is overrated, so it should be opt-in.
        2. The browser allowed arbitrary script injection into any webpage based on this cloud account. This is a central point of failure and goes directly against the browser security model, so it should be opt-in.
        3. The developers did not recognize how dangerous the above was, so perhaps did not treat the back-end with the paranoia it deserved.

        Compare Firefox: I have an extension that allows for arbitrary CSS injection, but that extension isn’t cloud-based, so this class of vulnerability isn’t possible in the first place. It’s also an extension I opted into, and I can enable it selectively on specific sites instead of globally.
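
        (For anyone who wants to picture the difference, here’s a rough sketch, in WebExtension terms, of that locally-scoped model; the localStyles table and the example site are made up for illustration, and this is not any real extension’s actual code:)

        // Sketch of a local, opt-in styling extension (TypeScript, Manifest V3).
        import browser from "webextension-polyfill";

        // The per-site style table lives entirely on the local machine; nothing
        // here is synced to, or controllable from, a remote account.
        const localStyles: Record<string, string> = {
          "example.com": "body { background: #111; color: #ddd; }",
        };

        // CSS is injected only into the tab the user explicitly clicked the
        // toolbar button on, so knowing someone's user ID gets an attacker nothing.
        browser.action.onClicked.addListener(async (tab) => {
          if (tab.id === undefined || !tab.url) return;
          const css = localStyles[new URL(tab.url).hostname];
          if (!css) return;
          await browser.scripting.insertCSS({ target: { tabId: tab.id }, css });
        });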

    • David Gerard@awful.systems (mod) · 11 points · 3 months ago

      so according to @liveuamap, the backstory here is that this is to get his name out of the news about the Wildberries shooting in Moscow (where a battle for corporate control came down to gunshots), because he was backing one of the sides

  • gerikson@awful.systems · 13 points · 3 months ago

    Despite Soatok explicitly warning users that posting his latest rant[1] to the more popular tech aggregators would lead to loss of karma and/or public ridicule, someone did just that on lobsters and provoked this mask-slippage[2]. (The comment is in three paragraphs, which I will comment on individually below.)

    Obligatory note that, speaking as a rationalist-tribe member, to a first approximation nobody in the community is actually interested in the Basilisk and hasn’t been for at least a decade. As far as I can tell, it’s a meme that is exclusively kept alive by our detractors.

    This is the Rationalist version of the village worthy complaining that everyone keeps bringing up that one time he fucked a goat.

    Also, “this sure looks like a religion to me” can be - and is - argued about any human social activity. I’m quite happy to see rationality in the company of, say, feminism and climate change.

    Sure, “religion” is on a sliding scale, but Big Yud-flavored Rationality ticks more boxes on the “religion or not” checklist than feminism or climate change do. In fact, treating those two as religions is often a way to denigrate them, and it’s never done in good faith.

    Finally, of course, it is very much not just rationalists who believe that AI represents an existential risk. We just got there twenty years early.

    Citation very much needed, bub.


    [1] https://soatok.blog/2024/09/18/the-continued-trajectory-of-idiocy-in-the-tech-industry/

    [2] link and username withheld to protect the guilty. Suffice to say that They Are On My List.

    • Soyweiser@awful.systems · 11 points · 3 months ago

      nobody in the community is actually interested in the Basilisk

      But you should be, y’all created an idea which some people do take seriously and which is causing them mental harm. In fact, Yud took it seriously in a way that shows either that he believes in potential acausal blackmail himself, or that enough people in the community believe in it that the idea would cause harm.

      A community he created to help people think better. Which now has a mental minefield somewhere, but because they want to look sane to outsiders, people don’t talk about it. (And they also pretend the people whose minds it did blow up don’t exist.) This is bad.

      I get that we put them in a no-win situation. Either they take their own ideas seriously enough to talk about acausal blackmail, and then either help people by disproving the idea or help people by going ‘this part of our totally Rational way of thinking is actually toxic and radioactive and you should keep away from it (a bit like Hegel, am I right(*))’, which makes them look a bit silly for taking it seriously (to which you could say: who cares?), or a bit openly culty if they go the secret-knowledge route. Or they could pretend it never happened, never was a big deal, and isn’t a big deal, in an attempt to not look silly. Of course, we know what happened, and that it is still causing harm to a small group of (proto-)Rationalists. This option makes them look insecure, potentially dangerous, and weak to social pressure.

      That they go with the last one, while having also written a lot about acausal trading, just shows they don’t take their own ideas that seriously. Or, if it is an open secret not to talk openly about acausal trade because of acausal blackmail, that is just more cult signs. You have to reach level 10 before they teach you the Lord Xenu type stuff.

      Anyway, I assume this is a bit of a problem for all communal worldbuilding projects: eventually somebody introduces a few ideas which have far-reaching consequences for the roleplay but which people would rather not have included. It gets worse when the non-larping outside world then notices you, and the first reaction is to pretend larping isn’t that important to your group because the incident was a bit embarrassing. Own the lightning-bolt tennis ball, it is fine. (**)

      *: I actually don’t know enough about philosophy to know if this joke is correct, so apologies if Hegel is not hated.

      **: I admit, this joke was all a bit forced.

    • ShakingMyHead@awful.systems · 11 points · 3 months ago

      Obligatory note that, speaking as a rationalist-tribe member, to a first approximation nobody in the community is actually interested in the Basilisk and hasn’t been for at least a decade.

      Sure, but that doesn’t change the fact that the head EA guy wrote an op-ed for Time magazine saying that a nuclear holocaust is preferable to a world that has GPT-5 in it.

        • ShakingMyHead@awful.systems · 6 points · 3 months ago

          Finally, of course, it is very much not just rationalists who believe that AI represents an existential risk. We just got there twenty years early.

          This one?

    • David Gerard@awful.systems (mod) · 8 points · 3 months ago

      nobody in the community is actually interested in the Basilisk

      except the ones still getting upset over it, but if we deny their existence as hard as possible they won’t be there

      • gerikson@awful.systems · 8 points · 3 months ago

        The reference to the Basilisk was literally one sentence and not central to the post at all, but this big-R Rationalist couldn’t resist singling it out and loudly proclaiming that it’s not relevant anymore. The m’lady doth protest too much.

  • o7___o7@awful.systems · 13 points · 3 months ago

    What are the chances that–somewhere deep in the bowels of Clearwater, FL–some poor soul has been ordered to develop an AI replicant of L. Ron Hubbard?

    There is a substantial corpus.

    • self@awful.systems · 12 points · 3 months ago

      the only worthwhile use of LLMs: endlessly prompting the L Ron Hubbard chatbot with Battlefield Earth reviews as a form of acausal torture

    • maol@awful.systems · 4 points · 3 months ago

      They’ve had enough problems with the guy who claimed to be the reincarnation of LRH.

      I reckon Miscavige wouldn’t want a robo-LRH as it could challenge his power within the organization.