• 425 Posts
  • 586 Comments
Joined 3 years ago
cake
Cake day: April 24th, 2023

help-circle
  • Yeah, and how does that Tamil farmer fact check their black box audio interface when it tells them to spray Roundup on their potatoes, or warns them to buy bottled water because their Hindu-hating Muslim neighbors have poisoned their well, or any other garbage it’s been deliberately or accidentally poisoned with?

    One of the huge weaknesses of AI as a user interface is that you have to go outside the interface to verify what it tells you. If I search for information about a disease using a search engine, and I find an .edu website discussing the results of double blind scientific studies of treatments for a disease, and a site full of anti-Semitic conspiracy theories and supplement ads telling me about THE SECRET CURE DOCTORS DON’T WANT YOU TO KNOW, I can compare the credibility of those two sources. If I ask ChatGPT for information about a disease, and it recommends a particular treatment protocol, I don’t know where it’s getting its information or how reliable it is. Even if it gives me some citations, I have to check its citations anyway, because I don’t know whether they’re reliable sources, unreliable sources, or hallucinations that don’t exist at all.

    And people who trust their LLM and don’t check its sources end up poisoning themselves when it tells them to mix bleach and vinegar to clean their bathrooms.

    If LLMS were being implemented as a new interface to gather information - as a tool to enhance human cognition rather than supplant, monitor, and control it - I would have a lot fewer problems with them.






  • AI is a parasite. It can’t come up with anything a human didn’t create first. It eats our thoughts and regurgitates them.

    We kill AI by limiting our use of the Internet, renouncing social media in particular (and yes, I recognize the hypocrisy), and communicating with actual human beings through encrypted messenger apps that AI can’t scrape for new training material.

    Think of AI like an online troll. Don’t feed it, don’t engage with it, and it will be irrelevant to you until it finally gives up and dies.

    But the social media machine wants you not to talk to actual human beings, it wants you to be lonely and isolated, so you’ll consume its product - and AI is just a part of that machine, making you lonely and then providing you with the illusion of a real person to talk to.

    Gardening is a great way to fight that, especially community gardening, because you literally have to be out there in person with your hands in the dirt talking to other gardeners.

    So I agree with this post and strongly recommend anybody who doesn’t have space to garden go looking for a community garden, or volunteer at a food bank (which often have ties to community gardens and can point you at opportunities), or help at a Food Not Bombs event, or otherwise get yourself involved in the real live in-person work of feeding human beings, and reclaim your brain from the social media algorithm feeding you AI slop.


  • I don’t know who the people around you are. I won’t tell you you’re wrong to be afraid of interacting with them.

    But I do know that social media is designed to make you feel that way.

    Social media algorithms find the angriest, the most hateful, the most radical, content on all sides and feed it to you. So you’re going to see people on your side saying the other side wants to kill you, and you’re going to see people on the other side saying they want to kill you, and you’re not going to see the vast majority of people who don’t actually want to kill you.

    Because the more afraid you are of your actual human neighbors, the more time you’ll spend on social media watching ads and being force-fed algorithmic slop. And that slop makes you even more afraid of your neighbors, so you spend even more time online, and so on and so forth.

    So I’d ask you to ask yourself: if you believe people in your community want to kill trans people and enslave blacks, how much of that belief comes from what people in your community have actually said and done, and how much of that belief comes from stuff you’ve heard online?



  • I think “we” (secular Westerners) are more likely to appropriate spiritual indigenous narratives, take them out of context, and trivialize them into meaninglessness - as the article describes we did with the concept of mindfulness - than we are to erase them. And I think this will happen because we, secular Westerners, are living lives devoid of spiritual meaning, and it’s terribly tempting to steal other people’s beliefs in the hope we can find a fraction of their meaning in life.

    And though I’m sure people online are going to go full Reddit atheist on me and tell me belief in a higher power is ignorant and primitive, every society in human history that we know anything about has either had some sort of belief in higher powers or has aggressively suppressed such belief, and that belief served a function of social cohesion that a lot of the left no longer have.

    Honestly, I think part of the reason Trump won - and part of the reason populist, religious nationalism is surging worldwide, Trump being just one example - is that the secular West threw out its own spiritual narratives without replacing them with anything. We condemned Christianity as ignorant, bigoted, and repressive, but we didn’t create anything in its place to serve its role. We walked away from the churches, which were the “third places” of our towns, the centers of our social and cultural lives, and we replaced them with what? Coffee shops?

    People need something to believe in, and we told them “do your jobs and vote blue, but it won’t matter anyway because the environment is fucked”.

    The environmental left needs the warning not to engage in empty spirituality because so many people in it are desperate for the kind of meaning spirituality gives.



  • I have a serious question. Who thought Reddit Answers was a good idea? What’s the actual benefit to the company? Did they get a ton of venture capital funding to build it, or are they trying to jump on the AI hype, or what? Does anyone actually know?

    One of the biggest reasons, I think, for Reddit’s popularity in the 2010s was that its comment threads often had advice and information and product recommendations from real people - as opposed to, say, Amazon reviews, which were full of bots even back then. A ton of people still search for topics on Google using the site:reddit.com modifier, because searching Reddit bypasses all the SEO and AI-created spam sites that dominate Google results, and Reddit is still one of the biggest open source databases of actual human advice and conversation.

    And Reddit has decided to dilute its most valuable contribution to the internet with AI spambots?

    It’s some sort of stage 3 enshittification, obviously - cannibalizing its core use case for short term profit - but I’m morbidly curious who thought this was a good idea and why.







  • One: the original Axios article requires you to log in.

    Two: The researchers were aware of the limits of AI detectors, and tested the one they used. From the article:

    We should also take the judgments of AI detectors with a grain of salt, since their reliability is up for question. In its own testing of Surfer’s accuracy, Graphite had the detector analyze a sample of AI-generated articles and another sample of human articles, finding that it labeled human-written articles as AI-made 4.2 percent of the time — a common problem with these tools — but only mistook AI-written articles as human 0.6 percent of the time.

    And really, judging from the quality of search results these days, I would have expected a lot more than 50% of new online articles to be AI generated, so from that standpoint the article might be good news 😆



  • It’s like the old economist joke.

    Two economists are walking in the park. The first economist sees a pile of dog shit and says to the other, “I’ll pay you $50 to eat that dog shit.” So he does and gets paid $50. Later on, the second economist sees a pile of dog shit and says to the first, “I’ll pay you $50 to eat that pile of dog shit.” So he does and gets paid $50.

    The first economist says, “I can’t help but feel we just ate dog shit for nothing.” “Nonsense,” says the second economist, “We just contributed $100 to the economy.”





  • If the Trump administration, and especially Project 2025, have taught America anything, it’s that libertarians don’t fucking have ideals.

    Libertarians spouted propaganda about small government and free speech and privacy until conservative authoritarians took power. And then they cheered while conservative authoritarians built the most extensive police state and government surveillance apparatus in American history and began arresting people for writing op-eds and posting memes.

    Libertarians, like Republicans, never actually supported small government or free speech or the privacy of citizens. They deployed the rhetoric of small government and free speech and privacy as weapons to attack liberals and prevent Democratic administrations from pursuing their policy goals. Now that conservatives are in power, those weapons are no longer useful, and libertarians have discarded them.

    Libertarian “ideals” were weapons against Democratic government, and they were never anything else.

    And to get back to your point: of course libertarians spout rhetoric about financial privacy while keeping cryptocurrency in centralized KYC exchanges. Because crypto was never about privacy as an ideal. It was about bypassing financial regulations, laundering money, dodging taxes, grifting, scamming normies, and gambling on pumps and dumps. Crypto bros talk a good game about privacy and independence to shield themselves from regulation and make themselves look legitimate. Anyone who actually believes that crap is a useful idiot that probably lost all their money in a cryptoscam.




  • Complex algorithms that follow rules they cannot deviate from = lawful.

    Deliberately incorporating random factors into the algorithm so they don’t generate the same result every time = chaotic.

    So I’d argue the LLMs themselves are neutral evil, presuming we allow objects to have alignments. In D&D, non-sapient animals have no alignments, because they don’t understand moral or ethical concepts, so that would argue for LLMs being unaligned and the alignment applying to their companies.

    Could you argue a LLM is attuned to its corporate owner and shares its alignment? They’d definitely be cursed.

    Then the companies would veer from lawful evil (Microsoft has been the archetype of abusing laws and regulations to its own benefit for decades) to chaotic evil (Grok has no rules, only the whims of its infantile tyrant).


  • It doesn’t need to, but it’s clearly chosen to.

    I mean, the Trump Administration is willing to give Meta and Alphabet their own company towns, excuse me, special economic zones, where the only laws and regulations are what the corporation wants. SpaceX already has one in Texas.

    Conversely, I’m confident that if Democrats take back power they’re going to open all sorts of investigations against Musk and his cohorts. And Musk and his cohorts know it too.

    I don’t think there’s any putting the bipartisan technocrat consensus back together. American tech companies have pretty solidly aligned themselves with Republican authoritarianism.


  • Yeah, and, as the article points out, the trick would be getting those malicious training documents into the LLM’s training material in the first place.

    What I would wonder is whether this technique could be replicated using common terms. The researchers were able to make their AI spit out gibberish when it heard a very rare trigger term. If you could make an AI spit out, say, a link to a particular crypto-stealing scam website whenever a user put “crypto” or “Bitcoin” in a prompt, or content promoting anti-abortion “crisis pregnancy centers” whenever a user put “abortion” in a prompt …


  • It’s an incredible house of cards, and I’m honestly coming to suspect that’s the point. These companies have some of the greatest financial experts in the world working for them. They can’t possibly not know how fucked they are in the long run.

    But the long run is the long run. I’m confident Trump and his billionaire tech bro lackeys can pump enough money and silence enough regulators to keep the bubble afloat until 2029.

    If Democrats take power in 2029, or if Democrats take the House in 2027 and start doing something effective (lol), then the tech bros pull the plug, blame liberals and illegals for crashing the economy, and guide the predictable conservative / authoritarian backlash to their benefit.

    Yay, technofeudalism.