nickwitha_k (he/him)

  • 16 Posts
  • 2.77K Comments
Joined 2 years ago
Cake day: July 16th, 2023

  • and means lower costs, see: “reasons people like the march of progress for 100”

    Objectively incorrect. The actual costs of AI “art” are astronomically higher than the costs of hiring artists. When was the last time an artist needed a fission reactor and enough potable water to supply a moderately sized city over the course of their lives, much less for the completion of a project?

    The corpos running the scam just haven’t made the financial costs to end users align with reality yet. They’re trying to destroy livelihoods and get businesses stuck in vendor lock-in first so that they have no competition when they open the valves of the real costs. Generative AI under this hyper-capitalist regime is a net negative for the species.


  • why are you arguing that at me?

    Rationally and in a vacuum, anthropomorphizing tools and animals is kinda silly and sometimes dangerous. But human brains don’t do well at context separation and rationality. They are very noisy and prone to conceptual cross-talk.

    The reason that this is important is that, as useless as LLMs are at nearly everything they are billed as, they are really good at fooling our brains into thinking that they possess consciousness (there are plenty even on Lemmy who ascribe levels of intelligence to them that are impossible with the technology). Just like knowledge and awareness don’t grant immunity to propaganda, our unconscious processes will do their own thing. Humans are social animals and our brains are adapted to act as such, resulting in behaviors that run the gamut from wonderfully bizarre (keeping pets that don’t “work”) to dangerous (attempting to pet bears or keep chimps as “family”).

    Things that are perceived by our brains, consciously or unconsciously, are stored with associations to other similar things. So the danger I was trying to highlight is that being abusive to a tool like an LLM, which can trick our brains into associating it with conscious beings, can indirectly reinforce the acceptability of abusive behavior towards other people.

    Basically, like I said before, one can unintentionally train themselves into practicing antisocial behaviors.

    You do have a good point, though, that people believing that ChatGPT is a being they can confide in, etc., is very harmful and, itself, likely to lead to antisocial behaviors.

    that is fucking stupid behavior

    It is human behavior. Humans are irrational as fuck, even the most rational of us. It’s best to plan accordingly.


  • This is fair and warranted.

    Also, to be fair, Windows is a trash-tier piece of software that has become little but adware/spyware in a trenchcoat, masquerading as an operating system. I ran an install in a VM a couple of weeks ago for the first time in nearly two decades, and even the basic installation process is on par with the WinXP alpha (before the installer was ready), requiring extra driver disks and software just to be able to think about installing. I had to fight with UEFI and GRUB to get Arch to boot alongside Fedora the other day, and that was a much more enjoyable process.