- Post in !techtakes@awful.systems attacks the entire concept of AI safety as a made-up boogeyman
- I disagree and am attacked from all sides for “posting like an evangelist”
- I give citations for things I thought would be obvious, such as that AI technology in general has improved in capability over the past several years
- Instance ban, “promptfondling evangelist”
This one I’m not as aggrieved about; it’s just weird. It’s reminiscent of the lemmy.ml type of echo chamber where everyone’s convinced it’s one way because, in a self-fulfilling prophecy, anyone who isn’t convinced gets yelled at and banned.
Full context: https://ponder.cat/post/1030285 (Some of my replies were after the ban because I didn’t PT Barnum carefully enough, so I didn’t realize.)
I originally stated that I did not find your arguments convincing. I wasn’t talking about AI safety as a general concept, but about the overall discussion related to the article titled “Anthropic, Apollo astounded to find a chatbot will lie to you if you tell it to lie to you”.
I didn’t find your initial post (or any of your posts in that thread) to be explicit in recognizing the potential for bad-faith actions from the likes of Anthropic and Apollo. On the contrary, you largely deny the concept of “criti-hype”. One can, in good faith, interpret this as de facto corporate PR promotion (whether or not that was the intention).
You didn’t mention the hypothetical profit-maximization example in the thread, and your phrasing implied a current tool/service/framework, not a hypothetical.
I don’t see how the YT video or the article summary (I did not read the paper) is honestly relevant to what was being discussed.
I am honestly trying not to take sides (but perhaps I am failing in this?); I’m more suggesting that how people interpret “groupthink” can take many forms and that “counter-contrarian” arguments in and of themselves are not some magical silver bullet.
Okay, cool. I was. That was my whole point: even if some of it is grift, AI safety itself is a real and important thing, and that’s worth keeping in mind.
I think I’ve explained myself enough at this point. If you don’t know that the paperclips reference in the linked article points to the exact profit-maximization situation I explained in more detail when you asked, or if you can’t see how the paper I linked might be a reasonable response to someone complaining that I haven’t given proof that AI technology has ever gained abilities over time, then I’ll leave you with those conclusions, if those are the conclusions you’ve reached.