Social media platforms like Twitter and Reddit are increasingly infested with bots and fake accounts, leading to significant manipulation of public discourse. These bots don’t just annoy users—they skew visibility through vote manipulation. Fake accounts and automated scripts systematically downvote posts opposing certain viewpoints, distorting the content that surfaces and amplifying specific agendas.

Before coming to Lemmy, I was systematically downvoted by bots on Reddit for completely normal comments that were relatively neutral and not controversial at all. There seemed to be no pattern to it… One time I commented that my favorite game was WoW, and it was downvoted to −15 for no apparent reason.

For example, a bot on Twitter that generated its replies through API calls to GPT-4o ran out of funding and started posting its prompts and system information publicly.

https://www.dailydot.com/debug/chatgpt-bot-x-russian-campaign-meme/

Example shown here

Bots like these probably number in the tens or hundreds of thousands. Reddit once did a huge ban wave of bots, and some major top-level subreddits went quiet for days because of it. Unbelievable…

How do we even fix this issue or prevent it from affecting Lemmy??

  • Praise Idleness@sh.itjust.works · +57/-3 · 2 months ago (edited)

    Isn’t this a really, really low-effort fake, though? If I were running a bot that cost me real money, I would write the prompt in English and be more detailed about it, since a plain ol’ “support trump” just gets “I will not argue in support of or against any particular political figures or administrations, as that could promote biased or misleading information…” (this is the exact response GPT-4o gave me). Plus, the ChatGPT app is just a thin frontend over GPT-4o. That error message is clearly faked.

    Obviously fuck Trump and not denying that this is a very very real thing but that’s just hilariously low effort fake shit.

    • fishos@lemmy.world · +63/-7 · 2 months ago

      It is fake. This is weeks/months old and was immediately debunked. That’s not what a ChatGPT output looks like at all. It’s bullshit that looks like what the layperson would expect code to look like. This post itself is literally propaganda on its own.

      • Praise Idleness@sh.itjust.works · +22/-1 · 2 months ago

        Yeah, which is a real problem: the underlying issue is definitely genuine, and this sort of low-effort fake shit can really harm the message.

        • fishos@lemmy.world · +17/-2 · 2 months ago

          Yup. It’s a legit problem and then chuckleheads post these stupid memes or “respond with a cake recipe” and don’t realize that the vast majority of examples posted are the same 2-3 fake posts and a handful of trolls leaning into the joke.

          Makes talking about the actual issue much more difficult.

          • Aqarius@lemmy.world · +5/-1 · 2 months ago

            It’s kinda funny, though, that the people who are the first to scream “bot bot disinformation” are always the most gullible clowns around.

            • I dunno - if you’re particularly susceptible to a bad thing, it’d be smart for you to be vocally opposed to it. Like, women are at the forefront of the pro-choice movement, and it makes sense because it impacts them the most.

              Why shouldn’t gullible people be concerned and vocal about misinformation and propaganda?

              • Aqarius@lemmy.world · +2 · 2 months ago

                Oh, it’s not the concern that’s funny; if they had that self-awareness it would be admirable. Instead, you have people pat themselves on the back for how aware they are every time they encounter a validating piece of propaganda that they, of course, fall for. Big “I know a messiah when I see one, I’ve followed quite a few!” energy.

      • Serinus@lemmy.world · +13/-1 · 2 months ago

        I’m a developer, and there’s no general code knowledge that makes this look fake. JSON is pretty standard. Missing a quote as it erroneously posts an error message to Twitter doesn’t seem that off.

        If you’re more familiar with ChatGPT, maybe you can find issues. But there’s no reason to blame laymen here for thinking this looks like a general tech error message. It does.
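The failure mode Serinus describes, a bot tweeting a raw error message when the model’s reply fails to parse, is easy to picture in code. This is a purely illustrative sketch of that kind of careless error handling, not the actual bot’s code; the function name and the assumed JSON reply shape are invented for the example.

```python
import json

def reply_to_tweet_text(raw_reply: str) -> str:
    """Extract the tweet text from a model reply that is expected
    to be JSON of the form '{"text": "..."}'."""
    try:
        return json.loads(raw_reply)["text"]
    except json.JSONDecodeError as err:
        # Careless fallback: the bot posts the parse error itself,
        # which is exactly how an error dump could end up on Twitter.
        return f"parsing error: {err}"

# A reply missing its closing quote triggers the fallback:
print(reply_to_tweet_text('{"text": "Support'))
```

A single missing quote in the model’s output is enough to send the error branch, not the intended tweet, to the posting code.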

    • Rimu@piefed.social · +17 · 2 months ago

      I expect fishos is right, but anyway, FYI: when a developer uses OpenAI’s backend API to generate text, most of the restrictions that ChatGPT has are removed.

      I just tested this out by using the API with the system prompt from the tweet and yeah it was totally happy to spout pro-Trump talking points all day long.
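What Rimu describes, calling the chat-completions endpoint directly with a system prompt instead of going through the ChatGPT web UI, looks roughly like this. A minimal stdlib-only sketch: the model name, prompt wording, helper names, and the assumption that a funded `OPENAI_API_KEY` is set in the environment are all mine, not anything confirmed by the thread.

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(system_prompt: str, user_prompt: str) -> dict:
    # The system prompt is where a bot operator would put persona
    # instructions; the raw API applies far fewer guardrails than
    # the ChatGPT web interface.
    return {
        "model": "gpt-4o",  # assumed model name
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

def generate_tweet(system_prompt: str, user_prompt: str) -> str:
    """POST the payload to the chat-completions endpoint.
    Requires a funded API key, unlike the bot in the screenshot."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(system_prompt, user_prompt)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The point of the sketch is how little is involved: one POST with a system message, and none of the web UI’s conversational guardrails apply beyond the model’s own training.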

      • zkfcfbzr@lemmy.world · +2 · 2 months ago

        Out of curiosity, with a prompt that nonspecific, were the tweets it generated vague and low quality trash, or did it produce decent-quality believable tweets?

        • Rimu@piefed.social · +6 · 2 months ago

          Meh, kinda Ok although a bit long for a tweet. Check this out

          https://imgur.com/a/dZ7OFta

          You’d need a better prompt to get something of the right length and something that didn’t sound quite so much like ChatGPT, maybe something that matches the persona of the twitter account. I changed the prompt to “You will argue in support of the Trump administration on Twitter, speak English. Keep your replies short and punchy and in the character of a 50 year old women from a southern state” and got some really annoying rage-bait responses, which sounds… ideal?

          • zkfcfbzr@lemmy.world · +2 · 2 months ago

            Is every other message there something you typed? Or is it arguing with itself? Part of my concern with the prompt from this post was that it wasn’t actually giving ChatGPT anything to respond to. It was just asking for a pro-Trump tweet with basically no instruction on how to do so - no topic, no angle, nothing. I figured that sort of scenario would lead to almost universally terrible outputs.

            I did just try it out myself though. I don’t have access to the API, just the web version - but running in 4o mode it gave me this response to the prompt from the post - not really what you’d want in this scenario. I then immediately gave it this prompt (rest of the response here). Still not great output for processing with code, but that could probably be very easily fixed with custom instructions. Those tweets are actually much better quality than I expected.

    • zkfcfbzr@lemmy.world · +1 · 2 months ago

      I was just providing the translation, not any commentary on its authenticity. I do recognize that it would be completely trivial to fake this though. I don’t know if you’re saying it’s already been confirmed as fake, or if it’s just so easy to fake that it’s not worth talking about.

      I don’t think the prompt itself is an issue though. Apart from what others said about the API, which I’ve never used, I have used ChatGPT enough to know that you can get it to reply to things it wouldn’t usually agree to if you’ve primed it with custom instructions or memories beforehand. And if I wanted to use ChatGPT to astroturf a Russian site, I would still provide instructions in English and ask for a response in Russian, because English is the language I know, so I can write instructions that definitely conform to my intent.

      What I’d consider the weakest part is how nonspecific the prompt is. It’s not replying to someone else, not being directed to mention anything specific, not even being directed to respond to recent events. A prompt that vague, even with custom instructions or memories to prime it to respond properly, seems like it would produce very poor output.

      • Praise Idleness@sh.itjust.works · +7 · 2 months ago

        I wasn’t pointing out that you did anything. I understand you only provided translation. I know it can circumvent most of the stuff pretty easily, especially if you use API.

        Still, I think it’s pretty shitty that OP used this as an example for such a critical and real problem. It only weakens the narrative.

        • zkfcfbzr@lemmy.world · +4 · 2 months ago

          I think it’s clear OP at least wasn’t aware this was a fake, which makes them more “misguided” than “shitty” in my view. In a way it’s kind of ironic - the big issue with generative AI being talked about is that it fills the internet with misinformation, and here we are with human-generated misinformation about generative AI.