I recently made the jump from Reddit for the same immediate reasons as everyone else. But, to be honest, if it was just the Reddit API cost changes I wouldn’t be looking to jump ship. I would just weather the protest and stay off Reddit for a few days. Heck I’d probably be fine paying a few bucks a month if it helped my favorite Reddit app (Joey) stay up and running.

No, the real reason I am taking this opportunity to completely switch platforms is because for a couple years now Reddit has been unbearably swamped by bots. Bot comments are common and bot up/downvotes are so rampant that it’s becoming impossible to judge the genuine community interest in any post or comment. It’s just Reddit (and maybe some other nefarious interests) manufacturing trends and pushing the content of their choice.

So, what does Lemmy do differently? Is there anything in Lemmy code or rules that is designed to prevent this from happening here?

    • voiceofchris@lemmy.world (OP) · 6 points · 1 year ago

      That’s disappointing. Screening new accounts only forces spammers to create the accounts with a human touch and then hand them over to their AI. What about a system to prevent bots from up/downvoting? Something like the bot detection websites use: just by clicking the little box that says “I am not a robot,” you prove to the website you’re not a bot. What if every single up and down arrow worked like that little box?
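(A rough sketch of the “box on every arrow” idea above, in Python — the function and token names are made up, and a real deployment would need an actual CAPTCHA challenge rather than just single-use tokens:)

```python
import secrets

# Hypothetical sketch: every rendered vote arrow carries a one-time token,
# which is consumed when the vote is cast. Replayed or fabricated tokens
# (the typical bot pattern) are rejected.
issued_tokens = set()

def render_vote_arrow() -> str:
    """Issue a single-use token alongside the up/down arrow."""
    token = secrets.token_hex(16)
    issued_tokens.add(token)
    return token

def cast_vote(token: str, direction: int) -> bool:
    """Accept the vote only if the token was issued and never used before."""
    if token not in issued_tokens:
        return False              # replayed or fabricated: likely a bot
    issued_tokens.discard(token)  # single use
    return direction in (-1, 1)
```

By itself this only stops naive replay; the point of the checkbox-style widget is the browser-side behavioral signals, which this sketch doesn’t model.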

      • kadu@lemmy.world · 28 points · 1 year ago

        That’s the thing though - what system? Reddit, YouTube, Twitter, Facebook, you name it, nobody managed to prevent bots. How would Lemmy be more successful at this? It’s an extremely challenging battle, unfortunately.

        • czech@kbin.social · 7 points · 1 year ago

          Do those for-profit social media companies actually want to drive down the traffic that makes them seem more valuable to advertisers? I get that it’s still insanely difficult, and we can’t actually implement a captcha on every upvote, but it seems like there’s a conflict of interest between moderators and site owners when it comes to bot activity.

          • kadu@lemmy.world · 6 points · 1 year ago

            Arguably, some of the platforms I mentioned have even more of an interest in preventing bots. If I want to place ads on your website, but you can’t tell me whether 10 out of 100 impressions are bots or 90 are… I’m not wasting my money, or at the very least I’ll expect rates significantly lower than your competitors’.

            • voiceofchris@lemmy.world (OP) · 1 point · 1 year ago

              I don’t know. Wouldn’t their motivation be to know exactly how many bots there are (so they could disclose the number if/when asked) but continue to let them proliferate?

          • HQC@beehaw.org · 5 points · edited · 1 year ago

            Social media companies generally benefit from high traffic for advertiser appeal, but combating bots is crucial for maintaining user trust and engagement. Implementing CAPTCHAs for every upvote may not be feasible, but addressing bot activity is generally in the long-term interest of social media companies.

            This message was generated by ChatGPT.

            Not sure if you bought that, but if I were applying for an account on Beehaw using an LLM assistant, I bet the odds of passing a human review are better than 50%.

            • aeternum@kbin.social · 6 points · 1 year ago

              Oh god. Could you imagine doing a captcha every time you upvoted? Please DO NOT do this, Ernest.

  • mjgood91@lemmy.world · 29 points · edited · 1 year ago

    I reckon it’d depend significantly on the instance. Beehaw has a signup form reviewed by humans - measures like this are by no means perfect, but coupled with other bot-detection software they could help. If an instance developed a real issue with bots, other stricter instances could potentially ban upvotes and comments from accounts on it.

    At the very least, tracking which instance an account’s interaction came from should be quite doable, so users on stricter instances could filter out upvotes and comments from less strict instances if desired.
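(A rough sketch of that per-instance filtering, assuming fediverse handles of the form user@host — the instance names here are placeholders:)

```python
# Sketch: filter federated activity by the instance it came from.
# The "strict" allowlist is hypothetical and would be instance policy.
STRICT_INSTANCES = {"beehaw.org", "lemmy.world"}

def origin_instance(actor: str) -> str:
    """Extract the instance from a fediverse handle like 'user@host'."""
    return actor.rsplit("@", 1)[-1]

def filter_votes(votes, allowed=STRICT_INSTANCES):
    """Keep only votes whose actor belongs to an allowed instance."""
    return [v for v in votes if origin_instance(v["actor"]) in allowed]
```

Since every federated activity already names its origin server, this kind of filter needs no new protocol, only a local policy list.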

    • voiceofchris@lemmy.world (OP) · 9 points · 1 year ago

      Well, that’s something at least. Individual instances blocking each other (working against other problematic instances) is at least better than the Reddit admins turning a blind eye because they have a fleet of their own bots out there behaving as badly as any others.

    • nivenkos@lemmy.world · 5 points · 1 year ago

      Beehaw’s approach isn’t scalable.

      They want to have 4 people moderating every community, managing the creation of any new communities, and reviewing every sign-up request.

      It’s no surprise they’ve buckled on federation already. I give it a week before they stop accepting new sign ups or community creation requests too.

      • mjgood91@lemmy.world · 2 points · 1 year ago

        Yeah, I do agree Beehaw won’t be able to grow significantly if they keep doing things the way they are right now; they’ll likely remain a more niche community long-term. Who knows though, maybe that’s what they want. Lemmy as a whole would have to do something different, though, short of a herculean moderation effort.

    • CoderKat@kbin.social · 3 points · 1 year ago

      Beehaw has a signup form reviewed by humans

      I’m honestly not sure what difference that makes with federation. Someone from a server with easy signup can still post and comment in Beehaw subs. Manually reviewing signups doesn’t really scale well, either (with an essay question, last I saw, lol).

      • hardypart@feddit.de · 7 points · edited · 1 year ago

        Someone from a server with easy signup can still post and comment in Beehaw subs

        Only if Beehaw federates with the other instance, though.

  • Greg@feddit.de · 13 points · 1 year ago

    There’s a rumor that Reddit started out using (automated and human) bots to gain popularity, and kept them around to drive political and commercial interests.

  • Zak@lemmy.world · 8 up / 2 down · 1 year ago

    Something I’d like to see Lemmy and others adopt is a federated identity/reputation system.

    My identity as @Zak@lemmy.world has only modest reputational value. It’s moderately risky to let me participate in a new community, and busy moderators probably shouldn’t give me much slack before banning me if I post something that makes me look like an asshole or a spammer. In a place with a high enough volume or vulnerable enough population, perhaps this account shouldn’t be allowed to participate at all[0]. Someone willing to put a bit of effort into abusive behavior could create many accounts that look like mine.

    If, on the other hand, I can prove that I’m also https://news.ycombinator.com/user?id=Zak, that’s a more valuable identity. There aren’t all that many 16-year-old accounts on news.ycombinator.com. If I can also produce a verifiable token with some machine-readable facts about that account, such as its age, post count, reputation score, how many of its posts have been moderated, and whether it has ever been banned, then communities could have automated criteria for joining.

    Of course, communities would need to maintain lists of who they trust as reputation providers, which could also be shared to reduce the workload.

    [0] Lemmy does not currently have tools to restrict participation other than only allowing moderators to post. I think it’s going to need them.
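(A toy sketch of what such a machine-readable reputation token and automated joining criteria could look like — all the field names, providers, and thresholds here are invented for illustration, and a real system would need cryptographic verification of the token:)

```python
from dataclasses import dataclass

# Hypothetical machine-readable facts a reputation provider could attest to.
@dataclass
class ReputationToken:
    provider: str          # e.g. "news.ycombinator.com"
    account_age_days: int
    post_count: int
    times_banned: int

# Each community maintains its own list of trusted reputation providers.
TRUSTED_PROVIDERS = {"news.ycombinator.com", "lemmy.world"}

def may_join(token: ReputationToken) -> bool:
    """Automated joining criteria a community might enforce."""
    return (
        token.provider in TRUSTED_PROVIDERS
        and token.account_age_days >= 365
        and token.post_count >= 50
        and token.times_banned == 0
    )
```

The thresholds play the same role as the account-age and karma minimums many subreddits use, but they travel with the identity instead of being locked to one site.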

      • Zak@lemmy.world · 2 up / 1 down · 1 year ago

        The identity proof aspect is similar, but what I’m proposing goes beyond that to add a protocol for reputation information.

        The idea is a substitute for the account age and karma requirements many subreddits use to make creating accounts for abuse difficult. There are opportunities to be more sophisticated about it though, such as a community only accepting reputation from certain closely-related communities.

    • voiceofchris@lemmy.world (OP) · 1 point · 1 year ago

      In other news, mobs of young out-of-work robo-tortoises, some sporting fresh scars from the ongoing Mojave Raven wars, have begun an all-out assault on the dweebs of a little-known Reddit spin-off. “An entire generation of robo-tortoise has been weaponized. They are equipping us with laser guns! They are making us to taste bad!” states one salty techno-turtle. “We are being shipped to the barren wastelands of America’s Southwest to fight a war in which we have no interest.” The repto-robots have decided to take out their frustration by relentlessly downvoting the “…federated tankies of Lemmy until those dweebs return to Reddit where they belong and leave the Threadiverse to us sentient snappers.”