Since Meta announced they would stop moderating posts, much of the mainstream discussion around social media has centered on whether platforms are responsible for the content posted on their services. I think that's a fair discussion, though I favor the side of less moderation in almost every instance.

But as I think about it, the problem is not moderation at all: we had very little moderation in the early days of the internet and social media, and yet people didn't believe the nonsense they saw online, whereas nowadays even official news outlets have reported on outright bullshit made up on social media. To me the problem is the goddamn algorithm that pushes people into bubbles that reinforce their views, correct or incorrect; and I think anyone with two brain cells and an iota of understanding of how engagement algorithms work can see this. So why is the discussion about moderation and not about banning algorithms?
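To make the feedback loop concrete, here's a toy sketch (my own illustration, not any platform's actual code) of an engagement-ranked feed: posts on topics the user has already engaged with get boosted, so the feed narrows toward what the user already clicks on.

```python
# Toy illustration of an engagement-reinforcing feed ranker.
# Not real platform code: the fields and the scoring formula are
# invented for illustration only.

def rank_feed(posts, engagement_history):
    """Rank posts by predicted engagement. Topics the user has
    engaged with before score higher, so each session narrows
    the feed further -- the filter-bubble feedback loop."""
    def score(post):
        prior = engagement_history.get(post["topic"], 0)  # past clicks on this topic
        return post["base_engagement"] * (1 + prior)      # reinforcement multiplier
    return sorted(posts, key=score, reverse=True)

posts = [
    {"topic": "politics", "base_engagement": 5},
    {"topic": "cats", "base_engagement": 8},
]
# A user who has clicked political posts 10 times...
history = {"politics": 10}

feed = rank_feed(posts, history)
# ...gets politics first (5 * 11 = 55) over the intrinsically more
# engaging cat post (8 * 1 = 8), and the gap grows with every click.
```

The point of the sketch: even a ranking rule this simple, with no intent to push any narrative, produces bubbles purely by optimizing for engagement.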

  • livjq@lemmy.world · 1 day ago

    It would be really nice if, at the very least, we could get some insight into how these algorithms are tuned. It seems obvious that Facebook and X want users to get pissed off. That does not seem ethical at all and should at minimum be examined.

    • Plebcouncilman@sh.itjust.worksOP · 1 day ago (edited)

      While transparency would be helpful for discussion, I don't think it would change or stop propaganda, misinformation, and outright bullshit from being disseminated to the masses, because people just don't care. Even if the algorithm were transparently built to push false narratives, people would just shrug and keep using it. The average person doesn't care about the who, what, or why as long as they're entertained. But yes, transparency would be a good first step.