• Cyborganism@lemmy.ca · 1 year ago

    You know… Instead of having AI create art while humans bust their asses at work, why not make AI do the work and let humans create art?

    • SpaceToast@mander.xyz · 1 year ago

      Because then people wouldn’t pay out the ass for small conveniences. Keeping people working as much as possible is the point.

  • Uriel-238@lemmy.fmhy.ml · 1 year ago

    I think this is going to raise some questions about fair use, since AI projects are absolutely derivative works that are sufficiently removed from the content they used. (There may be some argument that it’s also educational use.)

    This case may rekindle those questions, given that our current copyright-maximalist climate has been less interested in enforcing fair use and more interested in enforcing copyright regardless of it.

  • Andreas@feddit.dk · 1 year ago

    Are they going to keep the lawsuit focused on OpenAI and Meta or turn it into yet another lawsuit against piracy?

    • Uriel-238@lemmy.fmhy.ml · 1 year ago

      Any ruling in favor of copyright holders (as opposed to one legalizing use of copyrighted material) promotes piracy.

      The more draconian and extreme our copyright laws, the more there is a need for a piracy sector.

  • Hot Saucerman@lemmy.ml · 1 year ago

    More detailed coverage from The Verge: https://www.theverge.com/2023/7/9/23788741/sarah-silverman-openai-meta-chatgpt-llama-copyright-infringement-chatbots-artificial-intelligence-ai

    The complaint lays out in steps why the plaintiffs believe the datasets have illicit origins — in a Meta paper detailing LLaMA, the company points to sources for its training datasets, one of which is called ThePile, which was assembled by a company called EleutherAI. ThePile, the complaint points out, was described in an EleutherAI paper as being put together from “a copy of the contents of the Bibliotik private tracker.” Bibliotik and the other “shadow libraries” listed, says the lawsuit, are “flagrantly illegal.”

    I used to have a Bibliotik account, and if this is true about ThePile, they very likely have at least the beginnings of a successful case.

  • CaptainBasculin@lemmy.ml · 1 year ago

    It’s not illegal for a human to learn from the contents of a book, so why the fuck is it illegal for an AI?

    • Ace T'Ken@lemmy.ca · 1 year ago

      Because the thing referred to as AI (which is definitely not AI) is simply strip mining the book to shit out “content.”

      It is not reading, understanding, or learning from the book. It is using it to sell services for its masters.

      An author should control their work. They should be able to decide for themselves whether or not they want to help big tech sell garbage to idiots.