• yucandu@lemmy.world · 4 days ago

    What about the AI that I run on my local GPU that is using a model trained on open source and public works?
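
    For anyone wondering what that looks like in practice, here’s a rough sketch of local GPU inference with Hugging Face transformers. The model ID and settings are only illustrative examples, not necessarily what I run:

    ```python
    # Rough sketch of local GPU inference with an open-weights model.
    # "bigcode/starcoder2-3b" is only an example model ID; swap in whatever
    # openly licensed model you actually run locally.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="bigcode/starcoder2-3b",  # example open model, used here for illustration
        device_map="auto",              # load the weights onto the local GPU if one is available
    )

    out = generator("def add(a, b):", max_new_tokens=32)
    print(out[0]["generated_text"])
    ```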

    • Jankatarch@lemmy.world · 4 days ago

      It’s cool as hell to train models, don’t get me wrong, but if you use them as assistants you will still slowly stop thinking, no?

      So Nazgûl.

      • yucandu@lemmy.world · 3 days ago

        Feels like telling me not to use a calculator so I don’t forget how to add and subtract.

      • mattvanlaw@lemmy.world · 3 days ago

        I’ve settled on a future model where AIs are familiars that level up from their experience more naturally and are less immediately omnipotent.

    • mattvanlaw@lemmy.world · 4 days ago

      This is very cool. Any advice a simple software engineer (me) could follow to practice the same?

      • Jankatarch@lemmy.world · 4 days ago

        People were making LLMs before OpenAI/ChatGPT tbf.

        It’s the “destroy the environment and economy in an attempt to make something that sucks just enough to justify not paying people fairly so you can advertise to rich assholes gambling their generational wealth” that OpenAI invented for the LLMs.

        • amino@lemmy.blahaj.zone · 4 days ago

          What are those LLMs you mention that people are still using? Never heard of them, sounds like a cop-out.

        • Jared White ✌️ [HWC]@humansare.social · 3 days ago

          It is still trained on open source code on GitHub. These code communities seemingly have no way to opt out of their free (libre) contributions being used as training data, nor does the resulting code generation contribute anything back to those communities. It is a form of license stripping. That’s just one issue.

          Just because your inference running locally doesn’t use much electricity doesn’t mean you’ve sidestepped all of the other ethical issues surrounding LLMs.

          • yucandu@lemmy.world · 3 days ago

            It is not trained on open source code on GitHub.

            But I can use it to analyze a datasheet and generate a library for an obscure module that I can then upload to GitHub and contribute to the community.

              • yucandu@lemmy.world · 2 days ago

                StarCoderData: A large-scale code dataset derived from the permissively licensed GitHub collection The Stack (v1.2) (Kocetkov et al., 2022), which applies deduplication and filtering of opted-out files. In addition to source code, the dataset includes supplementary resources such as GitHub Issues and Jupyter Notebooks (Li et al., 2023).

                That’s not random GitHub accounts or “delicensing” anything. People had to opt IN to be part of “The Stack”. Apertus isn’t training itself from community code.

                • Jared White ✌️ [HWC]@humansare.social · 2 days ago

                  I’m tired of arguing with you about this, and you’re still wrong. It was opt-out, not opt-in, based initially on a GitHub crawl of 137M repos and 52B files before filtering & dedup.

                  • yucandu@lemmy.world · 2 days ago

                    But again, you’d have to set your project to public and your license to “anyone can take my code and do whatever they want with it” before it’d even be added to that list. That’s opt-in, not opt-out. I don’t see the ethical dilemma here. I’m pretty sure I’ve found ethical AI that produces good value for me and society, and I’m going to keep telling people about it and how to use it.