• sweng@programming.dev

    What counts as source, and what doesn’t, would depend on the format.

    You can create a picture by hand, using no input data.

    I challenge you to do the same for model weights. If you truly just sit down and type the numbers into a file by hand, then yes, the model would have no further source. But that is not something that can be done in practice.
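
    To put it concretely: a weights file is just a blob of numbers, and in practice those numbers are the output of a training run over the input data, so the data and training code are the real source. A rough sketch of the difference (a hypothetical toy example, assuming PyTorch):

    import torch

    # "Hand-authored" weights: numbers typed in directly -- possible in
    # principle, but never how a useful model actually comes to exist.
    hand_made = {"layer.weight": torch.tensor([[0.12, -0.87]]),
                 "layer.bias": torch.tensor([0.33])}
    torch.save(hand_made, "weights.pt")

    # In practice the weights are derived from input data by training.
    model = torch.nn.Linear(2, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    inputs, targets = torch.randn(100, 2), torch.randn(100, 1)
    for _ in range(20):
        optimizer.zero_grad()
        torch.nn.functional.mse_loss(model(inputs), targets).backward()
        optimizer.step()
    torch.save(model.state_dict(), "weights.pt")  # derived, not hand-written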

    • delirious_owl

      I challenge you to recreate the Mona Lisa.

      My point is that these models are so complex that they’re closer to art than to anything reproducible.

      • sweng@programming.dev

        I don’t see your point. What “source” for the Mona Lisa would I use? For LLMs, I could reproduce them given the original inputs.

        Creating those inputs may be an art, but so may writing any piece of code. No one claims that code being elegant disqualifies it from being open source.

        • delirious_owl

          Are you sure you can reproduce the model given the same inputs? Reproducibility is a difficult property to achieve, and I wouldn’t expect LLMs to be reproducible.

          • sweng@programming.dev

            In theory, if you have the inputs, you have reproducible outputs, modulo perhaps some small deviations due to non-deterministic parallelism. But if those effects are large enough to make your model perform differently, you already have big issues, no different from a piece of software performing differently each time it is compiled.
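
            Concretely, the usual knobs look something like this (a sketch, assuming a PyTorch-style training setup; bit-for-bit identity across different hardware or library versions is still not guaranteed):

            import random
            import numpy as np
            import torch

            def make_deterministic(seed: int = 0) -> None:
                # Seed every RNG that the training loop touches.
                random.seed(seed)
                np.random.seed(seed)
                torch.manual_seed(seed)
                # Prefer deterministic kernels over faster non-deterministic ones.
                torch.use_deterministic_algorithms(True)
                torch.backends.cudnn.benchmark = False

            make_deterministic(42)
            # Two runs with the same data, code, and seed should now produce
            # (near-)identical weights on the same hardware and library versions;
            # parallel reductions on different hardware can still differ.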

            • delirious_owl

              That’s the theory for some paradigms that were specifically designed to have the property of determinism.

              Most things in the world, even computers, are non-deterministic.

              Nondeterminism isn’t necessarily a bad thing for systems like AI.