• jsomae@lemmy.ml · 5 hours ago

    Disagree. They can be connected to actual game mechanics. For instance, it’s quite easy to ask an LLM to output something in json format:

    {
      "name": "The Master of Evil",
      "hitpoints": 205,
      "class": "vampire"
    }
    

    and so on. You might object that it could make mistakes here. Suppose the detectable error rate is 10% (I actually think it's lower, from what I've played around with). Rerunning it in the case of such an error (e.g. malformed JSON, invalid class name, hit points exceeding bounds) reduces the failure rate to 1%, then 0.1%, and so on, and in the end there can be a non-AI fallback just for certainty. Admittedly, the errors are not i.i.d., but the residual rate should still be pretty low. Many traditional procgen techniques, such as map generation, also use rejection sampling in this way, often with even larger rejection rates than 10%.
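
    To make the retry idea concrete, here's a rough Python sketch of that rejection-sampling loop. It's only an illustration, not anyone's actual implementation: llm_generate and fallback_generate are hypothetical stand-ins for the LLM call and a non-AI generator, and the allowed class list and hit point bound are made up.

    import json

    ALLOWED_CLASSES = {"vampire", "lich", "goblin"}  # hypothetical allowed classes
    MAX_HITPOINTS = 500                              # hypothetical upper bound

    def validate(text):
        """Parse and sanity-check one LLM output; return the dict, or None to reject."""
        try:
            data = json.loads(text)
        except json.JSONDecodeError:
            return None  # malformed JSON -> reject
        if data.get("class") not in ALLOWED_CLASSES:
            return None  # invalid class name -> reject
        hp = data.get("hitpoints")
        if not isinstance(hp, int) or not 0 < hp <= MAX_HITPOINTS:
            return None  # hit points out of bounds -> reject
        return data

    def generate_monster(llm_generate, fallback_generate, max_attempts=5):
        """Rejection sampling: retry the LLM on detectable errors, then fall back."""
        for _ in range(max_attempts):
            candidate = validate(llm_generate())
            if candidate is not None:
                return candidate
        return fallback_generate()  # deterministic non-AI fallback for certainty

    With an independent 10% detectable error rate, five attempts leave roughly a 0.001% chance of ever hitting the fallback; in practice the errors are correlated, so the real number is higher, but the fallback covers it either way.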

    • db0@lemmy.dbzer0.com · 4 hours ago

      It’s easy to generate something as generic as that, but not as easy to generate mechanics. And if you don’t generate mechanics, then you’re only doing fluff, like I said.

      • jsomae@lemmy.ml · 4 hours ago

        Ah, I misunderstood what you meant by “generate mechanics.” My bad.