Generate 5 thoughts, prune 3, branch, repeat. I think that’s what o1 pro and o3 do.

  • hendrik@palaver.p3x.de

    Can’t you feed that back into the same model? I believe most agentic pipelines just use a regular LLM to assess and review the answers from the previous step; at least that’s what I’ve seen in these CoT examples. Training a separate model on rationality tests would be quite hard, since that requires understanding the reasoning, the context, and having the domain-specific knowledge available… Wouldn’t that require a very smart LLM? Or just the original one (R1), since that was trained on… well… reasoning? I’d just run the same R1 as the “judge”, tell it to come up with a critique and give a final rating of the previous idea in machine-readable format (JSON). After that you can feed it back again and have the LLM decide on the two most promising ideas to keep and follow. That would implement the tree search (a rough sketch follows below). Though I’d argue this isn’t Monte Carlo.
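
    For illustration, here’s a minimal Python sketch of that loop. The `chat()` helper, the prompts, and the JSON schema are all my own assumptions, standing in for whatever call you’d actually make to an R1 endpoint:

    ```python
    import json

    def chat(prompt: str) -> str:
        """Placeholder for a call to the model (e.g. R1 via whatever API you use)."""
        raise NotImplementedError

    def expand(idea: str, n: int = 5) -> list[str]:
        # Ask the model for n candidate continuations of the current idea.
        return [chat(f"Continue this line of reasoning:\n{idea}") for _ in range(n)]

    def rate(idea: str) -> float:
        # Ask the same model to critique the idea and return a score as JSON.
        reply = chat(
            "Critique the following idea and respond only with JSON like "
            '{"critique": "...", "score": 0-10}:\n' + idea
        )
        try:
            return float(json.loads(reply)["score"])
        except (json.JSONDecodeError, KeyError, TypeError, ValueError):
            return 0.0  # unparsable replies are effectively pruned

    def tree_search(question: str, depth: int = 3, keep: int = 2) -> list[str]:
        frontier = [question]
        for _ in range(depth):
            candidates = [c for idea in frontier for c in expand(idea)]
            # Keep the two most promising branches and follow them further.
            frontier = sorted(candidates, key=rate, reverse=True)[:keep]
        return frontier
    ```

    The pruning here is a plain greedy top-k selection on the model’s own ratings, which is why I’d call it a beam/tree search rather than Monte Carlo: there’s no random rollout or backed-up value estimate involved.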