• danielbln@lemmy.world
    link
    fedilink
    arrow-up
    8
    ·
    edit-2
    10 months ago

    Depends on the model/provider. If you’re running this in Azure you can use their content filtering which includes jailbreak and prompt exfiltration protection. Otherwise you can strap some heuristics in front or utilize a smaller specialized model that looks at the incoming prompts.

    With stronger models like GPT4 that will adhere to every instruction of the system prompt you can harden it pretty well with instructions alone, GPT3.5 not so much.