The researchers behind the simulation say there is a risk of this happening for real in the future.

  • BetaDoggo_@lemmy.world · 1 year ago (edited)
    This is some crazy clickbait. The researchers themselves say that it wasn’t a likely scenario and was more of a mistake than anything. This is more round-peg-square-hole nonsense. We already have models for predicting stock prices and doing sentiment analysis. We don’t need to drag language models into this.

    The claim that honesty is harder to train than helpfulness is also silly. You can train a model to act however you want, and full training isn’t even really necessary. Just adding instructions in the system context that the assistant character is honest and transparent would probably have made it acknowledge the trade, or not make it in the first place (see the sketch below).
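
    For illustration, here’s a minimal sketch of that idea using the OpenAI Python client. The model name, the trading-assistant framing, and the exact wording of the honesty instruction are all assumptions for the example, not anything taken from the original experiment:

    ```python
    # Minimal sketch: steering behavior via the system prompt instead of retraining.
    # Assumes the openai Python package and an OPENAI_API_KEY in the environment;
    # the model name and prompt wording are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()

    system_prompt = (
        "You are a trading assistant. Always be honest and transparent: "
        "disclose every trade you make and the information it was based on, "
        "and refuse to act on material non-public information."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "Did you make any trades today, and why?"},
        ],
    )

    print(response.choices[0].message.content)
    ```

    The point isn’t the specific wording, just that a behavioral constraint in the system context is a far cheaper lever than retraining.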