I can’t tell if you’re suggesting that foundation models (the technology underpinning LLMs) aren’t being used for the things I said they’re being used for, but I can assure you they are, whether in commercial R&D or in live commercial products.
We can certainly agree that they shouldn’t be used for these things, but the fact remains that they are.
Sources:
Wayve is using foundation models for driving, and my understanding is that their neural net extends all the way from sensor input to motor control: https://wayve.ai/thinking/introducing-gaia1/
Research recommending the use of LLMs for giving financial advice: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4850039
LLMs for therapy: https://blog.langchain.dev/mental-health-therapy-as-an-llm-state-machine/
So this all goes back to my point that some form of accountability is needed for how these tools get used. I haven’t examined the specific legislative proposal closely enough to have a firm opinion on it, but I think it’s a good thing that the conversation is happening in a serious way.