There’s a whole school of philosophy that has argued about this forever, but especially over the last 100 years: the philosophy of mind. The problem is one of definition: what does it mean to think? Some argue that it requires consciousness, but then the problem of definition just returns: what the hell is consciousness?
So on the trivial side: yes, of course computers can think, if thoughts are nothing special. Computers have states, and they can react to and inspect their own states. Is that thinking? LLMs use neural networks, loosely modeled on the brain, to generate streams of words, and they encode knowledge and concepts statistically. Is that thinking?
On the other side: well, no, computers don’t think because they don’t have souls. But are souls real? Or maybe there’s more to human thinking than neural networks, like quantum effects, or extra complexity from chemical biology? Is the ability to answer a question the same thing as understanding a concept (see the Chinese Room thought experiment)?
These are the questions that philosophers love to masturbate with, publish many papers on, and make no real progress towards. Definitions are funny like that.
The trick behind it, and it is a trick, is that they have been fed billions of pages of text, covering most sentence patterns ever written, and they use math to estimate the most appropriate response to a question, word by word, from all of the example text they have to work with.
Current LLMs are incapable of creating an original combination of words (in the absolute sense). They don’t make anything. They just repeat. They are stochastic parrots.
Sometimes the answer is so obvious, given all of the relevant information, that you can give the right answer without thinking at all. When LLMs are correct, it is because of this phenomenon, not because they actually thought about the question and came up with a response.
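To make the "estimate the next word from examples" idea concrete, here's a toy sketch of statistical next-word prediction. This is only an illustration of the principle: real LLMs use neural networks over subword tokens and far more context, not raw word counts, and the tiny corpus here is made up.

```python
from collections import Counter, defaultdict

# Made-up example text; real models are trained on billions of pages.
corpus = "the cat sat on the mat and the cat ran"

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

The model never "makes" a new word: it can only emit continuations it has already seen, which is the sense in which the comment calls it repetition.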
@lung@lemmy.world that’s a nice image, philosophers masturbating 😄😄😄…
But seriously, I’m amazed at how LLMs respond to my questions.