The argument that large language models (LLMs) are not truly thinking because they are just predicting the next token is flawed, because the same argument can be applied to the human brain. The brain is a collection of neurons that finds patterns and generates responses.

The Chinese Room Argument, which holds that computers can't truly think because they don't understand what they're doing, fails for the same reason: it doesn't recognize that the brain of a Chinese-speaking human is also a "Chinese machine" that takes in input and produces output without understanding how it happens. We have no idea where our words, sentences, or ideas come from; they just stream out of us from some mysterious place. When we pose a problem to our own minds, we throw in input and the brain throws back output, and we forget that we can't say where our knowledge, understanding, or creativity come from either.

To determine whether something can reason, we should decide on a definition of reasoning, give it problems that require reasoning to solve, and see whether it solves them, rather than making assumptions based on how humans think. Ultimately, our brains are black boxes that produce wondrous output through a vast network of small nodes that light up together when given input, just like Transformers.
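Here is a minimal sketch of that "define it, then test it" approach: pick a working definition of reasoning, pose problems that require it, and score the answers. Everything here is illustrative assumption, not a real benchmark; the toy problems, the `ask_model` callable, and the pass threshold are placeholders you would swap for an actual evaluation set and an actual LLM call.

```python
from typing import Callable

# Toy (prompt, expected answer) pairs standing in for problems that
# require multi-step reasoning rather than pattern lookup.
PROBLEMS = [
    ("A train leaves at 3 PM and arrives at 7 PM the same day. "
     "How many hours was the trip? Answer with a number only.", "4"),
    ("If all blargs are fleems and no fleems are glorks, "
     "can a blarg be a glork? Answer yes or no.", "no"),
]

def evaluate_reasoning(ask_model: Callable[[str], str],
                       problems=PROBLEMS,
                       pass_threshold: float = 0.8) -> bool:
    """Return True if the model answers enough problems correctly."""
    correct = 0
    for prompt, expected in problems:
        answer = ask_model(prompt).strip().lower()
        if answer == expected:
            correct += 1
    return correct / len(problems) >= pass_threshold

if __name__ == "__main__":
    # Stand-in "model" with canned answers so the harness runs without
    # any API; replace the lambda with a real LLM call to run the test.
    canned = {prompt: answer for prompt, answer in PROBLEMS}
    print(evaluate_reasoning(lambda prompt: canned[prompt]))
```

The point of the sketch is that the test only looks at behavior on the problems, not at whether the thing being tested is made of neurons or attention heads.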
danielmiessler.com