I thought we had progressed beyond using sensational claims about LLM consciousness for publicity. However, Claude 3 has reignited this behavior, prompting me to shift the focus of this blog post to address the issue.
Please avoid wasting intellectual effort on crafting simple dialogues meant to suggest that models are conscious, on debating whether Model X can respond to questions about consciousness, and on similar endeavors. These discussions clutter media and social platforms with fear and disinformation while offering no real benefit to science or society.
Is it realistic to think that in the last ten years we've evolved from simple MLPs to a conscious machine through mere scaling? By contrast, biological evolution took billions of years to develop human intelligence, iterating through many species. Can a machine that learns from predominantly English internet text truly understand the world like humans, who have sensory experiences and the ability to move through space and time? Is a modern supercomputer a plausible way to match the capabilities of the highly specialized human brain?
Please avoid emulating the physicist William Thomson (Lord Kelvin), who is often quoted as claiming, around the turn of the twentieth century, that there was nothing left to discover in physics. Such assertions could undermine your credibility in the future. Before you post about ML consciousness again, consider the following questions:
- How would a human respond if asked to produce content in an unfamiliar language? Unlike a current LLM, which would generate hallucinated content without hesitation, a human would likely be too embarrassed to attempt an answer.
- Does a human start to get to know you from scratch every time you meet?
- How can something be conscious (by definition, aware of internal and external states) if it isn't aware that it is talking to you in another chat dialogue?
- Does a human require a supercomputer to learn?
- Is your cat's spatial and temporal reasoning better than the LLM's?
- Is your LLM capable of crafting a clever joke involving activities in the physical world?
We're in an era of significant advancements in machine learning, and it's important to study and improve upon the failures and limitations of current models. But before we can even start talking about human-level intelligence and consciousness, we need an ML system that possesses memory, reasoning, and the ability to interact with and actively learn from diverse, multimodal information that we currently can't collect in sufficient quantity or variety. Today's models are useful for simple tasks, but they fall short on more complex and structured inquiries. Unfortunately, our benchmarks are not very complex either, and many of them are built to test things that are hard for humans rather than hard for machines, which can give a false sense of great achievement.
In conclusion, we are currently pioneering a distinct form of intelligence that complements our own and may never achieve consciousness. But even if it does, many hard steps remain before we come close to that point.