A recent study found that the strategies large language models (LLMs) use to process human conversations are strikingly similar to those used by the human brain. Conducted by researchers from Google, Princeton University, New York University, and the Hebrew University of Jerusalem, the study drew on five years of individual studies examining the differences and similarities between the current generation of LLMs and the human brain.
LLMs enable AI systems to understand, process, and generate human language by analyzing large datasets of text, applying strategies such as next-word prediction and reinforcement learning to produce natural-sounding speech patterns.
The researchers used text generated by Whisper, a speech-to-text model, for the AI portion of their studies. When they compared its outputs to intracranial electrode recordings of 100 hours of real-world conversations, they found a “remarkable alignment” between activity in the human brain and the AI’s internal representations — also known as embeddings.
Whisper extracts two embeddings from every word: speech embeddings and language embeddings. The study found that Whisper’s embeddings are very similar to the neural activity seen in the human brain’s speech and language areas.
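Comparisons like this are often made with a linear “encoding model”: if a linear map from a model’s embeddings can predict neural activity on held-out data, the two representations are said to align. Below is a minimal sketch of that idea using entirely synthetic data — the array shapes, noise level, and variable names are illustrative assumptions, not the study’s actual pipeline, where the embeddings came from Whisper and the neural signal from intracranial electrodes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 words, 16-dim "embeddings", 8 "electrodes".
n_words, emb_dim, n_electrodes = 200, 16, 8
embeddings = rng.normal(size=(n_words, emb_dim))

# Simulate neural activity that is (noisily) a linear function of the embeddings.
true_weights = rng.normal(size=(emb_dim, n_electrodes))
neural = embeddings @ true_weights + 0.1 * rng.normal(size=(n_words, n_electrodes))

# Fit a linear encoding model on the first half of the words,
# then evaluate it on the unseen second half.
train, test = slice(0, 100), slice(100, 200)
weights, *_ = np.linalg.lstsq(embeddings[train], neural[train], rcond=None)
predicted = embeddings[test] @ weights

# Score alignment as the correlation between predicted and recorded
# activity at each electrode, averaged across electrodes.
corrs = [np.corrcoef(predicted[:, e], neural[test][:, e])[0, 1]
         for e in range(n_electrodes)]
print(f"mean held-out correlation: {np.mean(corrs):.2f}")
```

Because the simulated activity really is a linear function of the embeddings (plus noise), the held-out correlation comes out high; with real recordings, the strength of this correlation is what quantifies how “brain-like” the embeddings are.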
Differences and similarities
Past research has shown that the language areas of the human brain actively try to predict the next word in a sentence before it is ever spoken aloud. The best LLMs use several different strategies, including next-word prediction, to better understand natural language. But there are also some notable differences between LLMs and the human brain. While today’s generative AI tools and LLMs are powerful enough to process hundreds of thousands of words simultaneously, the human brain’s language areas process words serially, or one at a time.
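Next-word prediction itself can be illustrated with a toy bigram model, which scores candidate next words by how often they followed the current word in training text — a deliberately tiny stand-in for what an LLM learns at scale, with a made-up corpus and hypothetical function names:

```python
from collections import Counter, defaultdict

def train_bigram(text: str) -> dict:
    """Count, for each word, how often each other word follows it."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts: dict, word: str) -> str:
    """Return the most frequent continuation of `word` seen in training."""
    return counts[word.lower()].most_common(1)[0][0]

corpus = ("the brain predicts the next word before it is spoken "
          "and the model predicts the next word before it is generated")
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "the" is most often followed by "next"
```

Note that generation with such a model is serial in exactly the sense described above for the brain: each word is chosen one at a time, conditioned on the word before it.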
What does this mean for the future of LLMs?
Researchers say their recent study could help us gain a deeper understanding of neural activity in the human brain, especially within the speech and language areas. The study could also show AI developers how to train models more effectively, and more like the human brain.
“Moving forward, our goal is to create innovative, biologically inspired artificial neural networks that have improved capabilities for processing information and functioning in the real world,” researchers said in a recent blog post. “We plan to achieve this by adapting neural architecture, learning protocols, and training data that better match human experiences.”