AIs encode language like brains do – opening a window on human conversations

August 6, 2024

By Zaid Zada, Princeton University 

Language enables people to transmit thoughts to each other because each person’s brain responds similarly to the meaning of words. In our newly published research, my colleagues and I developed a framework to model the brain activity of speakers as they engaged in face-to-face conversations.

We recorded the electrical activity of two people’s brains as they engaged in unscripted conversations. Previous research has shown that when two people converse, their brain activity becomes coupled, or aligned, and that the degree of neural coupling is associated with better understanding of the speaker’s message.
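To give a concrete sense of what "coupling" means quantitatively, here is a minimal sketch, in Python, of one common way to measure it: correlating the speaker's and listener's brain signals across a range of temporal lags and finding where the alignment peaks. The function and variable names are illustrative assumptions, and the study's actual pipeline is more involved than this.

import numpy as np

def neural_coupling(speaker_signal, listener_signal, max_lag=50):
    # Correlate the two brain signals at every lag from -max_lag to +max_lag
    # samples; return the lag and correlation at which coupling peaks.
    best_lag, best_r = 0, -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            s = speaker_signal[:len(speaker_signal) - lag]
            l = listener_signal[lag:]
        else:
            s = speaker_signal[-lag:]
            l = listener_signal[:len(listener_signal) + lag]
        r = np.corrcoef(s, l)[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r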

A neural code refers to particular patterns of brain activity associated with distinct words in their contexts. We found that the speakers’ brains are aligned on a shared neural code. Importantly, the brain’s neural code resembled the artificial neural code of large language models, or LLMs.

The neural patterns of words

A large language model is a machine learning program that can generate text by predicting what words most likely follow others. Large language models excel at learning the structure of language, generating humanlike text and holding conversations. They can even pass the Turing test, making it difficult for someone to discern whether they are interacting with a machine or a human. Like humans, LLMs learn how to speak by reading or listening to text produced by other humans.

By giving the LLM a transcript of the conversation, we were able to extract its “neural activations,” or how it translates words into numbers, as it “reads” the script. Then, we correlated the speaker’s brain activity both with the LLM’s activations and with the listener’s brain activity. We found that the LLM’s activations could predict the speaker’s and listener’s shared brain activity.
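In practice, this kind of analysis can be sketched as an "encoding model": feed the transcript through an LLM, take its hidden states as word-by-word embeddings, and fit a linear model that maps those embeddings to recorded brain activity. The sketch below assumes GPT-2 and ridge regression, uses random placeholder data in place of real recordings, and omits the temporal lags and careful cross-validation a real pipeline would need.

import numpy as np
import torch
from transformers import GPT2Tokenizer, GPT2Model
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

transcript = "so how was the museum yesterday"   # placeholder transcript text
inputs = tokenizer(transcript, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state[0]   # (n_tokens, 768) hidden states
embeddings = hidden.numpy()

# Hypothetical brain recordings: one measurement per token per electrode.
brain_activity = np.random.randn(embeddings.shape[0], 64)

# Fit a linear encoding model from LLM activations to brain activity,
# then score it by correlating predicted and measured activity per electrode.
X_train, X_test, y_train, y_test = train_test_split(embeddings, brain_activity)
encoder = Ridge(alpha=10.0).fit(X_train, y_train)
pred = encoder.predict(X_test)
scores = [np.corrcoef(pred[:, e], y_test[:, e])[0, 1] for e in range(pred.shape[1])]
print(f"mean encoding correlation: {np.mean(scores):.3f}")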

To be able to understand each other, people have a shared agreement on the grammatical rules and the meaning of words in context. For instance, we know to use the past tense form of a verb to talk about past actions, as in the sentence: “He visited the museum yesterday.” Additionally, we intuitively understand that the same word can have different meanings in different situations. For instance, the word cold in the sentence “you are cold as ice” can refer either to one’s body temperature or to a personality trait, depending on the context. Because of the complexity and richness of natural language, we lacked a precise mathematical model to describe it until the recent success of large language models.
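Contextual embeddings are what supply that mathematical model. The small illustration below, which is not taken from the study, shows one way to see this: the vector an LLM assigns to "cold" in "you are cold as ice" differs from its vector in a sentence about temperature. GPT-2 and cosine similarity are assumed here purely for illustration.

import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

def word_vector(sentence, word=" cold"):
    # Return the hidden state of the token for `word` within `sentence`.
    ids = tokenizer(sentence, return_tensors="pt")
    target = tokenizer(word)["input_ids"][0]          # token id for " cold"
    with torch.no_grad():
        states = model(**ids).last_hidden_state[0]
    position = (ids["input_ids"][0] == target).nonzero()[0].item()
    return states[position]

v1 = word_vector("you are cold as ice")
v2 = word_vector("the coffee went cold")
similarity = torch.cosine_similarity(v1, v2, dim=0).item()
print(f"cosine similarity of 'cold' across contexts: {similarity:.3f}")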

Our study found that large language models can predict how linguistic information is encoded in the human brain, providing a new tool to interpret human brain activity. The similarity between the human brain’s and the large language model’s linguistic code has enabled us, for the first time, to track how information in the speaker’s brain is encoded into words and transferred, word by word, to the listener’s brain during face-to-face conversations. For example, we found that brain activity associated with the meaning of a word emerges in the speaker’s brain before they articulate the word, and the same activity rapidly reemerges in the listener’s brain after hearing it.
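One way to picture that word-by-word tracking is to score an embedding-based encoding model at many time lags relative to each word's onset, separately for the speaker and the listener, and see where prediction peaks. The sketch below is a hedged outline of that idea; the function arguments, sampling rate and data shapes are hypothetical rather than the study's actual parameters.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

def encoding_score_at_lag(embeddings, brain, word_onsets, lag_s, sample_rate=512):
    # Sample each electrode's activity `lag_s` seconds from every word onset
    # (word_onsets given in samples), predict it from the word embeddings, and
    # return the mean correlation between predicted and measured activity.
    idx = np.clip((word_onsets + lag_s * sample_rate).astype(int), 0, brain.shape[0] - 1)
    y = brain[idx]
    pred = cross_val_predict(Ridge(alpha=10.0), embeddings, y, cv=5)
    return np.mean([np.corrcoef(pred[:, e], y[:, e])[0, 1] for e in range(y.shape[1])])

# Sweeping lags from -2 s to +2 s around word onset for the speaker's and the
# listener's recordings would show the speaker's peak before articulation and
# the listener's peak after it.
lags = np.arange(-2.0, 2.25, 0.25)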

Powerful new tool

Our study has provided insights into the neural code for language processing in the human brain and how both humans and machines can use this code to communicate. We found that large language models were better able to predict shared brain activity than individual features of language, such as syntax (the order in which words connect to form phrases and sentences). This is partly due to the LLM’s ability to incorporate the contextual meaning of words, as well as to integrate multiple levels of the linguistic hierarchy into one model: from words to sentences to conceptual meaning. This suggests important similarities between the brain and artificial neural networks.
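That kind of model comparison can be sketched by fitting the same encoding model on two feature sets, contextual LLM embeddings versus a simple syntactic feature such as one-hot part-of-speech tags, and comparing how well each predicts held-out brain activity. The part-of-speech feature and the random placeholder data below are illustrative stand-ins, not the study's actual features or recordings.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def encoding_r2(features, brain_activity):
    # Mean cross-validated R^2 of a ridge encoding model.
    return cross_val_score(Ridge(alpha=10.0), features, brain_activity, cv=5).mean()

n_words = 500
llm_embeddings = np.random.randn(n_words, 768)              # placeholder LLM hidden states
pos_tags = np.eye(12)[np.random.randint(0, 12, n_words)]    # placeholder part-of-speech one-hots
brain = np.random.randn(n_words, 64)                        # placeholder recordings

print("LLM embeddings:    ", encoding_r2(llm_embeddings, brain))
print("Syntactic features:", encoding_r2(pos_tags, brain))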

An important aspect of our research is using everyday recordings of natural conversations to ensure that our findings capture the brain’s processing in real life. This is called ecological validity. In contrast to experiments in which participants are told what to say, we relinquish control of the study and let the participants converse as naturally as possible. This loss of control makes it difficult to analyze the data because each conversation is unique and involves two interacting individuals who are spontaneously speaking. Our ability to model neural activity as people engage in everyday conversations attests to the power of large language models.

Other dimensions

Now that we’ve developed a framework to assess the shared neural code between brains during everyday conversations, we’re interested in what factors drive or inhibit this coupling. For example, does linguistic coupling increase if a listener better understands the speaker’s intent? Or perhaps complex language, like jargon, may reduce neural coupling.

Another factor that can influence linguistic coupling may be the relationship between the speakers. For example, you may be able to convey a lot of information with a few words to a good friend but not to a stranger. Or you may be better neurally coupled to political allies than to rivals. This is because differences in the way we use words across groups may make it easier to align, and become coupled, with people within our social groups than with people outside them.

About the Author:

Zaid Zada, Ph.D. Candidate in Psychology, Princeton University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
