Prefrontal Neurons’ Cascade of Phonetic Representations

A new study led by researchers at Massachusetts General Hospital (MGH) uses sophisticated brain recording techniques to show how neurons in the human brain cooperate to enable people to plan words for speech and then produce them verbally.


Collectively, these results offer a comprehensive map of how speech sounds - vowels and consonants, for example - are represented in the brain long before they are uttered and how they are combined during language production.

The research, which appears in Nature, sheds light on the neurons in the brain that facilitate language production, which may help us better understand and treat speech and language disorders.

“Although speaking usually seems easy, our brains perform many complex cognitive steps in the production of natural speech - including coming up with the words we want to say, planning the articulatory movements, and producing our intended vocalizations. Our brains perform these feats surprisingly fast - about three words per second in natural speech - with remarkably few errors. Yet how we precisely achieve this feat has remained a mystery.”

Ziv Williams, MD, Senior Author and Associate Professor of Neurosurgery, Massachusetts General Hospital

Williams is also affiliated with Harvard Medical School.

Using a cutting-edge technology called Neuropixels probes to record the activity of individual neurons in the prefrontal cortex, a frontal region of the human brain, Williams and colleagues found cells involved in language production that may underpin the ability to speak. The scientists also discovered that speaking and listening are controlled by separate neural networks in the brain.

“The use of Neuropixels probes in humans was first pioneered at MGH. These probes are remarkable - they are smaller than the width of a human hair, yet they also have hundreds of channels that are capable of simultaneously recording the activity of dozens or even hundreds of individual neurons. Use of these probes can therefore offer unprecedented new insights into how neurons in humans collectively act and how they work together to produce complex human behaviors such as language.”

Ziv Williams, MD

Williams developed these recording techniques with Sydney Cash, MD, PhD, a Professor of Neurology at MGH and Harvard Medical School, who also helped lead the study.
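The paper's own preprocessing pipeline is not detailed here, but a standard first step with such multi-channel recordings is to bin each unit's spike times into a firing-rate matrix. Below is a minimal sketch in Python using synthetic data; `bin_spike_counts` is a hypothetical helper for illustration, not code from the study.

```python
import numpy as np

def bin_spike_counts(spike_times, n_units, t_start, t_stop, bin_ms=10.0):
    """Bin per-unit spike times into a (units x time-bins) count matrix.

    spike_times: list of 1-D arrays, one array of spike times (seconds) per unit.
    """
    bin_s = bin_ms / 1000.0
    edges = np.arange(t_start, t_stop + bin_s, bin_s)
    counts = np.zeros((n_units, len(edges) - 1), dtype=int)
    for u, times in enumerate(spike_times):
        counts[u], _ = np.histogram(times, bins=edges)
    return counts

# Example: 3 hypothetical units recorded over 2 seconds.
rng = np.random.default_rng(0)
spikes = [np.sort(rng.uniform(0.0, 2.0, size=rng.integers(20, 60)))
          for _ in range(3)]
rates = bin_spike_counts(spikes, n_units=3, t_start=0.0, t_stop=2.0)
print(rates.shape)  # (3, 200) -> 3 units x 200 ten-millisecond bins
```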

The study demonstrated how the brain’s neurons encode some of the most fundamental components used in the construction of spoken words, from phonemes, which are basic speech sounds, to syllables, which are more complex speech strings.

For instance, the word “dog” requires the consonant /d/, which is made by pressing the tongue against the ridge just behind the upper teeth.
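To make this decomposition concrete, the sketch below maps the phonemes of “dog” to simplified articulatory features. The feature labels are standard phonetics, but the table and code are illustrative only and do not reflect the study's actual coding scheme.

```python
# Hypothetical, simplified feature table for the phonemes of "dog" (/d/, /ɔ/, /g/).
# Real phonetic feature inventories are considerably richer than this.
PHONEME_FEATURES = {
    "d": {"type": "consonant", "place": "alveolar", "manner": "stop", "voiced": True},
    "ɔ": {"type": "vowel", "height": "open-mid", "backness": "back", "rounded": True},
    "g": {"type": "consonant", "place": "velar", "manner": "stop", "voiced": True},
}

def decompose(word_phonemes):
    """Map a word's phoneme sequence to the articulatory features of each sound."""
    return [(p, PHONEME_FEATURES[p]) for p in word_phonemes]

for phoneme, features in decompose(["d", "ɔ", "g"]):
    print(phoneme, features)
```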

Through single-neuron recordings, the researchers discovered that some neurons fire before this phoneme is uttered. Other neurons reflected more intricate aspects of word construction, such as the specific assembly of phonemes into syllables.
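A common way to visualize such pre-utterance firing is a peri-event time histogram: align each neuron's spikes to the utterance onsets and average across events. The sketch below uses synthetic data and is not the authors' analysis.

```python
import numpy as np

def peri_event_counts(spike_times, event_times, window=(-0.5, 0.5), bin_s=0.05):
    """Average spike counts in bins around each event (e.g., utterance onset)."""
    edges = np.arange(window[0], window[1] + bin_s, bin_s)
    hist = np.zeros(len(edges) - 1)
    for t0 in event_times:
        counts, _ = np.histogram(spike_times - t0, bins=edges)
        hist += counts
    return hist / len(event_times), edges

# Synthetic neuron that ramps up ~200 ms before each of 50 utterance onsets.
rng = np.random.default_rng(1)
onsets = np.arange(1.0, 51.0)  # one utterance per second
pre_spikes = np.concatenate([o + rng.uniform(-0.2, 0.0, 8) for o in onsets])
mean_counts, edges = peri_event_counts(np.sort(pre_spikes), onsets)
print(np.round(mean_counts, 2))  # counts are elevated just before t = 0
```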

Using this technology, the researchers demonstrated that it is feasible to accurately predict the speech sounds people will make before they utter them.

Scientists can anticipate the blend of consonants and vowels that will be produced even before the words are spoken. This ability might be used to build brain-machine interfaces or neural prosthetics that generate synthetic speech, which could help a variety of patients.
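Conceptually, such a prediction can be framed as classifying the upcoming sound from pre-utterance population activity. The sketch below trains a linear classifier on synthetic spike counts; it assumes scikit-learn is available and illustrates the idea only, not the study's decoder.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Synthetic data: 200 planned utterances recorded from 64 units.
# Feature vector = each unit's spike count in the 500 ms before speech onset.
# Label = a coarse phonetic class of the upcoming sound (0 = labial, 1 = velar).
n_trials, n_units = 200, 64
labels = rng.integers(0, 2, n_trials)
tuning = rng.normal(0, 1, n_units)               # per-unit preference for class 1
base = rng.poisson(5, (n_trials, n_units)).astype(float)
counts = base + labels[:, None] * np.clip(tuning, 0, None) * 2.0

X_train, X_test, y_train, y_test = train_test_split(
    counts, labels, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```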

“Disruptions in the speech and language networks are observed in a wide variety of neurological disorders - including stroke, traumatic brain injury, tumors, neurodegenerative disorders, neurodevelopmental disorders, and more. Our hope is that a better understanding of the basic neural circuitry that enables speech and language will pave the way for the development of treatments for these disorders.”

Arjun Khanna, Study Co-Author, Massachusetts General Hospital

By examining more intricate language processes, the researchers hope to build on these findings and address questions such as how people choose the words they intend to say and how the brain assembles words into sentences that convey a person’s thoughts and feelings to others.

Journal reference:

Khanna, A. R., et al. (2024). Single-neuronal elements of speech production in humans. Nature. https://doi.org/10.1038/s41586-023-06982-w

