Issue 90, September 2017
• Language and the Brain
• Interview with Prof. Dr. Isabell Wartenburger, Professor of Patholinguistics / Neurocognition of Language and Spokesperson of the DFG Collaborative Research Centre SFB 1287, University of Potsdam
• The NeuroCommTrainer - Regaining Communication through Training and Translation of Brain Responses
• The MPII Movie Description Dataset
• Legascreen - Recognizing Dyslexia in Time with Future Early Tests
• Adult Second Language Acquisition
Language and the Brain
The emergence of human cognition, language, and speech has been a subject of fascination for hundreds of years. Language allows humans to be sophisticated social creatures. It facilitates and shapes our thought processes and plays an important role in human cognition and consciousness. While significant advances have been made over the past decade, much work remains in understanding the neural architecture that underlies language. With brain, language, and speech disorders affecting millions of people worldwide, a better understanding of the human mind is imperative.

The NeuroCommTrainer, a device being developed by experts in Germany, is meant to help people with severe brain damage communicate with others. The device is designed to detect patients' brain signals using electroencephalogram (EEG) measurements and to train patients to control them. Legascreen, a joint project of Fraunhofer and the Max Planck Society, examines brain activity and genes to develop an early detection test for dyslexia. The project's goal is to determine more reliably whether a child will develop dyslexia.

With half of the world's population using a second language, there is also a need to understand whether multilingualism has advantages for the brain. There is evidence that learning a new language benefits the brain. On the other hand, according to this month's GCRI interview partner, Prof. Dr. Isabell Wartenburger of the University of Potsdam, the question of whether there is a bilingual advantage has not yet been fully resolved.
 



Prof. Dr. Isabell Wartenburger is a professor of patholinguistics at the University of Potsdam and a leading expert in linguistics and neuroscience. Her scientific work focuses on the neural basis and development of higher cognitive functions and language processing across the lifespan, using psychophysiological and brain imaging methods. Prof. Dr. Wartenburger is the spokesperson of the DFG Collaborative Research Centre SFB 1287 - Limits of Variability in Language: Cognitive, Grammatical, and Social Aspects.

After studying psychology at Bielefeld University, Prof. Dr. Wartenburger received her PhD at the Berlin Neuroimaging Center at the Charité - Universitätsmedizin Berlin, where she held various postdoctoral and leadership positions. In 2007, she moved to the University of Potsdam, where she obtained her professorship.

In the interview with the GCRI, Prof. Dr. Wartenburger discusses why it is easier for young children to learn a foreign language and how problems in language acquisition can be predicted early in life. She also explains the reasoning of scholars who argue that bilingualism gives the brain an advantage and of those who argue that it is a disadvantage. To read the full interview, click here.

Source & Image: University of Potsdam



People who have suffered severe brain damage can fall into unresponsive wakefulness syndrome or other disorders of consciousness. Doctors often assume that these patients are vegetative, meaning that they are fully unconscious. Yet in more than a third of cases this proves to be a misdiagnosis. Nevertheless, these people cannot communicate or make themselves understood. To help them regain communication, a group of researchers and partners is developing the NeuroCommTrainer: a system that identifies patients' residual brain responses as they occur, trains these responses to become more consistent, and translates them into a meaningful code for their environment.

Today's brain-computer interfaces are not suitable for patients with suspected disorders of consciousness because they do not adapt to the individual. The NeuroCommTrainer will be able to recognize phases of optimal alertness, in which the person is most responsive, and will train patients to control their brain signals. Most importantly, it will improve language comprehension. This is crucial because the underlying brain damage often causes patients to lose their language. The researchers' work has shown that adequate brain responses to language are excellent predictors of recovery.

To achieve the ambitious goal of reconnecting patients with disorders of consciousness to their environment, challenges in psychology, neuroscience, sensor technology, and computer science have to be overcome. The daily requirements of nursing and care are also taken into account, and ethical issues are considered. To this end, research groups from Bielefeld University and the University of Oldenburg, Bielefeld's Cluster of Excellence Cognitive Interaction Technology, the von Bodelschwingh Foundation Bethel with its long-term nursing facilities, experts in medical ethics, and two companies have joined forces in the "NeuroCommTrainer" research project, which is funded by the Federal Ministry of Education and Research (BMBF) with 1.8 million euros.

Source: Bielefeld University 
 
Image: Stefan Debener      

 



A large amount of the digital content available today, such as movies and web videos, is not accessible to many visually impaired people. The goal of Audio Description (AD), also known as Descriptive Video Service (DVS), is to complement a movie's audio stream with an additional audio stream that precisely describes what happens in a scene. This allows a blind person to better follow a movie through descriptions such as "Gustave dashes for the staircase." The additional audio track is created by a group of professionals after the movie has been produced; this process is costly and time-consuming, which limits its applicability.

The ability to generate movie descriptions automatically would greatly benefit millions of visually impaired people. At the same time, it poses an interesting research problem at the intersection of Computer Vision and Natural Language Processing. Most research on automatic video description has focused on short web-sourced video clips. The research group at the Max Planck Institute for Informatics (MPII), Saarland Informatics Campus, has published a new movie description dataset. The MPII Movie Description dataset has since been expanded into the Large Scale Movie Description Challenge (LSMDC), which now features 200 movies with over 128,000 natural language descriptions.

Most state-of-the-art approaches to video description rely on machine-learning techniques. Generally, the idea is to represent the input video in a way that allows a machine to "translate" this representation into a sentence understandable to humans. Recent advances in Deep Learning provide machines with a powerful means of representing visual data, and Recurrent Neural Networks can be used to decode such representations into language. The lead researchers behind the MPII Movie Description dataset, Anna Rohrbach, Marcus Rohrbach, and Bernt Schiele, believe that such a large-scale dataset will allow for the development of algorithms able to tackle automatic movie description.
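As a rough illustration of this encoder-decoder idea (a hypothetical sketch in PyTorch, not the MPII group's actual model), frame features from a pretrained image network can be pooled into a single video representation, which then conditions a recurrent decoder that emits the description one word at a time:

```python
# Hypothetical sketch of an encoder-decoder video captioning model.
# Frame features (e.g., from a pretrained CNN) are mean-pooled into one
# video vector, which initializes an LSTM that decodes into words.
import torch
import torch.nn as nn

class VideoCaptioner(nn.Module):
    def __init__(self, feat_dim=2048, hidden_dim=512, vocab_size=10000):
        super().__init__()
        self.encode = nn.Linear(feat_dim, hidden_dim)     # project frame features
        self.embed = nn.Embedding(vocab_size, hidden_dim) # word embeddings
        self.decoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)

    def forward(self, frame_feats, caption_tokens):
        # frame_feats: (batch, n_frames, feat_dim); caption_tokens: (batch, seq_len)
        video = self.encode(frame_feats).mean(dim=1)      # pooled video representation
        h0 = video.unsqueeze(0)                           # initial decoder hidden state
        c0 = torch.zeros_like(h0)
        words = self.embed(caption_tokens)
        out, _ = self.decoder(words, (h0, c0))
        return self.to_vocab(out)                         # logits over the vocabulary

# Example: 8 sampled frames with 2048-d features and a 12-token caption.
model = VideoCaptioner()
logits = model(torch.randn(1, 8, 2048), torch.randint(0, 10000, (1, 12)))
print(logits.shape)  # torch.Size([1, 12, 10000])
```

In practice such a model would be trained on paired clips and descriptions, with the predicted word logits compared against the reference sentence; the sketch only shows the data flow, and all layer sizes are illustrative assumptions.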

The associated scientific publication and the LSMDC webpage provide more information.
   
Source & Image: Max Planck Institute for Informatics (MPII)
 
Innovation: Legascreen - Recognizing Dyslexia in Time with Future Early Tests
 
Words stretch and break up, and letters are unrecognizable. In Germany, one in 20 children, at least one pupil per class, despairs when reading or writing words and sentences, despite having normal intelligence. These children suffer from dyslexia, a congenital disorder of the brain. An affected child may experience constant failure at school, often without knowing the real reason.
 
Scientists at the Fraunhofer Institute for Cell Therapy and Immunology (IZI) and the Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) have spent the last five years on their Legascreen project and have successfully laid the foundation for an early dyslexia test. By examining brain activity and genes, it will be possible to indicate whether a child is likely to develop dyslexia.
 
Electroencephalography (EEG) offers a promising approach to revealing changes in the cortex. In this approach, a child listens to a chain of identical syllables or sounds that is occasionally interrupted by a different sound or tone. If the child easily notices the irregularities and the child's brain activity shows the typical amplitude, then the child's reading skills are largely well developed. If this is not the case, it could indicate an impending dysfunction.
 
A prognosis based on the EEG alone is, however, not reliable enough, so the explanatory power of the genes is used as well. Since dyslexia has a heritability of up to 70%, a simple saliva test could predict the disorder even more precisely. As a precondition, a comprehensive list of DNA variations involved in the disorder has been identified in German dyslexics. The more of these variations are found in a child, the higher the child's risk of being affected by dyslexia. By combining indications from the brain and the genes, a higher detection rate can be reached.
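To make the idea of combining the two indicators concrete, the sketch below shows one hypothetical way a weak EEG mismatch response and a count of genetic risk variants could be merged into a single risk score. The weights, bias, and logistic form are invented for illustration and are not the Legascreen procedure itself:

```python
# Purely illustrative: combining a (normalized) EEG mismatch amplitude with a
# count of genetic risk variants into one risk score. All weights are made up.
import math

def combined_dyslexia_risk(eeg_mismatch_amplitude, n_risk_variants,
                           w_eeg=-1.2, w_gene=0.4, bias=-0.5):
    # A weaker mismatch response (smaller amplitude) and more risk variants
    # both push the combined score toward higher estimated risk.
    score = w_eeg * eeg_mismatch_amplitude + w_gene * n_risk_variants + bias
    return 1.0 / (1.0 + math.exp(-score))  # squash to a value between 0 and 1

# Example: weak mismatch response (0.2 after normalization) and 6 risk variants.
print(f"estimated risk: {combined_dyslexia_risk(0.2, 6):.2f}")
```

A real screening test would of course derive such weights from validated clinical data rather than fixed constants; the point here is only that two noisy indicators can be pooled into one more reliable prediction.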
 
Before these results can be implemented in an early diagnostic test, the two individual procedures have to be validated with a further independent sample.
 
Source: Fraunhofer Institute for Cell Therapy and Immunology (IZI) 
 

Adult Second Language Acquisition
 
Acquiring a second language poses a big challenge to most adults. We spend the first years of our lives acquiring our mother tongue, learning specific sounds, words, and rules automatically until the sixth to tenth year of age at the latest. At the Max Planck Institute for Human Cognitive and Brain Sciences, we have shown that this ability to infer the rules of a language through mere exposure to its input is no longer present later in life.

The ability to adapt to our mother tongue is implemented in a complex network of gray matter regions connected by white matter fiber bundles, which shows strong plasticity in the first years of life. Yet it is still unknown how adults learn a second language, what the most efficient way to learn it is, and to what extent the brain network changes during the learning process.

We therefore investigated a cohort of native Arabic speakers while they learned German in an intensive course over a short period of time. We were particularly interested in whether different teaching methods affect the outcomes of language learning. One group had a stronger focus on syntax and was exposed to the rules that help build complex sentences in German. The other group focused more on the use of words in context and on how their meaning is affected by the whole sentence. We aim to find out how these methods affect fluency in second language learners and whether they give rise to different brain changes. To this end, participants in the two language courses were observed for up to 15 months, allowing us to assess changes in their brain function and tissue as they adapted to the new language.
   
Source: Max Planck Institute for Human Cognitive and Brain Sciences 
 
Image: Amac Garbe/MPG
