Using the Computerized Comprehension Task, the team measured word comprehension by asking children to touch images on a touch-sensitive screen that represented the words they were learning. The team's vocabulary measure, which focused on stable concepts, proved superior to prior measures in predicting children's general language ability at age 3. The team also identified individual children at risk for language problems a full two years earlier than prior studies.
Facebook, Snapchat, and Instagram may not be great for personal well-being. In the first experimental study examining use of multiple platforms, Melissa G. Hunt of the University of Pennsylvania shows a causal link between time spent on these social media platforms and increased depression and loneliness.
Social media echo chambers may reflect real-life conversations that are linked to the geographic locations of users, according to new research. The findings contradict the assumption that echo chambers -- discussions which only involve people with the same views -- are the result of online interactions alone. Conducted by City, University of London and published in the journal PLOS ONE, the study analyzed 33,889 Twitter posts from the Brexit referendum campaign period.
Beatboxing is a musical art form in which performers use their vocal tract to create percussive sounds, and a team of researchers is using real-time MRI to study the production of beatboxing sounds. Timothy Greer will describe their work showing how real-time MRI can characterize different beatboxing styles and how video signal processing can demystify the mechanics of artistic style. Greer will present the study at the Acoustical Society of America's 176th Meeting, Nov. 5-9.
Few things can delight an adult more easily than the uninhibited, effervescent laughter of a baby. Yet baby laughter, a new study shows, differs from adult laughter in a key way: Babies laugh as they both exhale and inhale, in a manner that is remarkably similar to nonhuman primates. The research will be described by Disa Sauter during a talk at the Acoustical Society of America's 176th Meeting, Nov. 5-9.
Sign languages can help reveal hidden aspects of the logical structure of spoken language, but they also highlight its limitations because speech lacks the rich iconic resources that sign language uses on top of its sophisticated grammar.
Hearing has long been suspected of being 'on' all the time -- even in our sleep. Now scientists are reporting results on what is heard and not heard during sleep and what that might mean for a developing brain. At the Acoustical Society of America's 176th Meeting, Nov. 5-9, researchers from Vanderbilt University will present preliminary results from a study in which preschool children showed memory traces for sounds heard during nap time.
Here's another reason you might be exhausted after that preschool birthday party: Your brain had to work to figure out who actually asked for more ice cream. 'What we found with two-and-a-half-year-olds is that it's amazingly hard for adults to identify who's talking,' said Angela Cooper, a postdoctoral researcher at the University of Toronto. Research co-authored by Cooper will be presented at the Acoustical Society of America's 176th Meeting, Nov. 5-9.
Research at the University of York has shown that the accepted hierarchy of human senses -- sight, hearing, touch, taste and smell -- is not universally true across all cultures.
Some conversations are forgotten as soon as they are over, while other exchanges may leave lasting imprints. Researchers want to understand why and how listeners remember some spoken utterances more clearly than others. They're specifically looking at ways in which clarity of speaking style can affect memory. They will describe their work at the Acoustical Society of America's 176th Meeting, Nov. 5-9.