Engineers at the University of Washington are finishing cell phone software that works effectively without hogging as much bandwidth as typical video-conferencing. This story from ScienceDaily reports that a field trial is nearing completion, with generally positive results. The new software specifically optimizes video quality around the face and hands, which makes using sign language on cell phones practical for potentially many more people.
Tags: deductive teaching, Language Teaching, lizbeth finestack, marc fey
That’s the gist of a new study by Lizbeth Finestack and Marc Fey from the University of Kansas, published in the August ’09 American Journal of Speech-Language Pathology. Their study compared 6- to 8-year-olds assigned to either a deductive training group or an inductive training group. A computer program was used to teach a specific aspect of an invented alien language. The deductive training group received explanations, i.e. a brief description of the target. Both groups were made aware that the alien – “Tiki” – used many of the same words that we use, but that this alien language also contained something different. In this case that was different word endings for male and female verbs. The kids in the deductive group were told that when it’s a boy you add -po to the end, and when it’s a girl you add -pa to the end. The kids in the inductive group were supposed to figure it out on their own – another way of saying they were required to use inductive reasoning.
Finestack and Fey’s results showed that significantly more kids in the deductive group acquired the target. They concluded by asserting that generally, the most efficacious treatment may be one that combines natural language approaches with explanations. For those with access, here’s the link.
Tags: bilingual babies, language news, preschool language, recent language research, Richard Nisbett
Bilingual Babies and Executive Function – A study recently published in the Proceedings of the National Academy of Sciences suggests that early exposure to multiple languages increases abilities in executive function. The researchers looked at infants in their home city of Trieste, Italy, historically a crossroads of multiple cultures. Babies from bilingual homes did better at tasks of executive function in the study – basically meaning that they demonstrated precocious abilities to switch attention compared to control babies. You are more than welcome to go to this story from the Economist for more on the study.
Preschoolers’ Language Skills Partly Tied to Classmates’ Language Skills – A research team from Virginia and Ohio State followed over 1,800 preschoolers longitudinally to obtain their results, which are largely self-explanatory from the headline. The researchers invoked the Matthew Effect in stressing the importance of focusing on early childhood language skills. They also described research demonstrating the correlation between receptive language and classroom attention (strong and very important). A short synopsis of the study is here. A more comprehensive report is here.
Richard Nisbett and Environmental IQ – Nisbett, a prominent cognitive psychologist, has been getting good reviews for his book, Intelligence and How to Get It: Why Schools and Cultures Count. Nisbett counters hereditarian claims that roughly 75 to 80% of IQ is inherited with his own view that the number is probably less than 50%. Among multiple other points, Nisbett highlights the erroneous conclusions often drawn from twin studies, such as those finding that the IQs of separated adopted twins correlate higher than twins living with biological parents. We now better understand that the homes of adoptive parents themselves tend strongly to be rich, nurturing environments in which to raise children. This book review from the NY Times gives a real good feel for Nisbett’s book.
Scientific Consensus on How the Brain Processes Speech – Scientists may be reaching a consensus on how the brain processes speech. Josef Rauschecker, from Georgetown University, claimed that his studies of primate and human brain imaging confirm his decade-old theory that speech is processed roughly along pathways traveling from lower to higher functioning neural regions. These pathways parallel similar visual pathways, but run from regions around the auditory cortex to regions in the brain’s outer cortex. This report from Science Daily, on Rauschecker’s report in Nature Neuroscience, provides more info.
Not news or research, but here is an interesting recent Q and A in Newsweek on memory with a Harvard psychologist.
Now that I’m back, I’m planning on starting off with some brief bits concerning language and learning that I probably would have posted on over the past month or two, had I been here all along.
First this: New findings from researchers at the University of Washington strengthen a suspected link between early childhood TV exposure and delayed language development. The study looked at 329 children and found that an increase in TV time correlated negatively with both attempts to speak by the children and words used by their caregivers. This one has been reported in various places, such as in USA Today, at this link from LiveScience, and in this link from ABC News.
Interestingly, a study published in the March issue of Pediatrics seemed to arrive at an opposite conclusion, while criticizing the widespread nature of the American Academy of Pediatrics’ (AAP) oft-repeated recommendation that children should not watch any TV before age two. Its conclusion was that duration of TV watching has no cognitive effects on children under two. This study, from researchers at the Center on Media and Child Health at Children’s Hospital Boston, surveyed 872 mothers on their children’s viewing habits. After controlling for maternal age, income, education, vocabulary scores, marital status, child’s age, gender, birth weight for gestational age, breastfeeding duration, race or ethnicity, primary language, and average sleeping duration, the researchers found no correlation (negative or positive) between TV watching and scores on tests of cognition and language. More on this less reported study can be found here.
So, what to make of these seemingly contradictory studies? Actually, both studies add support to the advice many pediatricians have already been giving parents. Because it may be unreasonable to expect that parents will completely turn off the TV for two years, the content and type of TV viewing become essential. It may be more practical to advise parents to choose educational shows, and more importantly, to watch these shows together with their children and talk about what they are seeing.
Well, that post went longer than expected, so my brief accounts of other recent language and learning interest will have to come next.
Tags: expectations, learning research, Penn, rewards
Researchers at the University of Pennsylvania have used direct recordings of neuronal activity in the human brain to demonstrate that specific neurons fire more frequently in response to unexpected rewards over unexpected losses. No differences were observed in the study between expected rewards and losses.
In a report published in the journal Science, the researchers described how they used a computer-based card game and micro-electrodes measuring neuronal impulses during deep brain stimulation surgery to confirm their hypothesis that lucky wins are remembered better than expected wins, or unexpected or expected losses.
So, our brains are primed to learn when surprised. It seems like common sense that variety and the unexpected should naturally engage the human mind, much more so than the expected and routine. It’s interesting to see evidence of how this is ingrained in our brains. Unfortunately, the rigid structure of the contemporary bureaucratic educational system, and the necessity of routine imposed by large class sizes, naturally stifles the creativity necessary to take advantage of this study’s conclusion. In our current system it is far too easy to impose learning rather than to entice learning.
Researchers at the Max Planck Institute for Psycholinguistics have used studies of brain waves to show how the brain makes efficient use of tiny cues and context to rapidly anticipate and process language. The studies have shown that different areas of the brain appear responsible for different aspects of comprehension. As one example, a specific brain wave pattern called the N400, recorded at the back of the head, has implicated that area in analyzing the meaning of sentences. The N400 is a spike that occurs when a word is heard that is unexpected or out of context. The remarkable aspect is the speed with which this spike occurs after the word – literally fractions of a second. This, and other similar studies, have shown the amazing efficiency with which the human brain uses expectation and anticipation to assist in using language. The study, published in the journal Current Directions in Psychological Science, was led by Jos van Berkum at the Max Planck Institute for Psycholinguistics in the Netherlands.
The author’s research paper can be found here along with much detailed information. A little more information can be found at this blog post. My illustration derives from BrainWaves Educational Toys, which does have some cool toys.
Tags: coventry university, language, texting, texting and language study
After studying 88 children between the ages of 10 and 12, researchers at England’s Coventry University concluded that, contrary to public perception, increased texting correlates with increased reading scores. The study was published in the British Journal of Developmental Psychology, and supports similar results from other studies, such as one from the University of Toronto.
The study’s relevance to the larger notion of language and learning is this: no matter the form, meaningful exposure to language assists language learning. According to Dr. Beverley Plester, the study’s lead author, “The more exposure you have to the written word the more literate you become and we tend to get better at things we do for fun.” The BBC story link is here.
Tags: baby gesture, education, income, vocabulary
According to a study published in the February 13th issue of Science Magazine, researchers found that the babies of parents with higher education levels and income had both higher use of gesture and higher vocabulary. While it’s not clear which came first, the chicken or the egg, the established link between these three things (socioeconomic status, vocabulary, gesture use) is an important step toward future research. The story, linked here from U.S. News & World Report, suggests that the next step may be trying to determine whether increasing gestures in babies may lead to later vocabulary growth. One element that may also contribute to this link is motivation – a child who is more motivated to communicate in general may be likely to use whatever means necessary, whether gesture or language. Gesture is also often an important foundation for oral language, as children not motivated to speak frequently need the motivation to communicate that pointing and other gestures can provide.
Study shows link between body language and socioeconomic status (SES). – Researchers at UC-Berkeley used videotaped sessions of various people in one-on-one interviews to confirm their hypothesis that people use nonverbal cues to communicate their SES. These behaviors included disengagement behaviors, such as doodling and fidgeting, and engagement behaviors, such as eye contact, head nodding, and laughing. Their results showed that individuals from higher SES groups displayed more disengagement behaviors, and that observers were able to identify the SES of study participants after watching 60-second clips of their interviews. The researchers surmised that wealthy folks depend on others less, something which is reflected in their nonverbal communication.
Here’s the journal reference: Kraus et al. Signs of Socioeconomic Status: A Thin-Slicing Approach. Psychological Science, 2009; 20 (1): 99 DOI: 10.1111/j.1467-9280.2008.02251.x The Science Daily report is here. A PsychCentral commentary is here.
Tags: baldwin effect, explanation, language evolution, study, universal grammar
I like learning new things that may have future relevance. That’s why this story from Science Daily especially appealed to me. It taught me about the Baldwin Effect, an effect relevant enough to have over 9 million Google search results and a Wikipedia entry, yet something I’d never heard of. Essentially, the Baldwin Effect is a sort of evolutionary shortcut from learning to instinct. Animals predisposed to learn something (like language) that enhances their survival can, given lengthy and continuous enough circumstances, turn that learned behavior into an instinct. The study authors conclude that a “universal grammar” must have arisen by societal impetus that predates the relatively recent divergence of languages over the last 100,000 years.
I personally believe the universal grammar is an invention that describes a phenomenon that occurs because of universal human needs. Because (nearly) all humans need to describe things that have happened, we get past tense, for instance. There are guys and gals in all human cultures, and they all possess things, so we get possessive pronouns. What linguistic construct exists that has been used as support for universal grammar, and is useful to one group of humans, but not another? Let me know if you come up with one.
Tags: Hugh Catts, J. Bruce Tomblin, language and reading, research
According to a study published in the December 2008 Journal of Speech, Language, and Hearing Research, children in second grade with language impairments have a pronounced delay in word recognition and reading comprehension that persists through tenth grade. This delay, however, widens only slightly for reading comprehension, and does not widen at all for word recognition.
The study, by Hugh Catts, Mindy Sittner Bridges, Todd Little, and J. Bruce Tomblin, compared the development of over 600 kids with language impairment and kids with normal language over a nine-grade span. The authors stressed that while the delays did not worsen, or worsened only slightly, over this time, they did not get smaller either. This lends strength to the assumption that early language impairment is an excellent indicator of later reading disability.
Implications? More screening early on, and also more in-depth screening. In addition to looking at phonological awareness and letter knowledge, early screening would be most effective if it also looked at vocabulary, grammar, and narration abilities.
Tags: adhd, autism, facial expressions, language research
1) Older Parents, Birth Order Linked to Autism- This, from the U. of Wisconsin, is claimed to be the largest study ever to look into autism correlations. In short, older parents and/or earlier birth order = greater chance of autism. While the “complex combinations of multiple causes” are up to their usual shenanigans, waiting longer to have children does appear to slightly increase one’s odds for autism.
2) Facial Expressions are Innate, Not Learned – Researchers from San Francisco State found that both blind and sighted athletes demonstrate similar facial expressions after winning or losing competitions. Social smiles after winning and lip pursing after losing were found to be similar regardless of sight, prompting the conclusion that these are well developed instincts that trace back long through evolution.
3) Mother’s Care Influences ADHD Diagnosis- The results of this study imply “that the diagnoses and health care utilization that a mother receives prior to having her child is predictive of having a child who is diagnosed with ADHD.” Those mothers who more often seek other forms of health care are more likely to seek (and get) ADHD diagnoses for their children. Hey, they said it, not me.
Tags: adolescent, J. Bruce Tomblin, language sampling, Marilyn Nippold, SLI, speech language pathology
Marilyn Nippold and J. Bruce Tomblin are the headliners in this group of researchers finding that adolescents produce higher syntactic complexity in expository contexts when compared to conversational contexts. Expository discourse is described by the authors as what “is often required in educational, social, and vocational contexts, as when a high school student is asked to interpret the outcome of an historical event, describe methods to control global warming, or teach others how to perform a chemistry experiment, operate a new cell phone, or prepare a multicourse gourmet dinner. The complexity of these topics suggests that successful explanations require sophisticated language skills and specialized background knowledge.”
Two points justified this study’s conclusion: 1) There was very little difference between adolescents with SLI (specific language impairment) and adolescents with typically developing language when comparing conversation. 2) There was a difference between these two groups when comparing measures of expository discourse.
The conclusion: In adolescents it appears that expository discourse may yield better diagnostic accuracy than more informal conversation when determining the presence of language disorder. The study was in the November edition of the AJSLP.
Tags: learning research, max planck study, synapse formation
Why do we never forget how to ride a bike?
Researchers from the Max Planck Institute in Germany have demonstrated that when we forget something, contacts between nerve cells may disappear, while many of their appendages remain.
The study closely examined nerve cells as information flow was blocked and then reopened. The researchers were most surprised to find that immediately after being blocked, nerve cells produced more dendrites and synapses, before the connections themselves were lost. However, the appendages leading to the lost points of connection remained, as if the nerve cells anticipated the possibility of needing them again someday. When the scientists reopened the information flow, connections re-developed more efficiently. The original press release is here. I have to admit that I’m behind multiple other reports of this study, such as this blog, and the story at ScienceDaily.
Tags: asha convention, autism, chicago, clincian effectiveness, language assessment, language research, language sessions
Marc Fey and Ronald Gillam presented on phases of clinical research in language intervention. These phases were pre-trial (can a treatment possibly work?), feasibility (maybe), early efficacy (possibly), later efficacy (probably), and effectiveness (yes, but how much?). The gist was that good research goes in this order. Not going in this order can be dangerous. Don’t do effectiveness studies before efficacy studies.
Kerry Ebert and Kathryn Kohnert discussed the often underrated importance of the clinician in treatment effectiveness. Studies in psychotherapy have found that clinicians can be more important than even medication in determining treatment outcome, but SLP studies rarely consider the therapist.
Tammie Spaulding reported on her work showing that pretty much all language tests lack both sensitivity and specificity. Sensitivity is the rate at which a test accurately identifies kids who are language disabled. Specificity is the rate at which a test accurately shows kids as not being language disabled.
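For anyone who likes seeing these two numbers worked out, here’s a minimal sketch of how sensitivity and specificity fall out of a screening table. The counts are entirely made up for illustration – they don’t come from Spaulding’s work.

```python
# Sensitivity and specificity from a 2x2 screening table.
# All counts below are hypothetical, purely for illustration.

def sensitivity(true_pos, false_neg):
    """Share of truly language-disordered kids the test correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Share of typically developing kids the test correctly clears."""
    return true_neg / (true_neg + false_pos)

# Suppose a test screens 100 kids with a language disorder
# and 100 typically developing kids:
print(sensitivity(true_pos=82, false_neg=18))  # 0.82 -> misses 18% of disordered kids
print(specificity(true_neg=74, false_pos=26))  # 0.74 -> wrongly flags 26% of typical kids
```

The point of Spaulding’s talk, in these terms, is that both numbers tend to be disappointingly low across published language tests – a test can look fine on one and still be poor on the other.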
Teresa Ukrainetz et al. asked “How Much is Enough?” while discussing how much therapy clinicians should be giving. There was a lot of info in this one: intervention gains seem better in the first four months than the second, Head Start is effective, teaching vocabulary using context and definitions together works better than either context or definitions alone, and the optimal range for most effective treatment dosage may be between 4 and 12 weeks.
Middendord and Buringrud discussed the SLP role in selective mutism. While counseling should typically be a large component, the presentation described a possible progression of therapy that can go from gestures to whispering to vocalizing nonsense words to vocalizing with soft voice and finally vocalization with full voice.
A group of presenters from the New England Center for Children described their program of incidental teaching in autism. They teach strategies to people who work with autistic individuals. In this program, incidental teaching is contrasted with discrete trial teaching, or ABA-type therapy, although both teaching types can be used depending on a student’s needs. Because many autistic children (and other children with early developing communication) lack the desire to communicate, incidental teaching can be extremely effective, especially considering that a strict adherence to ABA therapy may actually suppress this desire. In other words, one size does not fit all.
Tags: diabetes, Ginette Dionne, language impairment, research
Pregnancy diabetes doubles the risk of language impairment in study
This research, gleaned from this link from COMD news, was led by Professor Ginette Dionne of Canada’s Université Laval. Details have been published in the journal Pediatrics. The results showed that children born to mothers with gestational diabetes achieved lower scores on tests of grammar and vocabulary than children in the control groups. This difference is not inevitable, however, as children of more educated mothers were much less affected. Risk factors for gestational diabetes include the mother’s age and weight.
Tags: gestures, oral language, research, signs, teaching, young child
Study Looks at Whether Using Signs May Slow Language Learning
An international team led by Jana Iverson of the University of Pittsburgh compared language learning between Italian children and American children, after first determining that Italian children do grow up using more gestures. The study found that American children consistently use more words, and combine words more often. The difference, however, was accounted for by the larger use of gestures among the Italian children.
The main implication seems to be this – gestures don’t slow language learning, and they don’t negatively impact one’s ability to communicate within a society in which everyone uses a lot of gestures. As someone who works with young children with language impairments, I’ve seen a lot of practitioners encouraging the use of signs in children who weren’t verbally communicating. This gnawed at me, but I couldn’t put my finger on why exactly until reading Cognitive Daily’s excellent summary of this study. Teaching signs rather than oral language does not necessarily inhibit a child’s ability to learn language, but it does inhibit a child’s ability to communicate with others who don’t use signs. This is not a problem in Italy, where gesture use is the norm, but it does imply that in the U.S., where the norm is not to use signs, the use of gestures with a child struggling to learn language should only occur as an absolute last resort.