The twentieth century saw a slew of research studies that gained notoriety as much for the ethical boundaries they pushed as for the psychological insights they yielded. The attention garnered by these studies, which included the Stanford prison experiment, Stanley Milgram’s shock experiment, and John Watson’s “Little Albert,” helped highlight the need for ethical standards and review in research. The “Monster Study” was different, though. Its lessons were long swept under the rug.
In 1939, Wendell Johnson, who is now the namesake of the University of Iowa’s eminent Speech and Hearing Center, together with one of his graduate students, Mary Tudor, undertook an experiment to gain insight into the behavioral nature of stuttering. Specifically, Johnson sought to question the prevailing theory that stuttering’s cause was entirely genetic and that, therefore, little could be done to help stutterers therapeutically. While Johnson’s ultimate goal may have been noble, his methods and subsequent cover-up made this one, out of all of the era’s questionable studies, the one that was dubbed the “Monster Study.”
Tudor was sent to an orphans’ home to pick subjects to test Johnson’s view that stuttering develops when speakers are criticized for normal mistakes. Johnson, himself a stutterer, ultimately developed the diagnosogenic theory of stuttering, the view that the diagnosis itself causes the disorder, despite later hiding the research conducted to support it. Out of 22 test subjects, half (five previously identified stutterers and six identified as having normal speech) were given sessions every few weeks in which they were harshly criticized for every mistake, and during which Tudor tried to convince them that they were stutterers. Tudor also instructed teachers to be critical of this group’s speech. The other half were generally complimented on their speech. The experiment lasted nearly five months. While the exact effects were disputed, it is very clear that these children were harmed. Tudor attempted three follow-ups, and in later correspondence with Johnson she expressed remorse at not being able to reverse the study’s earlier “deleterious effects.”
And so for decades, few outside of the participants and Johnson’s colleagues at Iowa knew about the study. Meanwhile, the department named after Johnson grew into one of the more prestigious institutions of speech-language pathology in the world. Then a 2001 story in the San Jose Mercury News brought what had been local whispers of a “Monster Study” into the national limelight. The story was republished in newspapers across America, immediately igniting a firestorm of controversy. It also spawned litigation, ultimately leading to settlements totaling nearly a million dollars for three still-living subjects and for the descendants of three others.
Unfortunately lost in the details was Johnson’s big question: can you create a stutterer? Tudor and Johnson’s results were themselves mixed. According to their own ratings of the previously non-stuttering children, two kids from the normal group developed more stuttering, two didn’t, and two others were even marked as improved. While its effects on stuttering were ambiguous, the experiment clearly did have other negative consequences for its participants. By the researchers’ own admission, there were changes across multiple areas of behavior: increased shyness, tics, anxiety, and inhibition, and lowered self-esteem.
Most insidious, perhaps, were the results of Johnson’s own actions. Not only did he fail to publish results that were, at best, ambiguous toward his hypothesis, he continued promoting his view that caregivers are almost solely responsible for stuttering. As a direct result of the diagnosogenic theory, therapy was greatly reduced for decades of stutterers. In its place, therapists worked almost exclusively with parents. While we still don’t know exactly what causes stuttering, research has clearly indicated a strong genetic component that can be triggered or exacerbated by events in the environment. And critically, direct therapy can, and often does, help.
As a pioneering psychologist in the converging studies of cognition and learning, Jean Piaget helped change the common assumption that, as thinkers, children are merely less complex versions of adults. His twentieth-century work built upon the classical roots of Socrates and the more recent work of Lev Vygotsky, Jerome Bruner, and others who believed learning to be a process facilitated, rather than caused, by teachers. At the forefront of constructivist assumptions is the notion that the most effective learning takes place when learners are active and motivated participants in the process.
While constructivism as a system has been criticized as too subjective and difficult to manage, as with so many complex systems it has several components that stand out as applicable outside of the larger theory as a whole. The notions of assimilation and accommodation are two of my favorites. Assimilation occurs when a learner adds new information, basically layering it on top of the old. Accommodation occurs when a learner must change previously learned information before new information can be placed. Assimilation is like placing files in a file cabinet, while accommodation is like needing to add new folders or rearrange existing ones. Because of this, learning is said to get more difficult as we age, given the tendency of older people toward what has been dubbed “hardening of the categories.”
Piaget and the constructivists also coined all kinds of terms, such as schema and equilibrium, not to mention those associated with the famed stages of development: the sensorimotor, preoperational, and concrete operational stages. Piaget’s ballyhooed notion of object permanence (the understanding that an object exists even when out of sight) has been extensively studied and debated.
As with seemingly all mind-related theories, the popularity of constructivism has followed the pendulum of favorability. Many specific aspects of constructivism, though, should stand the test of time. Some additional good information can be found here. This, also, is kind of cool.
A study of survey responses regarding Specific Language Impairment testing was discussed in the April 2013 issue of Language, Speech, and Hearing Services in Schools. The authors basically found out which tests SLPs were giving and compared those to test characteristics such as validity and reliability. They generally found that characteristics such as the inclusion of multiple facets and testing time seemed to be more important in test selection than characteristics such as reliability, accuracy, and validity. SLPs also like to give single-word vocabulary tests. The synopsis is at this link.
There are many different terms and abbreviations used in discussing the topic of second language acquisition. Just some of these include second language learning, L2 acquisition, ELL (English language learners), and ESL (English as a Second Language). ESL and ELL are sometimes used interchangeably, and sometimes argued to be completely different things. ESL seems to be an older term that, depending upon the source, is either being phased out, or is continuing to be used to distinguish a specific pull-out program, as opposed to somebody in the general education environment who happens to not speak English. Some claim that ELL is more politically and technically correct, since English could be a third or fourth language. In all my years I’ve never experienced any language issues with a student learning English as a third or fourth language, but I suppose it is technically possible. Also, use of these terms seems to be different in different places. There is a good little description of ESL and ELL issues in this link.
One of the preeminent researchers in second language acquisition is Stephen Krashen. According to Krashen, learning is less important than acquisition. His theory includes five main hypotheses, which he’s labeled the Acquisition-Learning hypothesis, the Monitor hypothesis, the Natural Order hypothesis, the Input hypothesis, and the Affective Filter hypothesis. His Affective Filter hypothesis embodies one of his main views: that a number of affective variables play a facilitative, but non-causal, role in second language acquisition. These variables include motivation, self-confidence, and anxiety. You can find a lot of his stuff at his site.
Second language acquisition presents some interesting challenges for those who teach language. In school settings, speech-language pathologists are supposed to work only with students with disabilities. For students whose primary language is something other than English, this means that a language disability should exist in the student’s first language in order to qualify for services. Theoretically and legally, the disability should have nothing to do with the fact that the student learned another language prior to English. In the real world, it gets complicated. Some kids do all right with their first language in preschool, and then face problems as parents attempt to use more English at home. Maybe one parent speaks more English. Aunts, uncles, grandparents, friends, etc. all bring their own language preferences and abilities to the mix. Then there are things like code-switching, the switching between languages within a conversation or with different conversation partners. Commonly, these kids also display a silent period, in which they are so focused on comprehension that they don’t speak much. There can also be loss of the first language if it is not continuously reinforced. There have been controversies over the extent to which academics should be taught in one language over the other, as well as the extent to which English must be learned, and who is responsible. I think most experts agree that bilingualism is an awesome attribute. More info can be had here.
As an interesting aside, this recent study suggested that second language learners may have an advantage in learning to read compared to native language speakers. The study’s authors suggested that this may be due to an increased awareness in language overall – metalinguistic awareness.
This large, long-term study found that children had worse academic outcomes after being treated with Ritalin, a common medication used in the treatment of ADHD. A 1997 policy reform in Quebec expanded coverage and use of Ritalin, providing ideal conditions to study its use relative to the rest of Canada. Generally, there were few short-term improvements and worsened long-term outcomes, highlighted by increased grade repetition, lower standardized math scores, and more school dropouts.
One especially interesting consequence of increased Ritalin use was a large reported increase in unhappiness, especially among girls. The study authors hypothesized that increased Ritalin use, while decreasing adverse behaviors, also decreased the attention these students received from teachers. They surmised that use of these medications may be a substitute for more beneficial learning interventions.
A study summary from The Atlantic can be found here: http://www.theatlantic.com/health/archive/2013/06/study-ritalin-doesnt-help-academics/276894/
A link to the full study can be found here: http://www.nber.org/papers/w19105.pdf
This story from Advance is really cool. Christopher Merkley, a Speech-Language Pathologist, became known as the only “speaking specialist” in a large area of Africa. People would come from far and wide to see him, and because of widespread cultural beliefs, such as the notion that disabled people are possessed by evil spirits, he had to get permission from village elders to provide therapy. He gives other details, including descriptions of a clinic without electricity, very few supplies, and a local thirst for knowledge, that can help those of us in far different settings give our vocation some much needed perspective. Here’s the link: http://speech-language-pathology-audiology.advanceweb.com/Features/Articles/Speaking-Specialist.aspx
Engineers at the University of Washington are finishing cell phone software that can work effectively without hogging as much bandwidth as typical video-conferencing. This story from ScienceDaily reports that a field trial is nearing completion, with generally positive results. The new software specifically optimizes video quality around the face and hands, which makes the use of sign language on cell phones more practical for potentially more people.
Study Probes Connection Between Texting and Language Impairment - This study, from these people at the University of Manchester, finds that teens with language impairment (or SLI, to be specific) don’t use texting technology as much as their typically developing peers. The study authors surmised that this relative lack of texting is caused more by societal factors, such as shyness and a lack of friendship networks, than by lack of ability.
Doctors and Screenings – Good; Doctors and Referrals – Not So Good – A report spearheaded by Johns Hopkins Children’s Center shows that while pediatricians may be doing a good job of screening kids, referrals for further assessment often go unheeded. The study recommended that instead of placing referrals in the hands of parents, these referrals should be made directly to specialists. My information comes from this link from Science Daily.
Study Challenges Current Thinking on Language Evolution – Again from Science Daily: According to a statistical analysis of more than 2,000 of the world’s languages, languages may evolve more like biological organisms, and less through random forces, than previously thought. The bullet synopsis is that the more people speak a language, the simpler the language becomes. The researchers called this the “Linguistic Niche Hypothesis.” One possible explanation is that simplicity holds an evolutionary advantage over complexity, particularly when children learn languages. It should be noted that simpler languages are not necessarily inferior languages; they simply lack features that aren’t as necessary, such as elaborate gender marking. Psychologists from the Universities of Pennsylvania and Memphis conducted this analysis. More info can be found at this Penn site.
Children Make up Their Own Rules To Help Them Learn Language – This study used computer analysis to theorize that early language development follows formulas that children generate on their own, rather than specific rules governing such things as nouns and verbs, as linguists have traditionally thought. Or as I’ve simply put it, in language development, Form Follows Function. Leading this work was Colin Bannard, at the University of Texas, and Elena Lieven and Michael Tomasello, two colleagues working at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. The more in-depth article can be found at the University of Texas site.
Cognitive referencing is the practice of using IQ scores to establish eligibility for special education services, specifically in areas of language and learning disabilities. It’s often called by its gentler label, the “discrepancy model.” Many others disapprovingly call it the “wait to fail” model. Cognitive referencing has been denounced by groups such as the American Speech-Language-Hearing Association (link), the President’s Commission on Excellence in Special Education (2002), and, very explicitly, the U.S. Department of Education (link, pg. 31). It has been eliminated in many states, but persists in many others. Even those who don’t come right out and denounce this practice (as they should) state that it should be only one component of a larger process used to determine eligibility (e.g. this CEC link). The problem is that wherever it is used, the IQ-academic discrepancy becomes the sole method of determining eligibility in nearly all cases. In my state of Missouri, our state law very specifically mandates this discrepancy, unless a school district is willing to go through much expense and work to use other methods, such as RTI. My guess is that 99% of kids tested for LD and Language Impairment in our state are qualified using only the IQ comparison.
Despite its prevalence, cognitive referencing is wrong on many levels.
- It uses a single IQ score, ignoring standard error of measurement. A kid who scores 80 may actually have a “true” IQ of something like 85 or 90, but could have performed poorly on that one day, for various reasons. Tough luck for that kid. An IQ score of 80 usually means that your academic or language scores have to be 58 or lower (a 22-point discrepancy), an extremely difficult thing to do.
- By using IQ at all, the assumption is that this is as good as a kid can get. That was the initial rationale for the discrepancy model, way back before we knew better. Now we know that IQ can go up (or down) in relationship to environmental factors. (When IQ scores of large groups of children are studied, IQ scores do tend to remain stable, especially in older children. However, this obscures the fact that a smaller percentage of children do show substantial IQ fluctuations over time. For more on this interesting topic, see Sigelman and Rider, 2008.)
- IQ and language are correlated. Vocabulary and IQ especially correlate well. This means that children with low language scores tend to have comparably low IQ scores. It is virtually impossible to obtain a low IQ score and say that language difficulties didn’t have something to do with that score.
- Kids with certain scores are especially difficult to qualify for special education under this model. Whenever a child scores in the 70s, you can just about rest assured that the kid will not qualify, and that you will be testing that kid again, perversely hoping that the academic and/or language scores have fallen enough to qualify the next time. In effect, a child is punished for having an IQ score that just happens to be in that one certain range.
- IQ scores can set artificially low levels of expectation for kids, teachers, and parents. IQs describe obstacles, not limits. It may be harder for someone with a lower IQ to learn, but it is never impossible. Only comatose or dead people can’t learn, and IQ scores too often allow somebody to say, “Well, he’s achieving close to his level.” IQs can provide a stimulus to somebody with a high IQ who is not motivated to learn, and can provide a bit of insight into why a particular student may be having trouble learning, but to withhold help from a child because of a lower than average IQ is at the least dishonest, and borders on unethical.
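To make the arithmetic in the first bullet concrete, here is a minimal sketch of how a single-score discrepancy check works and what it ignores. The 22-point required discrepancy and the roughly 5-point standard error of measurement (SEM) are illustrative assumptions, not any state’s actual formula:

```python
# Hypothetical sketch of a discrepancy-model eligibility check.
# The 22-point discrepancy and ~5-point SEM are illustrative
# assumptions, not a real state's criteria.

def qualifies(iq_score: int, achievement_score: int,
              required_discrepancy: int = 22) -> bool:
    """Single-score check, as criticized above: it treats the
    observed IQ as exact and ignores measurement error."""
    return iq_score - achievement_score >= required_discrepancy

def true_score_band(observed: int, sem: float = 5.0) -> tuple:
    """Roughly 68% confidence band around an observed score
    (observed score plus or minus one SEM)."""
    return (observed - sem, observed + sem)

# A child with an observed IQ of 80 must score 58 or lower:
print(qualifies(80, 59))    # False - misses the cutoff by one point
print(qualifies(80, 58))    # True

# Yet the "true" IQ could plausibly lie anywhere in this band:
print(true_score_band(80))  # (75.0, 85.0)
```

The point of the sketch is the last line: a hard cutoff applied to a single observed score decides eligibility on differences smaller than the test’s own measurement error.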
So how can this horrible practice persist? For starters, no state has been forced to abandon cognitive referencing. It is almost amazing that so many have abandoned it anyway, considering the financial implications of having to provide more help to kids. The main excuse given for continuing the discrepancy model seems to be that nobody has come up with anything better. I don’t really understand why this practice hasn’t been challenged in court. Perhaps someday somebody, such as these special ed lawyers with a great web site, will.
That cognitive referencing can continue to exist is a symptom of a larger problem in our society. We attempt to find labels and categories to justify providing (a good thing) or withholding (not so good) help to kids that could really benefit from extra help. In my opinion the most ethical method of providing special education services would be to establish a bare minimum of expected competence in various areas, and at least offer to help any child achieve the next step toward reaching that bare minimum. If this were to happen those of us in special education might then be able to spend more effort looking for ways to help, and less time looking for excuses not to.
- Testing takes too much time.
- There is too much pressure to teach to the test.
- Tests measure limited aspects of a student.
- Tests ignore standard error of measurement.
- Testing increases anxiety and stress.
I don’t think I even have to write an introductory sentence for this post – if I did, it would be something like, “The way group testing is done now creates a lot of problems.” It’s become almost cliché to say that No Child Left Behind’s emphasis on testing has created a lot of headaches and hassles. The testing emphasis, and the accompanying problems, have been shared by other countries. Research has been accumulating in support of the mountain of testimony from educators, and even the general media has joined the bandwagon. (For example: CBS news story; Boston Globe article; UK Daily Telegraph study story) Everyone agrees that accountability is a good thing, and there’s only one way to measure how our children are learning. Well, actually, there’s something wrong with that last part… There is another way: individual testing.
I’ll go ahead and get my bias out of the way, because I am a diagnostician. I test students for speech and language competency in order to decide special education eligibility, and to help plan appropriate speech and language therapy. I work with a team of other diagnosticians serving 13 school districts. Most students that we test receive IQ and educational testing, and probably two-thirds get speech and language testing. I am not exaggerating when I say that when we finish testing a child, parents, teachers, and the student know that child like never before. We can tell exactly what’s wrong, and exactly how to fix it. Individual testing trumps group testing in so many ways. Individual testing specifically…
- takes less time with greater accuracy.
- is impossible to teach to the test.
- can measure any educationally relevant aspect of the student that we want.
- takes special circumstances into account.
- produces less anxiety.
Additionally, individual testing …
- specifically measures progress (or lack of) in very specific areas.
That’s the only bullet there, but it’s important enough to merit its own list. Put another way, this means that when we are able to test kids this way, we can determine exactly what a student knows, and what a student should know but doesn’t. We can also tell what’s developmentally appropriate for each student to learn next.
So why don’t we just test each kid individually then? Well, it would require a lot of change – change sparked and implemented by bureaucrats in an educational system who would only do so in response to mandates from politicians in a government who would only mandate in response to political pressure, which would require much greater media attention. As the ongoing attempt to overhaul health care has demonstrated, real change in our country is often extremely difficult. Especially systemic change. And even when the need for change is obvious.
Here’s some recent language learning news that I’ve found interesting:
Talking helps language development more than reading alone – Although the conclusion of this UCLA study seems almost blatantly obvious, there is a significant implication: the importance of talking to children has been obscured by the recent emphasis on reading with children. The study found that back-and-forth conversation was strongly associated with future improvements in a child’s language score. In contrast, adult monologuing, such as monologic reading, was more weakly associated with language development. TV viewing had no effect on language development, positive or negative. The study’s lead author, Dr. Frederick J. Zimmerman, noted, “What’s new here is the finding that the effect of adult-child conversations was roughly six times as potent at fostering good language development as adult speech input alone.”
Inattentive behaviors in young children with autism predict lower later language development – The authors of this study, from the University of British Columbia, looked at autism from a different perspective than most previous research. Rather than focusing on social and linguistic aspects of autism, the authors looked at five types of inappropriate behaviors and how these behaviors predicted later language development. The study looked at some behaviors that parents and teachers frequently focus on, such as acting out, resistance to change, and socially unresponsive behavior, but the one that best predicted later language difficulties was inattentiveness. This is strikingly significant for autism intervention. Why is inattentiveness such a large problem? Creating a desire to change is critical with these children. Often, current intervention practices target making autistic children communicate (such as in ABA therapy), instead of trying to convince these kids to want to communicate.
Gene found to be associated with language, speech, and reading disorders – The gene in question is found on Chromosome 6. The significance is that variability in the gene was associated with both language and reading disorders, but not other disorders, such as autism or hearing impairment. Mabel Rice, from the University of Kansas, Shelley Smith, from the University of Nebraska, and Javier Gayán of Neocodex, Seville, Spain led a team of researchers that is part of a 20 year research program that is being funded by the National Institute on Deafness and Other Communication Disorders, one of the National Institutes of Health.
Although the title of this post sure looks like a set up for some boring educational acronym, it really describes making learning fun. More significantly, it describes using fun to teach. The purpose of the bureaucratic-looking title is to appease the administrative types who sometimes struggle to understand why it is often in the best interest of our students to use teaching methods that are actually fun. I could have called it “Goal Directed Teaching,” or “Learning for a Reason,” or “Why’s Before Whats,” but these other possibilities simply don’t seem to fit as well.
Achievement-oriented instruction is when a teacher provides a goal that requires the student to use a targeted skill to accomplish something. This is not quite functional teaching, and it’s almost the opposite of drill. The goal itself provides the motivation, and for this reason the choice of the goal is critical. It is perhaps as important as, or more important than, any teaching method that may be used. And this is how achievement-oriented instruction most differs from traditional teaching.
Here are some examples that may best serve to illustrate my overall point:
| Topic | Traditional Teaching | Achievement Based Teaching |
| --- | --- | --- |
| addition | teacher instruction/ text book/ worksheets | using jelly beans, pennies, etc. and asking motivating questions, such as “Would you like two more, or six all together?”, etc. |
| prepositions | discussing prepositions/ worksheets | asking preposition laden questions while playing hide and seek, hidden pictures, Simon Says, etc. |
| parts of speech | sentence diagrams/ teacher instruction/ worksheets | Mad Lib style activities, separating students into parts-of-speech teams that score points for correctly identifying parts of speech, etc. |
| typing | | internet typing games, practice typing labels, letters, etc. |
As you can see, the achievement based teaching column contains more possibilities, and an “etc.” The only limit to how far one can go in the final column is the teacher’s imagination. The more creative and varied the activities, the more salient the learning. This should not in any way disparage traditional teaching, however. Another way to put it is that traditional teaching relies on expectations, while in achievement based teaching the learning is elicited. The student constructs his own expectations, and uses specific targets to achieve these expectations. Expectations and elicitations are both critical when teaching.
So when an administrator comes in and sees you playing a game with your kids, if you did this kind of teaching, you could say: “You caught me on my ABT day. Some days I do drill, some days I do direct instruction, some days worksheets, and about half of the days I do activities specifically designed to elicit my students’ target skills. It just so happens that fun motivates.”
Conjunctions are an important means of extending sentence length and complexity, because they are a common method of joining words or parts of sentences together. Coordinating conjunctions join independent clauses (as well as words and phrases), while subordinating conjunctions join dependent clauses to independent clauses.
The acquisition and frequency of conjunctions have both been studied extensively. Among the findings are that the word and often initially takes on the role of other conjunctions (Bloom et al., 1980; Scott, 1988; cited by Owens, 1996). The conjunctions but, so, or, and if are soon acquired by typically developing children to serve functions that and isn’t as easily able to achieve. Conjunctions like because then develop to express not only a relationship between sentence elements, but also a temporal sequence. According to one estimate, by the time a normal child’s mean length of utterance reaches 5.0 (at an average age of 4 to 5 years), 20% of the sentences used in spontaneous speech contain embedded or conjoined clauses (Paul, 1981).
Language itself doesn’t require conjunctions, but effectively communicating advanced ideas usually does. As with other language modalities, conjunctions exist because they assist. We use them to achieve a goal. Just try giving a reason for something without using the word because, or try describing the time relationship between two completed events without using conjunctions such as before, after, or then. It can be done, but much less effectively.
Generally, developmental order of conjunctions is determined by the complexity of the relationship the conjunction serves. Conjunctions appear frequently in assessments such as the CELF, CASL, OWLS, and SPELT. Also, Conjunction Junction is a timeless piece of art.
That’s the gist of a new study by Lizbeth Finestack and Marc Fey from the University of Kansas, published in the August ’09 American Journal of Speech-Language Pathology. Their study compared 6- to 8-year-olds assigned to either a deductive training group or an inductive training group. A computer program was used to teach a specific aspect of an invented alien language. The deductive training group received explanations, i.e., a brief description of the target. Both groups were made aware that the alien – “Tiki” – used many of the same words that we use, but that the alien language also contained something different: in this case, different word endings for male and female verbs. The kids in the deductive group were told that when it’s a boy you add -po to the end, and when it’s a girl you add -pa to the end. The kids in the inductive group were supposed to figure it out on their own; in other words, they were required to use inductive reasoning.
Finestack and Fey’s results showed that significantly more kids in the deductive group acquired the target. They concluded by asserting that generally, the most efficacious treatment may be one that combines natural language approaches with explanations. For those with access, here’s the link.