I’ve recently re-read this article from October’s Discover magazine, an extremely interesting read concerning genetic influences on developmental delays. It was interesting enough that I wanted to comment on it here. Unfortunately, the article is behind a paywall, so I’ll first summarize a few of the more intriguing points and then follow up with my comments.
The author, Mark Cohen, a developmental pediatrician, describes comparing the developmental problems of a boy with velocardiofacial syndrome (VCF) with those of another boy with DiGeorge syndrome. The boy with VCF was brought to Cohen’s clinic at age 2½, before his mother knew that he had VCF. The child had begun demonstrating moderately severe delays in speech, language, and learning generally, and she didn’t know why. Cohen was able to diagnose the boy’s VCF from additional clinical information that normally requires a trained specialist to recognize, and the diagnosis was later confirmed by genetic testing. The mother was not only relieved to discover that her son’s problems were not her fault; it also became possible to devise a more adequate plan for future treatment.
This all got Cohen thinking – specifically, remembering a boy he’d once treated who had DiGeorge syndrome, a much more serious disorder that usually presents with multiple medical issues and more severe mental retardation. What struck him was that both VCF and DiGeorge syndrome are caused by chromosomal deletions at the exact same chromosomal location; in fact, both syndromes are referred to as 22q11.2 deletion syndrome, even though the patterns of abnormalities they present with are often quite different. The two syndromes have very similar causes but far different outcomes, simply because in the more severe syndrome slightly more genetic material is missing. Historically, however, DiGeorge syndrome has been far easier to diagnose than VCF.
The implication here is that there must be countless other children like this first boy – children with now-diagnosable genetic conditions that are going undiagnosed. Anybody who works in special education knows that some children respond well to a certain amount of extra help while other children just don’t. For so many children, we simply don’t know exactly what’s going on. Without adequate information, conjecture often takes an outsized role in therapy planning. Sometimes teachers suspect that the root of a child’s issues lies in that child’s home life, while other times members of the planning team sense that something unidentifiable is amiss. When I was in graduate school many years ago, we were taught the phrase “FLK” for “funny looking kid,” which, while definitely not politically correct, at least underscores the issue here. These were children that everybody suspected had genetic causes for their developmental delays, but it was impossible to know for certain.
Times have changed dramatically in our ability to figure these things out. In the United States, however, cost remains prohibitive. This New York Times article does a great job of spelling out the issue – basically, the companies that perform these tests are extremely vigilant about patenting, reducing competition so that they can charge whatever they want. These companies do point to what they consider comparably high costs elsewhere, especially in Europe, and other factors certainly muddle the picture, such as the overall infrequency of testing and its labor intensiveness, which also drive up costs. The bottom line, though, is that this testing can provide extremely valuable information to people of monetary means that remains unavailable to people with less disposable income – and that includes children. And there do seem to be solutions that could drive down the cost of genetic testing, such as having governments essentially buy the patents, opening up the market to competition. It will be interesting to see how this plays out as yet another possible instance (like the convergence of mass media, campaign finance, U.S. health care, Wall Street, etc.) of one group’s monetary interests conflicting with what seems to be the greater good.
An excellent post. Ultimately, communication should be a rewarding experience. My experience is that ABA can be effective when it’s used as something of a last resort, and when it’s used with the intent of phasing it out as soon as possible.
Originally posted on Unstrange Mind:
This week, I watched a community implode. I’m not going to talk about that, though, because it was very painful to watch people I love being treated so badly. But a lot of the implosion centered around a topic I do want to talk about. That topic is ABA – Applied Behavior Analysis, a common type of therapy for Autistic children. I watched people fight around in circles, chasing their metaphorical tails. It will take some time and lots of words to unpack this topic, but I hope you will stick with me on this because it’s so important and there is a lot that needs to be understood here.
Here’s the argument in a nutshell. It gets longer, angrier, and much more detailed than this, but I am exhausted just from reading the fighting, so I’m boiling it all down to two statements. And both statements are correct.
Language is a frustratingly complex symbolic system, yet it is used effectively by small children. Brilliant minds in disparate fields have tried to explain language thoroughly, yet a complete understanding of it remains frustratingly beyond our grasp. It always seems that just as we reveal something new about language, just as we appear certain to construct some edifice of ultimate linguistic understanding, those damn counterexamples keep cropping up – as if in language, as with nothing else, there beckons a holy grail of scientific system building and yet, almost schizophrenically, nothing can be scientifically secure. Why is this?
There are several reasons. A big one is that words, or any other units of meaning, do not represent some eternal, scientifically measurable truth. Instead, they represent our beliefs about reality. Language is just a lot more subjective than we often realize. As a reflection of our beliefs rather than of reality itself, it is imprisoned by the subjective constraints of our minds. Thinking of our words and meanings as mere tools can help us avoid conflating them with the things they describe.
And then consider the extent to which everything one person believes intertwines with the belief systems of every other person. Our belief networks are incredibly complicated, creatively elaborate, and impossible to completely understand. No one person seems to completely understand every one of his own desires, memories, and beliefs. When one person tries to understand those of another person, the task becomes exponentially more difficult, and more difficult again when trying to understand the beliefs of groups of others.
I’ve read a lot of language philosophy, much of which I’ve found fascinating. A large part of it, however, seems to muddy the layman’s waters. To me, much of language philosophy seems concerned with creating systems to describe how we use language, and then with addressing the problems that other philosophers find in those systems. Explanation becomes layered upon explanation, creating a complicated morass that is practically unrecognizable to people outside the field. The heated debates usually deploy contested definitions of things like reference and sense, and scads of linguistics-specific jargon, to argue over questions such as: What is meaning? How do words work? How do words refer to objects? These are topics that often seem trivial to outsiders.
Many of our confusions and disagreements are complex versions of this simplified example. Let’s pretend that I have a job sorting papers with colored swatches into two piles – blue and red. All papers must go into one pile or the other. The job is simple and mundane for a while. Soon, though, I come across a color sample that is a blend of red and blue, something like this:
Some of our most subversive problems with language occur when we are incapable of creating new piles. If I am confined to only two choices, then when I come across new examples of something, consensus becomes impossible. I could ask hundreds or thousands of people whether this color sample belongs in the blue pile or the red, but I will never get 100% agreement if I must keep to the two original options.
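For the programming-minded, the forced two-pile sort is easy to make concrete. This is only an illustrative sketch; the RGB values and the nearest-color rule are my own assumptions, not anything from the sorting job described above:

```python
def sort_swatch(rgb):
    """Force a swatch into the 'red' or 'blue' pile, whichever is nearer."""
    red, blue = (255, 0, 0), (0, 0, 255)

    def dist(a, b):
        # Squared Euclidean distance between two RGB colors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # Ties break arbitrarily in favor of 'red' -- and that arbitrariness
    # is exactly the point: the rule decides, not the color itself.
    return "red" if dist(rgb, red) <= dist(rgb, blue) else "blue"

print(sort_swatch((255, 40, 40)))   # clearly red
print(sort_swatch((40, 40, 255)))   # clearly blue
print(sort_swatch((128, 0, 128)))   # purple: equidistant, yet forced into a pile
```

The purple swatch lands in a pile every time, but only because the rule was written to put it somewhere; nothing about the color itself makes the answer right.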
If I ask a question such as, “Is there free will?” I am doing something similar. I am ignoring the possibility of creating new ways of describing that may be more accurate than the old. You can say, “Yes, there is free will,” or you can say, “No, there isn’t free will,” but either way you are ignoring the other possible ways of describing reality. This is critically important. Language is a tool that belongs to people. No words or concepts exist independently of people, and the frequent failure to realize this too often becomes a brake on the potential progress of human thought. These are more than just fallacies of false choice; they are linguistic shackles on the human intellect.
The skill of pragmatic judgment, as it is commonly known, involves forming appropriate social language responses – in other words, saying the right thing at the right time. This is not always easy to measure or even verify. Pragmatic judgment also involves prior knowledge, including knowledge of a conversational partner’s prior knowledge. Sometimes social language involves initiation (e.g., “Hello”), while other times it involves a response (e.g., “You’re welcome”). At times it is appropriate not to respond in an expected manner, as when conversational maxims are deliberately flouted for purposes such as sarcasm or intentional overstatement or understatement (e.g., Grice, 1975).
Frequency and effectiveness of social responses have been shown to significantly affect aspects of life as diverse as interpersonal relationships and occupation (e.g., Swann and Rentfrow, 2001). Pragmatic competency is assessed through activities such as recognizing appropriate topics for conversation; selecting relevant information for directions or requests; initiating conversation and turn-taking; adjusting communication to situational factors such as age or relationship; using language to express gratitude, sorrow, and other feelings; and judging the pragmatic appropriateness of the language behavior of others engaged in these activities (Carrow-Woolfolk, 1999).
Commonly used assessments of pragmatic judgment include the TOPL and the CASL. Informal assessment in natural settings may provide more reliable information about pragmatic judgment than formal assessment does. Helping teachers complete pragmatic checklists, such as the one provided with the CELF-5, yields information about skills specific to the classroom. Read on for a very shortened hierarchy of possible pragmatic judgment goals.
Chances are that if you have a kid with a language disorder, you have a kid with verb tense problems. Verb tense overlaps with many language skills, such as subject-verb agreement, production of infinitive verbs, irregular past tense, question formation, and helping verbs. Research suggests that omission of tense markers (“zero marking”) is the most prevalent kind of tense error in children with SLI (Marchman, Wulfeck, & Weimer, 1999). Tests that assess verb tense include the OWLS, CASL, CELF, PLS, and SPELT.
A longstanding gripe with language disorder research is that much of it has often been irrelevant to the actual teaching of language. And by often, I mean nearly always. A lot of the research seems geared toward one isolated characteristic of one subset of one small segment of people, applicable only to that particular population. The similarity of the following fake titles to actual titles may help demonstrate my point…
- “Toward Understanding Morphologic Tendencies in Left Handed Nicaraguan Preschoolers.” or
- “Past Tense Comprehension in Bilingual Adopted Children: A Conceptual Framework.”
It has long seemed that these are the sorts of titles that populate the most commonly read sources of language research. And that’s when these journals even decide to address language at all, which has long seemed much less common than research addressing, say, hearing, or voice, or stuttering. And this is too bad, because there are many, many language-related questions out there that could be addressed scientifically and that would actually be useful in teaching language. So what might these “practical” studies look like? Here are some ideas I’ve had.
Question: Do twins often have one member with more language deficits than the other?
Implication: This often seems to be the case. Anecdotally, one twin often seems to speak for the other, almost creating deficits in the less talkative twin. If research supported this hypothesis, we could prepare for it with extra early intervention and assistance for parents of twins.
What follows is an abbreviated list of some linguistic aspects of the English language along with their functions. As with any system of symbols – drawing, math, language – this list conveys an interpretation of reality, not reality itself. Just as the Mona Lisa is not the actual woman who modeled for it, just as the squiggles on a map of England are not the actual England, and just as buildings and roads are made of bricks and mortar rather than the equations used to design them, this list is merely a representation. Significant overlap exists among its parts. And as with any symbolic representation, any part of it that does not assist may reasonably be ignored. Keep in mind that the functions answer the question of why we learn the units. For example: Why do we learn idioms? To add flexibility, creativity, and social status to our language.
Earlier, I was tutoring some speech kids working on reading. I just happened to have some flash cards targeting comprehension – oral or reading – of specific structures. These kids needed help with reading more than with oral language, and because I don’t have a lot of materials targeting reading, I decided to use the cards. One deck had about 12 cards with negative contractions (can’t, aren’t, isn’t, etc.), and the other deck had regular plurals.
Bottom line – this activity rocked. The kids missed the first couple. I told them to focus on “those tricky word endings” (you know, the kind that so many speech and language kids miss in oral language), and after struggling with the next few cards, they were getting it by the end with no problems. They’d improved right then and there.
That got me thinking. These kids didn’t have deficits with plurals and contractions in oral language, but they did in reading, and I’d bet they did in writing too. They used to have these kinds of errors in oral language, and we know from the research that young kids with speech and language deficits often turn into kids with reading deficits. I’ve never seen anybody target specific language structures like these in reading, but I’m pretty sure it would be a good idea.