Some teachers speak the target language as much as 90% of the time, others as little as 10% of the time. But oddly, on year-end assessments, students from the second teacher’s class – the kids who rarely hear the language in class – often show gains not far from those of the first teacher’s students, the kids who hear the language most of the time.
Why can’t assessments pick up any difference? Could it be true that the second teacher’s students have acquired as much of the language as those of the first teacher? What goes on in assessment in our country to create such similar results when the pedagogies used are so dissimilar?
Some would have it that the 90% group just didn’t learn anything. I suppose that is possible, but I find it unlikely. Comprehensible input done poorly is, after all, still the target language. And thousands of teachers have been using input methods for over ten years now, and they are getting better at it every day as the sap continues to rise.
I suspect that the problem lies in the instruments and the people who design and use them. For decades, assessment has been essentially the same – an analytical, fill-in-the-blank kind of assessment, a strange, inauthentic game of making kids look like they know more than they actually do.
[ed. note: last summer on this blog I challenged the Coalition of Essential Schools leadership to show proof that the language departments in their schools nationwide indeed align with the Ten Common Principles, which read like a manifesto of Krashen’s ideas. Perfunctory contact with the editor of their yearly magazine ended in no response. They didn’t respond because they couldn’t – the CES WL teams claim to align with Ted Sizer’s ideas but they do not. It’s a bust. Search Coalition of Essential Schools for more on this. I mention this because what I am writing here makes me think of their hypocrisy – sorry if you don’t see the connection, but hey, it’s a blog, a place for opinion and discussion, an attempt to put some dynamite under the concrete that we all labor within in our nation’s WL departments.]
In my opinion, the reason CI-trained kids may be matched on these tests by the 10% kids is that the typical end-of-year instrument measures knowledge that can be prepared for. Teachers routinely spend their classes – each class, all year – preparing for the test.
Let’s look at the listening portions of these tests. Paul Kirschling (Thomas Jefferson High School, Denver) once suggested to me that, on these tests, nouns count more than verbs. That is to say, the tests are written in such a way that merely picking out some basic vocabulary is enough for a kid who has memorized a list of words to answer the question successfully.
If a kid has memorized words like fish, teach, pen, and desk, for example, then even though each listening selection (I’m talking about level 1 here) may contain 99 other words, the test is designed to ask the child not to decode the passage but to connect the one word they do understand with a multiple-choice answer. The kid can know only that word and still get the question right. In no way does that measure whether language has been acquired.
I am sure that if I were force-fed and tested that way in a Russian class over the course of a year – a mildly bright student, motivated by my parents to get an A in the class – and then heard 100 words of Russian CI on a test, I could trap that one word and connect it to a picture or whatever. Would I understand Russian? No, because I never really heard it all year, but I could look like I did.
Other test designs are similarly deceptive, using similarly test-taker-friendly formats. Who writes these tests? In most districts, I think it is the same teachers who teach the classes. The hen guarding the hen house, as it were, making sure that the assessment in the spring doesn’t stray too far from the chicken feed the chickens have been fed that year. As one person said, “Where’s the beef?” I’ll tell you where the beef is – it’s in the CI.
Were such 10% kids to face an assessment instrument in which they actually heard the language spoken in, say, a twenty-minute passage, and were then questioned on the specifics that result from actual acquisition (specifics detailed in the smaller texts found in the ACTFL standards as well as in the new state standards in CA, OR, and CO), they would not understand much of anything. How could they, after not really hearing the language all year?
It is almost as if the writers of the tests, the book companies, and the teachers all conspire to address a basic vocabulary and then test on it, without ever developing in the kids the ability to actually decode the language. Decoding the spoken language, it seems, sits very low on the list of skills kids are asked to have.
Robert Harrell recently pointed out that the new California standards are still not published on the state’s Department of Education website, and have sat in that dormant state for over a year now. I don’t think that is an oversight. Even with new standards, it is business as usual for these teachers and the book companies. What has changed?
However, kids who are able to hear the spoken language in their classes:
– sign up for the next level much more readily
– go beyond mere learning to actual acquisition
– seem to be happier with their language selection
– “get” the deal about CI, once they’ve experienced it when done by a competent CI practitioner
It’s kind of like market research. Companies can do all the research in the world, but at some point they have to put the product on the market, and people either buy it or they don’t. Kids buy CI (when properly sold), and they only buy 10% language instruction when they have to and there are no adults around to lobby on their behalf. Except for the four percenters.
The sad part is that the teachers who so jealously guard the test-design turf in their districts are probably the very teachers who would most enjoy incorporating more CI into their teaching. It’s not that difficult, but, alas, it is apt to be perceived by some of them as personally insulting.
The Problem with CI
Jeffrey Sachs was asked what the difference between people in Norway and in the U.S. was. He responded that people in Norway are happy and