In our department we have been wrestling with the question of what to require when in the curriculum. Do we agree that first-year students should only be asked to demonstrate comprehension? At what point can/should we require output, at what point should we assess it, and by what means?
I suppose we could refer to research on the subject, but I prefer to conduct my own. To that end, I have done two output-based activities with my first-year students recently, and analyzed the results carefully. Here are some of my conclusions:
1. Some students can demonstrate close to 100% listening and reading comprehension and yet still produce very little in the way of spoken or written language.
2. Any one student’s spoken or written language can in one moment be breathtakingly good and in the next moment breathtakingly bad.
3. Asking students to write or speak before they are ready makes them feel stupid and inadequate.
4. Asking students to write or speak when they do have some skills is a huge self-esteem booster.
5. In any given class, there will be wide variation in vocabulary repertoire: some students will have acquired phrases and words that were hardly used in class, depending, I suppose, on their interest in the phrase or their relative level of engagement on the day it was used. In other words, reading their papers, it would not be hard to believe that they came from different classes.
All of this points, of course, to the idea that language acquisition is highly individual, both in terms of the rate of acquisition and the specific content acquired. My conclusion is that I cannot in good conscience make output any part of a first-year student’s grade, except perhaps to award extra credit points to the superstar who doesn’t really need them but would appreciate them.
This is not to say that we can’t have output activities, but even here I have to be careful (see #3 above).
I need to put this in words and to hold myself to it. To do otherwise is to spit in the face of my professed philosophy of meeting the students where they are, and being responsive to their developmental and emotional needs.
Feel free to challenge this. I welcome other viewpoints.
[ed. note: Hanna W. – this is your chance to argue for the other side of this argument]
15 thoughts on “Anne Matava – On Output”
This is really helpful to me!
Can you (or anyone) share what kind of assessments you use? I have been using translation assessments. In my first-year classes, after we do a story (which I feel I don’t do enough), we do a reading based on our structures. I use the reading as a basis for the quiz/exam. They are worth a bit, as I give points by the word (!) in translation. I also include some personal questions (output), which of course they have more problems with. The personal questions are only worth 2 points each (and I usually have 2 or 3), so they don’t make or break the grade if students understand the translation. I give partial points for answers that show me they are understanding (e.g., using the wrong form in the answer but the right verb, etc.). I always get at least 80/80 in every class.
I would love to know what y’all are doing. I hope I am on the right track, but I am always looking to improve, and I gain so much from everyone in this group!
One last question: our dept. requires a written final, oral final, and listening final. If we should not test them on output, the speaking final seems a waste of time. I am trying to figure out what I will do this year. In the past I have had them write a short children’s book (without pasting in the text until after class) and have them share those with their classmates during the time for the oral final. This way they are all at least speaking the language for the entire hour and doing it one on one with other students (and me) in class. Afterwards they paste in their text and turn books in for use with beginning students the following year.
Thanks Anne and everyone for your thoughts and ideas in this area!
Hi Ruth, I’m no expert, but when I’m assessing for comprehension, I don’t ask questions that require output. Translation is fine, as are true/false items and one-word responses to who/where/what/when questions. It is hard to stick to, especially when your school wants to see you asking questions from the higher levels of Bloom’s taxonomy. But if I really believe that comprehension is all that I can in good conscience require of first-year students, then my assessment must reflect that.
We are still in the process of determining when to expect/require/assess output. I’m thinking somewhere in the second year, maybe halfway or three-quarters of the way through. This is such a difficult point because my students are at very different places with it. My inclination is to begin to ask them to write a bit, but arrange the grading system to favor each individual’s strength. If someone is scoring 100% on comprehension tests but still having trouble getting sentences out, in writing or in speech, I must be careful not to penalize them too much, grade-wise. In my experience, those kids will be able to produce output, sooner or later.
In fact, I remember hearing somewhere (Susie Gross or Blaine?) that there is research indicating that those students with longer “silent periods” end up speaking with greater accuracy. It makes sense, really.
So you see we have way more questions than answers about this. I’m really hoping that people who have been using TPRS longer than we have will weigh in.
Ruth, regarding your school’s requirements for a final exam: I think your idea is fine. When the school requires us to do something that goes against what we know to be best for the students, it behooves us to jump through their hoops as best we can, ideally in a way that gives the students some kind of success. It sounds like you already have that figured out.
It seems to me that there is much about TPRS that doesn’t fit into the model that is presented to us by the powers that be. Flying under the radar, putting together a package that satisfies the suits, not using up energy railing against the BS—these are all ways of dealing with it. Do the necessary with a minimum of emotional reaction or resistance, and get on with the business of helping students to acquire language.
Your clarity and honesty on this subject are breathtaking. I have a lot to say but wanted to say this first:
• As Anne states, forced, graded output will make many students feel inadequate and CAUSE IRREPARABLE EMOTIONAL DAMAGE. This damage will not only prevent many of them from continuing with language learning opportunities, it will turn them into individuals who will be “anti-second language advocates” for the rest of their lives. Since these folks are our future parents and voters, this is VERY VERY damaging to the future of the United States.
Output is frequently used as a measure of evaluation. I am not a fan of this method because many people (myself included!!!) do not have a good grip on how to make an output-formatted evaluation that effectively measures anything.
That being said… here are my heretical thoughts on output. :o)
I used to think that output was one of the major goals of language instruction. The assumptions behind it were that:
• Students learn to comprehend and to produce language at the same rate.
• The rate at which students begin to comprehend and produce is entirely dependent on teacher-controlled factors, save three: student motivation, student effort, and student “ability” level.
• Teachers who organize right, plan right, establish expectations right and create good evaluative activities can then identify a correct level of language production.
• This identification should then be used to compare and delineate students.
Since working to focus my instruction on Comprehensible Input, I’ve developed a new way of looking at output… and trust me, it is still evolving. What I have seen is that…
NON-GRADED OUTPUT can:
• Allow students to organize input in their brains (best-case scenario) or perhaps only in their notebooks (worst-case scenario).
• Give students the opportunity to demonstrate the ability (or inability) to practice testing scenarios.
• Open doors for students to creatively combine acquired vocabulary in a new situation.
• Communicate ideas/thoughts/doubts/questions on a variety of topics.
• Add interest to story-asking activities (i.e., responses).
• Increase student-suggested ideas during story-asking activities.
• Provide an opportunity for the teacher to monitor “spontaneous” output.
In actuality, GRADED output will:
• Provide PR opportunities in the form of projects that can be displayed or shared.
• Allow the teacher to collect samples of standards-based student writing to use in comparison to other writings and department/district/state requirements. (à la my former perspective…)
• Give students who are “good at school” a chance to show off their skills.
• Damage many, many students.
• Take valuable time away from activities which actually increase students’ language interests and abilities.
with love and fired up about this topic……
Ana, that observation of yours about not being able to predict what students have acquired/picked up changed everything for me in my teaching a few years ago. It has been talked about in research—it just gets overlooked/ignored because it completely undermines how much of second language teaching is done around the world (and makes book companies nervous). We like nice neat packages, and that little observation makes things really messy for teachers who like a linear and mathematical model of language teaching (and like to test accuracy). I’m awed at how you not only identified it, but described it so well. I had an inkling something was going on, but needed to read about it to really put my finger on it. Thanks for putting that and your other points so well.
Doug, do you know where that research can be found?
I don’t know if anyone is still reading this thread, if not I may need to post this on the blog again on Sunday, but an essential question came up and I would really like some feedback on it.
When we get to the place of assessing output, whenever that is, by which criteria do we assess it? My old speaking and writing rubrics base some portion of the grade on accuracy or functions of language, including verb forms and tenses, noun genders, etc.
I’m not saying that these are insignificant. I do however have a problem evaluating students on something that I have not: 1. explicitly taught, 2. given homework on, AND 3. repeated frequently. Those of you who used to teach from the textbook know exactly what I am talking about. You know, the ol’ “We did adjective agreement, you passed a test on it, so now I have the right to penalize you for those errors on your essay.” (I have taken to apologizing to former students when I see them, much to their amazement.)
Come to think of it, what part of a rubric can I use? I never have had conventions of English, such as topic sentences and supporting ideas, on my rubrics. I personally am opposed to evaluating things that I myself haven’t taught thoroughly, with ample opportunity to practice, take and re-take quizzes, etc.
Which raises the question: what have I taught thoroughly? What do I want the students to demonstrate?
The only answer I have is to point to my classes, especially to my fourth-year class. I want them to speak fluently. I want them to forget that they are speaking a foreign language. I want to talk about funny, sad, provocative, and interesting things with them in a foreign language. I want them to be able to say and write just about anything they want to.
I’ll be damned if I know how to write a rubric for that.
For me, a key element in the rubric is comprehensibility. I take away for errors that prevent comprehension. Thus I can ignore minor spelling errors, some subject-verb disagreement and some (actually a great deal of) gender/case disparity. I also look constantly at the ACTFL guidelines to see what students should be able to do. Generally I use the idea:
-understandable by a native speaker with little or no experience with English speakers
-understandable by a native speaker with moderate experience with English speakers
-understandable by the teacher with some effort (e.g. the student who wrote “Ich trage Kurzschluesse” for “I am wearing shorts” – unfortunately Kurzschluss is the word for an electrical short)
Just so you know, I don’t have this nailed down by any means; I’m still wrestling with the new paradigm as well.
Edit: my point with Kurzschluss was that I had to translate back to English in order to understand what was being said (and know why the word got into the sentence)
It’s sort of the German version of “Yo voluntad mosca a Hawaii” – I will fly to Hawaii.
Anne, I agree with Robert that the big point here is how comprehensible it would be for a native speaker; for me that means how much am I NOT hearing translations from English phrasing. One of the little pleasures I keep getting this year, as opposed to earlier years, is how much spontaneous great phrasing I hear, as opposed to all the “Das war ein gut eins” (that was a good one) direct translations that I used to think were normal.
Why? Output seems to be more defined by having gotten comfortable thinking simply– “like a second grader” as someone said a bit back–and trusting what pops out of their mouth. And if what pops out are the phrases they have indexed to situations we have simulated for them, those phrases come back out when the situation arises again spontaneously. (Again, I’m preaching to the choir here, but mostly reminding myself of these things by enunciating them).
If you have to evaluate output, though, don’t forget about the writing Rubric calibrated to “understandable by a native” standard: the New York Regents exams. http://www.nysedregents.org/ Note the evaluation dimensions:
2. Organization (referring to coherence or non-randomness of narrative)
5. Word count
Grammatical accuracy in this rubric is limited, then, to 20% of the total score, and even then comes with the direction to evaluate how much “Errors do not hinder the overall comprehensibility of the passage” [or do].
Oh yeah (can’t remember the point I was building to until after I posted): if you don’t like a category because you haven’t been doing it, remove or replace it. Can we in good conscience post “Wasn’t just a boilerplate response” or “Appropriately involves Chuck Norris” as an evaluative dimension? Well, maybe we should, if that’s what they are driving at anyway. Make the unstated agenda stated, and give them credit for it.
For classroom use you don’t have to give each of the categories equal weight, so structure/conventions doesn’t even have to be 20%.
Anne, I absolutely love your description of what you want from your fourth year class! What I wouldn’t give to be able to give that out as a class expectation at the beginning of the year instead of that mealy-mouthed edublather thing we have to give out! It’s exactly what I want and expect from my fourth and, at their level, from my 3rd year. What more could one ask? I have worked at that for a while now. And yet still, the little voice in the back of my head says “Are you sure you gave them enough direct grammar? Will they be okay next year?” And so it goes….I still do what I do, I just worry a little.
Jeff, I got swamped at work and haven’t been back until now—I don’t know if you are still checking. I don’t have it in front of me. Look up Skeehan and Jane Willis (Dave Willis too). I have an article from a book by Willis, which is out of print, that talks about this. I’m pretty sure it is by Skeehan. Can you get my email from this blog? If so, email me and I’ll give you details. I also noticed that someone in one of the replies mentioned someone from Canada. I don’t know if they were researchers.
Doug I don’t have your e-mail, but thanks for the names–I did a search and several links came up.