Here’s the thing: we have this computer in our minds, far greater, infinitely greater, than the most sophisticated computers ever built, and we go around thinking that we as teachers can impact the language output process of our students in a conscious way – that we can “teach them how to speak”. That doesn’t make sense to me.
I mean, the neuro-processing is so complex! When we interfere with it, the kids listen to us from the thrones of their teenage fogs and, too often, hear English (L1.5) from most of us, including most TPRS teachers (sic), instead of grokking the pure language (L2), and their computer can’t process it, because what we are giving them is garbage.
I am reminded of the old adage in the computer industry – “garbage in, garbage out”. If we put garbage (L1.5) in, we will get from our students, duh, garbage out.
Only when high-quality, uninterrupted, compelling (Krashen) pure sound information is fed into the computer can the work of computing be done successfully – not in a few quick lessons like in the movie Avatar. Only over long periods, years actually, can we start speaking naturally. As long as we don’t mess with it.
Again, the process is so complex that any conscious fooling around with it, like standing up in front of students and “teaching” them how to speak (c’mon!), is just futile. Conscious fooling with a miracle is just stupid. Encourage them, yes, when they feel like outputting in class – a few of my kids have almost zero affective filters – but “teach” them? I hardly think it’s possible.
Let’s give up our hubristic assumptions that we as teachers can teach them how to speak, and let’s allow our students’ natural language learning capacity, this monster computer that we all have, to do its own thing without all that L1.5 conscious forcing nonsense. We can’t learn a language consciously (Krashen).
Regarding what Diana and Dr. Krashen and others mentioned here about the need for the hero of Avatar to hear the Na’vi language without being forced to output it (that scene was too short), we all probably had similar reactions. It was a bit weird.
I wish that Dr. Krashen had told his friend Dr. Frommer to put something into the movie, some innocuous line of dialogue, that would get people to maybe reflect a little, even on an unconscious level, about what Dr. Krashen is saying.
Maybe they can do that in Avatar 2. I say that because it looks like this movie is going to reach more people than any movie in history. So, why not slip in a little Krashen? (For those who didn’t read Dr. Krashen’s comment here, it is his friend, Dr. Frommer at USC, who developed the Na’vi language for the movie over four years.)
By the way, having seen the movie once, I am going back to study Na’vi today at a 3D IMAX with my boys. I want to see if I can decode any patterns. The language was very compact – Dr. Frommer did a magnificent job, just beautiful, and the interpretation of the actors was just fantastic.
The language helped the movie so much – it uplifted everything about the movie. It sounded like Dr. Frommer based Na’vi on a beautiful mixture of Native American gutterances (made that word up) and African American rhythms, with a Creole element, not to mention some definite traces of Sanskrit, especially in the song of lament under the ancestral tree. And, of course, Krishna’s blue skin was there. It is worth studying. Maybe we can get Dr. Krashen to convince Dr. Frommer to learn TPRS well enough to teach the language to us. I’d love to speak a little Na’vi. Maybe get my own Avatar. Maybe we can all have our own Avatars for the Maine conference. How cool would that be? We could “see” each other, instead of the usual shit.
Back to the point:
Early forced output, usually mixed in with a little sneaked-in English – bad. The computer can’t handle it.
Thousands of hours of interesting, meaningful, personalized, unconsciously received (there’s what many people don’t get about TPRS), compelling, comprehensible input delivered with strong classroom rules and the “power” TPRS skills of Circling, Point and Pause, and SLOW – good. Computer is happy.
6 thoughts on “Output”
If we can just relax and provide the input that Ben describes in the way we know works, then we can let go of planned, intentional lists of structures and vocab and just let things come up. Should be easier on us all.
Yet… I still succumb to some structured warm-ups that focus on explicit language and that take far too long… I do it because it’s easy and gives me a sense of control… and I think I am afraid that they will get bored with just talking about themselves…
Maria
“…I think I am afraid that they will get bored with just talking about themselves…”.
Maria, I am rethinking that general kind of PQA in favor of the three steps and stories – classic TPRS, if you will. I will blog on this tomorrow. No big deal, just some inquiry into whether just talking to the kids in generalized PQA – which a lot of us seem to be doing these days; there’s been a run on PQA lately – is as valuable as the three steps as we got them from Blaine.
1. http://www.youtube.com/watch?v=J24ZL_Y5dV8 forces FUN output.
2. Some lab work here shows kids like to imitate before knowing what they are saying. For example, watching some YouTube cracks ’em up. Naturally they imitate some part, for each other (not for me). It’s like one phrase that tickles them. So we play it over and over. We MIMIC it for as much fun and as long as we can. Now what was an Affective Filter is dead meat. The kids are pumped. Ready. Now it’s my turn. We break the phrase down into words – SLOW, Circling, Point and Pause. We use the same words in other contexts the kids care about. But we start before the brain makes meaning. We start with the fun, emotional, acoustical, experiential part of the words. The Mimic.
I want output (earlier and more fun) because my customers want it. I also want it because I want them to take in input more easily. They need to tune into new sounds. Mimicry and such helps them to experience the sounds, better identify with the sounds, and now hear the sounds more easily. Now more input can be more comprehensible.
I’m with you, Duke. A lot of children, the younger ones especially, LOVE singing and role playing. When they play naturally at home, they are imitating/mimicking as a normal part of their development. Why not take advantage of this in the L2 classroom!
What language do you teach, Duke? If ESL, do you know Genki English? It pumps them up, fascinates them and is super fun. They go home smiling and singing the songs because it sticks! What’s even better is that they can transfer the language from the songs into real life very naturally. This kills the affective filter.
In this video I would hope that those who are more outgoing had volunteered to sing and act while the shyer ones were listening and laughing. I think it’s ironic that many famous actors say that they were horribly shy as children…
Yes, completely agreed. We want spontaneous fun output. When it happens in class we go for it. We just don’t want forced output that lacks spontaneity, which, in classrooms, takes the hideous form of a teacher telling a kid, in the name of foreign language instruction, to do something that they don’t really want to do. It’s not only embarrassing (affective filter) for the kid, but the teacher is, in the invisible world, sending the message that the kid is not up to par, is somehow lacking, and needs to work harder to be what the teacher wants. That sucks, and, in my view, is the norm in most classrooms, which explains why they are so tomb-like, and why the teacher is so nervous inside.
What we want, what we honor is interaction. Sometimes that takes the form of output. Output for output’s sake is only effective for a small percentage of students a small percentage of the time. Output for the sake of interaction is pure gold.
with love,
Laurie