149 thoughts on “Authentic Assessment – Russ – 28 – Assessment of Language vs. Assessment of Content”
A particularly striking example of contextualized CI instruction is Anne Matava's. When she works with students, her German (or French) is a blur, but the students get it all. It's amazing. It's exactly what Alisa said.
This whole thing about absolute transparency in the TPRS world was described by Dr. Krashen in 2010 as too extreme. What he was saying was that we learn languages contextually and not in a decontextualized (and thus boring) way. Notice that this was about the same year that Krashen was coming out with his strongest statements against targeted CI in the TPRS world, and it helps explain why he never fully embraced TPRS.
When we insist on speaking to our students TOO slowly, we are not doing the kind of CI instruction Krashen advocates. If you want to read more about this idea, search for Krashen and the term i + n (i + noise). It's a complete validation of Alisa's point.
Decontextualized language, content-embedded language = CALP (academic, not social language) = developmentally inappropriate for beginners.
I’m so glad you all get that. Now if I could only convince the many thousands of ESL teachers nation-wide of this, we’d be set.
Also, how are rubrics “not so great”? Rubrics are hotter than Gerard Butler with a box of puppies.
Yeah, I thought you would call me on that, Claire. I tried to slip it by. It's just my own bias. From what I have gleaned in recent weeks here, I kind of started thinking that portfolios were the cat's ass and that rubrics, because they involve (in my own perception at least) a bit more rating and ranking and judging than portfolios, were slightly lower on the totem pole. Plus I am concerned that rubrics would differ so much from one CI teacher to the next that the lack of uniformity would bring down their overall quality.
Here is a challenge for the group. Send – in the comment fields below – the best possible rubric for assessing using CI, and maybe some of us can test it out next year. Don't you agree, Claire, that we need to be specific about what we even mean by the term rubric, and come to an agreement about what is a good one and what is a bad one? And then, on top of that, figure out how the supreme jGR rubric fits in with any new rubrics we create for next year.
Ben, I love your idea that we share what we have, but there needs to be one question every time we pick up a rubric: how do I modify what I used to reflect what students authentically experienced in the lesson? Rubrics cannot be one-size-fits-all. Every rubric must be adapted to suit your classroom. Some classes with mixed abilities or students with special needs may require adapting for individual students.
This process is not terribly time-consuming (for those of you who don't plan, you can find the time): just a few moments to consider what COULD this student have understood. If you have to delete entire rows or columns with "NA" until there is little left of the original rubric, that's fine. If you decide not to use them as grades, okay. If you don't fill them out most of the time, no problem.
But the principle behind finding and using a rubric that fits our needs, one that accurately reflects what just happened or is currently happening in a lesson, is essential to Authentic Assessment: promoting differentiated paths to holistic language with multiple measures. It is crucial that we use varied assessment instruments, not just ISR, to reflect our diverse learners and their interests, knowledge base, multiple intelligences, and styles of learning.
A child with Asperger's syndrome won't look like he's listening, but he is. He is so bright, but he would fail ISR. The kid with ADHD would never get full credit for their biggest strength, getting up and moving around, without a TPR rubric. And a kid with ADD (me, I am super ADD!) wouldn't get to show off, without a Retell Rubric, the winding twists and turns their imaginative stories take when they have time and get immersed in a story of their own creation.
And you neuro-typical people will just miss us. And not being noticed when everyone else around us is, that makes us feel weirder, and over time it builds. After a lifetime of having to be assertive to be noticed, personally, I am hoping I can do better for my students.
Granted, for most students if you use jGR, you are getting a huge piece of the assessment puzzle. I love this rubric and I thank you for introducing me to it. However, if you are not using other rubrics to document the language actually being used, you aren’t gauging and engaging conversational language that students can actually use.
However they are modified, whatever they look like, and whoever contributes them, multiple rubrics must be used. We have to find what kids can do in our classroom and get it on paper…again, filling it out is optional. This is the only way to force parents, administrators, and other stakeholders to take our Authentic Assessment seriously. This is the only way to end abusive testing practices.
“Don't you agree, Claire, that we need to be specific about what we even mean by the term rubric, and come to an agreement about what is a good one and what is a bad one?”
I love this idea so much. You need to create a separate thread, and we should discuss criteria for our rubrics, with the understanding that they should be adapted.
Questions to be addressed may include:
1. What types of rubrics are we looking for, and, like Ben said, "what is a good one and what is a bad one"? Even though jGR is technically a holistic rubric, I think we can work under the assumption that analytical rubrics would be helpful for most of us, right?
2. How can they be scored?
I have my own take on them as being descriptive, not evaluative, but I would like to know what you all think about scoring rubrics.
3. How can they be turned into grades?
This is so tricky, and I struggle with doing this myself, because grades are just grades. Grades stink. This might be where we share ideas and then pick our poison.
4. Most important question: where are we pulling our criteria from?
We've covered that we don't assess targets. I think we decided against using the "Performance Indicators" from that Foreign Language professional organization I should be embarrassed about not knowing about. Where then? Or maybe the curriculum question is for a later post. But it needs to be addressed sooner or later.
There is clairly so much going on here. Instead of creating a new thread on “rubrics” I think this thread will serve as a good start. Claire what is the best way to handle this? I don’t think the blog is going to work because all the great ideas on rubrics will be lost as they scroll out. We need a document for the Primers at the least and a book at the most. Let’s just see what happens. I know one thing here – we in TPRS know far less than we need to if we are to put the assessment third leg on the CI stool to join the research (box checked) and pedagogy (box checked) legs. But we can’t post a bunch of brilliant stuff here and let it all scroll out. We need a document, a primer article, a pamphlet, something we can post here and start our 2016-17 academic year next year as the “Year of Assessment in TPRS”. Face it, you are the moderator and inspiration in this discussion. As an ESOL teacher you are in a unique position to help us remove the logs from our eyes on this critical topic.
maybe a google doc?
Name pun!!! I got another name pun!! I will write until the wee small hours of the morning and shirk all other responsibilities if it means I get name puns and cool nicknames.
Kind of on this topic… I had a really great time with the Intermediate class last Friday. This crew gets it. I share a quote somewhat related to SLA on Fridays & invite their responses, maybe link it to what we're doing in class. So I had a quote from this nice article: mandarincompanion.com/blog/the-magic-of-learning-from-context/. I pulled a quote about how reading in multiple contexts really gives you comprehension, and how flashcards for vocab practice are limited in usefulness. (This was in part to demonstrate why we were reading, and why I encourage them not to rely on memorizing word lists.)
One student spoke about having done just that a ton in middle school Spanish. Another student who’d never taken language until my class last year asked, “Why don’t more people teach like the way we learned our first languages? I mean, we never had to memorize words for that but we’re fluent.” This is an awesome kid who cracks jokes & teases his friends in Chinese. I love it that they know how & why class runs as it does, and have bought in overwhelmingly. I’ve been having spontaneous conversations with several students in that class, second year of Chinese; they initiate them before class. Not all the class is there yet, but they understand & add. What a joyful thing.
Diane, great story. It's almost as if the kids believe the first thing they are told and stay with it. My 8th graders have from 2 – 5 YEARS of grammar behind them. They were so closed to CI that their "stories" – such as they were this year – were far inferior to those of the 6th and 7th graders. It was as if they had been ruined for the French language. Which is why Robert said here a few years ago that the term "criminal negligence" is not entirely inappropriate in the context of this discussion.
And Diane most interesting is the reaction of the “smartest” of those 8th graders this year. All they want to do is write (which I allow) and be corrected grammatically. The kids in that class, along with two 7th graders bumped up by parental pressure, who rock the house are the ones who are not the grammar freaks. Two months ago I broke the class into two 40′ half blocks. The smart kids do grammar and the others do stories. The smart ones, when they come back in the room at the end of class and see us rocking the house in French, are easily seen through. I look in their faces and can see that they don’t want us to know that they can’t hang with us. And it’s all bc of some damn grammar teacher in the past, who presented French to them as a mechanical puzzle to be solved. That really sucks!
It is too bad for those kids.
I know I've said this before, but I keep finding that the majority of my students who are new to CI with me feel rather freed up by not having to memorize, get corrected in front of others, or rehearse other people's "conversations." It's really awesome. I don't know why. Maybe French attracts that kind of student who clings to grammar? (If it had been me, good at the French grammar stuff, I would've been really happy in a CI class!)
To Claire's point about flexible and adapted-to-the-students-in-front-of-you rubrics:
How about we come up with as many descriptors as we can, one set for behaviors that support acquisition (jGR blurbs) and another set that ‘gauges language.’
Some for real beginners, some for rising intermediates. Some for elementary, some for middle to high school (more literacy-related comments). Whatever tweaks you've made, you list them and explain. For example, I reworded 'maintains eye contact' cuz I teach an autistic child; it's not a fair expectation for him.
We could have a google doc that we can all add to, then see where it goes. Folks could consult and pull from it as needed; we could make a folder of the rubrics we create from it. Templates, samples.
I think this could be a tremendous contribution. Some assessment-focused teachers (I mean that in a good way: warriors like Lance who are deeply studying and articulating the issues!) could share what they've already done to this end, to model for the rest of us. For me, seeing this kind of thinking with lots of examples side by side is helpful.
Thoughts?
“Some assessment-focused teachers (I mean that in a good way: warriors like Lance who are deeply studying and articulating the issues!) could share what they've already done to this end, to model for the rest of us.”
Lance as an assessment "warrior" is such an accurate description, judging from the rubrics of his I've seen.
“Lots of examples side by side is helpful”: yes, that's perfect! That's how we'll address Ben's question, "what is a good one and what is a bad one?" Great suggestion, Alisa.
Mine should be the first ones torn to shreds. I'm not shy about my rubrics just because they are precious to me. I want you guys to roast my rubrics and make them better, okay?
This one’s not bad, but I got inspiration from Ana Lado, an ESL Storytelling genius.
https://onedrive.live.com/redir?page=view&resid=7EAFBE0D798F0F9A!11074&authkey=!AE-ch2ef5Pv9O9Y
This one is not terrible, but…
https://onedrive.live.com/redir?page=view&resid=7EAFBE0D798F0F9A!11471&authkey=!ADcYFbG366WQMWM
This one needs help.
https://onedrive.live.com/redir?page=view&resid=7EAFBE0D798F0F9A!11420&authkey=!ALp921B1j0cQIgc
I really like, Claire, the part of the retelling rubric you shared where it says, "Some fluency of expression or volume of writing (relative to instructional level) in coherent phrases or complete sentences," and specifically the "relative to instructional level".
I like the idea that in our rubrics we can give a high proficiency student a lower grade for not producing as fluently as is appropriate to their level while giving a different, lower proficiency student a higher grade for producing more fluently than they had before. Because, you know, if you’re anything like me, you have multiple levels in a class. And some of those kids that just have the hardest time with writing, for whatever reason, don’t get dinged.
I love that you see their potential for differentiation.
We’ll add this to our list of questions for TPRS teachers being bullied: How do you differentiate instruction AND assessments?
Wow, I want to look at these, but don’t have time right now. Please keep bringing up rubrics.
I definitely would use the TPR rubric. The Listen and Draw one is good too. For the last one, I would not have students do a retell until (at the very minimum) the end of Year 1, so I wouldn't use that rubric for Year 1. For Year 2 I would use the rubric, but a much more simplified version. Also, highlighting the changes from one category to the next could help too.
Yes to all of this, Alisa. Very pragmatic and a good way to start the rubrics discussion…with real rubrics.
So… who's doing this? I don't Google Doc; I OneDrive, is that okay? How do we make it so people can submit stuff?
I want to. I will be submitting all summer ha-ha.
I just realized I misread your post. I mean I want to submit them, and yes, who do we send them to? Oops! Ha-ha, that's what I get for reading this before school, haha.
I’m not sure, but I volunteer this folder (you can edit and add your own I think, but test it out Russ).
https://onedrive.live.com/redir?resid=7EAFBE0D798F0F9A!11711&authkey=!ADFup_-KWT6YdAQ&ithint=folder%2c
It’s OneDrive, oops. Do you guys use that?
Ben's suggestion that we "come to an agreement about what is a good one and what is a bad one" will get us to question what communication really is. Are my rubrics really focused on communication, or are they too language-trait specific? Be honest.
I once sat through an hour-long presentation on how to use the 6+1 Trait rubric. It was misery. I can tell you that a bad rubric, like a mean teacher, is a crime. If I am committing that crime, please pull me over and give me a warning.
Is my retell rubric too "6+1 Traity"? Like the fluency thing: I just delete that for my beginners. And I'm also on the fence about the mention of conventions, which we don't explicitly teach. However, we do want criteria that delineate features of written work that are specific enough to be observable, but still focus on the holistic nature of language to "reflect what we value," as Alisa so beautifully said. Any thoughts?
I have thoroughly enjoyed following this thread, jumping up off the couch in excitement a few times. We’re starting our month long exam season here in Scotland. Many kids are panicking, self-doubting, and vanishing from the building. We’ve got what looks like crime scene tape blocking off one hallway where the exams take place and ‘invigilators’ keeping watch. No exaggeration. Please don’t let things devolve into this back in the States.
Reading all these posts has made me confront how I've left many kids in the dust this year, disguising my neglect with a silly hands-off, data-less tracking system that I justified as humanistic. What a joke! My boss keeps telling me to provide evidence of pupil progress, and I mostly dance around the issue. I need to step up here. I will commit myself to learning the ways of Authentic Assessment. No excuses. No laziness.
I'm totally on board with portfolios/adapted rubrics for my middle schoolers (no mandatory course exams, and I have full control over all the assessments) – let's get cracking! I'll add what I can to the Google Doc masters.
Thank you folks for helping me wake up a bit. Britain is assessment crazy, and now I can begin to speak their language better. Forging relationships, like Steven said, yeah?
All threads really, not just this specific one. 🙂
Thanks for this feedback, Jason.
I really appreciate it because addressing assessment, which is traditionally such a judgement-laden thing, is so deeply personal with not a little ego involved for many teachers, and you just lay it all out Jason, and I admire that.
It's taken me to a bolder, braver place (but it helps that Ben is brave and fearless on assessment and he's leading the charge). Sorry if my above post was too bold or too personal or made people feel uncomfortable, but I see things differently from other people, even you very intelligent people, and I'm not able to hide from that any more. I see rubrics as beautiful.
I know I’m a dork, but I adore the largely untapped power rubrics have to value children and empower teachers who value children. I geek out at the mere idea of you amazing teachers making a fancy spreadsheet and collecting data and moseying into data meetings with your heads held high. That is so beautiful to me.
Rubrics honor intent (borrowing Ben's phrase) and honor children and their efforts. The TPR rubric I suggested (again, please give suggestions for improvement) is such a beautiful tool. It gives teachers the power to take something simple and value it.
We involve students in something easy and silly and linguistically appropriate for beginners, like "touch your nose," and the kid touches their nose, and we notice and make it into a big deal. We give it a spreadsheet with a fancy title and fancy-pants descriptors, and basically just throw the kid who touched his nose an assessment parade. We are already noticing kids doing TPR; now let's just notice them and make a big deal out of them. Let's share it with parents and administrators, and with the data turds who refuse to do the same.
Taking something simple and choosing to value it and make a big deal out of it would in other circumstances make me a huge drama queen (and I am right now). Yet, I see this as a very beautiful thing to do for children. Particularly for those children who are told by the grade at the top of a multiple-choice test that they are simple when they are really a big deal. I used to be one of those children so I know.
Claire,
I love that TPR rubric, but what kinds of rubrics would you use with more advanced learners? Do you still do just TPR? Are there any other rubrics that you use for reading comprehension? I could adapt the Listen and Draw rubric you demonstrated, but I was wondering what else you got. I have rubrics for interpersonal and presentational, but both of them are from another universe where one can actually learn language, so they have to go bye-bye, as it were. But I am on the lookout for interpretive rubrics, since I have noticed that my students can simply guess right answers from time to time. This is rare, but it is in line with what y'all have been saying about fair assessment. I want to move away from grading kids to something that clearly explains to me and my students what they can do. And that is what rubrics help us do. They don't judge; we do that when we say this is where you "should" be. Should, by the way, is the worst shaming word I know, IMO.
I second the interest in a reading comprehension rubric. I could see having a rubric for say 4-5 core activities that we do all the time.
Yes! That sounds great. Then, when you need a grade in the books, pull out the rubric & you've got one.
What performance would you measure with a reading comprehension rubric?
I would argue that an assessment of “reading comprehension” as separate from other communication is less authentic than a simple retelling. We tell a story aloud, write it as the whole group reads along, then retell. Don’t do a separate step to artificially quiz students on one language domain. Assess in the same way you instruct.
Why create something that only addresses one of the four language domains, when we teach holistically, demonstrating the interconnectedness of language? Students understand a story with input in oral and written form, so they need both on assessments. Checking “reading comprehension” of words on paper will give us some idea of reading ability in terms of decoding surface-level text, but not the application of language to respond verbally and nonverbally in communication.
All of that is to say that I prefer a retell rubric. Even if you don't like my rubric, we can fix it. But when the instruction is telling stories, retelling stories is the most authentic way to assess reading comprehension. You also get a sense of oral/aural language (nope, not buying the presentational vs. interpersonal language crap; it's all just communication). Assessments should be as holistic and directly related to instruction as possible, without teasing out language domains. It's not necessary for foreign language learners until very advanced levels, because they acquire so haphazardly, in a way we can't control. We can only go faster or slower and determine overall proficiency.
No more reading comprehension questions (multiple choice or constructed response). Only retells.
Is this okay, or does this sound bizarre to you?
A noteworthy exception would be for Heritage Learners, who don't use TPRS anyway. Intermediate and Advanced ELLs, who become L2 dominant in literacy, need assessments that focus on literacy. But that's the exception that proves the rule. They don't learn with storytelling, so they don't tell stories for assessment. Assessment imitates instruction.
…We tell a story aloud, write it as the whole group reads along, then retell. Don’t do a separate step to artificially quiz students on one language domain. Assess in the same way you instruct….
Yes, WE tell/ask a story. The students can’t do that yet. They will be able to understand what they read LONG before being able to show us with a retell. I still prefer a genuine exchange about what was read instead of a multiple choice quiz, but to require output kind of goes against our creed, no?
What about drawing? Illustrations with or without captions/labels are widely accepted by early literacy experts as a natural precursor to writing.
…What about drawing?…
Sure, but that almost seems like a less-authentic assessment instead of just talking to kids in class; one is a school thing (we don’t usually draw pictures for each other), the other is a real communicative thing.
There are certainly other reasons why a good Listen & Draw, or Draw/Write/Pass activity is exactly what I want to do, but I don’t know if I’m about to have kids draw pictures every time we read something as the only way to assess comprehension.
Drawing is a GREAT alternative to retelling too soon, but I would want to assess comprehension by providing more input via my questioning.
“Drawing is a GREAT alternative to retelling too soon, but I would want to assess comprehension by providing more input via my questioning.”
How? What kinds of questions are you asking, and how do they respond nonverbally to your questioning? Thanks for sharing.
Drawing isn’t an alternative to retelling, it is retelling. It also allows creative expression as well, particularly if they make up their own ending. They can copy from a model, reading and comprehending an anchor text (like your whole group retell), then match the text with their illustrations.
…How? What kinds or questions are you asking and how do they respond nonverbally to your questioning? Thanks for sharing…
I use the term “Retell” to refer to spoken recounts of a story. I think this is the default mode (i.e. speaking) when people use that term. I’ve also referred to Timed Write Retells as an option during Fluency Writes. I suppose now I’ll add Drawing Retell to the possibilities.
I only ever expect one-word responses, though. By orally asking comprehension questions about a reading, students receive more input. Retells don’t involve input and require language that might be unacquired (= forced), so I tend to avoid them.
I've heard foreign language teachers talk about "presentational" versus "interpretive," and I'm not sure why this distinction is made; it's all communication. Oral or aural language, is that what they're getting at, and to what end? I suppose you could make the distinction in a rubric if you had clearer criteria, but I'm not sure what that would look like, since I don't quite get what that is. Is presentational even a real word?
“I have noticed that my students can simply guess right answers from time to time.”
Is that bad?
Rubrics don’t really have right or wrong answers, but focus on a process.
Oh, I get what Russ was saying.
A major argument against tests that we should use in our favor is the fact that discrete-answer tests (multiple choice, true/false) are vulnerable to guessing (students are likely to get at least 25% on four-option multiple choice, and 50% on true/false, just by guessing).
Reason #1000 not to use “tests”: Russ’s “they could just guess” statement brings into question the reliability of testing.
Thank you Russ for bringing this up. Some ammunition for data meetings this Fall.
Yeah and ACTFL has determined that there is language for conversation (ie interpersonal) and presentational (for an audience). So some people like my dept have decided we need to assess them as different skills for different purposes. I make the distinction because one is a rehearsed performance (presentational) and the other is spontaneous and on the fly (interpersonal).
How will that distinction help you modify instruction?
I am not asking to pick on you, sorry, and I’ve seen other people mention this here, so thank you for bringing this up on everyone’s behalf.
I just see people caving in to pressure to assess for assessment's sake, not really to gather useful data, and not changing anything with their data. It's not you, but our whole PLC needs to be on the lookout for data turds' tricks.
They try to out-assess others by making up distinctions for “types” of language that aren’t relevant to instruction. They may matter in a linguistics course (which Krashen argues grammar-translation really is) but not in a communicative language class.
If it can’t help you change how you teach, don’t assess it. Our instructional time is frittered away by needless assessment. Simplify. Simplify.
What ACTFL seems to want is kids “conversing” (interpersonal) as well as “presenting” but “presenting” for a novice is practically useless since (as Grant said in the 90% article) THE WORK OF A NOVICE IS LISTENING! So why let ourselves get suckered into assessing their SPEAKING?
This is ACTFL's Novice Presentational performance description:
Presents simple, basic information on very familiar topics by producing words, lists, notes, and formulaic language using highly practiced language. May show emerging evidence of the ability to express own thoughts and preferences.
Kids can’t even say a SENTENCE at this level so why have them even speak? WHO speaks in LISTS?
OK let’s see Intermediate:
Expresses own thoughts and presents information and personal preferences on familiar topics by creating with language primarily in present time. May show emerging evidence of the ability to tell or retell a story and provide additional description.
That is what my kids do at this point. But that is after a YEAR of NO SPEAKING ASSESSMENTS. So picture this: I took time each term or month or what have you to assess and test. The kids would have had tons less input, which is the fuel powering the whole rocket. So they might well still be LISTING.
No LISTING when you should be LISTENING. Catchy slogan right?
I am just all in favor nowadays of pushing back in our own quiet way and not doing what ACTFL tells us to do. Who am I kidding? I have always been a boat rocker and a member of the Troublemakers Union.
Thank you, Tina. We are co-chairs of this Troublemakers Union now, and I am deputizing Russ to go after these ACTFL people for over-assessing with no purpose, splitting hairs, and shaming students with assessments that are developmentally inappropriate.
I have spent my entire 8-, almost 9-year career teaching students who go from zero English to fully proficient, bilingual and biliterate on grade level. I have never, not once in my life, heard of the "presentational" versus "interpersonal" distinction, and my kids are doing better than just fine.
They have stellar speaking skills because they receive holistic, comprehensible input through meaningful messages, not forced output “on the fly” …followed by even more forced output that’s “rehearsed.” Just for the sake of forcing output… twice.
No instructional value. Just to out-rigor and out-assess.
The fact is, there is no difference in the semantics, lexicon, syntax, or any other feature of language being used when speaking "on the fly" versus "rehearsed," except perhaps a very minor pragmatic difference in addressing an audience. But pragmatics is so late-acquired and so far beyond non-fluent BICS learners' scope that it is futile and ignorant to try to teach this to early language learners.
More important than the fact that teachers cannot collect discernibly different data from these two erroneously differentiated assessments, there is nothing instructionally we can do to encourage "rehearsed" versus "spontaneous" speech, or any speech, as Tina points out, without first providing holistic comprehensible input to build a foundation.
This is the most important idea I can share on assessment, and I want it on my headstone: data is only useful when used to modify instruction to help students learn. Never assess needlessly. If assessments don’t help kids learn, get rid of them.
So, Russ, I ask my question again, not to you but for you; keep it for your next data meeting. Simply ask: How will that distinction help you modify instruction? This is your ammunition to stop this lunacy, because you are awake to this issue and I like your fire.
If you can think of a way to get separate presentational assessments to help your kids at this stage in their learning, by all means.
If not, stand up to the data turds who are out for blood, not for improving the comprehensibility of their input… because that's what Authentic Assessment is: noticing students to improve Comprehensible Input.
This is us ignoring the real issue: assessment should be used to modify instruction to meet kids where they are (aka. Comprehensible Input).
This is us allowing Authentic Assessments to be marginalized, so Comprehensible Input can be marginalized by our peers as well.
This is us participating in needless assessments because when ACTFL says jump, we say how high?
This is abusive testing!!! Haves versus have-nots, ranking kids, over-testing, and shaming.
This is Tina, Russ, and me demanding we stop it! Like Ben says, wake up.
(I am so sorry this is so long, but it needs to be said.)
ACTFL is made of intellectuals; the cloth of their habits is offensive to normal people and is in no way friendly to kids. Perhaps you haven't heard of ACTFL because they haven't been able to infiltrate your field, which in ESL is based more on the genuine human needs of the students than on the highly intellectualized corporate/university brand of intimidation aimed at regular old WL teachers, who have fallen for their plastic goods hook, line and sinker. Mimi Met and Helena Curtain and so many others, all the university people who embarrass themselves daily with criticism of Krashen, so many pipe-smoking, tweed-wearing losers, have for forty years forged a golden chain (gold for their bank accounts) by pushing the textbook model. They support it in spite of its obvious contradictions with the standards they don't/can't/won't enforce, turning people into purveyors of Realidades in no less an egregious way than other major brands have done to become monopolies in their own businesses. Don't get me going on Mimi Met. I've got a story. We must focus on the new. It will take down the old. And Paul Sandrock and those ACTFL dudes are coming up for a hard fall soon. ACTFL is like a building that doesn't look like it's full of us termites, but when that last TPRS termite takes that last bite, that final chomsky, it will come krashen down. ACTFL is far more part of the problem than it is part of the solution.
Tina, where did you get those descriptors? I can't find them; the ones I find are similar but not exactly that.
“but when that last TPRS termite takes that last bite, that final chomsky, it will come krashen down.” Awesome
Russ, I am using Star of the Week with some of my classes right now to end the year, and am sensing that with the right activity (Star of the Week is one) we can get them talking and develop a rubric for their presentational skills, as long as it isn't forced. Star of the Week, and in particular the interview sheet that Sabrina developed, really gets them talking. So I am just suggesting (and this will be a very long discussion, as we all now seem set, hammers in hand, to start seriously defining what rubrics in TPRS even are) that kids at upper levels can be graded using an output rubric. After all, our work with TPRS is to teach for fluency. When output happens we'll need a rubric for that, as long as none of what they say is forced. I am NOT talking about the kind of forced conversation we have used in DPS over recent years to make them talk, but about using a naturally powerful conversation starter like the one Sabrina (and Nina Barber) have crafted. After all, when we keep inputting the language, we are going to get some back. We just don't know when. But we'll be ready with our rubrics when it does come flooding out of them.
Actually, Ben, this is genius. The Star of the Week, or Persona Especial in my classes, IS presentational. I am having a conversation with the student, so that is interpersonal, but the nature of the class is to get info about the person and highlight what they can do in their life (at least as I do it now), for an audience that does not give immediate conversational feedback (no negotiation of meaning). This is huge for me because I can actually have a conversation with my student that is not forced, and I can assess their proficiency and their performance, but not in a canned way! HOLY CATS!!!
I will never forget talking to Sabrina just before observing her do the Star of the Week. It was a DPS Learning Lab and we had like 15 teachers in the room, including one from Maine! Sabrina looked at me like she knew where there was a gold mine and said, “Wait till you see this!” and then blew us all away.
And then also, Russ, just today with my 8th graders I found myself in a full-fledged total communication experience in French. The kids who were not confident tried to sabotage it, but I was a mad dog on it, and it blew my mind how deep we got. The kids had just come from a "scare the 8th graders and tell them how much work high school is" meeting (the American Embassy School has lost its way and is going all IB now), and I asked the first question on Sabrina's questionnaire, "What scared you the most?", and a brilliant child said "high school" and off we went. Bam!
It revealed something very deep to me. It’s not that they can’t talk but that they need something interesting to talk about! Star of the Week may be the single most effective tool we have to get them to speak.
Russ, I think the secret, if we are to learn from Sabrina, is not in general lame PQA questions, but in asking them to come up with questions, on Star of the Week or in other areas, that they want to be asked. Just ask them what questions they want. Who said this here a month ago re: Star of the Week? I can't remember. It's a big deal, because when they tell us the questions that they want to be asked, we pull the CI space ship up to warp speed. I just talked to Anne Matava about this. More on that later. Worth repeating a third time because it is a major new bit of information, for me at least: find out things that THEY want to talk about by asking them. Duh. Just another example of CI teacher hubris, where we think we have to do it all.
As per the above, Russ, I am starting to see how we are a bunch of blowhards in our classes, never caring to know what others think in the conversations. I have seen deep proof this year that this work lies in somehow finding ways to find out WHAT IS INTERESTING TO THEM. We think we do it, but we don't. We unlock the conversation, launch it, when we dig deep enough to find out what they REALLY care about in the conversation. I have said it but never really done it, because I still think the class depends on me. My eighth graders yesterday TOOK OFF with the first question on Sabrina's Star of the Week questionnaire, and 85 min. later we were still on it. Guess what? This fact scores a torpedo hit on the targeted input battleship, a direct hit.
And I’m learning this now in the 86th of 90 block classes this year. Great.
Keeping focused on rubrics is going to be a major activity here over the next few years if we do not shirk the responsibility on assessment that has just landed, kerplunk, in our lap in the last month. What Claire is saying below is that we do so much great assessment in our classrooms but don't even know it, because we don't think that we have permission to quantify it. Claire is saying that as the experts we get to write the rules, and not some fool data turd, as she calls them. We need to learn how to do that. We don't have to quantify what goes on when we work with our students in a left-brain way anymore, but in a whole-brain way that reflects the Three Modes of Communication and that aligns with the research.
Claire adds a major piece of timber to our newly started House of TPRS Rubrics. It can form one of the foundational beams of the house:
…We involve students in something easy and silly and linguistically appropriate for beginners, like "touch your nose," and the kid touches their nose, and we notice and make it into a big deal. We give it a spreadsheet with a fancy title and fancy-pants descriptors, and basically just throw the kid who touched his nose an assessment parade. We are already noticing kids doing TPR; now let's just notice them and make a big deal out of them. Let's share it with parents and administrators, and with the data turds who refuse to do the same…
Coining the phrase “data turd” is my greatest accomplishment here I think. (Super classy, I know.) I can’t shake the words of the original Data Turd telling you that you don’t care about assessment. What a lie! Who could have the privilege of meeting Ben Slavic and talking with him and think for one second that this man doesn’t care about something that impacts his students. This man loves kids!
Ben cares about assessment in a more dynamic way: what does my student understand (assessment) and how can I meet them where they are (CI)? Ben just didn't know to call this assessment, because he had been lied to. So many of you have been devalued in the same way our kids are (besides the 4%ers), and it has to stop.
What kind of person would I have to be to not share Authentic Assessment with you? Really! I would be a terrible person and an ingrate.
So I started hounding Ben about assessments even though I knew he was working on the book. And a lesser man would have been too busy, but not Ben. He finds time, even when there is none, for anyone who loves children.
I don’t have to tell you that Ben is amazing, we all know this. But each of you TPRSers are here and have stuck around through the assessment posts because you are remarkable teachers. Truly, you don’t get credit and are often bullied for work you are doing, the revolution you are leading in foreign language.
You deserve to have your assessments acknowledged and defended. I’m throwing YOU an assessment parade all summer long.
Rubrics communicate purpose and goal. They reflect what we value.
Clearly they don’t have to judge, sort and winnow.
I will look at the elementary Language Arts world rubrics on reading comprehension for some gems we can adapt. Stuff like inferring meaning, using context clues, responding to Qs abt concrete story details, story order (using transition words) etc…
Once we see a bunch of goals we can add & adapt to our needs…
I like the idea of including transition words in our rubrics because the IB Language Acquisition subject guide includes “the use of coherent devices” (or transition words) in their production rubrics. And it seems worthy. In order to communicate with greater complexity students will need to acquire these transition words.
I appreciate that clever Alisa brings up the most important question for me: where are our criteria coming from? If we had a well-designed S&S, we could just copy and paste the performance indicators into a rubric with "sometimes," "often," and "never"; that's exactly what I did with many of my rubrics.
That helps us present them this Fall as a complete set: an S&S, rubrics for our assessments, and TPRS as the instruction. Unlike with traditional teachers' tests, the instruction, curriculum, and assessment actually align, like copy-and-paste-into-a-rubric align. It's pretty convincing evidence of the validity of our assessments.
Here, Lance explains how he connected expectations of what the kids can do (objectives or performance indicators or whatever you call them) to the rubric. His rubrics rock, by the way.
https://benslavic.com/blog/authentic-assessment-russ-28-assessment-of-language-vs-assessment-of-content/comment-page-1/#comment-79190
I encourage you to adapt whatever you use to align with your instruction, Sean, since the most important thing is that instruction and assessment go together. If you teach transition words, go for it, but I chose not to for my French students and Newcomer ELLs. Organization or sequencing is a feature of writing that students pick up through extensive reading, and it's later acquired than you would think. This may work if you teach fluent readers, perhaps some advanced Heritage Learners?
In terms of English Language Arts, be careful, because foreign language learners, even the most advanced, are still very much beginners in the grand scheme of language and literacy acquisition. Consider Krashen and Brown's definition of Academic Language (CALP) as reliant on higher-order thinking per the problem-solving hypothesis. That's a lot of extra mental energy spent drawing inferences, responding to wrong/right questions, etc., and often there is an assumption that we are explicitly teaching these skills (that's one of Krashen's major complaints about SIOP). Could we not just tell stories until they are fluent in the social language?
Unfortunately, few ESL teachers understand the above (they know it but they choose to ignore it and ignore lost Newcomers in their CBI classes). I'm working to fight this… but I'm so glad Sean is going to talk to his ESL teacher.
Thank you so much, Sean and Alisa, for bringing this up and altruistically reaching out to ESL teachers in your building. Those kids need your expertise, even if they aren't on your class roll.
What did I just do? Did I just put a link to Lance’s post and it’s directly below this post? How did I manage that? What is happening to my brain? I’m going to nap now.
More from ACTFL in the Presentational mode. This is the one that always GETS me, because why have beginning language students present ANYTHING to ANYONE? All it does (if you do traditional "presentations to the class") is eat up class time with shitty input from novice speakers (and what torture for the class who has to listen to everyone's presentations, or the teacher who has to review the videos!) and RAISE THE AFFECTIVE FILTER SKY-HIGH. Ask any kid anywhere and they all hate speaking to the class, in L1! WHY have them do that in L2 unless they crave it? (Some do…) PLUS imagine the time needed to do the following with a first year group:
Novice:
May use some or all of the following strategies to communicate, able to:
• Rely on a practiced format
• Use facial expressions and gestures
• Repeat words
• Resort to first language
• Use graphic organizers to present information
• Rely on multiple drafts and practice sessions with feedback
• Support presentational speaking with visuals and notes
• Support presentational writing with visuals or prompts
Practicing, memorizing, using graphic organizers…can’t we just have the person in the room who can SPEAK THE LANGUAGE speak to the kids slowly and comprehensibly? I just do not see why we need to weigh them so fast! Just FEED them. They will grow fat and happy and one day they will surprise you. Mine do. Even the shiest ones (sometimes especially the shiest ones) can write and talk when they want to and feel safe, like if I talk with them on the side.
I think what ACTFL believes is that learners have to, like, progress through Novice Presentational Speaking before they get to Intermediate Presentational Speaking. But maybe I am understanding the theory of comprehensible input wrong… isn't it true that you could spend three years giving them good rich input, and then, once they are all well-fed and happy, get them on the scale at the end of the year and see if they can do this when they are READY? (This is Intermediate Presentational:)
Remember this is in the context of narrating or describing.
May use some or all of the following strategies to communicate and maintain audience interest, able to:
• Show an increasing awareness of errors and able to self-correct or edit
• Use phrases, imagery, or content
• Simplify
• Use known language to compensate for missing vocabulary
• Use graphic organizer
• Use reference resources as appropriate
I doubt, knowing me, that I would even have end-of-second-year or third-year kids do this in front of the group though. What is the point? Kids HATE it, for the most part, and it DOES NOT SUPPORT ACQUISITION.
I was about to type that maybe having kids give presentations would be good in a situation where your sanity was on the line, like you just needed a break from teaching. But that is NOT THE WAY to get a break, not at the expense of the poor kids' affective filters shooting up! Give them reading, put them on the computers to listen to music, anything but making them present to the class. It is not only not helping the acquisition, it is probably damaging it in the long run by making them anxious, and anxiety and language acquisition just do not mix.
Tina says: it DOES NOT SUPPORT ACQUISITION.
(look at those beautiful all caps words!)
Claire says: Where have you been all my life, Tina?
Georgia. Then Oregon. Now Chattanooga.
I actually think the novice level is important and we should not try to get kids out of it too soon. But I also know that we need to honor the novice learner and understand what their role is: learner. BVP tells us that the silent period isn't really a thing. Even babies babble. But grading a student's performance is no different than grading my two year old. The problem is that I HAVE to grade it right now. It's mandated by my dept. And unlike Tina, my school says I have to tie everything in my grade book to the state or national standard. And that means ACTFL, so I will keep pushing this, but because ACTFL standards were written by 4%ers and are designed to be research-independent, there is no way it's going away. I will literally have to wait until my dept head retires to even get it changed. She is super into performance and into SBG, so if it's a standard, it's a performance, and that means it's graded. I love my dept head, and although I don't agree with her on this, I will figure it out. I am already planning on it being weighted almost nothing next year.
1. The silent period is a thing! I have never been a BVP fan.
He has zero research for that. Not even anecdotal evidence.
He's never seen deer-in-the-headlights Newcomer ESL students not talk to anyone in L2 for the better part of a year, all the while responding nonverbally, then start speaking complete sentences. Foreign language people can't comprehend this because they've never seen kids traumatized to this extent, sitting in incomprehensible classrooms all day, then going to an ESL classroom where the teacher doesn't know or take the time to learn TPRS because they figure kids will just pick it up on the playground, and these kids are INVISIBLE!
Mainstream America has no idea the hell these Newcomers are put through!
ESL teachers know but we don’t speak up! There are 2 newcomers centers in Tennessee but I swear there will be more soon.
BVP, shut up please and inform yourself instead of just trying to shock for attention. Krashen knows there is a silent period and takes the time to fight for bilingual education, and you want to talk about abusive educational practices? Immersion ESL! He cares about ELLs, and I am straight up ranting now.
2. Grading stinks. I know.
Assessing is beautiful, though.
Can you do portfolio assessment instead?
3. I agree, and I'm glad you mention that new speakers can't produce language yet. So, just ask them to respond nonverbally with TPR and Listen and Draw, get that on a rubric, and you're set. Now you're in good standing to not test these kids for a while.
4. Get in that leadership role next year and do what you can to change things and down-play the ACTWHOEVER because they’re dummies. You’ve got a bigger burden than me, Russ, weighed down with bad standards, so if you have to play along with data turds a little, it’s not your fault. I still think you’re awesome for even wanting to try.
I like it that you weight it almost nothing! Your dept. head does not teach languages, correct? Russ, something you ought to know about me is that I usually operate in the world of the ideal. And like I said, I am a troublemaker. So ideally we would not have the Novices speaking and writing, especially not for a grade, and even more importantly, not in an anxiety-provoking situation. But if you have to, then maybe you use a rubric and observe them over a period of time? I really, really like that idea. It jibes with my desire to keep kids emotionally safe, especially in the first stages. (OK, so maybe we do not want to call it the Silent Period but the Babbling Period. But to me it is not important what we call it; we cannot force kids to output before they are ready, it is too stress-inducing! My kids have LONG silent periods because the culture at our very academically oriented school, and many schools, is that you want to look good, be right, and shine. So who wants to babble in front of their peers at the age of 12 like a baby with no self-inhibitions would? Few kids, that is who!)
So you look for evidence of them just doing this with a partner, or talking to the wall (my pal Lynn told me that someone she knows puts pictures of celebrities on the walls for kids to talk to; that was kinda cute), and you mark some kids each time, and by the end of the fortnight or whatever, you have all the kids marked on a rubric that is for the whole class, like the one I am about to make right now so you can tell me if this might work for you and get your dept. head off your case.
https://docs.google.com/spreadsheets/d/1-eyIGkkAahTanuDPGWwMWBZkfPZWfHFYhZiwjfERv2c/edit?usp=sharing
Anyone can edit. Here is the link to ACTFL I was using. I gotta go to work, but this gives a rough idea of the concept: list the language from the standard across the top, then kids down the side, and just listen in until you have dates in each box for each kid. Then, voilà, you give everyone an A or B or whatever "meeting" is in your building, and you have DATES ON A SPREADSHEET to back up the As and Bs. Who could complain about that?
Link:
https://www.actfl.org/sites/default/files/pdfs/ACTFLPerformance_Descriptors-Presentational.pdf
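For anyone who would rather generate Tina's tracking sheet than build it by hand in Google Sheets or OneDrive, here is a minimal sketch in Python of the layout she describes above: descriptors across the top, kids down the side, and observation dates in the cells. The student names, descriptor wording, and file name below are placeholders for illustration, not the actual ACTFL language or anyone's real roster.

```python
# Minimal sketch: build the kind of tracking sheet described above as a CSV
# (descriptors across the top, students down the side, observation dates in the cells).
# All names and descriptor wording are placeholders, not ACTFL's exact text.
import csv

descriptors = [
    "Responds to yes/no questions",
    "Answers with single words",
    "Produces short memorized phrases",
]

students = ["Student A", "Student B", "Student C"]  # replace with your roster

# observations[(student, descriptor)] = the date you heard it, filled in as you listen
observations = {
    ("Student A", "Responds to yes/no questions"): "2016-05-12",
}

with open("speaking_observations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Student"] + descriptors)
    for s in students:
        writer.writerow([s] + [observations.get((s, d), "") for d in descriptors])
```

You could print the CSV, keep it on a clipboard, and pencil in dates during class; the dated boxes are the evidence to back up the As and Bs.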
I HATE PRESENTATIONAL MODE!!!!!!!!!!!!!!!!!!!!!!!
Was just ranting about this this morning to my poor husband who has to hear my rants every day.
I do not understand the point of presentational mode. For beginning language students and especially for adolescents. It is a presentation to an audience. That is for later. MUCH later. Really? In real life, who is going to “present to an audience in L2?”
The topic came up at breakfast today bc we were talking about a (kind, well-meaning) teacher up at my former school (where hubby still teaches but is in his last weeks… yippee!!!!). Anyway, this guy has a huge heart but does not get SLA. He always has kids doing fancy presentations (theater, songs and "movies" and such). I am sure the kids are having fun while practicing and perfuming these language-like behaviors. But they are not acquiring language. That particular situation is even more insidious bc the guy is from Spain, so it gives that extra illusion and mystique of "authenticity."
I believe very strongly that these types of activities are damaging to our work because they are so public. Who doesn't love a performance, right? But it makes people (parents, adminZ, teachers, kids and the general public) actually believe, "Wow, look how great they speak Spanish!" when really they cannot function in the language AT ALL. They have memorized and practiced a bunch of lines, so they cannot respond to language that is not in the script.
MAKES ME CRAZY!
oops perfuming! performing 🙂
“They cannot respond to language that is not in the script.”
I see that daily over here in Scotland. Kids are domesticated to memorize and cannot deviate from scripts. That’s why my Gaelic kids hear tons of questions about everything I can think of. Sure, they freeze too if a visiting assessor starts barraging them with questions in L2 but at least my kids have a fighting chance of responding.
Not to mention that Scottish kids do posters and powerpoints in most subjects, just like you said Jen. Why on earth would we want to crush their potential interest in a language with mind numbing tasks that only make the adults look good?
I hear you loud and clear, Jen.
Jen, I totally feel your rant. I attended a French immersion camp recently in my area. There was tons of forced output, including punishment-and-reward bits in front of the whole camp. It is run every two years, and our local Central Valley California world language organization pays for it. Luckily, I did not have students go to this camp.
I emailed Ben questions regarding a two-day CI camp. It is very possible that I can become integral to the camp in the future, as a few veteran teachers will retire and there are many new teachers who attended along with their HS students. There is major work to be done locally. This includes organization as well as having a vision grounded in good old SLA principles.
The future is bright but with much responsibility.
Although maybe the adapted WIDA stuff that Claire originally sent has stuff for us to pull in?
I would love to introduce these ESL aligned to TPRS rubrics to my ESL colleague in my building. If we use the same rubric our admin will like us more. So, let’s get that WIDA stuff in there.
That’s two requests, Ben.
(Claire slips Sean and Alisa a twenty.)
Actually, I am realizing the document I used is not going to fly with a lot of your administrators, because some of you are required to use ACTFL (seriously?). Alisa, would you please email me and we can work out something that may fit foreign language better. You have experience developing curriculum. I can't find your email address. Mine is cnensor at gmail. com
Thanks!
I am not sure I have anything new to contribute other than a history of how I arrived at my rubrics and an explanation of why I use them (https://drive.google.com/folderview?id=0BxlEdumlZ-b0VVNQeWozT0M2VG8&usp=sharing). I've spent years thinking about grading & assessment. I can't unlearn and unexperience what it's like to be free from using more than just one rubric to grade everything, or how much my teaching improved once I stopped adding up points and making judgments while following vaguely-defined rubric descriptions.
Before I even knew about Second Language Acquisition (SLA), I was trying to fit square pegs of Marzano Proficiency Scales into round holes of language learning. Briefly, his scales involve establishing a process (= 3.0, Meets the goal), then determining a simpler process required (= 2.0, Meets simpler goal), and a higher one (= 4.0, Exceeds the goal). My work in applying this to language learning degenerated into focusing on grammar, or word-level then sentence-level goals, that didn't really reflect communication. I even made a different scale for EACH assignment/objective.
The result was so structured that time was spent on assessing rather than providing CI. The Proficiency Goal Rubrics I use now are so global that they apply to anything we do. In fact, there's no need to grade anything more than once per grading term (although I do collect evidence and report raw scores for certain activities, which I see as the "authentic assessment observations" mentioned lately). Even when grading once per term, for which students self-assess, there's no need to circle traits and add up points, etc. My rubrics include three different levels of the following broad goals that can be used to describe ANYTHING done in class…
1) You can understand the target language.
2) You can be understood in the target language.
The way my rubrics are written, I'm not limited to having X number of either goal, but they're both there to allow either one to be used as evidence of what a student can do when we observe/collect it. Here's a comparison…not to criticize, but instead to illustrate how I arrived at what I do after a few years of work…
a) Using Claire’s TPR rubric, determine how many commands students understand for each of the three types:
Stage 1. One-step oral or written commands with contextual, visual, or other supports
Stage 2. One-step oral or written commands with limited supports
Stage 3. Two-step oral or written commands with contextual, visual, or other supports
…or you could arrive at the conclusion “You can understand the target language”
b) Creating a read & discuss rubric vs. arriving at the conclusion “You can understand the target language”
c) Creating a multi-trait rubric for writing, or retelling a story vs. arriving at the conclusion “You can understand the target language”
So, we can use different rubrics for different things, or use just one to cover everything that pretty much sums up the most important goal of understanding the target language. For all of the "You can understand the target language" statements on my rubrics, there are a few levels of how well, and two of the levels also account for how often that occurs. There's one thing to circle. In the end, kids get As and Bs, and the point is to keep them enrolled, feeling good, and not paying attention to grades, stressing, or even THINKING about "how do I improve?" when the answer is always (and I do mean always) to Read and Listen to More Target Language (RLMTL).
I would caution that all this awesome work to create rubrics might take the PLC further away from the latest discussions, but go for it if this anchors you in a way that you need. Still, I have yet to run into someone who isn’t able to use my rubrics. If that’s you, email me so I can figure out how to make them completely universal.
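To help picture it, here is a minimal sketch (in Python, with placeholder wording that is not the actual rubric language) of how a single universal rubric like that could be laid out: two broad goals, a few levels of how well, and one thing to circle per term.

```python
# Illustrative only: one "universal" rubric with two broad goals, a few levels
# of how well (the upper levels also note how often), and one thing to circle
# per grading term. The wording below is placeholder, not the real rubric.

UNIVERSAL_RUBRIC = {
    "goals": [
        "You can understand the target language.",
        "You can be understood in the target language.",
    ],
    "levels": [
        {"score": 1, "how_well": "with lots of support"},
        {"score": 2, "how_well": "with some support", "how_often": "sometimes"},
        {"score": 3, "how_well": "on your own", "how_often": "consistently"},
    ],
}

def self_assess(student, goal_index, level_score):
    """The one thing to circle: a student picks a level once per grading term."""
    level = next(l for l in UNIVERSAL_RUBRIC["levels"] if l["score"] == level_score)
    return {"student": student, "goal": UNIVERSAL_RUBRIC["goals"][goal_index], "level": level}

# Example: a student circles level 3 for the "understand" goal.
print(self_assess("Student A", goal_index=0, level_score=3))
```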
“I am not sure I have anything new to contribute other than a history of how I arrived at my rubrics and an explanation of why I use them”
Lance is really saying…
I am not sure I have anything new to contribute (except the rubrics we are all dying for!) other than a history of how I arrived at my rubrics (basically walking us through every question Ben said we needed to discuss with perfect clarity) and an explanation of why I use them (because I’m badass and badass teachers develop and use rubrics).
Lance you wrote …Time was spent on assessing rather than providing CI…
This is KEY to me Lance. Also not to mention that every MINUTE we spend assessing is a minute not spent walking our dogs or smelling our roses or having a nice hot bath or going to yoga or just looking at the beauty of our children’s faces. Our own offspring, OR our students’. And those minutes are CRITICAL to maintain our mental stability and fill OUR tanks so we can fill our students’ when they show up. I have been through a special kind of wringer lately, and I can tell you that being able to show up emotionally and be PRESENT with kids is of the utmost importance. And we are not going to be able to do that if we just spent all night or all weekend or all spring break assessing their work. I love teachers, we are the givingest people, but we have to nurture our own spark and our own lives, we need to make time, lots of time, to enjoy our own existence so we can show up as a fully there adult for our students. Who, many of them, only see an adult like that in our classrooms.
Claire, come here in October for our COFLT conference and you and Lance present on rubrics! PLEASE! Do it you two!
Yeah! I’m there. No idea about childcare or how to pay for the flight, but I’m there.
I can’t wait to meet everyone here. You are such amazing teachers and friends.
I promise I am cuter than my photo, where Gravatar has decided to turn my skin pink and hair orange, sending off a creepy Donald Trump vibe.
Tina said:
…we are not going to be able to do that if we just spent all night or all weekend or all spring break assessing their work….
My feeling, Tina, is that you have returned the discussion to where it needs to be: assessment in the service not only of accurate grading but of our mental health as well. Thank you!
Although at this point I doubt if there is anyone in our PLC still taking time at home to grade stuff. I think we’ve come a long way in that area. Linda Li still does a ton of assessment and it has been a source of (lighthearted but real) contention between us this year.
RLMTL
Did you just make this up? I love it.
Yeah, I wrote that a couple months back in response to a comment by Scott Benedict on my blog. He targets input based on results from certain Power assessments of his (i.e. a particular kid needs either more listening, or more reading input). My response was that you could always use both, and it takes more effort to track and provide that specific input than you get back in results.
Am I wrong to suggest that the rubrics can serve to educate students, parents and adminz on what we're trying to accomplish? They can demonstrate many of the fine points of SLA theory. Maybe some Ts/departments/evaluators need this broken down for them and legitimized.
Lance are you saying they serve no purpose to you or your Ss and are all a waste of precious contact time?
Alisa, you are right, as usual. 🙂
Yes! This is everything for bullied teachers. They show off the assessments we are already doing. Maybe Lance just isn't in a situation where he would find it important, but many of us would agree with you, Alisa. There is great potential for rubrics "to educate students, parents and adminz on what we're trying to accomplish," like you say.
…Lance are you saying they serve no purpose to you or your Ss and are all a waste of precious contact time?…
I’m not sure that anything more needs to be reported about acquiring a language other than understanding it and being understood in it. The finest point of SLA is that we acquire when we receive understandable messages. Certain rubrics de-emphasize that point. You can be as detailed as you want to be by parsing out exactly what it means to understand language in different skills, contexts, and how much/how often it occurs in different dimensions, but I prefer the clearest possible way.
“You can be as detailed as you want… but I prefer the clearest possible way.”
Rubrics are the “clearest possible way” for me, but if they aren’t clear for you, don’t use them. Just keep doing what you’re doing because you’re not testing and you’re using Authentic Assessment and you’re awesome, Lance.
“I still prefer a genuine exchange about what was read instead of a multiple choice quiz”
I love this! Simplify, throw out tests and quizzes, and just communicate. Lance and I use different approaches to get at the same thing: “genuine exchange of ideas” (so well said, Lance) but I love that we are two different sides of the same assessment coin.
Your version of story retells is a little different but just as authentic and much more practical. By eliciting one-word answers from volunteers who are ready to talk, as you do a whole-group writing, you're noticing who is dictating words in the retell, and you're brave enough to call it what it is: assessment. Data turds want to discredit this, but you say it like it is because you're LANCE!
Assessments like that don’t have to be painful. Kids felt proud to see their dictations end up projected on the board. They felt noticed. Like you say, co-written text is ability-appropriate assessment for beginners. So we could just notice that like Lance, or for those who need data, we could use a checklist of some sort, checking off kids’ names who contribute. (Even though this wouldn’t contribute to a grade, it’s informative data.)
Lance works Rule #1: Never say you don't assess, but he chooses not to document with rubrics, and that's okay because he still assesses like a champ and asserts this, standing up to data turds and defending Authentic Assessments.
Lance's retell is so intentional and practical (no extra work) and on-the-fly that you can speed up or slow down in real time while the assessment is going on. It's a lovely assessment/instruction hybrid, and it's compassionate and real, like you, Lance.
Wires got crossed…I don’t do oral Retells. I do something more like Read and Discuss, or just Discuss.
The Timed Write Retells I mentioned are when students rewrite a familiar story instead of creating their own story (i.e. Free Write) using this paper (https://docs.google.com/document/d/1wH67hkZM_wlNbl5VyfgGcxbtLsbYBm8ep8dVfIXz8B4/edit?usp=sharing).
RLMTL is def. a t-shirt.
I printed off the simplest version of your rubric this morning, Lance. Claire, I also printed up some of yours. Will keep everyone posted re: how I end up using them. I know it is the end of the year and all, but I always reserve the right to change what I do whenever I find something better.
I’m kinda using my current groups as test pilots for a buncha stuff. Mental health. Live authentic assessment. Equity. Acquisition happens when we allow it. Assessment not judgement. Fun. Simplify. Ease and joy.
Simple, simple simple. YES!
Angie, Lance, Claire and Jen, we are counting on you to help us hammer out some kind of preliminary document if possible once you have done some testing of possible rubrics. Claire, maybe you can not let anything slip by and collect this early work with TPRS Rubrics into one place.
Maybe we could present any scrambled end-of-year findings we get in the next few weeks to the Wed. evening group at iFLT.
I’m personally flying to the moon with Star of the Week, as per:
https://www.youtube.com/watch?v=h1HUVyDEz5A
Any rubric…EVERY rubric, in fact, that we could create, recreate, recycle, reword, etc. could be distilled to those two features of my rubrics:
1) You can understand the target language.
2) You can be understood in the target language.
If any rubric can't be distilled down to those two things, we have no business assessing what it is we think we need to assess. My own journey has taken me to the point where, to my mind, any effort put into designing multiple detailed and more complex (i.e. often limiting) rubrics is just unnecessary work.
I wonder if grading is clouding how we assess.
They are NOT the same. The more grading categories we have, the more specialized assessments we need, each with their own rubric. With one measurement for a grade that celebrates what everyone can do, our assessments are the exchanges, clarifications, and adjustments we make in our speech. We can document the result of our assessment using evidence of student work. Perhaps that is the perspective here: instead of documenting that we observed a student doing X, we just hold onto one of their products, toss it in a portfolio, and use it as evidence to give them one grade. We could also "use it as evidence" to give them one grade, if you get what I mean, because grades have nothing to do with language acquisition, and the only thing we have absolute control over is the quality and quantity of our input.
Again, this stuff isn't new for me, but it serves as an answer to "what the hell is Lance thinking?" I'll engage in discussion, but I don't know if I'm interested in contributing to the redesigning of rubrics and following a progression that might lead me right back to where I am now. The change in mindset this brings about will be healthy for a lot of people.
Lance your clarity is a breath of fresh air. Simply put: less is more. The technical docs are for those being bullied. That is not my case but let me add a twist.
Aside from jGR (or ISR), is there anything in a document that mentions “compelling” input? I ask because Krashen has written that compelling input is much better than just comprehensible input. I understand that we are not entertainers but sometimes there are structures or “activities” for lack of better word that are more compelling than others.
I'm not trying to throw a monkey wrench into our assessment discussion, but can we possibly include compelling input, or checks for student responses that help us evaluate OURSELVES in order to guide our instruction? I'm a newbie here, so thanks for the patience.
I organized a bunch of stuff according to various letter Cs because that’s what language people like. Compelling was not its own thing because I am under the assumption that it must be present in everything else.
I see this as being a valuable part of one of those Admin Checklists and for our own self-eval, but it really qualifies the “understanding” that students do. The more ways we qualify what/how students understand, the more detailed we get.
Lance, I always appreciate how you bring a very unique view of rubrics to the table. These are some big statements to think through and I had the following thoughts and questions:
1. "I wonder if grading is clouding how we assess."
Absolutely. You are right that grading is not assessing. They are not the same. Assessment is beautiful, grading is meh. It's important only to some families and colleges, but it is not an ideal way to give feedback to students, families, and administrators. Like you say, "grades have nothing to do with language acquisition." Assessments have everything to do with how we modify "the quality and quantity of our input." This is why I am not assigning grades for my ELLs, but using portfolio assessment instead.
2. "We can document the result of our assessment using evidence of student work." Yes, and this is further proof that nonverbal/nonwritten responses that are not recorded on paper must definitely be recorded in a rubric.
3. “EVERY rubric…could be distilled to those two features of my rubrics:
1) You can understand the target language.
2) You can be understood in the target language.
If any rubric can’t be distilled down to those two things, we have no business assessing what it is we think we need to assess.”
I love how this showed that (good) rubrics are really, at their heart, just assessing communicative language. That’s what makes rubrics authentic: they are related to communication really happening during instruction and assessment.
3. “…any effort put into designing multiple detailed and more complex (i.e. often limiting) rubrics is just unnecessary work.”
This baffles me because you actually went to a lot of work making rubrics on your blog. I'm not complaining because I'm a rubrics hoarder, and I say the more the merrier. But why did you do this if it's unnecessary work?
4. I agree with most of the statements you've made so far; however, there is something I am not sure I can agree with. Are you suggesting deconstructing analytic rubrics here, and if so, to what end?
“The more grading categories we have, the more specialized assessments we need, each with their own rubric. With one measurement for a grade that celebrates what everyone can do, our assessments are the exchanges, clarifications, and adjustments we make in our speech.”
Are you suggesting we only use one holistic, summative rubric for assessment? And are you also suggesting we do away with analytic performance assessments of tasks like TPR, Listen and Draw, or other tasks students can do formatively? If so, I don’t agree with this at all. I comment here on the need for multiple measures to differentiate for students: https://benslavic.com/blog/authentic-assessment-russ-28-assessment-of-language-vs-assessment-of-content/comment-page-1/#comment-79118
A holistic, summative rubric is fine to start and end the year, but formative assessments using multiple different rubrics and different assessments must be done constantly: for some we fill in the rubric, for some we just fill it out in our heads.
Also, I didn't understand this part: "The more grading categories we have, the more specialized assessments we need." Like you point out, grading and assessment are not the same, and I don't see any causality here.
…“any effort put into designing multiple detailed and more complex (i.e. often limiting) rubrics is just unnecessary work.” This baffles me because you actually went to a lot of work making rubrics on your blog…
Emphasis on “more” (i.e. MORE detailed/complex than what I’ve done). I essentially use a single rubric, with basically one Standard. A lot of my comments on this topic are explaining why I use mine and not other ones, and how it’s hard to climb back up the rabbit hole.
…Are you suggesting we only use one holistic, summative rubric for assessment? And are you also suggesting we do away with analytic performance assessments of tasks like TPR, Listen and Draw, or other tasks students can do formatively?…
Yep, and yep. We don’t need rubrics to tell us how to differentiate in real time, authentically. We make our modifications in real time. If not, we risk targeting something we have no/limited control over. We can use all of those tasks you mentioned as evidence in a portfolio, I’m with you on that.
…Also, I didn't understand this part: "The more grading categories we have, the more specialized assessments we need." Like you point out, grading and assessment are not the same, and I don't see any causality here…
Right, which is why I don't think we need rubrics for assessments since they happen in real time. Of course there's causality. Almost every language department has some stupid grading breakdown that looks like this:
Homework – 10%
Participation – 25%
Quizzes – 25%
Tests – 40%
Before we can even THINK about what and how to assess, in order to give the Mr. Average Monster enough scores to eat up, we need at least 3 assessments per grading category so we don’t screw over kids. Unless we use one rubric for everything, we instantly need 4 rubrics; one for each grading category.
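As a quick, invented illustration of what the Mr. Average Monster eats, the weighted-category arithmetic looks roughly like this; the weights come from the breakdown above and the scores are made up.

```python
# Illustrative only: a weighted-category grade book like the breakdown above.
# Each category needs its own pile of scores (at least ~3) to feed the average.

WEIGHTS = {"Homework": 0.10, "Participation": 0.25, "Quizzes": 0.25, "Tests": 0.40}

# Invented scores (percent) for one student, three per category.
scores = {
    "Homework":      [90, 100, 80],
    "Participation": [95, 90, 100],
    "Quizzes":       [70, 85, 80],
    "Tests":         [75, 82, 88],
}

def weighted_grade(scores, weights):
    """Average each category, then combine the averages by weight."""
    return sum(weights[cat] * (sum(vals) / len(vals)) for cat, vals in scores.items())

print(round(weighted_grade(scores, WEIGHTS), 1))  # one number that says nothing about acquisition
```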
Does a rubric improve a student's rate of acquisition? Does a rubric make what students read and hear comprehensible? I'm reading a lot about creating rubrics as a response to being bullied. That sucks. As an exercise, could you use these new blank-trait rubrics as a template and throw in anything you need? Would that get adminz to back off? Honestly, take any rubric they claim you need to use, and copy and paste the traits right into the blank space. You instantly saved yourself a ton of work. Oh, and students self-assess using this thing once per grading term. Even less work for you. Here are the blankies:
https://docs.google.com/document/d/1pKx3ujLAY11Kw78LqaPaVbpEVl7at1j2-y1M-AjZId0/edit?usp=sharing
Again, the grading categories are not assessment. Agreed, the grading categories (Homework – 10%) are artificial. But how does that relate to assessment with rubrics? Like at all? Grading is a construct that exists outside of the type of assessment used. And my message of Authentic Assessment is that we should simply change the type of assessment used.
“…we need at least 3 assessments per grading category so we don’t screw over kids.”
Like you said, grades and assessments are two separate things. I see bad grading policies like the percentages you shared and reforming our assessments as two separate issues. We don’t NOT improve how we assess because our department has adopted a poor grading policy. If we are in a position where we are required to submit “at least 3 assessments per grading category” and we can’t get out of that, wouldn’t you rather it be an authentic assessment? Wouldn’t more teachers have the pull with administrators to exchange a traditional “test” from the “40% test” requirement IF they had a rubric? I know the grades stink, but grading with Authentic Assessments stinks significantly less than grading with tests.
I agree that the rigidity of the categories of grades is unfair to kids, particularly the idea of a mandate on homework and its part in a grade. That's a separate discussion with your principal on why homework is not necessary. Sorry, I am not able to fix that for you with Authentic Assessment. I know it stinks.
But that’s a minor problem compared to the fight against tests and quizzes.
But if you want to get rid of the tests and quizzes, I can help you with that. Just use the formative assessments that are authentically related to the story you are telling and then collect documentation. No more test shaming. I think we are on the same page there.
“…If we are in a position where we are required to submit “at least 3 assessments per grading category” and we can’t get out of that, wouldn’t you rather it be an authentic assessment?…”
Yes, but you don’t need rubrics to authentically assess.
“…Wouldn’t more teachers have the pull with administrators to exchange a traditional “test” from the “40% test” requirement IF they had a rubric?…”
Yes, but that would result in one new rubric per assessment they required. I don't know how else to proselytize the One Rubric to Rule Them All.
“Oh, and students self-assess using this thing once per grading term. ” Having students self evaluate for their final grade is what I do for the free writes. This highlights the metacognitive aspects of our work. They students become aware of what they should do. I will probably include a jGR form as well for their final. This is far LESS work that I did last semester.
I should also qualify that I don’t assess Performance (as ACTFL has defined it) because I find it a waste of time and an unnecessary step towards Proficiency that often costs schools mucho dinero. Instead I assess Proficiency, mostly because we don’t practice a thing. Here’s a post on that:
https://magisterp.com/2016/04/22/performance-vs-proficiency-why-i-choose-proficiency-or-at-least-not-performance/
See, I think that’s a problem.
I'm afraid the rest of the world (like my colleagues in Social Studies, Math, Science, ELA, and ESL) doesn't use the ACTFL words "performance" or "proficiency" as much as "summative" (an evaluation of overall proficiency) and "formative" (an evaluation of performance on ongoing instruction). We have a professional responsibility to do formative assessments because they help drive our instruction.
I suggest people in this group seek to use more standard assessment terms if we want to get administrators on board. I like the ACTFL less and less daily.
I have said before that you should be doing 90% formative assessments, even though the data turds will try to tell you otherwise. Formative assessments like jGR and retells drive instruction.
The holistic rubrics you shared that you called “universal” are great as an alternative to a final exam or other summative assessment when used with portfolio assessment. You even mentioned collecting documents over the semester to use as evidence, and this is exactly what I’ve been doing for years. It’s great.
But it’s not enough.
We have a professional responsibility to do formative assessments because they help drive our instruction. It is not unreasonable for our administrators to occasionally ask us to document a few of our formative assessments. You don’t have to fill out a rubric every time, and it could be simple like TPR or jGR, but you have to assess formatively. Please be careful with the statement “I don’t assess Performance” because it could be interpreted as “I don’t assess formatively.” I try to avoid “I don’t assess” in general.
Could we put a pin in summative/formative assessments and come back to it later? I am going to revisit this because it’s important.
Claire, this comment is making my head spin. Like are we going to take this fight to that level, and take on ACTFL?
ACTFL is like THE organization. Like COFLT, the organization I am on the board for, is our ACTFL state affiliate, so they pretty much have their fingers in all the pies in all the states. BUT, I do not always agree with them. But they are our national standards, and I am pretty sure that most districts and principals expect WL teachers to follow the ACTFL standards. In PPS that is the case anyway.
What often happens though is that teachers SAY they are following the ACTFL standards, but they seem to think that ACTFL is asking them to assess the discrete packets of language that support the functions at the different levels. So say ACTFL says that a Novice should be able to exchange basic personal information (which they do), then teachers will drill forty-three ways to say personal things (I am from… Where are you from… I live in… Where do you live… My address is… etc) and then test the kids on them, either in an “oral interview” or a multiple-choice test or asking them to produce them in writing etc. THIS IS SUCKY. And it needs to stop.
So the reason this made my head spin is because taking on ACTFL is huge. It is like the little mice getting together to take on the cats. I have a question now. Do we want to work within the ACTFL framework or build our own? Because I see nothing in ACTFL (maybe I am wrong) that precludes using authentic assessments or rubrics or what have you. ACTFL provides this framework for describing the progression people make, going from Novice to Superior. The framework does not actually dictate instructional or assessment practices, but in the hands of many teachers, they twist it to make it align with their desire to measure, rank, divide, and dismiss kids…which I am coming to believe is a form of mental illness that some people in the profession have, like they somehow believe that people are not all equally capable and their job is to communicate to kids and families who is In and who is Out.
Claire said:
…nonverbal/nonwritten responses that are not recorded on paper must definitely be recorded in a rubric….
A light bulb went on for me on that one. It means to me that if some kid answers a question with a kick ass response I make a note somewhere and THAT MOMENT becomes the assessment and lots of little moments like that in class become the grade. jGR is different as a rubric because it describes the student's static behavior over the course of the entire class. It's not specific. This approach rewards what they do in the moments of class, when assessment should mostly resemble instruction, right there, and it is much more specific than jGR, and I can't imagine the rubric to record that moment would be very complicated to make. Thus, I give myself permission to assess in that way even if mommy and daddy at the district office might catch me doing it and say, "No Ben! We give actual TESTS to see what they have LEARNED! And those tests have to come from the list of words we warned you to teach. Because they come from the book!"
“It means to me that if some kid answers a question with a kick ass response I make a note somewhere and THAT MOMENT becomes the assessment..”
That’s authentic assessment, Mr. Slavic. You’re honoring kids and giving immediate feedback.
During retells, I let kids dictate to me as I type on the projector. I've seen you guys do this too - Grant did a particularly good job with getting kids to dictate during the retell; he was silly and laughing with them and they didn't feel like they were being tested. Notice good answers, type it, re-read it, and gush about how awesome it is and the kid will feel proud to see it projected on the board.
You're doing it anyway; just save that text as evidence and watch the writing grow over the semester (tracking growth over time is something we do infinitely better than data turds do with traditional tests).
Better yet, have a girls versus boys competition and color code words each group dictates. They will think it's a game, but your administrators will see that this is superior to a test: it is authentic assessment. This assessment elicits the most appropriate early writing samples (dictation is writing for beginners) and it allows higher-order thinking and real communication AND makes differentiation more intentional.
If you wanted to, add students' names in a comment box in your Word document to show who said what. It takes one second of class time and makes kids feel noticed. If you save every story in one long Master Story document, you can use the "Find" feature and see what individual kids have contributed all year, tracking individual students' growth over time.
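If you want to skip the manual "Find" step, here is a minimal sketch that counts contributions per student from a master story saved as plain text. The bracketed-name tag is purely an assumed convention for the sake of the example, and the names and sentences are just examples.

```python
import re
from collections import Counter

# Hypothetical convention: each contribution in the master story file is tagged
# with the contributor's name in brackets, e.g. "El gato es morado. [Maria]".
MASTER_STORY = """
El chico va a la escuela. [Maria]
El chico quiere un gato morado. [Aidan]
El gato se llama Chicambos. [Aidan]
"""

def contributions(text):
    """Count how many tagged contributions each student has in the master doc."""
    return Counter(re.findall(r"\[([^\]]+)\]", text))

for student, count in contributions(MASTER_STORY).most_common():
    print(f"{student}: {count} contribution(s)")
```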
But I like your idea, Ben, of just any time, story or not, having a separate place where cool ideas go and are saved. I'm thinking some triumphant music is in order as we ceremoniously type and read aloud the kid's idea. Or maybe you could just have a designated chart paper pad.
When I taught Elementary, I had the cutest little bulletin board with their pictures and a little dry-erase speech bubble and when I caught them saying something kind, they got to dictate it to me and it went in their little bubble. They loved it.
Getting feedback and seeing your writing published is the anti-test; it is the teacher honoring your work, so students take pride in their work. I say make as big a deal as possible out of students participating and sharing their lovely voices. And like Ben, be sure to call this what it is: assessment.
“and lots of little moments like that in class become the grade.”
Yes! After a while, these formative assessments build to a summative assessment (think a kinder final exam), using what Lance created or another holistic rubric to evaluate our collection of assessments, our portfolios. But you do need "lots of little moments."
…notice good answers, type it, re-read it, and gush about how awesome it is and the kid will feel proud to see it projected on the board….
Why don’t we do this all the time? It’s like a free bail out move that keeps things going. I guess we don’t do it because speech is faster than writing and we would lose the flow. But the way you described Grant – can you share that clip here Claire? – makes me want to go do that writing down in the moment thing when they speak in all three of my block classes tomorrow! Dang boy there is so much to remember in this play!
https://benslavic.com/blog/important-new-grant-link/
Claire said:
…have a girls versus boys competition and color code words each group dictates….
This means that the Word Chunk Team Game is authentic assessment! And group work on top of that! I feel like this guy:
https://www.youtube.com/watch?v=l6i-gYRAwM0
Group assessments and peer assessments and self-assessments are all part of the puzzle. The fact that we use them means that we are getting an even broader view of students’ communicative proficiency.
It’s a cherry on top.
See, this authentic assessment goes pretty deep. There are deep roots for your little trees when you have them communicate as a group.
Claire—“But I like your idea Ben of just any time, story or not, having a separate place where cool ideas went and were saved. I’m thinking some triumphant music is is in order as we ceremoniously type and read aloud the kid’s idea. Or maybe you could just have on a designated chart paper pad.”
In my sixth period, we have a sort of class knowledge of the kids' emerging punning ability. The kids recently had a visitor observing and proudly shared with her all the Chistes, from #1-#6. Chiste #1 was when Aidan said "AdiDOS" in February, to two kids when they got called to the office. We all took a moment to applaud in honor of the first joke ever said by a kid in seventh grade Spanish all year long. My personal favorite is Chiste #3: "Chicambos – Clase, is the character a chico or chica? No, sings out Papa Smurf (a kid in class), it is a chicambos!"
Chico=boy, Chica=girl, Ambos=both
Oh, yeah, on someone's birthday they did not want to wear the lei that I have in my B-Day costume box, and our famous Aidan said, "Oh, no LEI gusta." So funny!
We all applaud whenever a new Chiste makes its way into the class memory. Making a class memory book where all the brilliance lives, so cute!
The music I see playing is “We are the champions” by Queen. 🙂
…if any rubric can't be distilled down to those two things [understanding and being understood] then we have no business assessing what it is we think we need to assess….
And that understanding and being understood of necessity reflect the nature of language as a "whole" thing and not pieces of something [quick quizzes]. This is all so new, assessing kids not just on what they can memorize or what they can identify as a one-word answer, but rather how they do in the i + 1 ongoing FLOW of things. It's holistic work, not divisive.
I think I’m starting to get it.
There’s two parts to the story.
In TPRS/CI we formatively assess more than the average ELA, Soc Sci, Sci, Math etc… teacher. So I agree with Lance: the less work the better. Yet I believe we should document moments where students have those "a-ha" moments and when they say some kick-ass things. Like Ben says, "in the moment." Not only for admin but as something for US that can be useful.
I am open to it seeing as I am a new teacher and I have to reflect on my practices and document my assessments.
I also agree that jGR is a little static and documents students' behavior ACROSS the lesson. It does not assess or evaluate the language acquisition process through non-verbal and verbal behavior in order to drive instruction.
That said, one thing is CYA; another thing is to bow down to the ignorant dept. chairs and admin. If our rubrics arm us to do the work we do, then so be it, but if we are fighting a losing battle then we should go out with a bang. (see Lance's last Spanish gig)
Just to be clear, my grading and assessment practices had nothing to do with that last Spanish gig ending the way it did. If my stuff was just theoretical I wouldn’t be so resistant to change, but they’ve been successful in multiple classrooms/schools.
I’m gonna play the Devil, here. So we document some “ah ha” moments using the TPR rubric. This is the process:
i. Determine which Stage the command was
ii. Determine if the student’s “ah ha” moment represented Some, Many, or Most of the commands understood
iii. Modify our instruction because, as Claire mentioned, we have a professional responsibility to do so
My questions are 1) what do we do to modify our instruction? and 2) did we really need the first two steps in the process above in order to do that? I contend that modifications occur with or without a rubric, and if you have to analyze a rubric in order to modify instruction on another day, it’s already too late for that student. The amount of documentation is going to get out of hand quickly, especially if 5 different kids have 5 different “ah ha” moments that required 5 separate rubrics each day.
Would a better strategy for this entire topic be to find out who has the least possible wiggle room and the most documenting needs and then find something that satisfies them with the least amount of effort? Google Form with a few fields? Ben could start a new post requesting data like the email one?
“Just to be clear, my grading and assessment practices had nothing to do with that last Spanish gig ending the way it did”
I'm sorry Lance, I didn't mean to say that it was your assessment or grading. It is just that it seems like there are people who are doing the assessment/data thing in order to appease admin or other people when, in actuality, they will not give us a chance.
In my opinion, it compromises the work we do. I'm one of the people who think that we only have to prove the work we do to ourselves, and not to other people who are ALWAYS going to attack us because they have drunk the "data" Kool-Aid.
So what I meant to say is that you stuck to your guns. Admin was not supportive, so "goodbye".
What I expect and want from the whole assessment piece is not doing more work for the sake of more work but to align my ethics regarding assessment in a way that supports language acquisition AND behaviors that facilitate the flow of communication. This includes celebrating what the students do and a reflection piece for myself and students.
Steve, you wrote: What I expect and want from the whole assessment piece is not doing more work for the sake of more work but to align my ethics regarding assessment in a way that supports language acquisition AND behaviors that facilitate the flow of communication. This includes celebrating what the students do and a reflection piece for myself and students.
Claire also recently made me see that we also need to be amassing data to forward the “movement” if it can be called that. To arm ourselves to be able to show the bosses that we are actually getting the results we know we are getting.
So, if I may, here is the list of goals I see being promoted for this assessment work:
1. Little work, grading, and paperwork, so we can keep our minds and hearts fresh
2. Supporting students emotionally, not making anyone feel like they are not good enough or not smart enough or just not talented at learning languages
3. Supporting and documenting behaviors that lead to understanding spoken and written messages
4. Supporting and documenting behaviors that lead to facilitating an open flow of communication.
5. Celebrating student ideas, contributions, and progress
6. Students reflecting on their progress
7. Teachers reflecting on their progress as providers of compelling, safe, and understandable messages. Steve, you recently said that you want data on the compelling-ness of your instruction.
8. Forwarding the movement by amassing instruments that reach the above goals (and others we may add), while demonstrating the kids’ acquisition and abilities
I want to get clear myself on the goals here. I feel like we are falling down the rabbit hole, but in a good way, because at the bottom there are all kinds of new strange and interesting and yet-undreamed-of things.
“Would a better strategy for this entire topic be to find out who has the least possible wiggle room and the most documenting needs and then find something that satisfies them with the least amount of effort? Google Form with a few fields? Ben could start a new post requesting data like the email one?”
Yes. I am in no need of hardcore assessment documentation that is meaningful to TPRS. I still have to do my Beginning Teacher program next year, which will once again include a Pre-assessment (probably a free write) and a Post-assessment (probably a written retell). I will grade on paper but not input it into the grade book. It is really just CYA to clear my credential because it's bogus and useless.
Steven, shouldn’t the Pre and Post assessment be the same (i.e. both free writes, or both written retells)? Otherwise, you have different variables.
Yes, there are variables. An ideal would have been both the same. I never asked my BTSA supervisor. Both assessments address writing. I hope my binder passes. Last time I did.
In the pre, I gave a minimum number of words to write.
For the post, I had them do a dictation and looked at the spelling of new words.
The first time I did it last semester, I had students do a translation of new words they had never seen. I did not like the effect it had on students. They felt dumb. So I am glad I just adapted what we already do.
Here’s how I see Pre and Post assessment:
1) Pre = kids do something they can’t do/suck at
2) The school year happens
3) Post = kids do the exact same thing now and don't suck, WOW, it's MAGIC!
Here’s a translation:
1) Kids don’t know the target language
2) Kids Read and Listen to More Target Language (RLMTL)
3) Kids know more target language than they used to
Witchcraft.
But isn’t it lovely that Krashen can use this formula to support best practices for us? Summative assessments are only beneficial in a handful of rare cases, but serve an important purpose.
That's even more reason to use authentic assessments that focus on 90% formative assessments to collect the data we need to end the data turds' obsession with summative tests.
I am in no need either, thank the gods and goddesses, but recently I have realized that I may still need to start collecting more data, if only to further the movement/profession/what have you. So I am interested in using and piloting new approaches, in 2017-18, because of that, not because I need to cover my ass.
Same here, Tina. The data is for me. Kinda like when we have a reps counter? I want to develop a mini-system to evaluate myself.
Ex: How comprehensible was your lesson (insert criteria)?
How compelling was your input?
Did you have a conversation or spin-off conversation with a student? If so, how many?
How slow did you go?
etc…
Yes, I agree those are key things to look at. How to measure “compelling”? I usually measure it in subtle ways, like looking at the kids’ eyes and posture. So coming up with a way to quantify that, I am stuck there.
Personally, I don't stop class to fill out rubrics. I notice right then and there if the person got it, and I figure out how fast or slow or what other scaffolding would be appropriate. I notice one kid a class period and fill out the rubric, a different kid each day, and make mental notes and go out of my way to recognize that kid. As kids pack up, it takes all of 30 seconds to fill out the form. I don't always fill in every criterion. I just check the box if it applies. It's actually quite simple, so I recommend you give it a try. You will be surprised at how easy it is to use.
Lance, I don’t think we are going to move to a separate form.
I know Ben speaks of mental health for teachers, but protecting kids comes first. There’s a war against testing that needs to happen, and we are training for battle.
You’ve got a lot going on, and I respect that you don’t feel you need to join in.
“…Personally, I don’t stop class to fill out rubrics…as kids pack up, it takes all of 30 seconds to fill out the form…”
– So how do you know whether that moment, perhaps 45min later, represented “Some, Many, or Most” of the commands understood according to the rubric? Remember, I’m questioning the use of these rubrics, not what you do in real time.
“…It’s actually quite simple, so I recommend you give a try. You will be surprised at how easy it is to use…”
– I’ve used tally/tracking systems. However easy it is to use them, it’s easier not to use them.
“…You’ve got a lot going on, and I respect that you don’t feel you need to join in…”
– I’m unemployed and have like two commitments each week so it’s certainly not about time. I somehow feel compelled to reply (if only to keep track of new comments), but at this point I think I should stick to my first comment that I don’t have anything new to offer, and will bow out of this one.
BAH, see? I can’t get away. Thoughts upon waking…
OK, so it’s rubrics. So far there’s one for TPR, one for Listen and Draw, and one for Retelling. What about one for each of the other activities under the 27 Strategies in Ben’s Big CI Book? I see the progression as follows:
1) Create a bunch of specific rubrics
2) Get tired of/confused by so many different rubrics
3) Organize and reduce them to the 4 skills (listening, reading, writing, speaking). Include a space on the rubric to “fill-in” what activity provided the assessment results, so one Listening rubric could be used with any activity that involves listening, and when it does, we write it down
4) Further reduce rubrics down to just 2 (i.e. Input/Output), and specify listening or reading in the fill-in along with the activity
N.B. Step 5 would be something like what I have, which is reducing to a single rubric that covers all bases, but still has the “fill-in” space to specify activity and whether it was a productive skill or not.
Whether it’s Step 3, or 4, there should be consistency in the rubrics. Instead of multiple traits like we’re used to, a single one like Angie’s (which is more of a scale than a rubric, but the terms don’t matter much) streamlines the process. Instead of ranges of frequency (e.g. sometimes, often), or quantity (e.g. some, many, most), the rubrics could be designed with a different measurement. The idea of processing came to me. We use our slow processor as the barometer for rock bottom understanding, and can ask the fast processor higher level questions. TPRS gets kids processing faster. Why not use processing speed instead?
ex.
Listening & Reading Rubric (as 1 of 2, or separated into first 2 of 4 skills)
Understands…
1 – Only with support
2 – Slowly
3 – Confidently
4 – Instantly
ex.
Writing & Speaking Rubric (as 2 of 2, or separated into second 2 of 4 skills)
Writes/Speaks…
1 – Only with support
2 – Carefully (= Slowly)
3 – Steadily (= Confidently)
4 – Fluently (= Instantly)
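And here is a minimal sketch of how those two scales could sit side by side in one consistent structure, with the "fill-in" space for whatever activity supplied the evidence; the field names are placeholders only, not part of any actual rubric.

```python
# Illustrative only: the two processing-speed scales above, side by side, plus
# a "fill-in" field for whatever activity supplied the evidence.

SCALES = {
    "input": {   # Listening & Reading
        1: "Understands only with support",
        2: "Understands slowly",
        3: "Understands confidently",
        4: "Understands instantly",
    },
    "output": {  # Writing & Speaking
        1: "Writes/speaks only with support",
        2: "Writes/speaks carefully",
        3: "Writes/speaks steadily",
        4: "Writes/speaks fluently",
    },
}

def record(student, mode, level, activity):
    """One observation: which scale, which level, and the activity that produced it."""
    return {
        "student": student,
        "mode": mode,
        "level": level,
        "descriptor": SCALES[mode][level],
        "activity": activity,  # the "fill-in" space
    }

print(record("Student A", mode="input", level=3, activity="Read and Discuss"))
```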
“if 5 different kids have 5 different “ah ha” moments that required 5 separate rubrics each day.”
“What about one for each of the other activities under the 27 Strategies in Ben’s Big CI Book? I see the progression as follows:
1) Create a bunch of specific rubrics
2) Get tired of/confused by so many different rubrics”
Lance, chill. No one is suggesting this. No one wants 27 rubrics. People here are pretty smart and they can do this without getting tired or confused, as long as we don’t get ahead of ourselves. Simplify, simplify.
If they confuse you, don’t use them.
You like how I have 2 number 3s? Numbers + Claire = typos.
Claire,
I know you are speaking generally. And I do agree that students with zero language proficiency are silent for a long time. I personally have done a lot with the ELLs in my school. The problem is that Tina basically says not to have novices perform tasks but to have them wait on their speaking, and I agree. But a lot of TPRS teachers don't respect the novice level, which is sad. Leadership role not up for grabs. Sorry, not an option. I think you're right; I am going to use other means to fix the issues I have because I can't do anything else. jGR is out for me since all rubrics have to be common and agreed upon by the dept. I have to assess presentational speaking and I have to have common assessments and curriculum. So I will work from within and do what I can.
Do you think the department could agree on jGR? Is it just you three rockstars or is it also the head? What if the three teachers got together and talked as a unified front to the head?
One of the three rockstars, as you put it, was the roadblock, but she and I just had a super interesting conversation and she seems much more open to jGR than before. So we are working on it. I will report back when I have something to report.
She is a cool gal and I bet she will come around!
I’ve been swamped with my BTSA portfolio. I’m gone for the weekend, now this thread has blown up. Where to start? Comments? The Onedrive docs?
Here’s my idea, very rough.
https://docs.google.com/document/d/1bBazO1G7EdyvvB7OEvxd9PxPt-vAqwXsnz3c-YJlGEM/edit?usp=sharing
Here are the rubrics I’m working with right now. I have to do some thematic-unit based assignments, but if you scroll down there are ones for free writes and translation activities. I don’t do any spoken assessment with stories, just classwork retells.
https://docs.google.com/document/d/1sB8dj2_JRgU4m80QBtyhYULMmvtiRrgENKYG2gYlbvk/edit
How lucky we are to have you, Claire, and your holistic ELL lens to see thru the idiocy.
As I understand it, the diff between interpersonal and presentational modes is that in presentational, there is no interlocutor with whom to negotiate meaning. So if you write a story, make a video or other ‘text’ for someone else to watch/read, that person can’t ask, in words or with their “I don’t get it” eyebrows, “what does that mean?” to get a grip on meaning.
Clearly presentational isn’t for the BICS level! It’s all about consuming/comprehending or creating WITHOUT support.
Thanks and you are so well informed on the foreign language piece and have experience developing curriculum. Email me, dear. I have a million questions.
You guys are blowing me away with all this! I really haven’t had the time to participate, and for that I apologize, but please know I’m with you in spirit and that I’m pretty certain our goals line up in how we want to treat our students in the classroom.
I’ve just accepted a full-time Spanish position in Monona, IA (my first school teaching position in my life-long home state of Iowa!). There seems to be lots of support there for me to do what’s best in my view. I’m replacing a teacher who is moving 15 miles up the road to another district, and this year was her first year teaching with TPRS, so now I’ve got CI peeps right down the road, not to mention my new students are now used to TPRS, of some variety.
I am excited to take what is coming out of the mill here, consume it all in the coming weeks, and get serious about starting fresh with a better approach to formal assessment. I’ve long been enamored by Lance’s simplicity rubric, but haven’t been able to defend it articulately enough to adopt it outright. I think we should have an assessment roundtable of some sort at iFLT. It might have to be in the evening though as many of us are presenting and coaching. Can we do that? Are others here interested?
OMG!!! Jim! I would love this! Except I am not going. But now maybe I am trying to twist and bend my schedule. HOly MOly this thread is crazy exciting.
I apologize bc I have nothing to add. All the rubrics I use are either lame ones I make up on the spot or they are adapted or stolen from y’all. But I will check to see if there is anything I have of value.
Congratulations on your new job, Jim. Like Jim, I’m swamped right now and can’t really process all of this important information.
I’m very interested in talking about assessment at iFLT. Evening works. I think we should Skype in Jen and anyone else who wants to be in the conversation.
I had a sort of a rubric on speaking that was really more of a way for students to reflect on their (perhaps pre-vocal) stage of spoken language.
Sometimes going from rubrics (descriptive) to the need to put things in the gradebook (numbers and GPAs involved), and then defending them as objectively as possible, is a question for me.
… I think we should have an assessment roundtable of some sort at iFLT….
I can’t do it during the day so that means if we are going to do it let’s do it on Wednesday night.
Put it in your schedule Jim and I’ll ask Claire to anchor it.
I love what you said here. It’s so Jim Tripp:
…I’ve long been enamored by Lance’s simplicity rubric, but haven’t been able to defend it articulately enough to adopt it outright….
You have to be from Iowa to be able to say things like that, as per this:
https://www.youtube.com/watch?v=xV7ZcVFSWWU
Jim you are Harold Hill II. He got into the music classrooms of River City with his message that anyone can play a musical instrument and you will be doing the same thing now next year in Monona with your message that anyone can learn a language.
While I don’t follow the reference to Harold Hill, I did enjoy the clip of the stubborn Iowans. Going from Minnesota Nice to Iowa Stubborn then. (Funny, I hadn’t ever heard that stereotype before.)
I am sitting today in the classroom of the teacher I’ll be replacing. Her first year doing TPRS. She loves it, and is just moving up the road to Waukon which is closer to her house. I see a Northeast Iowa network emerging…
btw, if anyone is looking for a Spanish job in SE MN, there are two open right now, one full time and one half time.
Hey Jim it is so awesome you found a new job. I am finishing my first year in Portland Public and I love it, so glad I made the move! I have found a great community of teachers who want to use more CI and TPRS and it has been really fun settling into that group! I hope you are as happy as I am with the change. It is good to get into a new place and learn and grow. 😀
I love the idea of a conference roundtable, though since I really don’t have to deal w/assessments & rubrics the way y’all do I prolly wouldn’t attend…I think we ought to have all the ones everyone has shared thus far in one easily accessible place…
Different rubrics fit different circumstances – clearly Angie needed one with point ranges while my ‘report card’ has only E (for Emerging) or P (for Practicing)…I spend most of the page trying to educate the parents on what we’re trying to do in a CI-based program.
And I use the term “roundtable” here loosely, nothing too formal, just a conversation where we can hash out some things. But you guys are already doing that here, so maybe for me it is just catching up with people as I see them and making them answer my questions for a little while. But if anyone wanted to meet one evening for a little bit to talk about how to assess in a simple way that translates into an easy grade, I’d love to join. I have a feeling it’s going to look somewhat similar to what I’m doing now. I want to get better at justifying it in edu-speak.
I’ll try to catch up on this thread soon.
Jim, I think that your idea of talking informally is good. Our various situations are our strength, because I believe that the major SLA principles do not change, even slightly. There need to be criteria for our assessments, whether formative (the ones during instruction) or summative. I think that's the first thing.
Even though ACTFL hides behind false ideas of language teaching, it is still ORGANIZED. We as a whole need to organize, even if it is as simple as Lance's two main points:
1) You can understand the target language.
2) You can be understood in the target language.
Even though we are designing now, I feel as if we are doing it without clarity. What are the criteria grounded in SLA, and is it authentic according to our common definition?
Jim I’ve asked Carol to find a venue for the Wed. night assessment round table.
Sweet!
I’ll be there.
If, after you nail down the logistics, you advise interested folks to show up with their various rubrics and docs it could be a very productive meeting!
In a slide in our presentation at iFLT Denver a few years ago we included our idea (very elementary/Progressive Ed) of collecting organic, in-the-moment data with nothing but a seating chart. The thinking wasn't as clear back then, so in our sample there are jGR elements, traditional 'student responsibility' elements (forgot pencil/6th grade), and tallying sentences (language behavior).
I haven't used it consistently enough to report on its success. Though now I picture the T working the room w/a clipboard & seating chart w/boxes for comments, maybe developing some shorthand for certain language behaviors that s/he observes in the moment…Oh yeah, the comments would be dated… As a data management tool, could it be any simpler/Old School? Always there for when something noteworthy happens; if you see an empty box you try to engage that kid and get something down if you need it…you can also write in reflections after the fact…
I love this, Alisa. I can use the seating chart to see who's on point and who is not. Actually, now that I think about it, we probably have to collect data BEFORE we decide on assessments. This way we can differentiate for students with particular needs. I'm going into this without assumptions: How can I interpret the data? There are many questions.
The other way to slice up ‘incidental’ data is by student – you have a page for e/student in a binder/folder and you flip to that kid’s page every time you wanna add a comment – then it’s easier to track that kid for report card grades…
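And for anyone who would rather keep that binder digitally, here is a minimal sketch of a dated, per-student comment log; the structure is just an assumption that mirrors one page per kid, flip to it, jot a note. The names and comments are only examples.

```python
from collections import defaultdict
from datetime import date

# Illustrative only: one "page" per student, each note dated, mirroring the
# binder / seating-chart idea. Names and comments here are just examples.

notes = defaultdict(list)

def jot(student, comment, on=None):
    """Add a dated, in-the-moment observation to that student's page."""
    notes[student].append((on or date.today().isoformat(), comment))

jot("Aidan", "responded to an unrehearsed question with a joke in Spanish")
jot("Maria", "dictated two full sentences during the retell")

# Flip to one kid's "page" when it is time for report card comments.
for when, comment in notes["Aidan"]:
    print(when, "-", comment)
```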
As a parent I think we also just wanna hear something specific & concrete about what our kid “looks like” in class – engaged, on-task, contributes, enjoys…vs. head down, unpleasant, passive, disruptive…again my elementary lens…