Here is a question I got from a teacher. It’s a long post, but if you are required to do summative long-term testing in your district, it is an important read:
Ben,
I just found out that all teachers in our school have a new requirement for next year:
…all teachers must prove that the students improve or learn over the course of their class. What will be your year-end summative assessment to show student growth/learning? (example: pre and post test)….
I am wondering what such a pre/post test might look like. I appreciate any input. Thanks.
Lori
My response:
In our field of language education, the research indicates that this kind of assessment is theoretically impossible. What students gain from comprehensible input collects in the unconscious mind, which by definition cannot be pried open and measured. In this way language differs from all other school subjects, except perhaps chorus and gym, where learning is also whole-brain and whole-body based.
Leaving that aside for the moment, I will say that I remember having to do this in the Denver Public Schools. Each department in the school had to state goals called “Student Growth Objectives” (SGOs). We got paid extra if we met them. We did a pre-test, attached a goal statement to it, and then tried to reach that goal in a post test. We had to do two per year.
You likely have them in your district, since they are everywhere, especially in districts where management has access to too much money for testing and evaluation while teachers are overburdened with too many students. But we’ll leave that ugly top-down topic alone in this discussion as well.
It is better to evaluate output.
In my files I found a couple of SGOs from past academic years:
Objective 1: Writing
Status: Approved
Decision: Pending
Organization: Abraham Lincoln High School
Role: Classroom Teacher
Rationale: This objective supports the unified improvement plan goals that address writing.
Population: Students enrolled in my 9th period French level 1 class.
Interval of Time: One school year
Assessment: Teacher-Made Assessment
Expected Growth: 80% of students enrolled in period 9 – a first year French class – will score between 4 and 6 points as per the DPS World Language Writing Rubric by April 2015.
Baseline: 100% of students scored a 0 on the fall pretest.
Learning Content: Students will acquire a variety of French phrases and structures in order to express themselves in oral and written communication.
Strategies: Comprehensible Input, TPRS, embedded readings, reading of short novels, interactive communication in French. (If you currently use the UCI StarChart™, you can neatly point to Phase 3 if someone asks you for a specific writing strategy that will prepare students for the assessment.)
In this writing objective, since all my classes that year were level 1, I had a baseline of zero sentences. My students couldn’t write anything on the pre-test, which should be given as close to day 1 as possible for obvious reasons no matter what the level. This is sneaky. Read on:
I used the DPS writing rubric, whose prompts were a series of pictures with a beginning, middle and end, usually four images per prompt, not unlike a comic strip. The expected written output ranged from 0 to 12 possible sentences.
So if a student could write twelve or more sentences about the prompt, then, by counting the number of clauses and evaluating the flow of the story they wrote, they would score a 12 on the assessment.
The results were impressive. Many level 2 and most level 3 CI-trained students’ writing was far better than what most AP students not trained with CI could do. I have spent decades in both worlds and I know that that statement sounds like hyperbole but it’s not.
My first year students wrote beautifully. No memorization was involved, because acquisition through CI happens out of reach of the conscious mind. The kids wrote real sentences based on real sounds banging around inside their heads from all the tableaux and stories they did all year.
I might add that whenever we did writing assessments in DPS over the years, and teachers spent an entire day of inservice grading them, it was clear who the best writers were. The CI-trained kids throughout the district wrote organically. It was noticeable, since we knew which teachers used CI and which didn’t when we graded at the spring inservices.
The CI kids’ grammar was not only much better than that of the traditionally trained kids – their writing also made sense. The bumpers (grammar concepts) were on the front and rear of the car, where they should be. The engine was in the engine compartment. The non-CI trained kids’ work looked sadly like no car at all. The bumpers were tied to the roof, the engine was lying in the street. The steering wheel was sticking out the back of the car. It was a clown car, and it was clear to everyone that these kids had been given no basis in the sound of the language to use as a euphonic starting point from which to express an idea in writing.
Those glaringly-deficient-in-writing traditionally trained kids couldn’t express an idea in L2 because they had not heard the language enough during the year. They had only studied the parts of the car separately. As a grader, I routinely used to find writing samples of level 3 or even 4 kids, including AP kids, that were just embarrassing to try to read compared with what so many level 1 and 2 CI-trained kids could do.
So this is one example of a cleverly planned way to beat the system when the district or your building hacks ask for longitudinal/summative data. Choose writing to test and set a zero baseline with a level 1 class. Having a goal of first year students writing 4 to 6 sentences after a year of CI is ridiculously easy for them to do, even for students who seem to be half asleep all the time – you know the kind I mean.
However, you can also use speaking. Here is my second SGO (unrelated to the first), from the spring that year:
Status: Approved
Decision: Pending
Organization: Abraham Lincoln High School
Role: Classroom Teacher
Rationale: This objective supports the district goals.
Population: Students enrolled in my period 9 French level 1 classes.
Interval of Time: One school year
Assessment: Teacher-Made Assessment
Expected Growth: 80% of students will score a 2 on the spring 2015 WL speaking assessment.
Baseline: Results of the DPS WL fall 2014 speaking assessment. (Again, since these were level 1 students in the first days of the year, the baseline is zero.)
Learning Content: Students will listen to and read French. (No need to go into this in any detail because the district data gatherers usually don’t understand CI. All they are looking for is if you met your goal or not.)
Strategies: Comprehensible Input methods
This SGO is even easier than the first one above, because if my students could just say one sentence – using French to simply communicate an idea – about a three or four panel prompt like the one used in writing, then they get a 2 (see below). Every single student can easily do that in any CI class by the end of the year.
Basically, the challenge on this rubric (below) is to get a 3 or 4. That involves nothing more than saying three to five sentences about the prompt. When a student has heard nothing but stories all year, especially stories built on images like we use now in the StarChart™, it is almost guaranteed that every single student will be able to do this easily after months and months of it (vs. doing worksheets).
Using these little panel comic-strip type prompts is an AP activity.
In a nutshell:
- 0: no effort to speak
- 1: one word spoken
- 2: one sentence spoken
- 3: a simple story with a beginning, middle and end, but lacking fluidity
- 4: a simple story with a beginning, middle and end, expressed fluidly and with command of the language
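For teachers tallying these scores across a class, the rubric above reads as a simple decision ladder. Here is a minimal sketch in Python; the function name and the two judgment flags (`has_story_arc`, `is_fluid`) are my own illustrative inventions, not part of the actual DPS rubric:

```python
def speaking_score(word_count, sentence_count, has_story_arc, is_fluid):
    """Map one graded speaking sample onto the 0-4 ladder above.
    has_story_arc and is_fluid are the grader's judgment calls."""
    if word_count == 0:
        return 0  # no effort to speak
    if sentence_count == 0:
        return 1  # isolated words only, no full sentence
    if not has_story_arc:
        return 2  # at least one real sentence, but no story
    return 4 if is_fluid else 3  # simple story; 4 if expressed fluidly

# A student who says one real sentence about the prompt earns a 2:
print(speaking_score(word_count=9, sentence_count=1,
                     has_story_arc=False, is_fluid=False))  # prints 2
```

The point of the ladder ordering is the one made above: the jump from 1 to 2 requires only a single communicative sentence, which any CI-trained student can produce by spring.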
Here are some comments I got in response to the above posts:
LORI:
Thanks so much, Ben. This helps considerably. Our school calls them “SLOs” – student learning outcomes – but otherwise, this seems very similar. Both of your SGOs are output-based; do you have input-based SGOs as well, or is the very nature of an SGO output-based? [ed. note: I touched on this above, saying that input skills, because of how language acquisition works as an unconscious process, are almost impossible to assess.]
My students will be in year 2, but their first year is project-grammar based, so I guess I will still use that first assessment as ground zero for the year, right?
I suppose that those rubrics are the property of DPS and you cannot link to them, but this still helps. Many thanks.
Ah, now I just noticed the rationale for output-based:
“Since input (authentic learning) is hard to measure (it occurs in the unmeasurable domain of the unconscious mind), the easiest goals are measurable goals and that means measuring gains in the output areas of writing and speaking.”
In the world of education “easiest” trumps “best”. Grrr.
GRANT B:
This is a very good strand to initiate here. If we are not on top of this game, others will be determining how we measure growth.
Personally I think we have to find a way to measure input and acquisition if we’re going to claim that it’s what’s important. But, like Ben said, it’s hard to measure. I don’t have an answer to how to measure it. But I do know that if we don’t come up with some good ideas the Kings and Queens of Scantronland, Multiple Choice-Ville and Fill-in-the-blank-dale will come to reign supreme.
I was asked to show growth this year without advance notice. We’re in the process of building it into the structure here. What I did was to have kids look at their free writes.
We documented date, # of minutes of writing, # of words written and then calculated # of words per minute. Before doing this, I made them go through and strike any English words that were not proper nouns. We also calculated the percentage of growth from first free write to last.
There were many, many kids who increased over 200% from first to last, with words per minute increasing from 5ish to 20ish in some cases. [ed. note: if you use the bar graphs when your kids do free writes, you will really make the SGO game easy to play for you and your students.]
What was even more interesting to me was that I didn’t ask them to do any writing until December. So writing was already emerging in the fast acquirers. Had I done a true baseline in September, the growth would have been even greater.
I know this is not a measure of quality of writing, but rather of writing fluency. But I wonder if there isn’t room for this type of growth assessment as a measure of what learning looks like: “I can write more words than before.” I’m not sure how we could easily measure “I can build longer sentences” or “I can create more interesting images.”
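Grant’s bookkeeping above amounts to a small calculation: words per minute for each free write, then percent growth from the first session to the last. A minimal sketch in Python; the session data here is made up, chosen only to mirror the “5ish to 20ish” range he reports:

```python
# Each free write: (date, minutes of writing, number of target-language words
# written, counted after striking English words that are not proper nouns).
# These three sessions are hypothetical examples.
free_writes = [
    ("2014-12-05", 5, 27),   # first timed free write
    ("2015-02-10", 5, 61),
    ("2015-05-20", 5, 104),  # last free write of the year
]

# Words per minute for each session
wpm = [words / minutes for _, minutes, words in free_writes]

# Percent growth from the first free write to the last
growth_pct = (wpm[-1] - wpm[0]) / wpm[0] * 100

print(wpm)                # [5.4, 12.2, 20.8]
print(round(growth_pct))  # prints 285
```

A per-student bar graph of the `wpm` list, like the one mentioned in the editor’s note above, makes the same growth visible at a glance.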
LORI:
I checked out the level 2 and 3 rubrics, too. They are super straightforward, which matters with these APPRs coming down the pike in NY (and probably in the rest of the country, too). Thanks to CI, my kids have soared in the writing department as well. That used to be the area I struggled with most. I used to chew my colleagues’ ears off, trying this method and that and never getting any satisfactory results. Now, all of a sudden, amazing writing seems to emerge all on its own. So, at least in this mode, progress would be very easy to prove.
BEN:
To briefly bring up the equity piece, the writing scores were extremely high from kids who have little English, poverty backgrounds, and who did very little actual writing in any of their other classes because their teachers had written them off.
Such kids learned how to write by listening to stories and reading in class. That’s how it works. You don’t “teach” writing – you just provide CI in the form of listening and reading and the writing falls into place as if by magic.
We can’t teach writing. That is totally impossible because of the conscious/unconscious piece. Trying to teach writing using conscious analytical procedures has been a stake through the heart of the ESL world for decades. It can’t work and they continue to try. Why don’t they just read the research?
Discrete grammar instruction and memorization of word lists didn’t factor into the pedagogy that brought the writing success described above. The students listen a lot and after years they can speak. They read a lot and after years they can write.
Try reversing that process and see what happens. You will have low overall scores, bored kids, and job-loss fear. We’ve said it before, but it bears repeating: when we focus on writing in and for itself, especially when that is done too early, writing scores invariably go down, as per Krashen, Annick Chen, etc. We actually proved that in DPS one year and we have numbers, but that is a story for another time.
BEN:
I had one of my slowest writers once write a sentence that I know came straight from all the auditory CI he had heard this year, and he nailed the spelling as well. It reflects what you said, Lori, that “amazing writing seems to emerge all on its own.” Gosh, maybe Krashen is right about that natural emergence stuff!
Let’s be clear: this student heard it, he processed it, he went to sleep each night, his mind parsed much of what he had heard that day and discarded some of it, his sound-based vocabulary built up slowly, and when he then tried to write, he accessed that sound base, it turned into writing, and he barely had to sweat, because the writing was so natural.
Had I spent the year teaching him to analyze the language with his conscious mind, his writing would have looked like the crap that the non-CI teachers in DPS (40 of them) gave us to grade last weekend, from the scattered, rule-fed, fill-in-the-blank scrambled minds of their students.
I even inherited a bunch of brilliant AP kids who had had 3 previous traditional French teachers. Trying to read what they wrote was impossible. Trying to repair the damage was impossible, even though I routinely brought donuts in. Their minds were grammar mush and my first year students that year could write much better by spring. Talk about a waste of tax dollars, let alone the serious eroding of student confidence that that brings with it!
SHARI:
Are all of your students year-long students? I only have students one semester at a time. The time span between semester 1 (French 1) and semester 2 (French 2) can be a year or more later! Can TPRS and CI still work under these conditions?
BEN:
CI knows NO limits in time, in my opinion. The more the students experience, the better, because once input in the form of listening and reading has been put into the students’ minds, it stays there. If it is an evening college class that meets once a week for 10 weeks, then doing comprehensible input during every available instructional minute is best. Nine-week exploratory courses – same thing. One-semester and year-long classes – same thing.
The material collects and piles up like snow in a blizzard, and from all that snow, output magic happens. All output first needs a big, strong foundation of input. How language teachers have been allowed for so long to ignore the research about how people actually acquire languages is kind of criminal, actually, to state a truth. It’s criminal professional malpractice.
I would even say that CI is the ONLY THING that can work when there are long periods separating formal study as you describe above.
The gains are real with CI. You don’t get false reads as when teachers try to get kids to learn a language by memorizing rules and such, which is akin to trying to learn yoga from a book without actually moving your body into any of the asanas.
It’s like teaching a kid to ride a bike from a book: a year later they are no better off than when they started studying how to ride a bike from a book. But if they actually rode the bike every available minute and then got off it for a year, they could get back on after the long break with ease. It’s all about wiring the brain – that’s all that CI really does.
