jGR 1, 2 and 3



67 thoughts on “jGR 1, 2 and 3”

  1. This is gold. It shows how you can, in fact, in spite of my own reservations, doubts and protestations, marry the other two modes with CI instruction.

    Brilliant rubrics, and I am sure, James, that this is not just a reflection of your own personal preferences as an individual teacher but, as written above, can be used by almost everybody in the group. These will certainly be well received by the rest of the group and, if I may say so, I think they are revolutionary in what they bring.

    Now we can begin the process of discussion leading to what we could call final docs, as jGR is now pretty much a final doc and in active use by many of us. I prefer to make the rubrics for levels one and two pretty much as James has done, and worry about rubrics for more advanced levels later.

    You found a perfect place for the quizzes and for translation of texts. I would ask whether dictee should be included in interpretive or presentational. Probably interpretive. Yeah, interpretive.

    So interpretive includes:

    auditory quizzes
    translation of texts
    dictation

    And presentational includes:

    storyboards
    freewrites

    Wow. I’m glad this question came up. Can anyone say breakthrough?

    Now, next, with your permission, let’s get these into categories and name them. This is perfect for us to set up to use next year. Imagine. Like jGR, people can tweak these two new rubrics for their own needs.

    On the names, I’m thinking jGR 1 (jen’s Great Rubric) continues as the Interpersonal Mode rubric, then we add:

    jGR 2 (james’ Great Rubric) is the Interpretive Mode rubric, and
    jGR 3 (james’ Great Rubric) is the Presentational rubric.

    I want to keep jGR bc I “think” in that way now and it is like code for us on the PLC.

    (Note: I don’t like that people can currently send our rubrics around the net, bend and change them for the worse, and put out distorted versions of our work. Should we consider copyrights on jGR 1, 2 and 3?)

    Of course, I’m just yammering here as I am geeked about what James has done and need to get back to my class. We can decide how to react as a group and how to incorporate it. But I don’t think the above ideas by James have any weaknesses. They are concise, accurate, and for the first time in the instruction of languages, release us from the old and tie us to the actual new in terms of assessment.

    Congratulations, James!

  2. I think James has done a lot of good work in coming up with standards-based grading that fits well with what we do in the CI classroom. The only issue I see is that, though the rubrics fall under Presentational and Interpretive, I’m not sure how they really explicitly tie to the ACTFL Performance Descriptors (http://www.actfl.org/publications/guidelines-and-manuals/actfl-performance-descriptors-language-learners) (or even the Proficiency Guidelines: http://actflproficiencyguidelines2012.org).

    For me, I want to assess students using standards that are explicitly tied to those descriptors. Yes, I know that they focus on output, which is definitely not the focus in the CI classroom, BUT I think that the kind of output that they’re talking about is completely doable in the CI/TPRS classroom (i.e. progressing from word level communication to simple sentence/question communication). I would use those activities (quizzes, translation of texts, storyboard, freewrites) and hold them up to proficiency-based standards and assess that way.

    My idea for standards-based grading is here: https://docs.google.com/document/d/1W0HHNQtOl2-GQtljzXvRDZdIGF83gPF4t3NWG36EeCY/edit?usp=sharing

    I guess my whole point is that, though I love James’ standards-based grading system for the TPRS classroom, it’s only really possible to say that they align with ACTFL’s standards on a surface level, right? Yes, activities like freewrites are presentational, but shouldn’t the standard be looking at what’s emerging in the output that the students are creating? (This is what the performance descriptors and proficiency guidelines point toward.) Are they able to “communicate information on very familiar topics using a variety of words and phrases, but struggles greatly using sentences that have been practiced and memorized”? Well, then in my eyes in Spanish 1 they are right on target (that would get the student a 3.0 for the presentational standard).

    Also, yes, the ACTFL Standards (http://www.actfl.org/sites/default/files/StandardsforFLLexecsumm_rev.pdf) specify the WHAT of the language classroom, which all of those activities definitely fall under (for example Freewrites would definitely be standard 1.3 – “Students present information, concepts, and ideas to an audience of listeners or readers on a variety of topics”), but I am also concerned with the HOW WELL, but again, not in a way that I think is unrealistic for the TPRS/CI classroom (based on the standards I came up with, aligned fairly explicitly to the performance guidelines, though I include JGR and a standard for quizzes).

    Maybe I’m over-thinking this, and please don’t think that I don’t appreciate all of the work that you have done James. I just wanted to air my two cents. Ultimately I wanted to create a standards-based grading system that wasn’t specifically tied to any one activity and used output as only an indication of where a student lies on the proficiency continuum, but that does not mean that output is the focus of my classroom. Comprehensible input is still the river that carries my students to fluency, but I think looking at the output that happens in the presentational mode, for example, can be a nice indicator of where they are at along that river.

    Thanks for all of the great discussion and for these new standards to sink my teeth into :).

    1. Just an additional thought: when you say that students will be able to translate “into English written Latin texts not previously seen but containing familiar vocabulary and content”, how well do they have to do that? The whole text then? How much of the text would you say would be sufficient that they would have to translate to allow them to get a 4.0? What if they only get half of it translated, but are missing a lot of words? I guess I’m trying to get at more of a specificity, as my students would probably ask questions like that. And parents as well.

      Also, how do you handle a student that can, by some fluke, translate texts into English, but struggles with dictation (due to, maybe, a difficulty with decoding auditory sounds, etc.)? Do they get a 2.0 or a 4.0? Or something in between?

    2. Nathan, I went to that Performance Indicator page and felt a negative visceral reaction. That doc was written by eclectic teachers. The performance indicators differ from the proficiency indicators in that the former talk about what happens in a classroom and the latter about actual acquisition, as per these statements taken from the site:

      Performance: Performance is the ability to use language that has been learned and practiced in an instructional setting.

      Proficiency is the ability to use language in real world situations in a spontaneous interaction and non-rehearsed.

      These two statements from the same document could not be more opposed to each other. That is trouble.

      The first is all about school and measuring what kids learn in school and do – it turns language acquisition into seeing how well you can get the seals barking and clapping in the circus.

      The second describes real things that emerge when language is actually acquired. It describes what we do and what Krashen is all about – real language gains.

      I am offended by the performance indicators.

      Nathan, you also said:

      …shouldn’t the standard be looking at what’s emerging in the output that the students are creating?…

      My response to that:

      No. I also don’t jump off my bike in a 100 mile ride to see how I’m doing.

      Also you said:

      …I wanted to create a standards-based grading system that wasn’t specifically tied to any one activity…

      My response:

      We don’t do activities. We do CI. We focus on structures.

      And you said:

      …does not mean that output is the focus of my classroom…

      I respond:

      Good. Because if you want to measure output in your classroom before they have had at least 1000 hours, you are going into the garden and hoping to see flowers and mature vegetables and vines and all that great stuff you see in fall gardens, but in May. The plants have to grow first before we can enjoy them at harvest time.

      And you said:

      …looking at the output that happens in the presentational mode, for example, can be a nice indicator of where they are at along that river….

      I say no to that as well, in the same way that we can’t dig into the ground, pull out a seed to see how it’s doing, pick at it, analyze its growth, and then stick it back into the ground and expect it to grow, patting the ground a bit and telling it that it’s doing a good job. Languages are acquired unconsciously, and there is a reason that certain things are kept hidden from meddlers, like the formation of a child, and all the important stuff in life. I see the performance indicators as the product of too many smart people who never really got Krashen going to work for ACTFL, and who have never been called on their failure to grasp the single most important factor in language acquisition and the single most important concept on this entire website, in my opinion: that we cannot acquire a language by thinking about it and measuring its progress. Of course, that’s just me.

      I trust our friendship can take a little friction here. I certainly don’t mean to criticize. You know that. I just have to state my own personal truth and reaction that the document about performance indicators feels in some deep way just wrong, schoolish and therefore foolish, and not at all in touch with what I think Krashen is saying about how we acquire languages.

      Of course, keep in mind that I am so right brain dominant I could not possibly see this thing accurately. It’s just what I think as a hippy freak. I like how Grant has taken a more middle ground on this thing. What he says is do-able. Go read that.

      1. I agree with you that I don’t like the Performance Indicators (I think it’s dumb that they’re divided up into the four ‘skills’ of reading, writing, speaking, etc., and I also agree that it’s very school-ish and doesn’t really work well with what we know about language acquisition). That being said, I think the Proficiency Guidelines are good and basing SBG off of them isn’t too out of the question, which is what I did.

        You said, “We don’t do activities. We do CI. We focus on structures.” to which I totally agree. The issue I had with James’ standards is that they seem to be tied to specific activities, which I find too narrow for the type of CI I do / want to do / may do in the future in my classroom (thinking Movie Talk, discussing a photo / youtube clips). I needed more holistic standards, I felt, that would allow me more wiggle room as far as activities to do in class AND that tie to something that, as you say, “describes real things that emerge when language is actually acquired. It describes what we do and what Krashen is all about – real language gains.”

        I also agree with you “that we cannot acquire a language by thinking about it and measuring its progress.” Grading, I think, IS ridiculous for acquisition, so I also completely agree with you on this. And I can understand how James’ system allows us to cut out a lot of the crap that comes from traditional grading schemes. For me, though, I find it helpful to see how students are progressing along the proficiency continuum — you all in DPS do that with your finals, right? My version of SBG just brings that into the classroom on a day-to-day basis. We may ultimately disagree about the role of measuring HOW WELL in the language classroom, but I think that’s small beans when compared to how much we agree about how language acquisition is accomplished in a classroom setting.

        One thing, too, is that the freewrite is completely output in our classroom, so why do we do it if our classrooms should be constantly centered around comprehensible input? What’s the purpose of a timed write, then?

        And of course there are no hard feelings! Discourse is important and expressing opinions is also important, and I think these discussions are good for us to have. Especially since I consider you all colleagues and friends that are striving for the same things!

  3. Grant Boulanger

    Thank you for initiating this dialogue!

    I, like James, am seeking a META rubric for Interpretive. Something that encompasses the standard, as written, and can be applied to any topic or theme.

    The standard (Communication 1.2) says, paraphrasing, that a student will UNDERSTAND and INTERPRET _written_ and _spoken_ language on a variety of topics.

    I agree with Nathan that we have to somehow get at the “how well” if we’re going to say they are approaching, meeting or exceeding the standard. But the simplicity of what James has put forth is a huge strength in my eyes.

    I am envisioning the following:

    Translate written text from current learning: 4. Very accurate 3. Accurate 2. Somewhat accurate 1. Not accurate
    Read a text to another person in English: 4. Easily and accurately, with no hesitation 3. Accurately but with hesitation 2. Inaccurately and haltingly 1. Hubbawha?
    Responds in writing to simple questions on current topic: 4. Always accurate (here you review their quiz grades: if they’re always getting 9s and 10s, then it’s a 4; if 7s and 8s, then a 3)
    Dictation: 4, 3, 2, 1
    When asked a question in TL in classroom discussion: 4. Almost always responds in TL 3. Can almost always respond in L1 2. Usually can respond in L1 1. Can’t respond

    This is what I’m after, I think. Every two weeks, I can go through this rubric and mark where they are on each of these items (maybe more or fewer) and then I can see where they’re at. If I draw a vertical line down the paper and the kid’s hitting mostly 3s, he gets a 3 for those two weeks.

    This would correspond to how I’m grading jGR currently too. Every 2 weeks I put in a score based on what I’ve observed. This would also be efficient. I could rotate weeks. This week jGR1, next week jGR2 with jGR3 reserved for once or twice per term.

    What are your thoughts???

    1. In my ideal world I assign no more than 3 numbers for each kid, one each for interpretive, presentational, and interpersonal. (Many of us are already doing one of these numbers via jGR.)

      When you start dividing interpretive and presentational into even smaller elements and giving each of those smaller elements its own number, you increase your work load a ton. That happened to me this year and next year I was hoping to simplify everything to just three numbers per kid.

      1. Grant Boulanger

        So, if the standard says understand and interpret written and spoken language, then we have to account for that. And I’m not saying that your proposal doesn’t; I’m thinking about defending this in future dept meetings and to administrators. So, stating it using some of the same verbiage used in the standard itself facilitates that.

        I agree that it’s more work, but it’s still much less work than creating a different performance rubric for every single time I assess them.

        Here’s what I’ve got so far:
        Write current text in English
        Read current text to another person in English
        Responds to written comprehension questions on current topic
        Responds to simple oral comprehension questions on current topic

        Each of these has a 5, 4, 3, 2, 1. This allows me to demonstrate that a kid has trouble responding to oral comp questions but can answer the question on paper just fine.

        Here’s what each “grade” means:
        5= Exceeds Standard
        4= Meets -> Can do Easily and Accurately
        3= Approaches -> Can do mostly accurately with hesitations
        2= Below -> Inaccuracies, takes effort
        1= Far below -> Unable to do task

        If we’re after fluency in interpretation, I think the “can do easily” helps us get at that.

        So, once every two weeks, sit down and put an X in each of the four boxes for each kid. Then eyeball it. If they have 4, 3, 3, 2 they get a 3. If they have 2, 2, 2, 3 they get a 2. There won’t be radical fluctuation, I don’t think. Perhaps if one freezes when asked a comp question directly in class, then that score could go down, compared to the ability to translate text.
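        Grant’s “draw a vertical line and eyeball it” step amounts to taking the median of the per-category scores. The sketch below is only an illustration of that arithmetic; the helper name `composite_score` and the choice of the median are my assumptions, not anything from the PLC documents.

```python
from statistics import median

def composite_score(category_scores):
    """Collapse per-category rubric scores (1-5) into one mode score.

    The median matches the two worked examples in the thread:
    mostly 3s -> 3, mostly 2s -> 2.
    """
    if not category_scores:
        raise ValueError("need at least one category score")
    return round(median(category_scores))

# The two examples from the comment above:
print(composite_score([4, 3, 3, 2]))  # prints 3
print(composite_score([2, 2, 2, 3]))  # prints 2
```

        In practice the teacher just eyeballs the column of X’s; the point is only that “mostly 3s” is a median, so no bookkeeping beyond the four marks is needed.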

        Anyway, I’m liking these four categories, and it’s simple enough for my needs. Is it written in a way that covers all situations in a CI class in which they are asked to interpret language without the ability to clarify meaning?

          1. Grant Boulanger

            Yes, correct: a composite score for each of the three modes. Rotating every 2 weeks between jGR1 and jGR2, with rare presentational evaluations based pretty much solely on free writes during the first 2/3 of the year and a few orals in the last 1/3. More frequent presentational evals as the year continues.

  4. (Sorry for the rambling today. My students are doing the Tripp-inspired project so I’m bouncing around answering questions and motivating while writing in between.)

    Grant, rotating weeks is what I have been doing this year when I had more standards. I found, though, that having so many things to check was just a pain in the butt. I agree with you and Nathan that what I am planning doesn’t get explicitly at the “how well” a student performs on any given task, but would it be bad if I said I don’t really care? I grade all the steps–like “able to translate,” interpretive, level 4–as pass/fail. Was it good enough or not?

    Nathan, you expressed concern with this when you said: “I guess I’m trying to get at more of a specificity, as my students would probably ask questions like that. And parents as well.”

    In my experience so far, this problem goes away if I am super liberal with allowing retakes. If I say, “No, this isn’t good enough,” but then in the same breath I say, “But when you do better next time your grade will go up without being held down by this earlier bad attempt,” I avoid being the target of animosity. Students, if they are serious about raising their grade, will gladly take the retake, and parents will start being mad at their kids–not at me–for not taking the retake.

    Keep in mind that in a CI classroom students for the most part do very well. And the teacher has a very good idea of where every student is just from the day-to-day interactions in L2. So I pretty much know before giving a translation assessment who’s going to do well and who’s not going to do well. And the students know it, too.

  5. Nathan, as far as aligning everything explicitly with ACTFL verbiage, I have tried to do that in the extended, “teacher version” of my standards. See the document at the link below and let me know what you think.

    https://docs.google.com/document/d/1VkAw12ws23P9-J6EnW_dCmzWC21U4VbfWRJ-HEntV68/edit?usp=sharing

    Basically I stretched it a bit, but at least I have something to say about how every task on my rubrics does align in some way with ACTFL. And keep in mind that as a high school teacher, even in my wildest dreams, I only get to intermediate after four long years. According to ACTFL itself, my students in a 4-year sequence hang out mostly in Novice Land.

  6. Oh, Nathan, I almost forgot you raised this really good point:

    “The issue I had with James’ standards is that they seem to be tied to specific activities, which I find too narrow for the type of CI I do / want to do / may do in the future in my classroom (thinking Movie Talk, discussing a photo / youtube clips). ”

    So, the idea is that the tasks I have for the various levels of interpretive and presentational are so narrow that they won’t allow for movie talk, L&D, RT or another good idea we might have mid-year next year. I don’t think having set tasks like this is that limiting. Let me try to explain.

    First of all, the tasks for each level have to be CI friendly. They can’t be based too heavily on output. But, like you said, Nathan, they need to be able to accommodate all the various TYPES of CI we do in our classes, as well as the various TYPES of CI we have not yet discovered. The tasks need to be serious heavy-hitters and able to fit after a huge variety of CI-delivery-methods.

    All of those things you described, MT, RT, L&D, Y&D, etc., are to my mind CI-delivery-methods. They are not tasks that need to be put into an assessment scale. In class the teacher facilitates, for example, some Read and Discuss that is interspersed with RT (Readers Theater), or maybe the teacher does something similar but around a video on YouTube via Y&D, or maybe the teacher wants to teach some structures with a picture via L&D. All of these methods just deliver the CI in (hopefully) sufficient variety to satisfy our students’ need for variety.

    There is no assessment built into L&D or Y&D or RT by design; the assessment comes after, in the form of quick quizzes or freewrites or essential sentences or something else that is fair and won’t interfere with the CI or with our students’ fragile egos. We as a community have been deciding on these forms of assessment, but in my time here I have noticed that new assessments don’t come up anywhere near as often as new methods to deliver CI in a compelling way.

    So our forms of assessment, the tasks for each level of the rubric, can be stable, but the CONTENT of assessment, the CI and its method of delivery, can vary in any number of ways.

    So, for example, if you want to do some Look and Discuss (L&D), do it and then give your kids a Quick Quiz on it (interpretive, level 3) or have them do a freewrite on it (presentational, levels 2 and 3), etc. In this way the students become very comfortable with the various heavy-hitter assessment types in our classes, but they won’t become bored because the method of delivering CI can tolerate variety.

    1. …all of those things you described, MT, RT, L&D, Y&D, etc., are to my mind CI-delivery-methods. They are not tasks that need to be put into an assessment scale….

      That is why Grant used the term META rubrics that don’t cripple us with detailed notation of grades, esp. about output, but that can be done, as Grant said to me, every week or two.

      I will be interested in Nathan’s reaction to your comment, James. This is the hard part of defining what our goals are, how we want to align with which ACTFL document, and to what degree. That is our choice. We get to do what we want.

      We can all go in different directions with this but I think it wiser to come together on a final version in the next few weeks or months that we all agree on. We can then be ready with these two new documents (maybe we can turn them into posters like jGR 1 as well) and all be on the same page to test them throughout next year.

      One word, let’s keep this thing simple.

  7. And you had this question, too, Nathan:

    “Also, how do you handle a student that can, by some fluke, translate texts into English, but struggles with dictation (due to, maybe, a difficulty with decoding auditory sounds, etc.)? Do they get a 2.0 or a 4.0? Or something in between?”

    If the student is struggling majorly with dictation, which hardly ever happens if your dictations are kept in bounds, then the student would end up with a 1 or a 1.5 on the interpretive rubric. Remember, the 2 is reserved for those who have shown proficiency with dictations. So even if the student translates perfectly, the overall interpretive rank is kept low until the student shows that he can complete a dictation satisfactorily. (This doesn’t mean a perfect, flawless dictation.)

    If this sounds harsh, remember that I am super liberal with retakes. The students can schedule them after school or do it again the next time a class does a dictation during class time. In either case the rank goes up–NOT WEIGHED DOWN BY THE EARLIER, BAD ATTEMPT–as soon as the student does a good dictation.

    In my experience a translation of an unseen passage is more difficult/rigorous than a dictation. So I put it higher up. Your experience might be different. But if a student shows he can translate but not take dictation, that to me is a huge sign that we need to work on his dictation skills in a major way.
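    The capping logic here (a lower rung like dictation gates the overall rank, no matter how strong the higher rung is) can be sketched as a tiny rule, reduced to just two rungs. The function name and the exact number assigned to each case are my illustration of “a 1 or a 1.5”, not James’ actual rubric.

```python
def interpretive_rank(dictation_ok: bool, translation_ok: bool) -> float:
    """Gated ladder: dictation sits at level 2, translation above it.

    A failed lower rung caps the overall interpretive rank even if
    the student succeeds on the rung above it.
    """
    if not dictation_ok:
        # Strong translation nudges the rank up slightly, but the
        # student stays below 2 until a dictation retake succeeds.
        return 1.5 if translation_ok else 1.0
    return 4.0 if translation_ok else 2.0
```

    With liberal retakes, a later successful dictation simply flips the gate and the rank jumps, not weighed down by the earlier attempt.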

    Of course the tasks for the different levels can be altered on an individual basis if a student is on an IEP.

    1. Hi James,

      In answer to Nathan’s question about kids who don’t do so well with dictations and where they would fall in your rubric you answered:

      “If the student is struggling majorly with dictation, which hardly ever happens if your dictations are kept in bounds, then the student would end up with a 1 or a 1.5 on the interpretive rubric. Remember, the 2 is reserved for those who have shown proficiency with dictations. So even if the student translates perfectly, the overall interpretive rank is kept low until the student shows that he can complete a dictation satisfactorily. (This doesn’t mean a perfect, flawless dictation.)”

      Let me ask you a question James. What are you assessing in a dictation?

      I may be wrong in interpreting your answer, but it seems to me that the purpose of a dictation is not to test our students’ writing abilities or lack thereof. Rather, we are testing their ability to hear and recognize the sounds and transcribe them. After all, the only requirement for a good grade in a dictation is the ability to copy the right version from the board.

      Dictations to me are just another source of comprehensible input, and another source of repetition of whatever structures/language you are working with. We don’t tell them that of course b/c we want them to believe they are writing and we want to give them confidence.

      I did a dictee yesterday with my kids based on a Y&D we did in class, and one day after having done the reading. So my kids had heard, had read, and had translated the script we built around the clip, and they knew the stuff very well, at least auditorily. Yet when we did the dictation, it was obvious they couldn’t write that stuff yet.

      But it doesn’t matter b/c all I’m grading is their ability to copy the correct version from the board onto the second line of their piece of paper. Dictation should only assess that, IMHO.

      Dictees I believe are not meant to teach kids how to write. The only thing that will help students to write, I think, is reading. Just like listening leads to speaking, reading leads to writing.

      Dictation, IMHO is just another form of CI, hence the need for complete silence while doing it.

      If I were to assess my kids on their ability to write correctly based on what they heard, they would all flunk.

      The reason for that is that every language is different in its idiosyncratic orthography. Some are somewhat transparent, some are somewhat opaque, and some are in between.

      So in some languages, such as Spanish or Japanese, there is a transparent grapheme-to-phoneme correspondence making them easily decodable, whereas French is way more opaque, and the sounds don’t correspond to how they are written, making it more difficult to decode and write.

      I think that Latin would fall in the category of transparent languages because the correspondence between the sounds and their orthography is easy.

      So are we to penalize students who are struggling in writing through no fault of their own, b/c they’ve chosen to learn a more opaque language?

      Maybe I misread you, James, and perhaps you can explain what you meant.

      Thanks!

      1. I think what you said here, Sabrina, “Rather, we are testing their ability to hear and recognize the sounds and transcribe them. After all, the only requirement for a good grade in a dictation is the ability to copy the right version from the board.” is what James means when he says a student is ‘proficient’ at dictation in his classroom. Now of course he has to chime in on this, but that’s what I came to understand from him!

      2. Sabrina,

        Nathan is right. I completely agree with you about what constitutes a good dictation. That’s why if a student is “struggling with dictation” (i.e., not copying from the board), then the student needs to work on that before moving up the ranks.

        And I agree that a dictation is NOT writing, it’s listening. That’s why I put it under interpretive and not presentational.

  8. James:

    I can see how being super liberal with retakes (as I think anyone should be when doing SBG) would alleviate many issues with the ambiguity of ‘not good enough’. On this, as far as getting at the ‘how well’ we probably just have differing views. The standards I came up with are based on a proficiency ‘how well’ since they mainly came from the proficiency guidelines. Your standards mostly came from what we do in the CI classroom currently as well as the ACTFL standards. Thus, the two versions of SBG get at different things in the CI classroom.

    To comment on your “teacher version” of your standards, I like how it explicitly ties to the ACTFL standards. If you look at our standards side-by-side for, say, Presentational Language Level 1, you’ll see that they’re very similar.

    For 3.0, for example, yours says: “In addition to 2.0 content, students will be able to complete a 50 word Latin free-write in five minutes about Latin texts which they have read.”
    Presentational, Novice: “Communicates information on very familiar topics using a variety of words, phrases, and sentences that have been practiced and memorized.”

    While mine says: “Student can communicate** information on very familiar topics using a variety of words and phrases, but struggles greatly using sentences that have been practiced and memorized.” The only difference is that, in mine, I differentiate between communicating at the word level and communicating at the sentence level, as that would be something for the intermediate level, which would not be something to look for until Spanish 4 (Intermediate Mid). I also don’t focus on the word count, but rather look at the content of the language in the freewrite to ‘assess’. (i.e. when I look at the language in the freewrite, I will look for words and phrases and look to see if there are any attempts at fully formed sentences, and if so, they’ll get a 3.0, which is the target for them. This, I think, is not a ridiculous expectation for a Spanish 1 student, especially in the TPRS/CI classroom.)

    I now understand better the difference between activities for assessment and activities that you do in class and how they relate to your standards. I think what you’re saying makes sense and I agree with you. I didn’t realize it really, but my standards and the formative assessments that I would use with them are also specific, but don’t begin to describe all of the things I would do in my classroom. So yeah, moot point there on my part.

    I think reconciling our two versions of SBG might prove difficult as they come from two different places (proficiency guidelines vs. ACTFL standards). I like the simplicity of your standards though, and I like how you’ve tied each standard more closely to ACTFL, so I would say that they should be the ones we officially get behind in the PLC, while in practice we may do things a bit differently (i.e. I use all of the different formative assessment activities that you do; my focus is just more on the kind of language that is being understood (interpretive) or produced (presentational)). In order to make your standards generalized to all languages, we would just have to change Latin to Target Language and then teachers could change that as they see fit.

    1. Awesome. Honestly, I don’t know if getting the whole PLC behind this should be the goal. We’re all behind jGR because we all get it and get how it helps classroom discipline a ton. It’s just good in so many obvious ways to everyone.

      But with linking SBG and ACTFL performance guidelines or ACTFL’s “random document number 7485329438 that you haven’t read,” I don’t feel there is going to be utility for everyone.

      And we all want to keep stuff simple so desperately. My form of SBG is simple for me. But for another person it might not work. And in a lot of ways we are only able to make stuff as simple as our particular school environment allows. I can get away with marking stuff pass/fail in my situation at my school, but others might not be able to get away with that. Some might need all these different levels of “very good, sorta good, not good, crappy.” My principal next year is apparently all about data, so my grading situation may be changing quickly, too.

      As an aside: I try to avoid sorting and labeling students as much as possible. If I had it my way, I wouldn’t even give them grades, wouldn’t even use the pass/fail divide. But that’s a discussion for another day.

      1. Yeah! The way I’m seeing it, we could compile into one post our different versions (or maybe just links to a google doc of our different versions) of SBG for the Interpretive and Presentational Modes, and then people can adapt or use whatever they’d like from that. Because you’re right — we all use jGR in some way because it’s so intuitive and helps with classroom discipline, but we may have slightly different focuses when it comes to how we measure interpretive and presentational in our classrooms, so our standards may differ. I think as long as we’re tying them in some way to ACTFL and we’re being true to Krashen and acquisition research in our classrooms, then we’re all good.

        I too agree that I wouldn’t give them any grades if I could. I’m really interested in proficiency and seeing how students stack up against the different levels (Novice, Intermediate, Advanced) over time, but I wish I didn’t have to tie that to a grade so much. Which is why I try to make it as easy as possible for the kids to succeed.

        So, yeah, maybe once we each feel comfortable (you, myself, Grant, whomever else) with our versions of SBG, we can just start compiling them into a post on here that’s put in the SBG category. Not quite a solo document like Ben may have wanted, but I think it’s more realistic for the different teaching environments in which we teach.

        1. Thanks Nathan, I hadn’t read that response. Excellently put. I would, however, like to post a generic document for jGR 2 and 3, just for people like me who are lazy, and that would be the only reason, bc your point about 2 and 3 needing to reflect individual teaching preferences is a very important one. I don’t think that even the ACTFL people who wrote all that stuff, or Krashen, certainly, bc we know he said it, expect us to do anything in any lockstep way, since it is the spirit of the change, not the letter of some law, that should guide the changes we make in our own classrooms.

          And James said:

          …and we all want to keep stuff simple so desperately….

          I concur on that so much. So I think the above approach is the way to go. Let’s see how it turns out and not let this thread get dropped in the rush of things that April brings.

  9. I must be doing a different form of dictee. When a kid writes down their effort on line one, and then makes whatever corrections are necessary to have a 100% accurately written text on line two, they succeed because they copied correctly from the correct version on the screen onto line two, which is the line I grade. They get a perfect score for copying perfectly, whether it’s one word or the entire sentence that they have to bring down in correct form from the first line to the second. All I want to know is that they can copy from the correct version I give as we work our way through the dictation text. They get the perfect grade when they copy the text on line two perfectly. You guys sound like you’re grading the top line, with the first-attempt mistakes from line one, before they’ve been put into corrected form on the second line. Otherwise it’s like grading a six-month-old child on how well they can write.

    Like Sabrina said above:

    …the only requirement for a good grade in a dictation is the ability to copy the right version from the board….

    1. For my first- and second-year students, they have to copy the entire text on the second line in a different color. Then they circle the words on the first line that don’t match. Then we discuss why what they wrote doesn’t match what I wrote.

      Toward the end of the second year I tell my students that if they have two or fewer errors to correct on a line, then they may write only the corrections for that line. Any line that has three or more errors must be copied completely.

      For third and fourth years, students copy only words that they misspelled, so they have to compare and analyze the two lines to decide what they should write down.

      I grade their ability to 1) see their mistakes and 2) copy text correctly.

      1. More Harrell gold. I didn’t think the French dictee approach could get any more efficient but it looks like it just did. Robert can I put this into my description of dictee here on this site, with credit of course?

    2. Nah. We’re all doing the same dictee. That’s why I said most everyone gets to level 2 at least, because dictee is really a confidence-builder and only punishes those who are really checked out and not paying attention/making corrections. Sabrina had the same question, above, and I said a bit more there.

      1. Grant Boulanger

        James, when you say, “students will be able to complete a 50 word Latin free-write in five minutes,” do you distinguish between someone who lists words and someone who writes a coherent and continuous set of sentences with a clear beginning, middle and end? Is writing a list of Latin words enough to pass the standard?

        1. I always say “don’t worry about spelling or grammar, just worry about word count.” So the idea is to get as much detail, as much L2 down as possible. A list of words would be fine, as long as I can understand how they all relate to the story at hand. In my experience so far this year students don’t stay long in that “list of words mode.” Even in level 1, toward the end of the first semester and into the second semester, they start exploding with full sentences.

          I’m nervous: Was that the right answer? 🙂

  10. The right answer in terms of DPS and our Chancellor of TCI Diana Noonan – and this is what we ask the kids to do on their post tests at the district level each year – is:

    1. They can write up to 12 unrelated sentences about an image at the end of level 1.
    2. They must write a paragraph with a beginning, middle and end about a series of six images (i.e. they have to write a story) at the end of level 2.

    We hope to have something done in our work this June for level 3. But we may not, as coming up with district assessments that align with standards just for levels 1 and 2 has taken us the past six years.

  11. Ben,
    You state the writing goals of DPS for level 1 & 2.
    I don’t understand why students should be writing at the lower levels.
    Is it to show them how hard it is to write? To give the teacher a break? To give the teacher information as to what’s in students’ heads? (I don’t need to read 80+ stories to know how bad the top, middle and lower kids can produce).
    So I do VERY little writing, but it seems that you folks at DPS are taking this seriously and maybe I’m not doing something right, or depriving my kids of something essential.
    What am I missing?

    Thank you

    1. Grant Boulanger

      I’m anticipating Ben’s answer to be that writing shows us what students have acquired.

      I think it’d be super doable for a kid to not have written all year and be able to write to this test.

      1. Grant that was a better answer than mine. All I told Laura was about the history, but yes to this:

        …writing shows us what students have acquired….

        And I know you probably experience this, Laura, that they write better the less the focus is on writing.

        I think we have discussed this here before. The famous story in DPS is how Annick Chen was told by her principal at Lincoln four years ago that the entire building was to make writing the primary focus of the year in all areas with extreme focus on writing for the entire year. And if you knew the principal, you’d know that it was done. (When President Obama spoke in our parking lot last year, he was out there and fit in perfectly with the secret service agents, in fact I thought he WAS one until someone told me he was the newly promoted former principal of Lincoln.) Annick’s students – she was teaching French then and so was being tested on the district level; we have no district test for Chinese – their writing scores plummeted. The greater the focus on writing, the lower the writing scores, apparently. Now doesn’t that make perfect sense in terms of what we know about how CI works?

        So triple yes to this with strawberry icing on top:

        …it’d be super doable for a kid to not have written all year and be able to write to this test….

        Thank you Grant.

        1. I just read a study that I put in the literature review for my paper, in which a class that focused and focused on grammar did worse on a grammar test than CI students. I’ll give y’all the study when I’m on my computer later. I think it was a Krashen study.

          Speaking of sharing my research: for anybody who has emailed me and I haven’t responded, I’m sorry. I’ve been extremely busy; I mostly check my email from my phone, and when I’m actually on a computer I forget to respond to people. Email me again as a reminder, and then again in a week to remind me. I’m hoping to be finished with my paper by the end of this weekend, so I’ll then be able to respond and share.

          1. My paper, as of today, is finished!!! At least, I think it is, as long as my advisor doesn’t ask me to add anything else to it. I’m worried that she might want it to be longer. It’s only about 40 pages (half of what I was hoping for), but it’s pretty solid, I’ve rounded up some good research, and the results from my own action research are pretty damning evidence that CI instruction is the most effective method.

            If anybody is interested in reading the 40 page behemoth and giving me feedback or suggestions on things to add or revise, email me at: christopherroberts9@gmail.com

  12. Laura you and I are completely in agreement on this point you make:

    …I don’t need to read 80+ stories to know how bad the top, middle and lower kids can produce….

    This tags onto the discussion we had over the past few days with James and Nathan, when I talked about how the ACTFL performance indicators offend me (and I hope I didn’t offend Nathan in the process).

    As I related having asked Krashen this very question about ANY writing or reading or speaking (only auditory input) in level 1 and he shook his head like it was a very reasonable idea, we don’t need to measure writing performance at those two levels. Then why do we do it in Denver Public Schools?

    We have had some very delicate conversations with our data people in DPS (trust me, they are a breed apart and well funded, very well funded, and they have a lot of clout). And we have also had to talk with a lot of traditional teachers in our district who looked at what Diana was doing as nuts. Can you see my point here? We had to compromise.

    It’s kind of like what Obama is doing in leading us to universal health care. He knows it is best for all, but he also knows he won’t get it past the powers that be, and so he makes the changes he can. Diana has done the same.

    The four skills, which used to BE the standards before we rightly dumped them in the past few years for real standards (the Three Modes of Communication), had to be included in our assessment instruments district wide. What would the grammar teachers have done if there had been no writing on the test? They would have called for Diana’s head, and she’s not quite done bringing change to Colorado.

    So in this one meeting about three years ago we told the data team to measure, weigh, the results this way in DPS (I may be a little off on the exact percentages):

    Reading: 40%
    Listening: 40%
    Writing: 10%
    Speaking: 10%

    I think this applies for both levels 1 and 2. That’s the short answer. As I said, we will address level 3 in our writing team work this June.
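
    For what it’s worth, the weighting above works out to a simple weighted average. Here is a minimal sketch, assuming each of the four skills is scored 0–100; the function name and the exact percentages are illustrative only, since Ben notes the numbers may be a little off:

    ```python
    # Approximate DPS weights for the four skills (per the comment above;
    # the exact percentages may differ slightly).
    WEIGHTS = {"reading": 0.40, "listening": 0.40, "writing": 0.10, "speaking": 0.10}

    def composite_score(scores):
        """Weighted average of the four skill scores (each assumed 0-100)."""
        return round(sum(WEIGHTS[skill] * scores[skill] for skill in WEIGHTS), 1)

    # Example: strong interpretive skills (reading/listening) carry the grade
    # even when output (writing/speaking) is still weak.
    print(composite_score({"reading": 90, "listening": 85, "writing": 50, "speaking": 40}))
    # prints 79.0
    ```

    The point of the 40/40/10/10 split is visible in the example: a student who comprehends well but produces little still lands at a passing composite, which matches the acquisition-first priorities discussed here.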

    Thank you for bringing this up, Laura. It needed to be asked. I’m totally with you on this. Not all in this group are. Speaking of the group, say hi to Anne for me next time you see her in the hallway!

    1. Just a quick thing, Ben you didn’t offend me one bit! If you look above, I commented to your comment so you can see what I replied there :). I enjoy any and all discussions and don’t view them as attacks, but rather a discourse between colleagues and friends. (And I also don’t like the performance indicators/guidelines, which I talk about in response to your comment. Now the PROFICIENCY GUIDELINES on the other hand, I can get behind mostly).

      1. Nathan, I get a bit frustrated with ACTFL sometimes because there are just so many documents that are all separate but to be read together. I’ll list the three main ones I know, and can you tell me if I’m missing any?
        1) Proficiency guidelines (http://actflproficiencyguidelines2012.org/)
        2) “5 C” Standards (http://www.actfl.org/sites/default/files/pdfs/public/StandardsforFLLexecsumm_rev.pdf, opens a .pdf)
        3) Performance descriptors (http://www.actfl.org/publications/guidelines-and-manuals/actfl-performance-descriptors-language-learners)

        I finally see now that I have keyed my standards to number 3 above, the performance descriptors. Honestly, I didn’t even know about number 1, the proficiency guidelines. I’ll pore over those and probably change everything in my plan for next year again.

        Are those the three main documents? Am I missing any?

          1. Ahhh! I am seriously starting to freak out with all this stuff. I am sitting here trying to wed all of this into one vision (better than grading, anyway) but everything is just so dang confusing. Now something about 21st century skills with another preface and new footnotes and more arrows and glossy boxes and matrices?! When does it end?!

          2. Robert Harrell

            That’s the problem: there will always be someone else who thinks of a new way of “packaging” education or who believes certain things must be taught but are not currently being taught. It will never end. We can only decide what matters to us. Quite frankly, some of the things that matter most to me are never assessed.

  13. “So in this one meeting about three years ago we told the data team to measure, weigh, the results this way in DPS (I may be a little off on the exact percentages):

    Reading: 40%
    Listening: 40%
    Writing: 10%
    Speaking: 10%”

    In the grand scheme of things, that’s a huge victory right there.

    1. Il y a and I appreciate that you see this. Yes. And it is because my boss Diana is totally boss. She is unstoppable. Just today she has a conference call set up to share what we do in DPS with Grant’s district in St. Paul, MN. This call involves two supes, two principals, two teachers who are not convinced (but the supes, who saw Grant teach, are, so they are screwed), and Grant, le Norseman. I think he’s got his Viking hat on today. Diana and Grant if you read this go kick some ass!

      Now, I’m assigning some homework: everybody go to Grant’s site and buy at least one article of pottery from Grant the Potter (another possible name for him) so that he can support his teaching habit with some real income. I have some of his work, my a.m. coffee cups, and they are not like drinking coffee out of one of those cups you get at King Soopers, trust me. You can request custom items, by the way, here:

      https://www.facebook.com/boulangerpottery?filter=2

      Check out the photo of his classroom with the coffee cup there. Nice!

  14. Thank you Ben for your prompt response,
    I certainly do understand having to compromise. We may have to do quite a bit of it coming up at our school.
    I have had my level 1 kids write maybe three times so far (once when I lost my voice). It panics them and makes them very resistant. I see no point in writing, just as I don’t expect my year old grandson to write after several thousand hours of listening to English spoken around him.
    It is important for me to know I’m not alone in thinking that writing is quite useless at the lower levels. Then, I feel better if I have them writing only if I need a break. I’ll only make it part of their grade if I must in order to compromise.

    I think Anne is in Quebec now with her students (this has been vacation week for us).

  15. …I see no point in writing, just as I don’t expect my year old grandson to write after several thousand hours of listening to English spoken around him….

    Laura this is the entire ball of wax, and the table it sits on. And the room it’s in. Hell, it’s the building.

    There are two of us who think this way, in terms of NO WRITING in level 1. I will send myself an email to publish your comment above as a post. We’ll piss off people.

    So what? I will do it if I have to for the 10% thing but I may have done two freewrites this year, as I have grabbed every available moment to lead them to fluency bc fluency precedes writing output. Hell, fluency precedes mastery of all aspects of literacy.

    Thank you for speaking your truth so plainly about writing. This is the place for that. I wish Anne were there to comment.

    But actually I know that for four years, with the Hogs, now in college, she did almost no writing, I kid you not, and when I saw them/got to teach them French in a Maine workshop three years ago, I realized I was in the presence of the most talented group of language students, by far, that I had ever seen or will ever see. Students who make college kids who are 4%ers majoring in the language look slow.

    What Anne did with them in four years was off the chart. You who were at that workshop remember. When she herself taught the Hogs in front of those 70 people, jaws among the teachers dropped. She didn’t know or care about output then.

    Output happens later. Yes, we compromise, but output happens later.

  16. Yep. I have also set up my tent in this camp. No writing in level 1 this year. Well, almost none. Dictée and essential sentences on occasion. Why mess with the elegant neurological choreography?

    1. I’m really trying to get all things “pure output” out of my rubrics. Students are there to acquire language, not produce it.

      What’s the least amount we can give that will 1) give us a break, 2) make them and parents and admins think they are learning something, and 3) not destroy their egos with failure?

      I like what you have mentioned, dictation and essential sentences. I’ll probably still use freewrites, too, because we can ignore errors. But beyond that, what else is there?

      1. Just because jen and I have taken this position doesn’t mean we aren’t going to do some output. We work in schools! People need to think they are learning. Do those things, of course! Do the freewrites. Make up some other markers so we don’t have to. Sometimes I have them repeat after me and tell them things about accent and how the mouth/aperture works to make certain French sounds and all, and they just love it. I could even grade them on that, make that a part of the output rubric. It’s all fake, but I like having a job. We align pedagogically with Krashen, but we also align with the dude (probably some old white dude) who signs our paychecks. I didn’t really know anything about the ACTFL Performance Indicators until Nathan put our attention on them. Now I can talk about them and drop the term in the same sentence as ACTFL. It’s because I care. Hell yeah.

        1. Since you teach French you can do that super neato thing where you have students say some phrase (I don’t know what it is) in front of a candle and make sure they don’t blow it out. I heard of it one time from a university professor.

          How sad that, because we work in educational institutions, we have to do things that are counterproductive to kids’ education.

          1. 1. I don’t know about the candle thing. Let us know! My talents revolve around the formation in the mouth of the French U* and the French R**. Both are bitches to say but we can make them sound authentic.

             Someone stayed at Rue Gît-le-Coeur in Paris. Here-Lies-the-Heart street. Badass, n’est-ce pas? Say it. If you can say “street” in French you pretty much have it.

            *make your mouth round and say the letter e as a long e.
             **find the place in the back of your throat where the G as in Golly is formed. Now move it back about a millimeter. Now say Rat, but say the R from the back of the throat, just behind the American G. It is about as far away from the American R as it can get.

            Now say rue. Now practice for 10,000 hours.

             2. The thing about having to do things that are not productive is part of the teaching deal. I accept it. Plus, they aren’t totally unproductive if they’re fun and if the kids feel that they are learning. A lot of times when I get them speaking, way too early of course, they end up laughing, so that is a good thing.

          2. I imagine the candle thing revolves around the letter “p”. In English it is aspirated (i.e. followed by a puff of air); in French it is unaspirated (no puff of air). If an English speaker says “Peter Piper packed a peck of pickled peppers”, you may actually run the risk of extinguishing the candle – at the very least it will flicker wildly. A French speaker would cause scarcely a flutter.

      2. I just read this entire thread for the first time today. I go back to my high school job after a one-year hiatus and want to revamp my syllabi. I like what I’m hearing in all the back and forth about simplifying assessment. (Great discussion!)

        I am considering adopting something new. Where can I find the latest document(s)? If you want to email me with anything, trippatmail2jimdotcom.

        Man, I got one of those Boulanger coffee mugs before MCTLC in October… best purchase I made all year!

        1. I have two mugs made by Grant. They contain soul.

          There have been many simplifications of jGR since last year, Jim. Diana Neubauer has one called dGR and Erica has one called eGR. And then there was one that was REALLY simple, but I can’t remember who did it. It was in a comment here about a month ago, and I should have made it into a separate article and categorized it, but I forgot. Maybe you will get some guidance on this in the comment fields below.

  17. …why mess with the elegant neurological choreography?….

    Especially when we have research that supports that statement. I love the way you express it right there. That is the way to say it. We are so proud of our assumptions, when we should just stay with what we know really happens neurologically.
