In what ways does the student teacher:
- exhibit a varied repertoire of evaluation methods? How does the student teacher decide which particular method of evaluation to use? Are students included in the process?
- base his/her instruction on standards that are measurable via the assessment instruments employed?
- provide students with rubrics or task descriptions that clearly indicate successful and exemplary performance standards?
- use a variety of assessment measures as data that uncover individual students' needs and drive subsequent instruction?
- employ evaluations that are not graded but are used as comprehension checks and for student feedback? How often is this done?
- use performance-based assessments that teach as much as they assess? To what extent are such projects a part of the class’s ongoing work?
- use grades in the classroom? To what extent are they used as a motivator? To what extent are students involved in the process of developing criteria for excellence?
- encourage learners to evaluate their own work and use the results of self-assessment to establish individual goals for learning and improved performance?
- use information from a variety of assessments (both standardized and self-constructed) to reflect on the effectiveness of their own teaching — and modify instruction accordingly?
- demonstrate awareness of and redress the potential cultural and linguistic biases embedded in assessment tools and practices?
- maintain careful records that show individual and whole class achievement in all content areas over time?
Often the kids finished these worksheets for homework, so I didn't always get to see every student doing all the work independently. What I did see as I walked around during class time, I recorded on rubrics I had typed up for the purpose. Initially I made a list with all fifteen students' names on it, but after having to scribble out five names every day for a week, I created another with only the names of the ten kids in my group. The homework system actually worked quite well, because the students didn't feel rushed, knowing that taking the sheet home to finish was an option. I would record all the data I could during the lesson itself; then, when I collected the completed work the next day, I'd review what I had marked and continue filling in the chart based on their written work.
The example below is a good one: my record sheet from a lesson in which I had introduced the doubling strategy of multiplication. I managed to see nearly the whole class in action and captured real-time data; the red pen is from when I went home and noticed that students were using new or different strategies. The C-A-M-E scale is one I had been trying out, which I ultimately stopped using because it felt oddly cumbersome and limited. But I tried it that day, and it did accomplish its purpose. Notice how I measured the "Got It?" column in those terms; though the message gets across, the scale didn't really work for me.


The example above illustrates my use of the after-the-fact rubric I have just described. This table simply records all fifteen kids' results from a math quiz. Instead of noting just a number grade, it was hugely helpful to have a real breakdown of exactly how and where individual students struggled, not merely the fact that they did. I was able to identify which factor pairs each child omitted, and which ones many kids omitted. I could see where gaps in skip-counting existed, and I could tell at a glance which multiplication strategies were most comfortable for which children.
Access to these data gives me two advantages. First, I know how to plan my instruction for the next day or week, since I can see where the class collectively needs more practice. Second, I have a solid record to use for parent conferences or other meetings about a student. It can document a concern or, hopefully, the growth a child has made from point A to point B.