A fellow teacher and I just tried implementing SBG in our classes and ran into situations similar to those you mentioned in your article. Some aspects of reassessment (and the number of students who were reassessing) made us wonder whether this was really the best way to grade students. It also made us wonder whether walking down this road might be futile, based on your brilliant way of framing physics as the “dead frog problem”.
So I guess I have two questions for you:
1. Do you still use SBG after this study?
2. Have you made any major changes from your study?
I thought the response I sent him might be of interest to others. So, I’m reproducing it here.
FWIW [regarding the “dead frog problem”], I’m not sure this problem is insurmountable within SBG, but I do think it’s an easy trap to fall into, and the solution is not obvious (at least to me).
The answers [to your two questions] are “sort of” and “yes”. Let me elaborate.
Whether and how I’m using SBG depends on which course you’re talking about; I teach several. I’ve used some form of it in two different contexts, which I’ll address separately.
First, the calc-based intro physics sequence (Physics 1 and 2), which I wrote about in that paper: I haven’t gone back to a full-on SBG implementation since the paper, but elements of SBG have worked themselves into the course. One of the things I liked best about SBG was that while grading exams (or anything else), I was asking myself “About how well does this student show that he/she understands standard/topic XXX?” and making a holistic judgment from the whole solution, rather than asking “How many points should I take off for this slip? How many points is that wrong, but somewhat understandable, approach worth?” So, now when I grade, I make a fairly short list of the overall topics/skills covered, and assign an SBG-style 0-4 (or 0-6, etc.) score to each. This past semester, that meant each student got a three-component score on each midterm exam: “5/2/3”, for example, meaning a mastery level of 5 on the first topical chunk, 2 on the second, and 3 on the third.
I also use a larger grain-size for the standards/chunks. That sacrifices specificity, but helps enormously with practicality, and also reduces the killing of frogs.
My overall course score is still a fairly traditional weighted average of these various scores, as if they were points. I do give similar scores for homework, labs, etc., which count for a share of the course grade. So far, it seems that in the competitive attention economy of college, I just can’t get enough students to pay attention to anything that doesn’t have a direct impact on their grades. So, I’ve sold out in that respect [at least in the intro-level/service courses]. (For now?)
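To make that arithmetic concrete, here is a minimal sketch (in Python) of treating SBG-style mastery scores as points in a weighted average. The weights, component names, and scores below are all made up for illustration; they are not the actual course values.

```python
# Hypothetical example: combine SBG-style per-chunk mastery scores
# (on a 0-6 scale) into a traditional weighted course average.
# All weights and scores are invented for illustration.

MAX_MASTERY = 6  # top of the assumed 0-6 mastery scale

def chunk_average(scores):
    """Average a list of per-chunk mastery scores as a 0-1 fraction."""
    return sum(scores) / (MAX_MASTERY * len(scores))

# One student's scores per component: (per-chunk scores, course weight).
# "exam1" shows a three-chunk score like the "5/2/3" example above.
components = {
    "exam1":    ([5, 2, 3], 0.25),
    "exam2":    ([4, 4, 5], 0.25),
    "homework": ([5, 6, 4, 5], 0.30),
    "labs":     ([6, 5], 0.20),
}

course_score = sum(weight * chunk_average(scores)
                   for scores, weight in components.values())

print(f"Course score: {course_score:.1%}")  # → Course score: 75.3%
```

The point of the sketch is just that each chunk score is normalized by its maximum before weighting, so a 0-4 chunk and a 0-6 chunk could coexist without one silently counting for more.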
I’ve tried different things for the “reassessment” portion of SBG, which is both central to the approach and the source of most implementation pain. Here’s what I did before I started messing with SBG: I let students do each exam twice, once as a closed-book, individual, timed, in-class exam and again as a collaborative, open-anything, take-home exam due one week later. That is IMMENSELY valuable for student learning, because whatever they didn’t get on the exam, they can learn during that week, and they have motivation to do so. I then count both scores (or sets of scores) equally towards the overall course grade.
Now [last time through the course], my SBG twist is to tell students that after the take-home is submitted, graded, and returned, they can make an appointment to see me and reassess any specific standard/chunk in person. Before they show up, they have to completely re-do the relevant exam problems, which I’ll check to make sure they’ve now gotten right, and I’ll ask a few questions to make sure they get it rather than, say, having copied a friend’s solutions. Then I might ask a few related “What if…” questions to gauge how solid their understanding is. If I’m satisfied, I’ll replace their take-home score for that standard/chunk with a new one. (The in-class score, however, stands.)
I was disappointed by how few students took advantage of this opportunity. I’d say that out of 40-50 students, maybe 5-6 made a concerted effort to reassess everything they could, and an equal number made a half-assed attempt to fix a few at the very end of the course.
For things like lab reports, I occasionally offer students the opportunity to rewrite/resubmit for a replacement score, but not so often or generally that they learn to take the first submission too casually.
Second, for my Thermal Physics course for upper-class physics majors (14 students): I tried something radical. I articulated 33 separate learning objectives (LOs), 32 of which corresponded to sections of the text and one of which was “I can summarize the big picture: the major topics and important ideas and principles of this course, and how they all fit together.” (Keeping the frog alive?) Then, I scheduled every student for an individual half-hour oral exam EVERY FREAKING WEEK, during which I’d put them at the whiteboard in my office and quiz them to see how well they’d gotten the previous week’s LOs. There were no written exams, and homework was not collected or graded. However, I did warn them that many of my oral exam questions were directly based on some of the assigned homework, so doing the homework was pretty important to doing well in the course.
I would also allow students to make additional appointments to reassess earlier standards that they’d done poorly on. Fewer took advantage of this than I expected. In principle, they could also reassess during their normal slots if we had a bit of extra time, but we rarely did.
Overall, it worked pretty well: Most students kept up week to week (which is always a challenge), and I feel that I learned much more about what students do and don’t understand, and how they think, than through traditional written assessments. I could also offer bits of tailored feedback during the orals. The downside is that giving seven hours of oral exams in my office every week darn near killed me, and my wife has forbidden me from ever doing that again.
Maybe some kind of blended solution, with some written assessments and a couple of orals during the semester, would work. But then, I’d lose the week-to-week motivation and measure of progress.