Do I still use SBG? Um…

I recently received an email from a teacher who had tried implementing standards-based grading (SBG), had read my paper on the subject, and asked me:

A fellow teacher and I just tried implementing SBG in our classes and ran into similar situations that you mentioned in your article. Some ideas of reassessment (and the number of students who were reassessing) started to make us wonder if this was really the best way to grade students. It also made us wonder if walking down this road might be futile based off of your brilliant way of putting physics as the “dead frog problem”.

So I guess I have two questions for you:
1. Do you still use SBG after this study?
2. Have you made any major changes from your study?

I thought the response I sent him might be of interest to others. So, I’m reproducing it here.


FWIW [regarding the “dead frog problem”], I’m not sure this problem is insurmountable within SBG, but I do think it’s an easy trap to fall into, and the solution is not obvious (at least to me).

The answers [to your two questions] are “sort of” and “yes”. Let me elaborate.

Whether and how I’m using SBG depends on which course you’re talking about; I teach several. I’ve used some form of it in two different contexts, which I’ll address separately.

First, the calc-based intro physics sequence (Physics 1 and 2), which I wrote about in that paper: I haven’t gone back to a full-on SBG implementation since the paper, but elements of SBG have worked themselves into the course. One of the things I liked best about SBG was that while grading exams (or anything else), I was asking myself “About how well does this student show that he/she understands standard/topic XXX?” and making a holistic judgment from the whole solution, rather than asking “How many points should I take off for this slip? How many points is that wrong, but somewhat understandable, approach worth?” So, now when I grade, I make a fairly short list of the overall topics/skills covered, and assign an SBG-style 0-4 (or 0-6, etc.) score to each. This past semester, that meant each student got a three-component score on each midterm exam: “5/2/3”, for example, meaning a mastery level of 5 on the first topical chunk, 2 on the second, and 3 on the third.

I also use a larger grain-size for the standards/chunks. That sacrifices specificity, but helps enormously with practicality, and also reduces the killing of frogs.

My overall course score is still a fairly traditional weighted average of these various scores, as if they were points. I give similar scores for homework, labs, etc., which count for a share of the grade. So far, it seems that in the competitive attention economy of college, I just can’t get enough students to pay attention to something that doesn’t have a direct impact on their grades. So, I’ve sold out in that respect [at least in the intro-level/service courses]. (For now?)
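For concreteness (using the 0-6 scale from the example above, and weights invented purely for illustration, not my actual syllabus numbers), the arithmetic looks something like:

$$\text{exam} = \frac{5 + 2 + 3}{3 \times 6} \approx 0.56, \qquad \text{course} \approx 0.6\,(\text{exam avg}) + 0.25\,(\text{homework}) + 0.15\,(\text{labs}).$$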

I’ve tried different things for the “reassessment” portion of SBG, which is both central to the approach and the source of most implementation pain. Here’s what I did before I started messing with SBG: I let students do each exam twice, once as a closed-book, individual, timed, in-class exam and again as a collaborative, open-anything, take-home due one week later. That is IMMENSELY valuable for student learning, because whatever they didn’t get on the exam, they can learn during that week, and they have motivation to do so. I then count both scores (or sets of scores) equally towards the overall course grade.

Now [last time through the course], my SBG twist is to tell students that after the take-home is submitted, graded, and returned, they can make an appointment to see me and reassess any specific standard/chunk in person. Before they show up, they have to completely re-do the relevant exam problems, which I’ll check to make sure they’ve now gotten right, and I’ll ask a few questions to make sure they get it rather than, say, having copied a friend’s solutions. Then I might ask a few related “What if…” questions to gauge how solid their understanding is. If I’m satisfied, I’ll replace their take-home score for that standard/chunk with a new one. (The in-class score, however, stands.)

I was disappointed by how few students took advantage of this opportunity. I’d say that out of 40-50 students, maybe 5-6 made a concerted effort to reassess everything they could, and an equal number made a half-assed attempt to fix a few at the very end of the course.

For things like lab reports, I occasionally offer students the opportunity to rewrite/resubmit for a replacement score, but not so often or generally that they learn to take the first submission too casually.

Second, for my Thermal Physics course for upper-class physics majors (14 students): I tried something radical. I articulated 33 separate learning objectives (LOs), 32 of which corresponded to sections of the text and one of which was “I can summarize the big picture: the major topics and important ideas and principles of this course, and how they all fit together.” (Keeping the frog alive?) Then, I scheduled every student for an individual half-hour oral exam EVERY FREAKING WEEK, during which I’d put them at the whiteboard in my office and quiz them to see how well they’d gotten the previous week’s LOs. There were no written exams, and homework was not collected or graded. However, I did warn them that many of my oral exam questions were directly based on some of the assigned homework, so doing the homework was pretty important to doing well in the course.

I would also allow students to make additional appointments to reassess earlier standards that they’d done poorly on. Fewer took advantage of this than I expected. In principle, they could also reassess during their normal slots if we had a bit of extra time, but we rarely did.

Overall, it worked pretty well: Most students kept up week to week (which is always a challenge), and I feel that I learned much more about what students do and don’t understand, and how they think, than through traditional written assessments. I could also offer bits of tailored feedback during the orals. The downside is that giving seven hours of oral exams in my office every week darn near killed me, and my wife has forbidden me from ever doing that again.

Maybe some kind of blended solution, with some written assessments and a couple of orals during the semester, would work. But then, I’d lose the week-to-week motivation and measure of progress.


a crazy collision of a teaching idea

I love it when two different ideas collide in my head, even when I’m not entirely sure whether the result is a beautiful synthesis or an ugly wreck.

first idea

In a conference paper I wrote exploring what we could learn about physics teaching by studying effective video games, I said:

To me, themes of grounding, exploration, assessment, and feedback suggest that we ponder how the process of learning physics can be re-conceived in such a way that students learn physics by encountering and exploring some experiential “terrain” in which physics ideas are manifest, receiving immediate, obvious, and natural feedback about their developing understanding and competencies. One key ingredient for making feedback immediate and apparent, in a manner that scales, might be the development of students’ self-assessment capacities in parallel with content learning. Students must learn to self-check and peer-check in the same way that practicing scientists do.

When I wrote that, I was vaguely envisioning possibilities such as having students accompany each problem solution they produced (on homework or an exam) with a detailed self-critique of the steps they were more and less confident about; including frequent peer critique of assignments; and so on.

second idea

I do a fair amount of computer programming (and used to do much more), and for several years I’ve practiced test-driven development. Loosely speaking, this is an approach to programming that advocates:

  1. Writing test code that automatically tests all desired aspects and features of the program one is developing;
  2. Organically growing a program in small steps, beginning with a version that functions correctly for a trivial subset of the desired purposes, and then gradually adding functionality a bit at a time, each time producing code that passes all tests (for the features so far implemented) before proceeding; and
  3. Writing the test code for a new feature (or for the successful repair of a newly discovered bug) before writing the code for the feature or making the bug repair, so that the test initially fails, and then the feature/fix is done when the test succeeds.

The test-first discipline is difficult to self-enforce, but I find it quite helpful for anything beyond the simplest of programs. For one thing, it’s the only realistic way I’ve found to reliably produce a thorough, complete, trustworthy safety net of test code. Having such a safety net allows me to freely tinker with working code to “refactor” it and make it better and more elegant. For another, having to think through the test cases up front helps me clarify exactly what the desired feature needs to do and how the code implementing it needs to be factored, which leads to fewer dead ends and “Oops, I need to do this a different way” moments. For a third, writing code so that all the logical pieces are independently testable forces me to break my program up into simple, loosely-coupled chunks, which is generally good practice.
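To make that cycle concrete, here’s a toy sketch in R using the testthat package (one popular choice; the rms() example is invented for illustration, not taken from any real project of mine):

    library(testthat)

    # In strict test-first order, this test is written and run (failing!)
    # BEFORE rms() exists; the implementation sits above it here only so
    # the snippet runs top to bottom.
    rms <- function(x) sqrt(mean(x^2))  # the "feature", grown just enough to pass

    test_that("rms() returns the root-mean-square of a numeric vector", {
      expect_equal(rms(c(3, 4)), sqrt(12.5))  # sqrt((3^2 + 4^2) / 2)
      expect_equal(rms(0), 0)
    })

Once both expectations pass, the feature is “done” — and the tests remain behind as the safety net that makes later refactoring safe.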

the collision

For the last few days, I’ve been teaching myself to program in R in order to do some statistical analysis. The code I’ve been developing has started to get more complex, so yesterday I figured out how to set up a framework for automatically testing R code.
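For the curious, here’s roughly the shape of such a setup, again using testthat (a sketch with an invented helper function, not my actual analysis code):

    library(testthat)

    # Stand-in for real analysis code (invented for illustration):
    standardize <- function(x) (x - mean(x)) / sd(x)

    test_that("standardize() centers and scales a numeric vector", {
      z <- standardize(c(2, 4, 6))
      expect_equal(mean(z), 0)  # centered on zero
      expect_equal(sd(z), 1)    # unit standard deviation
    })

    # With test files collected under tests/, the whole suite runs in one
    # shot via testthat::test_dir("tests"), squawking at any failure.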

The collision occurred this morning, while rereading that conference paper to prepare for an upcoming talk. It occurred to me: “What if we treated building knowledge like building a computer program, and established self-tests for the things we wanted to learn before we learned them? What if we could get students to buy into this approach?”

What would that mean in practice? I’m not entirely sure. To develop test code, I have to figure out what “feature” I want to add to my program, and envision how the program will behave (as seen from the outside) when the feature is implemented. I’ll then write test code that tries to make the program exhibit that feature in a variety of circumstances, and squawks whenever the program’s behavior deviates from the expectations coded into the tests. Then I’ll run the test suite, which will of course squawk like crazy since I haven’t yet actually written the feature into the program. Only then do I begin engineering the feature into the program, testing as I go, until the test code gives me a thumbs-up. That’s when I know I’m done.

Actually, I’m not quite done then. The final step is to review the feature’s implementation, and see whether I can make it cleaner, faster, more elegant, or otherwise better. While so doing, I intermittently re-run the test suite to make sure I haven’t unwittingly introduced a bug.

Let’s think about this with “learning some physics” in place of “developing software”. My first step is to identify the “feature” I want to add, which is like identifying the physics I want to learn. Next, I have to specify that feature: How, precisely, will the program behave when the feature is implemented? What will it do in various circumstances? In learning physics, this maps to specifying what the desired knowledge will allow one to do that one can’t currently do. I think students are often fairly fuzzy on that, and introducing this into students’ learning process could be a big win. I’m in favor of anything that gets students to think about physics as “capacities to develop” rather than “stuff to know”.

Next comes the hard part: writing the test code. In the physics case, that means the learner (not the instructor) setting up self-tests of some kind that one can’t currently pass, but expects to be able to pass after learning the desired thing. That means operationalizing the specified capacities very concretely. This will likely be the hardest step: basically, making up test questions (or other assessments) before one has learned what the test will be testing. And it’s a bit of a trick, too: In the process of doing this, one is already beginning to do the learning. I find that as I write test code for a desired program feature, I’m already beginning to see the shape of the implementation code in my mind. I’m sure that somebody, somewhere, in some context has sagely observed that clarifying a problem is often the most important step in solving it.

Once the self-test is developed, go learn. Unlike the vague and rudderless learning that students often do, however, guided by no objective more specific than “Get ready for a forthcoming but unknown exam”, one has very specific questions/challenges that one is seeking answers/solutions to, something I have called “question-driven instruction” elsewhere. I firmly maintain that in real life, most of us learn something (like how to program in R) with a very specific objective in mind (like analyzing a particular data set), and that having a clear goal helps us learn more effectively and efficiently by providing direction and by giving us a way to structure and organize what we learn by its utility rather than by the chronology of a textbook or course.

The feature is implemented when all the tests in the test code are passed. Similarly, the student is done learning (that bit) when she can successfully complete the test tasks she originally laid out. Imagine the self-confidence that could arise from that!

Of course, it’s common in test-driven programming to discover bugs or omissions in the program that one hadn’t anticipated and built into the tests, often when the program is tried in a new context. Similarly, a learner will no doubt discover weaknesses in her understanding, revealed by contexts and questions she had not originally contemplated. No worries: The appropriate thing to do is to immediately develop new tests that capture this weakness, such that the tests are currently failed but will be passed when the weakness has been satisfactorily remedied. This is an expected part of developing a program organically through test-driven development, not a failure of the system.

If you’re trying to envision this process playing out in a physics course, please keep in mind that this is an iterative process, feature by feature, and that each iteration attacks a fairly small chunk of the overall problem. If you’re imagining asking students to “design self-tests that will let you know when you understand Newton’s laws”, you’re thinking of chunks that are way too large. A better example might (possibly) be “design self-tests that will let you know when you can predict an object’s position at any future time, given the object’s initial position, velocity, and (constant) acceleration.”
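To be concrete (an invented instance, not a prescription): for that chunk, a learner might commit in advance to a numerical target, such as “given $x_0 = 2$ m, $v_0 = 3$ m/s, and constant $a = -1$ m/s², I should be able to predict the position at $t = 4$ s and defend every step.” The constant-acceleration result

$$x(t) = x_0 + v_0 t + \tfrac{1}{2} a t^2 = 2 + (3)(4) + \tfrac{1}{2}(-1)(4)^2 = 6\ \mathrm{m}$$

then serves as the answer key the self-test gets checked against.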

Might such a pedagogical approach be practical? I have to say “of course it is!” — at some level — because that’s the way most people learn most things, outside of school. When our learning is in response to a specific need, meeting the need is the test. If the need is complex, we break it down into components, and learn what we must to overcome them one at a time. Only in “school learning” do we try to pack knowledge into our heads because we’re told we should learn this stuff and that it will be useful eventually, and only in school do we depend on someone else to tell us whether we understand what we hope we do.

Would it be too slow, with all that test-inventing taking time away from the actual learning? Many people’s perception of test-driven software development is that it’s slow, because one often spends more time writing the test code than writing the code that actually implements the features. However, practitioners of test-driven development also tell you that such time is not lost; it is invested, and recovered with dividends as the program gets increasingly complex, bugs appear that must be found and squashed, new features must be added on without breaking old ones, and the program must be checked out before being trusted. “I *think* this program works, because it seems to when I fiddle with it, and all the programming decisions I made seemed right at the time” is not a comfortable place to be.

Is this collision an ugly wreck or a beautiful synthesis? You decide.


Post-Holocene Education?

Tonight, Prof. Ben Ramsey of the UNCG Religious Studies Department gave the University’s inaugural Future of Learning lecture. I attended. I feel like I’ve been ambushed and beaten up, intellectually speaking. This was not the “bright new horizons in pedagogy” talk I’d expected. Ramsey’s title was “After Learning: Education on a Hot Planet,” which merely hints at his considered, complex, provocative, and deeply distressing message.

I am still processing what I heard, and my memory for detail (as opposed to gist) is not the best. I need to review a video of the lecture; what follows is certainly a clumsy and oversimplified summary.

Ramsey’s delivery was simple and passionate. He displayed only one slide, and spoke naturally and forcefully to the audience with only an occasional glance at his notes. At the outset, he revealed that his sister was in the very final stages of a long battle with colon cancer, and he clearly brought some of that emotion to the urgency of his speech.

Ramsey’s first point was that we are now living in the post-Holocene epoch, which some call the “Anthropocene”: the geologic epoch in which humanity, due to the scale of its population and the power of its technology, alters the planetary climate and ecology. Citing the sudden and permanent disappearance of the cod ecosystem off the coast of Maine, he pointed out that the collapse of an ecosystem is not a pleasant thing. Although we tend to think of ourselves as separate from nature, he argued that we must begin thinking of ourselves and nature as tightly and inextricably intertwined.

The term “hot planet” in his title refers to more than just global climate change. He also meant it to represent the social, political, and economic stresses and consequent crises arising from a socio-cultural-ecological system pushed past its limits, and he painted a vivid portrait of this system: Foxconn factories enmeshed in suicide-prevention nets to stop workers from leaping to their deaths, rural Indian farmers walking out into their fields at night and drinking pesticide to quietly end their lives when drought and rising costs bankrupted them, “exceptional” storms becoming the norm. He read an extended excerpt from Bill McKibben’s book Eaarth claiming that we no longer inhabit the generally comfortable planet, so ideally suited to life, that we’ve lived on for the last 10,000 years. We’ve pushed our socio-cultural-ecosystem over the edge and changed it, and now we find ourselves on a different, less friendly planet with melting ice caps, growing deserts, acidifying oceans, disappearing species, growing numbers of hungry people, and a “hollowing out” of the middle class. Ramsey declared that the species humanity now most resembles in its collective behavior is “global locusts in an extended plague phase.”

The planet can’t be fixed by clever engineering or technology or political action, because it isn’t broken. It’s different. We don’t have a problem, we have a predicament. Predicaments aren’t solved; you just land more or less softly.

How did we get here? Ramsey argued that twice in our history — once at the onset of the Industrial Revolution, and again in the 1970s and 80s — we let business “escape from the household” and run amok. He drew a very careful distinction between “economics” and “chrematistics.” He claimed that Aristotle’s definition of economics connoted “management of the household”, whereas chrematistics means the short-term pursuit of financial gain (“mammon”). In recent decades, we have returned to the excesses of the robber barons and prioritized near-term material profit over sound management of our collective household.

He specifically attacked “economic growth” as a false god of our age, something we unquestioningly pursue and look to as a solution to our problem. He labeled growth a mirage, arguing that we cannot continue to extract resources and expel by-products at 1.5 times the rate that the planet can regenerate and absorb. Nevertheless, worldwide economic policies establish growth as their explicit goal, and as the solution to economic ailments.

This mind-set taints education, too. It sees people as “human capital”, and the goal of education as producing more human capital to fuel economic growth through production and consumption (a perspective recently voiced by our aggressive new governor). A curriculum focused on employable skills, on analytic thinking, and on “linear” problem solving serves only to perpetuate the problem.

Ultimately, Ramsey’s argument is that we desperately need to take an economic, rather than a chrematistic, perspective on education. We need to teach future citizens how to manage our house, not how to be human capital in the pursuit of growth. He’s not sure exactly what that kind of an education would look like, but it needs to develop other kinds of thinking beyond the analytic and linear — metaphorical, dialectical, synthetic — thinking that can help us cope with (not fix) incredibly complex, nonlinear systems. It needs to make us spend some time living under big overarching ideas like “the holocene is over” and “the Earth is full,” not arguing about them but finding out where our thinking takes us as we absorb them.

Ramsey’s talk struck me as deeply pessimistic, but he tempered that by saying that if he didn’t believe in the power of education to help us navigate the future, he wouldn’t be here giving this talk.

So where does that leave me? Damned if I know. Ramsey’s lecture was either prophetic or a raving mania of fear and cynicism. It would be far more comfortable to write it off as the second, but I’m not finding that easy to do. Ramsey framed his talk as being about intellectual honesty, not about being right or engaging in arguments. Intellectual honesty compels me to take his perspective seriously.

On my walk home, I found myself briefly jealous that I teach physics, instead of some more relevant discipline wherein I could engage students in deep thinking about such important topics as economics vs. chrematistics and our future on an increasingly hostile planet. And then, I thought that perhaps physics was not so irrelevant after all, if framed as an arena for learning to understand complex interconnected systems. Perhaps learning how to model and understand the physical universe, from the quantum level to the cosmological, is a suitable warm-up for coming to terms with our socio-cultural-ecological system’s new dynamics?


If flash cards are the answer, we’re asking the wrong question.

I’m in St. Louis. I’ve just finished a two-day conference at Washington University that brought together leading cognitive science/cognitive psychology researchers with education researchers and innovators from various STEM (science, technology, engineering, and mathematics) disciplines. As conferences go, it was pretty good: a manageable number of generally high-quality (invited) talks, neither too short nor too long, by respected scholars; adequate time for questions and discussion; enough elbow room around the scheduled events for networking and hallway chat; a sense of conference mission; and a few breakout sessions for greater interactivity. Really, the organizers did a good job.

So why am I disappointed and frustrated? Two reasons:

1) There’s still a huge gap between the phenomena that the “coggies” are studying and those that we in higher STEM education are (or at least should be) wrestling with. I find it telling that the coggies, in talks about learning, generally refer to “recall” and “reproduction” when they’re being careful about what learning outcomes they’re seeking. When my students are struggling their way through the analysis of a complex physics scenario, they’re doing a whole lot more than recalling or reproducing something.

I tried raising this a few times during the conference, but rather than dialogue aimed at understanding and closing the gap, I elicited two kinds of response: defenses of why coggies study basic recall and training mechanisms, and assertions that cognitive science also has work that speaks to the general sorts of concerns I’m presumed to have. Yes, I know: research on “executive function” and “metacognition”, for example. Unfortunately, such research had a low profile at the conference. More seriously, much of that research is also conducted in highly simplified, abstracted situations far removed from the messiness and contextual dependencies of real classroom learning.

I am absolutely *not* criticizing how the coggies do their thing. Physicists have spent much time and many federal dollars researching highly simplified, isolated systems, too; that’s where theory-building begins. (Thus the old joke about the spherical cow.) However, I have a problem with glibly calling memorization and visual identification “learning” and framing it as something of direct relevance to my teaching or my education research work, as this conference seems to have at least implied.

What frustrated me was the lack of dialogue about how to connect such low-level, yet undeniably foundational and relevant, research to the actual work of STEM teaching and learning. Giving research-based advice about strategies for efficient studying with flash cards doesn’t cut it.

2) Even among the STEM education researchers and innovators at the conference, most of the presentation and discussion was about narrow, localized topics: specific innovative curricula, getting students to approach physics problems strategically, one department’s project to overhaul its upper-level courses, capstone courses to teach engineering design, etc. We all came with our parochial concerns and peeves and pet ideas, and spent most of our time seeking an audience for those.

Yes, cross-fertilization is a good thing, and is perhaps the primary benefit of typical conferences. This, however, wasn’t supposed to be a typical conference. I wish we had tried to take advantage of all those different viewpoints by rising to a higher level, seeing how they fit together into a more general pattern, and articulating a broader research agenda for the general community. (Admittedly, I’m something of an extremist about “going meta” at every opportunity.)

I suspect that this kind of big-picture-emerges-from-many-perspectives outcome is what the conference organizers were hoping for. Could it have been instigated by structural changes to the conference design? Perhaps. Would more explicit metacommunication about our joint purpose have helped? Maybe. Was the conference about as successful as it could possibly have been, given the state of the fields represented and the psychology of academics? Quite possibly.

The day closed with some talk about repeating the event in another year or two, to see what fruit arose from this year’s frenzy of cross-pollination. Maybe by then, we’ll all be better at seeing the chasms that divide us and planning the bridges we need.


No-Bullshit Teaching

This is a post where I try to put some ideas I’ve been wrestling with for a while into new words, hoping for new insight. What follows may or may not be worth a hoot. Caveat emptor.

The more I do this “teaching” stuff, the less tolerance I have for the bullshit involved. I’m realizing that the driving force behind many of my pedagogical experiments and innovations is a desire to reduce bullshit.

For the present purposes, I’ll define bullshit as “statements or actions that are not strongly, deeply, openly, and genuinely aligned with actual goals and values.”

Using the assessment and reporting (“grading”) system of a course to coerce students into acting in their own best interests—or what we at least believe to be their best interests—is bullshit.

Assigning tasks for students to do in order to drive them to study what I want them to study, rather than because I really care that they can complete those specific tasks, is bullshit.

Structuring my course around an explicit system of instruction with “rules” for how I should teach and assess (no matter how pedagogically enlightened and research-based), and then spending more energy figuring out how to conform to the system than figuring out what my students are thinking and what I can do to help them, is bullshit.

Any time the students are asked to focus on something other than the direct, bare learning objectives of the course, or I have to focus on something other than moving them closer to achieving those objectives, bullshit is happening.

Why is formal education so rife with bullshit?

I think the primary cause is something I wrote a paper about a few years ago, albeit in a slightly different context. In “Illuminating teacher change and professional development with CHAT” (2009; abbreviated, cleaned up a bit, and in press right now as “Viewing teacher transformation through the lens of cultural-historical activity theory”), I used cultural-historical activity theory (CHAT) to analyze an intervention my colleagues and I did with secondary school science and math teachers. Long story short, I argued that the root of many of teaching’s difficulties is an inherent contradiction in how we see and treat students: as self-invested learners whose aspirations we support, and as recalcitrant subjects whose conformity we coerce. As I put it in the paper:

The activity system treats students as the object of activity, as if they were “raw material… at which activity is directed” (CATDWR, 2003, ¶4), despite the unavoidable fact that they are willful individuals making a transition to adulthood. Students’ dual status as both object and community member lies at the root of the contradiction. The issue is sovereignty and whether students act or are acted upon.

Unfortunately, we are stuck in an institutionalized educational structure that deeply embeds this contradiction. Vanishingly few of even my most curious, internally motivated students feel they can afford to ignore the bullshit.

Short of overhauling the entire system (which I’d love to do, but doubt I can achieve on any reasonably finite time scale), what options does that leave me as a university teacher?

First, let me sharpen my definition of bullshit in an instructional context. A course plays out on two separate but interrelated planes: the plane of behavior engineering, involving all the artificial constraints of grading, deadlines, work requirements, and so on; and the plane of intellectual learning, involving the actual sense-making activity of learners wrestling with content and instructors trying to help. Bullshit is any time the first plane interferes with the second in any way whatsoever, even by merely distracting attention.

One option is to (somehow) narrow the gap between the two planes, making the behavioral engineering plane as minimal, unobtrusive, and well-aligned with the learning plane as possible. That could mean, for example, that when I want students to understand how the concept of entropy bridges microscopic and macroscopic models of thermal systems, I should assign them the task of “make sense of how the concept of entropy bridges microscopic and macroscopic models of thermal systems.” The corresponding assessment should be “explain, illustrate in multiple ways, and reflect upon how the concept of entropy bridges microscopic and macroscopic models of thermal systems.” Or something like that.
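(For the record, the bridge in question is Boltzmann’s relation between a macrostate’s entropy $S$ and the number of microstates $\Omega$ consistent with it:

$$S = k_B \ln \Omega$$

which is about as compact as a micro-to-macro bridge gets.)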

Of course, most students need scaffolding in order to accomplish that, and I can provide scaffolding by way of suggested readings, suggested stepping-stone tasks to wrestle with, subsidiary questions to discuss, and so on. If I make this scaffolding the official assignment and assessment structure, however, I’ve just introduced bullshit.

As many of you will no doubt be hollering at me right now, the weakness of this approach lies in the fact that many students will deliberately or unwittingly undermine such an approach by finding ways to “game” it, and/or by availing themselves of the tremendous freedom provided to shoot themselves squarely in the foot. Why? Because we’re embedded in an institutionalized sea of bullshit, and they’ve been raised to view learning through that highly discolored lens. Students function in an attention economy where the immediacy and consequences of demands on the behavioral engineering plane determine how time and effort should be allotted. “Yeah, I wanted to spend some time figuring out that physics thing, but I had a history paper due and a chem exam coming up…”

The other option is to go completely renegade and subvert the system entirely: refuse to play the game. On a lesser scale, this could mean cheerfully letting students shoot themselves in the foot rather than introducing any behavioral-incentive bullshit, and hoping that eventually some will develop the requisite internal motivation. On a greater scale, it might mean taking the issue of grades off the table (and refusing to buy into the institution’s use of grades) by stating up front that “Every student enrolled in the course gets an A. Now, let’s stop thinking about grades and do some learning.”

I’m not sure I’ve got the guts to do the latter—certainly not before tenure. I definitely feel driven in that general direction, however, by the distinctly nauseating odor of bullshit.
