a crazy collision of a teaching idea

I love it when two different ideas collide in my head, even when I’m not entirely sure whether the result is a beautiful synthesis or an ugly wreck.

first idea

In a conference paper I wrote exploring what we could learn about physics teaching by studying effective video games, I said:

To me, themes of grounding, exploration, assessment, and feedback suggest that we ponder how the process of learning physics can be re-conceived in such a way that students learn physics by encountering and exploring some experiential “terrain” in which physics ideas are manifest, receiving immediate, obvious, and natural feedback about their developing understanding and competencies. One key ingredient for making feedback immediate and apparent, in a manner that scales, might be the development of students’ self-assessment capacities in parallel with content learning. Students must learn to self-check and peer-check in the same way that practicing scientists do.

When I wrote that, I was vaguely envisioning possibilities such as having students accompany each problem solution they produced (on homework or an exam) with a detailed self-critique of the steps they were more and less confident about; including frequent peer critique of assignments; and so on.

second idea

I do a fair amount of computer programming (and used to do much more), and for several years I’ve practiced test-driven development. Loosely speaking, this is an approach to programming that advocates:

  1. Writing test code that automatically tests all desired aspects and features of the program one is developing;
  2. Organically growing a program in small steps, beginning with a version that functions correctly for a trivial subset of the desired purposes, and then gradually adding functionality a bit at a time, each time producing code that passes all tests (for the features so far implemented) before proceeding; and
  3. Writing the test code for a new feature (or for the successful repair of a newly discovered bug) before writing the code for the feature or making the bug repair, so that the test initially fails, and then the feature/fix is done when the test succeeds.

The test-first discipline is difficult to self-enforce, but I find it quite helpful for anything beyond the simplest of programs. For one thing, it’s the only realistic way I’ve found to reliably produce a thorough, complete, trustworthy safety net of test code. Having such a safety net allows me to freely tinker with working code to “refactor” it and make it better and more elegant. For another, having to think through the test cases up front helps me clarify exactly what the desired feature needs to do and how the code implementing it needs to be factored, which leads to fewer dead ends and “Oops, I need to do this a different way” moments. For a third, writing code so that all the logical pieces are independently testable forces me to break my program up into simple, loosely-coupled chunks, which is generally good practice.
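
(To make the rhythm concrete, here’s a toy sketch in R — the language I’ve been using lately; more on that below — with an invented function, running_mean(), and the RUnit package’s check functions. First the test, written before the function exists, then just enough code to make it pass.)

    library(RUnit)   # provides checkEquals() and friends

    # The test comes first.  Running it now fails, because running_mean() doesn't exist yet.
    test.running_mean <- function() {
      checkEquals(c(2, 3, 4), running_mean(c(2, 4, 6)))
      checkEquals(5, running_mean(5))   # trivial one-element case
    }

    # Now write just enough code to make the test pass.
    running_mean <- function(x) {
      out <- numeric(length(x))
      for (i in seq_along(x)) {
        out[i] <- mean(x[1:i])
      }
      out
    }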

the collision

For the last few days, I’ve been teaching myself to program in R in order to do some statistical analysis. The code I’ve been developing has started to get more complex, so yesterday I figured out how to set up a framework for automatically testing R code.
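
(I won’t bore you with the details, but for the curious, one such setup — sketched here with the RUnit package, with the directory and file-name patterns invented for the example — looks roughly like this: gather every test function from the test files, run them all, and print a pass/fail report.)

    library(RUnit)

    # Gather every function named test.* from files named runit_*.R in tests/,
    # run them all, and print a report of which checks passed and which failed.
    suite  <- defineTestSuite("analysis",
                              dirs = "tests",
                              testFileRegexp = "^runit_.*\\.R$",
                              testFuncRegexp = "^test\\.")
    result <- runTestSuite(suite)
    printTextProtocol(result)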

The collision occurred this morning, while rereading that conference paper to prepare for an upcoming talk. It occurred to me: “What if we treated building knowledge like building a computer program, and established self-tests for the things we wanted to learn before we learned them? What if we could get students to buy into this approach?”

What would that mean in practice? I’m not entirely sure. To develop test code, I have to figure out what “feature” I want to add to my program, and envision how the program will behave (as seen from the outside) when the feature is implemented. I’ll then write test code that tries to make the program exhibit that feature in a variety of circumstances, and squawks whenever the program’s behavior deviates from the expectations coded into the tests. Then I’ll run the test suite, which will of course squawk like crazy since I haven’t yet actually written the feature into the program. Only then do I begin engineering the feature into the program, testing as I go, until the test code gives me a thumbs-up. That’s when I know I’m done.

Actually, I’m not quite done then. The final step is to review the feature’s implementation, and see whether I can make it cleaner, faster, more elegant, or otherwise better. While so doing, I intermittently re-run the test suite to make sure I haven’t unwittingly introduced a bug.
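
(Continuing the toy example from above: with the test in place as a safety net, I can swap in a tidier implementation and know immediately whether I’ve broken anything.)

    # Refactoring: same behavior as the loop version above, but shorter and faster.
    running_mean <- function(x) {
      cumsum(x) / seq_along(x)
    }

    test.running_mean()   # re-run the safety net; an error here means the refactoring broke something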

Let’s think about this with “learning some physics” in place of “developing software”. My first step is to identify the “feature” I want to add, which is like identifying the physics I want to learn. Next, I have to specify that feature: How, precisely, will the program behave when the feature is implemented? What will it do in various circumstances? In learning physics, this maps to specifying what the desired knowledge will allow one to do that one can’t currently do. I think students are often fairly fuzzy on that, and introducing this into students’ learning process could be a big win. I’m in favor of anything that gets students to think about physics as “capacities to develop” rather than “stuff to know”.

Next comes the hard part: writing the test code. In the physics case, that means the learner (not the instructor) setting up self-tests of some kind(s) that one can’t currently pass, but expects to be able to pass after learning the desired thing. That means operationalizing the specified capacities very concretely. This will likely be the hardest step: basically, making up test questions (or other assessments) before one has learned what the test will be testing. And it’s a bit of a trick, too: In the process of doing this, one is already beginning to do the learning. I find that as I write test code for a desired program feature, I’m already beginning to see the shape of the implementation code in my mind. I’m sure that somebody, somewhere, in some context has sagely observed that clarifying a problem is often the most important step in solving it.

Once the self-test is developed, go learn. Unlike the vague and rudderless learning that students often do, however, guided by no objective more specific than “Get ready for a forthcoming but unknown exam”, one has very specific questions/challenges that one is seeking answers/solutions to, something I have called “question-driven instruction” elsewhere. I firmly maintain that in real life, most of us learn something (like how to program in R) with a very specific objective in mind (like analyzing a particular data set), and that having a clear goal helps us learn more effectively and efficiently by providing direction and by giving us a way to structure and organize what we learn by its utility rather than by the chronology of a textbook or course.

The feature is implemented when all the tests in the test code are passed. Similarly, the student is done learning (that bit) when she can successfully complete the test tasks she originally laid out. Imagine the self-confidence that could arise from that!

Of course, it’s common in test-driven programming to discover bugs or incompletenesses in the program that one hadn’t anticipated and built into the tests, often when the program is tried in a new context. Similarly, a learner will no doubt discover weaknesses in her understanding, revealed by contexts and questions she had not originally contemplated. No worries: The appropriate thing to do is to immediately develop new tests that capture this weakness, such that the tests are currently failed but will be passed when the weakness has been satisfactorily remedied. This is an expected part of developing a program organically through test-driven development, not a failure of the system.

If you’re trying to envision this process playing out in a physics course, please keep in mind that this is an iterative process, feature by feature, and that each iteration attacks a fairly small chunk of the overall problem. If you’re imagining asking students to “design self-tests that will let you know when you understand Newton’s laws”, you’re thinking of chunks that are way too large. A better example might (possibly) be “design self-tests that will let you know when you can predict an object’s position at any future time, given the object’s initial position, velocity, and (constant) acceleration.”

Might such a pedagogical approach be practical? I have to say “of course it is!” — at some level — because that’s the way most people learn most things, outside of school. When our learning is in response to a specific need, meeting the need is the test. If the need is complex, we break it down into components, and learn what we must to overcome them one at a time. Only in “school learning” do we try to pack knowledge into our heads because we’re told we should learn this stuff and that it will be useful eventually, and only in school do we depend on someone else to tell us whether we understand what we hope we do.

Would it be too slow, with all that test-inventing taking time away from the actual learning? Many people’s perception of test-driven software development is that it’s slow, because one often spends more time writing the test code than writing the code that actually implements the features. However, practitioners of test-driven development will also tell you that such time is not lost; it is invested, and recovered with dividends as the program gets increasingly complex, bugs appear that must be found and squashed, new features must be added without breaking old ones, and the program must be checked out before being trusted. “I *think* this program works, because it seems to when I fiddle with it, and all the programming decisions I made seemed right at the time” is not a comfortable place to be.

Is this collision an ugly wreck or a beautiful synthesis? You decide.


12 Responses to a crazy collision of a teaching idea

  1. EKPhys says:

    That’s a spectacular collision. I’m not familiar with T-D software development, but your explanation sure makes it clear. Let’s try to write a “narration” of a physics learner thinking through the development of an idea this way. Your suggestion of a 2D projectile problem sounds like a good one, though maybe even a bit too complex. Maybe just a horizontal launch… Want to try that together??

    • Ian says:

      Hi, Eric.

      I was actually thinking one-dimensional kinematics, not projectile motion, when I wrote that… Though looking back, I see that it could equally well apply to any multi-dimensional problem with any constant acceleration. My point was simply that students should identify something they can’t currently do but can envision doing, generally as a small step away from where they are currently, and design a litmus-test task they can use to “prove” to themselves when they’ve really gotten it.

      I’m hesitant to write a simulated “narration” like you suggest, because I suspect that the thought processes that we (as content experts) would come up with are unlikely to match the genuine thoughts of learners, and so I’m not sure what the point would be. (I’m not saying there isn’t value in the exercise, just that I don’t immediately see it.) If you want to take a whack at one to show me what you’re thinking, I’d be happy to react to it.

      An interesting little Physics Education Research project might be to give real students the task of coming up with such a self-test, and then record their thinking and product in great detail (think-aloud protocol, recording small groups discussing the challenge, etc.). Hm…

  2. Last summer as we were programming the problem database for the Global Physics Department, one of the collaborators suggested we try the test development approach. I confess that I had never heard of it, so I dug in. I was really intimidated. I was thinking of cool features, building them, and then hoping I hadn’t broken something. The test approach turned all that on its head, and, while I could see some benefit, I was unable to get myself to do it.

    Having gone through that, though, I have to say that your idea here sounds really interesting. I like the notion of asking a student to determine what they would like to be able to do (like your example about the projectile question). I like that then they might be motivated to keep that goal in mind and constantly check themselves until they pass that test. Do I have the paradigm right?

    Here’s something I’m not clear on with the analogy, though. When I want to solve a mechanics problem in Mathematica, I don’t use the test development approach, because I keep fiddling until I get it. That’s very different from a program that has a backend and a frontend, like an interface, for example. If I’m doing something for myself (solving a mechanics problem in Mathematica), I don’t worry about others using it at all, so there’s not usually an interface I have to deal with. Now, maybe, I could benefit from the test approach and I’m just not seeing it. I guess my question at this point is what would an interface look like in your analogy?

    • Ian says:

      Hey, Superfly.

      Test-driven development is really only practical and sustainable once you’ve gotten fluent enough with writing and running tests to reduce the friction. There’s no substitute for getting and learning a good unit testing framework for your language: JUnit for Java, mUnit for MATLAB, rUnit for R, etc. Once you’re comfortable with that, it becomes a whole lot easier and more natural.

      BTW, the way I handle the computational physics courses I teach is to come up with a programming assignment (in MATLAB), and then to develop my own solution using a TDD approach. When I’m done, not only do I have a solid solution to check students’ work against, but I’ve also got a complete suite of tests that I can use to check the programs they submit. That makes grading/feedback **VERY** efficient! They submit a program, I simply run my test suite on it (a one-liner), and send them a response summarizing what my test code told me.
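
      (Sketched in R rather than MATLAB, and with the file and directory names invented, that one-liner amounts to something like the following: load the student’s submission, then run my prewritten test suite against it.)

          library(RUnit)
          source("student_submission.R")   # load the functions the student submitted
          printTextProtocol(runTestSuite(defineTestSuite("hw3", dirs = "hw3_tests")))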

      Since you’re a Mathematica guy: http://goo.gl/LoYfh

      Re your second paragraph: Yes, I think that’s a fair summary of the paradigm.

      Re your third paragraph: Imagine that the Mathematica function you’re writing (or calculation you’re doing) — say, finding the magnetic field due to some arbitrary current loop by integrating the Biot-Savart law — is complex enough that it’s not immediately obvious whether the result it produces is correct or not. What do you do? Well, you try it out on a variety of special and limiting cases that you know the answer to, and make sure that it gives you the proper results. All that a “test suite” would do is automate that process. Each test function would call your to-be-written calculation function, pass it a specific set of special-case arguments, and check the result that comes back.
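
      For instance (a sketch in R rather than Mathematica, with a hypothetical function name, biot_savart_b()): a special-case test for the field at the center of a circular loop, where the exact answer is mu0*I/(2R), might look like this.

          library(RUnit)

          # Special case: at the center of a circular loop of radius R carrying current curr,
          # the field magnitude is exactly mu0 * curr / (2 * R).
          test.circular_loop_center <- function() {
            mu0   <- 4e-7 * pi                              # T m / A
            R     <- 0.1                                    # loop radius (m)
            curr  <- 2.0                                    # current (A)
            theta <- seq(0, 2 * pi, length.out = 2001)
            loop  <- cbind(R * cos(theta), R * sin(theta), 0 * theta)    # points along the loop
            B     <- biot_savart_b(loop, curr, c(0, 0, 0))  # the to-be-written function under test
            checkEqualsNumeric(mu0 * curr / (2 * R), sqrt(sum(B^2)), tolerance = 1e-4)
          }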

      This may or may not be worth automating for a one-off, relatively straightforward, single-piece bit of coding. It really comes into its own for complex code, code with multiple parts or functionalities that have to work together (reading in and interpreting a data file, aggregating it with other data, doing some data transformations, doing a calculation with that, etc.), or code that’s going to be reused in a variety of situations. It’s not just about “interfaces”.

      Just like knowledge that is not a single isolated bit and must be reused in a variety of situations, etc…

      • Joss Ives says:

        Ian. I played around with unit testing for Python and Mathematica this past summer after chatting with you about the test-driven development paradigm. For my comp-phys course I ended up going with a lot of types of assignment submissions that didn’t seem to work well with the paradigm (they did animations in VPython, or their submissions were graphical), but that was probably mostly my noob-ness more than anything. But I did adopt your “not good enough until it does what it is supposed to do” grading philosophy, and I can see how the unit testing would really make the feedback more efficient and improve the level of objectivity in the grading. I will have to revisit unit testing for some obviously appropriate assignments and see what happens.

        Since you are sending them the output of the test as feedback, have you considered just giving them the test up front so that they can test their own code before sending it to you?

        As for the main point of the actual post, I need to let it swish around in my brain for a while.

        • Ian says:

          Hi, Joss.

          I don’t usually send them the output of the test code, because it’s a bit cryptic. I usually interpret the code, and send them my summary of the problems. (The code often gives me the line number of the assertion failure in my test code, and I check the line to see what the nature of the failure was.)

          But, with a little modification of the test code, I think it could be useful. Frankly, giving them the test code (probably compiled, so they can’t inspect it and circumvent the specific tests) is a brilliant idea, and I’m embarrassed I didn’t think of it myself.

          Thanks!

          • Joss Ives says:

            It would be interesting to see if giving them the test code acts as some sort of scaffolding and improves their ability to self-test their code or if it acts as a crutch since something is automatically testing their code for them.

          • Ian says:

            Joss: 8-(

            Tough question.

            In the past, I’ve insisted that they test their code thoroughly before submitting it, and even that they describe how they tested it, but that doesn’t seem to work so well. I think I need to teach them *how* to test their code. (Not automated unit tests, though; that takes OOP, which is beyond the scope of this class.)

            Maybe specifying a set of test cases and expected results that they should ascertain for themselves? With perhaps more detail early in the semester, leaving more up to them later?

  3. Pingback: Rubik’s cube test development | SuperFly Physics

  4. Ian says:

    FYI, the always-provocative Andy Rundquist posted an intriguing set of thoughts about this idea over on his blog.

  5. Joss Ives says:

    I think scaffolding their self-test process at first and then backing off is probably the way to go. I used a module system quite similar to yours for my comp-phys course in the fall, and I am thinking that I will require, with each module submission, a note or some output specifying the test cases they used to determine whether their results were reasonable. It was often very frustrating how many times I would have to send a module back to a student, telling them in what way it didn’t work properly, only to have them resubmit with some new problem. There was definitely a lack of self-testing on my students’ part.

  6. Pingback: Methods of Teaching and Learning | E-LEARNING
