
Getting Students to Test Their Programs

By Karen Lang

Animations and game creation really do motivate students. Doesn't every student assume they can take an introductory computer science course and get a job at Electronic Arts making video games?

While animations and game development can be motivational tools, they can also provide a good lesson in function design and proper testing. I find that students, when faced with creating an animation, get caught up in the thrill of seeing something move on their screen, and their good programming habits go out the window. Building an animation and/or game introduces a new level of complexity, with the need to incorporate several functions and possibly classes or structures. Because of that extra complexity, there is even more need to take it slow, provide good documentation, and test each function as you build it. What I find is that students are so fixated on the end goal that they throw all their functions together in a hurry without testing, and then run the program to see if it works. When something doesn't work as expected, they don't know quite where to start debugging. Despite my admonitions to test as they go, they rarely do.

A couple of days ago, I had a student look at me, extremely frustrated, because his animation would not work. His cat was supposed to move across the screen, and there it sat, still as could be. Yet he stated loudly, over and over, "I know it works!" I looked at his code and there wasn't a single test case. I asked him how he knew it worked, when he hadn't tested the code, and the lack of cat movement proved otherwise. He stepped me verbally through his logic, swearing it all made sense. He was so resistant to doing the grunt work of thorough test cases. I told him to go back and test every condition before concluding it worked.

How does one prevent this from happening? I emphasize testing, and I deduct grade points for inadequate testing. This one student realized he couldn't avoid it if he wanted to see his cat move across the screen. Eventually he had to succumb and test his function to find his error. Do you have any ideas or strategies for cases like this?

Karen Lang
CSTA Board of Directors

Comments

Karen,

It's admirable that you emphasize testing at all. For so many CS teachers, testing is an afterthought and usually consists of plugging in a few "obvious" values, verifying that any (any!) output was generated, and calling it good.

At risk of being redundant with your post, here's my strategy:

Integrate Testing Early. I start requiring tests around the fourth week of my semester, once they have started showing mastery of method invocation syntax and are starting to write their own methods.

Assign Testable Programs. I place a greater emphasis on functions than on procedures, as they are more testable. I don't introduce private variables and methods until near the end of the semester; with public instance variables, it's easier to inspect objects. I eschew console I/O as long as possible. The test cases themselves are the input and output. I use an IDE that allows the student to invoke a method directly rather than bury it in a main method.

Make Testing Easy. It ought to be dirt-simple to write tests. If you're using Java, the JUnit package is OK, but more complex than it used to be. Try Viera Proulx's tester library for a simpler interface. Emphasize writing smaller functions rather than statement-packed procedures with lots of side-effects.
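In that dirt-simple spirit, here is a minimal sketch of what such tests might look like for the cat animation above, without any framework at all. The `nextX` helper and its wrap-around behavior are hypothetical, purely for illustration; the point is that a pure function with a few plain checks is trivially testable before the animation loop ever runs.

```java
// Hypothetical example: a small, pure "next position" function is easy to
// test on its own, before it is ever wired into an animation loop.
public class MoveCatTest {

    // Hypothetical helper: next x-position of a sprite moving right at
    // `speed` pixels per frame, wrapping around at the screen width.
    static int nextX(int x, int speed, int width) {
        return (x + speed) % width;
    }

    // Dirt-simple check: no framework, just compare and fail loudly.
    static void check(int actual, int expected) {
        if (actual != expected)
            throw new AssertionError("expected " + expected + " but got " + actual);
    }

    public static void main(String[] args) {
        check(nextX(0, 5, 100), 5);    // normal movement
        check(nextX(98, 5, 100), 3);   // wraps around the right edge
        check(nextX(0, 0, 100), 0);    // zero speed: the cat sits still
        System.out.println("all tests passed");
    }
}
```

A student who writes these three lines first would have discovered a stationary cat (the zero-speed case) long before staring at a frozen screen.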

Make It Easier to Test Than to Not Test. Refuse to help a student with their program until their tests are written. Design problems with interrelated methods and classes, so a change in one might affect the behavior of another.

Testing Must Be Worth Something. I give at least 20% of the points to the tests and about 50% of the points to their up-front design documentation. Reward the quality of tests, not necessarily the quantity. Whether or not their program actually works is less important to me than getting the design and tests correct. Usually, the student discovers that if they do the design and tests, the program just takes care of itself.

Yes, the students grumble about all the testing they have to do. Yes, they often write the tests AFTER the program. But I stick to my guns: no help without tests.

If you can quantify and structure the input and output specifications, along with a few sample inputs and outputs, then perhaps a system could be created that checks the student's code and grades it automatically against a set of test cases, which the faculty could devise once and reuse many times.

Back here in India, we do have Machine Graded Programming Tests (MGPTs) for programming assignments and challenge problems in C and Java. I wonder whether the same model could be applied to animated programs. I need to ponder this a bit more!
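The grading idea above can be sketched as a table-driven checker: run the student's function against a reusable set of cases and report a score. This is only an illustrative sketch, not any particular MGPT system; the `Case` record, the `score` method, and the squaring submission are all hypothetical.

```java
// Hypothetical autograder sketch: score a student's function against a
// reusable table of test cases devised once by the faculty.
import java.util.function.IntUnaryOperator;

public class MiniGrader {

    // One reusable test case: an input and its expected output.
    record Case(int input, int expected) {}

    // Run the student's function on every case; return percent passed.
    static int score(IntUnaryOperator studentFn, Case[] cases) {
        int passed = 0;
        for (Case c : cases)
            if (studentFn.applyAsInt(c.input) == c.expected)
                passed++;
        return 100 * passed / cases.length;
    }

    public static void main(String[] args) {
        // Faculty write these once and reuse them for every submission.
        Case[] cases = { new Case(2, 4), new Case(-3, 9), new Case(0, 0) };

        // A hypothetical student submission: square a number.
        IntUnaryOperator submission = x -> x * x;

        System.out.println("score: " + score(submission, cases));
    }
}
```

Grading an animation this way is harder, since the "output" is frames on a screen rather than return values, which may be exactly why the commenter wants to ponder it further.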
