Yet another breathless account of the wonders of computerized learning appears in this weekend's New York Times Magazine in an article entitled "The Machines are Taking Over: advances in computerized tutoring are testing the faith that human contact makes for better learning."
The article opens with a scene of an actual human being tutoring a fellow species member. While her tutee works on a problem (calculating average driving speed), the tutor provides lots of interactive feedback. Neil Heffernan, the tutor's fiancé, catalogued the various types of feedback she gave under such categories as “remind the student of steps they have already completed,” “encourage the student to generalize,” and “challenge a correct answer if the tutor suspects guessing.” According to the article, Heffernan then "incorporated many of these tactics into a computerized tutor," which he spent nearly two decades refining. Now called ASSISTments, it is used by more than 100,000 students "in schools all over the country." The article describes the experience of one of these 100,000 students with the program's interactive feedback:
Tyler breezed through the first part of his homework, but 10 questions in he hit a rough patch. “Write the equation in function form: 3x-y=5,” read the problem on the screen. Tyler worked the problem out in pencil first and then typed “5-3x” into the box. The response was instantaneous: “Sorry, wrong answer.” Tyler’s shoulders slumped. He tried again, his pencil scratching the paper. Another answer — “5/3x” — yielded another error message, but a third try, with “3x-5,” worked better. “Correct!” the computer proclaimed.

In other words, it's the same old binary right-or-wrong feedback that nearly every educational software program has been using for decades. As the article notes:
In contrast to a human tutor, who has a nearly infinite number of potential responses to a student’s difficulties, the program is equipped with only a few. If a solution to a problem is typed incorrectly — say, with an extra space — the computer stubbornly returns the “Sorry, incorrect answer” message, though a human would recognize the answer as right.

True, the program is still a work in progress. But what's being refined, according to the article, isn't the feedback. Rather, it's the program's ability to detect when a student is getting bored, frustrated, or confused (via facial-expression-reading software, the speed and accuracy of responses, and special chairs with posture sensors "to tell whether students are leaning forward with interest or lolling back in boredom"):
Once the student’s feelings are identified, the thinking goes, the computerized tutor could adjust accordingly — giving the bored student more challenging questions or reviewing fundamentals with the student who is confused.

Or "flashing messages of encouragement... or... calling up motivational videos recorded by the students’ teachers."
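The extra-space failure the article describes is exactly what naive exact-string matching produces. A minimal sketch of the difference between that kind of binary check and a slightly more forgiving whitespace-normalized check (this is a hypothetical illustration, not ASSISTments' actual code):

```python
# Hypothetical sketch of binary answer-checking; not ASSISTments' actual code.

def check_exact(submitted: str, expected: str) -> str:
    """Naive exact-string match: an extra space means 'wrong'."""
    return "Correct!" if submitted == expected else "Sorry, wrong answer."

def check_normalized(submitted: str, expected: str) -> str:
    """Remove all whitespace before comparing, so '3x - 5' matches '3x-5'."""
    def canon(s: str) -> str:
        return "".join(s.split())
    return "Correct!" if canon(submitted) == canon(expected) else "Sorry, wrong answer."

# The problem from the article: write 3x - y = 5 in function form (y = 3x - 5),
# assuming here the program expects the right-hand side typed as "3x-5".
print(check_exact("3x - 5", "3x-5"))       # exact match rejects the spaced answer
print(check_normalized("3x - 5", "3x-5"))  # normalized match accepts it
```

Even the normalized version is still binary feedback, of course: it says nothing about *why* "5-3x" is wrong, which is precisely the gap between the program and the human tutor.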
Also being refined is the "hint" feature, which users click on when stumped. Human beings (particularly teachers) track common wrong answers and have other human beings (particularly students) come up with helpful hints. These hints are then incorporated into the next generation of ASSISTments.
Cognitive Tutor, a more established software program that is "used by 600,000 students in 3,000 school districts around the country," also limits its feedback to hints and right-or-wrong responses. And it, too, is being refined based on data from human users:
Every keystroke a student makes — every hesitation, every hint requested, every wrong answer — can be analyzed for clues to how the mind learns.

Ultimately, this data will be put to use not to refine feedback on particular student responses, but to help decide how to space out material and schedule periodic reviews.
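The article doesn't say how the review scheduling would work, but a common technique in this space is expanding-interval review: the gap before the next review grows after each correct answer and resets after a miss. A minimal, hypothetical sketch of that idea (not Cognitive Tutor's actual algorithm):

```python
# Hypothetical expanding-interval review scheduler; the article does not
# describe Cognitive Tutor's actual scheduling algorithm.

def next_review_days(current_interval: float, answered_correctly: bool) -> float:
    """Double the gap after a correct answer; reset to one day after a miss."""
    return current_interval * 2 if answered_correctly else 1.0

# A student's review history: three correct answers, a miss, then a correct one.
interval = 1.0
for correct in [True, True, True, False, True]:
    interval = next_review_days(interval, correct)
    print(f"next review in {interval:g} day(s)")
```

Note that even a scheduler like this only decides *when* a student sees a problem again; it does nothing to improve the feedback the student gets on any particular answer.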
But it's carefully tailored feedback on particular responses by particular students that makes human tutoring--the inspiration for all these programs--as powerful as it is.
In my earlier post on Cognitive Tutor, I wrote that programming sufficiently perspicuous feedback for mathematical problems "strikes me as even more prohibitive" than the feedback I labored for years to provide in my GrammarTrainer program. Last night I ran this impression past a mathematician friend of mine who cares a lot about effective math instruction. She emphatically concurred.
When it comes to educational software developers--as opposed to educational software users--there is somewhat perspicuous feedback on whether their answers (answers to students' educational needs) are on track. As I wrote earlier, that feedback isn't particularly encouraging.