Thursday, January 1, 2015

Favorite comments of '14: gasstationwithoutpumps and Ze'ev Wurman

On Conversations on the Rifle Range 12: Teaching to the Authentic Assessment:

gasstationwithoutpumps said...
I'm confused by "What I find inauthentic is judging seventh and eighth graders’ math ability based on how well they are able to apply prior knowledge to new problems."

Ability to apply prior knowledge to new problems is precisely what should be measured for students—the problem is that very few exams do that, and "teaching to the test" makes it even harder. A math test should not be a test of memory, but of the ability to apply their math skills to new problems.

I agree with your statement "I do have a problem when part of this is learning how to write explanations that will pass muster according to scoring rubrics." Elementary educators and test writers alike often have very strange ideas about what they will accept as an explanation. Writing good math explanations is a skill that few ever develop, and rubrics are almost useless in judging explanations. For math tests to be about math and not about writing skills, the scoring should be based solely on the ability to do the math, at least until students have been taught proof techniques in high school, when some formulaic explanations can be requested.

Ze'ev Wurman said...

I think you have it completely backward.

K-12 schools are about offering instruction at levels that are achievable by every student who gets good instruction, not about selecting the elite few who can go beyond their instruction and actually apply what they learned to new problems of a type they have not seen before.

If that were the criterion, 99.9% of teachers would fail immediately. Why do you think they have to attend all those interminable "professional development" hours if they could "apply their prior knowledge to new problems"? After all, they supposedly already know the math, or the literature, from their college days. All that is left is to "apply it to new problems."

In fact, not only would essentially all the teachers fail on the spot, but most of the population would. Only at the PhD level is one expected to apply knowledge to truly novel situations. And how many PhDs do we have? Fewer than 2% of the population.

Consequently, all those "new problems" cannot be new or novel, otherwise everyone would fail. Instead, Smarter Balanced will offer pretend-novel problems. Those students who were drilled on those "novel" problems (hence making them rote) will easily succeed. The unlucky ones whose teachers actually believe the ed-school crap they are fed—that "students need to struggle on the test and apply prior knowledge to new problems"—will simply fail on those questions.

Talk about incentives for teaching to the test. Or about the damage ignorant highfalutin educrats can inflict.

1 comment:

gasstationwithoutpumps said...

I just posted this on the original page, but I'll repeat it here:

Late reply here, but since the comment was featured in the end-of-year review, I'll respond to Ze'ev.

Engineers are expected to apply their skills to new problems all the time (at the B.S. level, not the PhD level). At the PhD level, people are expected to find and define new problems, not just solve them.

Ze'ev is also (deliberately?) misunderstanding what is meant by "new" in my comment. A "new problem" means new to the student, not new to the world. The idea is that by learning to solve problems that are not identical to ones that they have seen before, the students learn to generalize and apply their knowledge. Initially, the amount of novelty is small, so that students only have to generalize a little. As they progress, the amount of novelty in the problems increases, so that by the time they get to college, they should be able to apply their math knowledge fairly broadly, in areas that they have never previously been exposed to.