Tuesday, October 21, 2014

Conversations on the Rifle Range 12: Teaching to the Authentic Assessment

Barry Garelick, who wrote various letters under the name Huck Finn that were published here, is at work writing what will become "Conversations on the Rifle Range". This will be a documentation of his experiences teaching math as a long-term substitute. OILF proudly presents episode number 12:



Back in September, when I was doing my sub-assignment for the high school, I attended a math department meeting the day before school began. Sally from the District office presided, and among the many things she told us at that session was that this year the students in the District would not have to take what is known as the STAR test, by order of the superintendent of the District. “And as you know, the Superintendent is like the Pope. What he says goes.”

While this last was uttered partly in jest, the reaction in the room was celebratory. The STAR exam has been an annual ritual in California and in May of each year about two weeks are devoted to a review and prep for this test, which is keyed to California’s pre-Common Core math and English standards. Such activities inspire accusations that schools are “teaching to the test”. But now in the midst of a transition to implementing Common Core math standards, California was looking at the Smarter Balanced Assessment Consortium (SBAC) exam that would be given officially starting the following school year. (Actually, they don’t call it an exam; they call it an “assessment”. You’ll forgive me if I call it an exam.) For now, however, the state would be field testing the exam. What this meant was anyone’s guess: perhaps this first go-round on SBAC would be to provide a baseline to see how students scored prior to full implementation of Common Core. Or perhaps it was to fine tune the questions. Or both. Or neither.

In any event, when I started my new assignment at the middle school, I had to have my classes take a practice SBAC exam. The day before I was to take all my classes into the computer lab for the practice exam, I attended an after-school faculty meeting.

I had started my assignment at the school earlier that week, so the principal introduced me to the group. I was welcomed by applause, and urgings by fellow teachers to help myself to the tangerines that were brought in for the occasion. I took two tangerines, and as if he were taking that as his cue, the principal started the discussion.

“As you know, we are in transition to the Common Core, and one aspect of this is the SBAC test that will be given this spring. We want students to have a chance to practice with some sample questions. This does two things. It will get them used to the computer interface, because the test is given entirely on computers. And secondly it will get them used to the questions which are not the typical multiple choice question like on the STAR tests. The SBAC is more of an ‘authentic’ test.”

He went on about how Common Core will change the way we teach, but the words all blurred together in my mind amidst phrases like “critical thinking”, “higher order thinking”, and “deeper understanding”. I do recall a conversation between two teachers at my table. One mentioned she saw some of the questions and said "Yes, there are still multiple choice questions on the test. I was very disappointed to see that."

Well, OK, I like open response questions too, but I get rather tired of the “it’s inauthentic if it’s multiple choice” mentality. I took the CSET math exam required in California to be certified to teach math in secondary schools. The multiple choice questions were not exactly easy; I would hesitate to call the exam “inauthentic”. What I find inauthentic is judging seventh and eighth graders’ math ability based on how well they are able to apply prior knowledge to new problems that are substantially different from what they have seen before or have worked with.

On the day of the practice exam the assistant principal took charge of my first group—the first of my three pre-algebra classes and probably the most cooperative of all of my students. He spoke in a loud, commanding voice and gave instructions on how to log on, what page to go to, what things to click on, and had everyone do things at the same time. I only know that I could not duplicate this feat for any of my classes; students would rush ahead, ask me to explain again what I had just said, and inevitably ask “Will the test affect our grades?” I explained that it was for practice and did not affect their grades, nor would the actual test given later in the semester, but the question kept coming up. When it came time to take my fifth period algebra students to the computer lab, I had written on the white board: “No, this test will not affect your grades.”

A boy named Peter exuberantly agreed. “Yes, Mr. G, that’s a great idea, because…” I couldn’t hear the rest amidst the noise of the class, which then followed me outside the classroom to the computer lab. The instructions had to be repeated several times, as I had done throughout the day.

Because this was a practice test, I felt no compunction about giving students help in answering the various questions. For the most part, the questions were reasonable, though the students found some difficult. One question I recall on the seventh grade test was “Enter the value of p so that 5/6 - 1/3n is equivalent to p(5-2n).” Seventh graders have learned how to distribute multiplication, but not how to factor. While the answer of 1/6 seems to jump out at adults, this problem presented difficulty to most of my seventh graders, probably because they hadn’t seen anything like it. The variable “n” was also a distraction. I gave them hints like “Can p be a fraction? What fraction would you multiply 5 by to get 5/6?”
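For readers who want to verify the arithmetic behind the hint, here is a quick sketch (my own check, not part of the test) using Python's exact rational arithmetic: factoring 1/6 out of 5/6 - (1/3)n gives (1/6)(5 - 2n), and spot-checking several values of n confirms the two expressions agree.

```python
from fractions import Fraction

# The SBAC item: find p so that 5/6 - (1/3)n is equivalent to p(5 - 2n).
# Factoring out 1/6: 5/6 - (2/6)n = (1/6)(5 - 2n), so p = 1/6.
p = Fraction(1, 6)

# Spot-check that both expressions agree for several values of n:
for n in range(-3, 4):
    assert Fraction(5, 6) - Fraction(1, 3) * n == p * (5 - 2 * n)

print(p)  # 1/6
```

Using `Fraction` rather than floats keeps the comparison exact, so the check is a genuine identity test at each sampled value of n rather than a floating-point approximation.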

On the eighth grade test, one open-response item was quite complex, involving pulse rates versus weights of various animals, which students had to analyze in terms of slope and a “trend line”. One of the questions was “Interpret the slope of the line from Item 1 in the context of the situation.”

At the end of sixth period, I dismissed the students and went back to my classroom. I realized that when Common Core kicked in, students would be “taught to the test” for all of these particular types of questions. I have no problem with teaching to a test if the test covers material that should be mastered. I do have a problem when part of this is learning how to write explanations that will pass muster according to scoring rubrics.

As I got ready to leave for the day, one of my sixth period students popped her head in the door. “Mr. G, will the test today affect our grade in the class?” I said it wouldn’t, but not for the last time that week.

7 comments:

gasstationwithoutpumps said...

I'm confused by "What I find inauthentic is judging seventh and eighth graders’ math ability based on how well they are able to apply prior knowledge to new problems."

Ability to apply prior knowledge to new problems is precisely what should be measured for students—the problem is that very few exams do that, and "teaching to the test" makes it even harder. A math test should not be a test of memory, but of the ability to apply their math skills to new problems.

I agree with your statement "I do have a problem when part of this is learning how to write explanations that will pass muster according to scoring rubrics." Elementary educators and test writers alike often have very strange ideas about what they will accept as an explanation. Good math explanation is a skill that few ever develop, and rubrics are almost useless in judging explanations. For math tests to be about math and not about writing skills, the scoring should be based solely on ability to do the math, at least until students have been taught proof techniques in high school, when some formulaic explanations can be requested.

Ze'ev Wurman said...

gas...,

I think you have it completely backward.

K-12 schools are about offering instruction at levels that are achievable by every student that gets good instruction, not about selecting the elite few that can go beyond their instruction and actually apply what they learned to new problems of the type they have not seen before.

If that were the criterion, 99.9% of teachers would fail immediately. Why do you think they have to attend all those interminable "professional development" hours if they could "apply their prior knowledge to new problems"? After all, they supposedly already know the math, or the literature, from their college days. All that is left is to "apply it to new problems."

In fact, not only would essentially all the teachers fail on the spot, but most of the population would. Only at the PhD level is one expected to apply knowledge to truly novel situations. And how many PhDs do we have? Less than 2% of the population.

Consequently, all those "new problems" cannot be new or novel, otherwise everyone would fail. Instead, Smarter Balanced will offer pretend novel problems. Those students that were drilled on those "novel" problems (hence making them rote) will easily succeed. The unlucky ones whose teachers actually believe the ed-school crap they are fed, that "students need to struggle on the test and apply prior knowledge to new problems", will simply fail on those questions.

Talk about incentives for teaching to the test. Or about the damage ignorant highfalutin educrats can inflict.

Auntie Ann said...

A kid might be able to do the 5-2n problem without factoring. If they start by setting the two expressions equal to each other, then multiplying both sides by 6 to get rid of the fractions, they could see that there is clearly a 5-2n on each side. Then they'd have to avoid the pitfall of:

5-2n = 6p(5-2n)...get rid of the two 5-2n's, and, voila!: 0=6p

If they can get through that to 1=6p, they should have it without needing to factor.
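Spelling out that route with the item from the post, the non-factoring path looks like:

```latex
\begin{align*}
\tfrac{5}{6} - \tfrac{1}{3}n &= p(5 - 2n) \\
5 - 2n &= 6p(5 - 2n) && \text{multiply both sides by 6} \\
1 &= 6p && \text{divide by } 5 - 2n \text{, rather than subtracting it} \\
p &= \tfrac{1}{6}
\end{align*}
```

The pitfall described above is treating "getting rid of" the two 5-2n terms as subtraction, which wrongly yields 0 = 6p; dividing instead gives 1 = 6p.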

SteveH said...

What I find incredible is that schools blame the students (they just need more engagement) and they depend on state tests (now things will be different!), as if they have little control over what they teach and how they test in class. As for a feedback loop, do they really wait until once a year to make corrections to what and how they teach? One parent-teacher meeting I was in talked about how the state test results showed a lower score for "problem solving". Their solution? Focus more on problem solving.

How can state tests properly judge understanding or critical thinking? What is the role of teachers who see students daily? Are they potted plants?

Auntie Ann said...

Last year, a friend of ours pulled her kids out of our private school the day her third grader reported that the teacher chewed out his entire class for doing so poorly on their standardized tests. She now homeschools.

Of course, the teachers and the school seemed to solely blame the kids...the 8-year-old kids, not the adults who have held those kids' education in their hands for at least three years.

And, until last year, it was an Everyday Math school.

gasstationwithoutpumps said...

Late reply here, but since the comment was featured in the end-of-year review, I'll respond to Ze'ev.

Engineers are expected to apply their skills to new problems all the time (at the B.S. level, not the Ph.D. level). At the PhD level, people are expected to find and define new problems, not just solve them.

Ze'ev is also (deliberately?) misunderstanding what is meant by "new" in my comment. A "new problem" means new to the student, not new to the world. The idea is that by learning to solve problems that are not identical to ones that they have seen before, the students learn to generalize and apply their knowledge. Initially, the amount of novelty is small, so that students only have to generalize a little. As they progress, the amount of novelty in the problems increases, so that by the time they get to college, they should be able to apply their math knowledge fairly broadly, in areas that they have never previously been exposed to.

Ze'ev Wurman said...

Late or not, your comment still makes little sense.

"Engineers are expected to apply their skills to new problems all the time"

Wrong. Engineers are expected mostly to apply their skills to variants of well-studied problems. When they need to solve problems of a new nature, it takes weeks, months, or years, certainly not a 2-3 hour examination. Clearly you are not an engineer. I am.

Further, it is you who is misinterpreting what I meant by the word "new" (and unlike you, I will not attempt to attribute intent to this error). What I rather clearly meant is that there is a pretense among educrats that most children are able to solve problems that are "new" or "novel" to them. This is also clear from my point that such "novel" problems will become rote the moment they are administered for the first time -- better teachers will drill their students on them.

The idiocy of this demand is nicely summarized in Andre Toom's essay here: http://toomandre.com/travel/Port2010/PORTUGAL.pdf