From the Smarter Balanced Assessments, a Common Core-inspired standardized-test consortium now consisting of about 12 states.

**Extra Credit:**

Discuss how these problems exemplify the phenomenon (discussed earlier) that often the "deep concepts" are relatively easy, while more complex calculations involving those concepts (conspicuously absent in these problems) are where the real challenge emerges.

## 14 comments:

What a computer input nightmare! Those problems would take about 3 seconds with paper and pencil, but at least 30 seconds on the computer--with a far higher risk of input error.

Fourth grade. And still the results will be bad for many schools. It reminds me of what our town has done with NCLB. They took the raw percent-correct score and converted it into a "proficiency index" with a low cutoff point well below 70% correct. Then they converted it to the percent of students who get over that low cutoff. We're looking better now. Next, our town looked at how we compared to others in our small state. We're fourth in the state! Send out the press releases about quality education.

Steve, I want to believe you really are what you claim to be. But every once in a while you post things that just don't make sense. The raw percent correct does nothing to show whether a cutoff point for a particular test is high or low. The math involved is not particularly complicated, and I would expect anyone who had an education in a technical field to understand this.

Steve makes perfect sense.

"The raw percent correct does nothing to show whether a cutoff point for a particular test is high or low."

And Steve didn't say it does. He is saying that states, school districts, etc., set cut-points for "basic", "proficient", "advanced proficient", and so forth. The cut-point for "proficient" may be well below 70% correct--say, 40%. If 80% of the students taking the test scored better than 40%, then what gets posted publicly is that 80% of the students in a school district are at the "proficient" level. What that statistic does not tell you is what the average raw score is. Saying that 80% of the students averaged 45% tells a different story than saying that 80% of the students are proficient.
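The arithmetic here is easy to sketch with a few lines of code. All the numbers below are made up for illustration--they are not from any actual SBAC or state data--but they show how a low cut-point produces a rosy "percent proficient" figure while the raw scores stay low:

```python
# Hypothetical illustration: a low "proficient" cut-point lets a district
# report a high percent-proficient number even when raw scores are weak.
scores = [41, 43, 44, 45, 45, 46, 47, 49, 20, 30]  # made-up percent-correct scores
cutoff = 40  # hypothetical cut-point, well below 70% correct

proficient = [s for s in scores if s >= cutoff]
pct_proficient = 100 * len(proficient) / len(scores)
avg_of_proficient = sum(proficient) / len(proficient)

print(f"{pct_proficient:.0f}% of students 'proficient'")                      # 80%
print(f"...but those students averaged only {avg_of_proficient:.0f}% correct")  # 45%
```

The press release reports the first number; the second never appears.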

What Barry said. The problem is that the method used to come up with a proficiency index hid the correlation to percent correct. The only way to judge whether a proficiency cutoff is high or low is to look at the actual questions and to correlate the proficiency number with the percent correct score. I once tried to map the cutoff index back to a rough percent correct score and it was about 60 percent for our state. (This didn't include another fuzzy category that was something like nearly proficient.) Then I looked at the questions on the test and the grade level to judge how low the cutoff was.

There are two issues. The first is to see how low the cutoff point is on an easy test, and the second is to see how states and towns manipulate those numbers to put them in their best light. My comment really focused on the latter.

My daughter (5th grader) came out of the SBAC tests - both math and English - feeling uncertain about how she did. Questions as easy as these wouldn't have thrown her (though she did complain about the interface for entering fractions).

The tests are adaptive, so kids out there are getting questions with different difficulty levels. I'm not a fan of the adaptive test - it seems like the adaptive nature of the SBAC test will add another layer of opaqueness to the testing results.

re Auntie Ann - in my child's school the kids were finding it very difficult to answer these questions where you have to click on the answer (rather than just use the numbers on the keyboard). The task was especially difficult because the county just switched to Chromebooks and not all kids were familiar with using a touch pad. The school administrator had some discretionary funds and bought several hundred external mice just before testing started. I haven't heard of any other local elementary schools getting the external mice - so how's that for a level playing field?

Steve and Barry, you are just digging a bigger hole. Saying 80% of the students averaged 45% does not tell a different story than saying that 80% of the students are proficient, because both statements are meaningless. The only way to know whether the cutoff scores are set too high or too low is to give the tests to a representative sample and come up with a percentile ranking. You cannot have passed an introductory statistics course without knowing this.

Steve, first you couldn't understand the meaning of the word between in a straightforward math problem. Then you indicated that you thought females were receiving preferential admissions to the Ivy League. Now this. Enough. I don't believe you are real.

Good job knocking down that straw horse, anonymous!

Let me simplify what Barry and Steve are trying to say: if there are too many failures, lower the standards. The students are not performing well on the adaptive tests, so the test makers lower the entry point for "proficient" instead of creating a more reliable test.

"Saying 80% of the students averaged 45%, does not tell a different story than saying that 80% of the students are proficient because both statements are meaningless."

I'm saying no such thing. Reread my posts.

"first you couldn't understand the meaning of the word between in a straightforward math problem"

You have a long-term issue here. You have to do better than ad hominem arguments.
