A = Substantially exceeds the standard

B = Meets the standard

C = Making progress toward the standard

D = Making less than acceptable progress toward the standard.

F = Does not meet the standard.

(Or so I'm told by last week's back-to-school night hand-out.)

Our school district does not give out plusses and minuses, so there's no need to define A- or B+.

And yet... Besides the sinister, Orwellian overtones of "the standard", there's the unexplained overlap between D and F and the large gap between A and B.

So here are my questions:

1. Does someone who is considered to exceed the standard but not "substantially" receive an A or a B?

2. How can someone be identified as "substantially" exceeding the standard when most assignments and tests don't measure skills that exceed the standard?

3. How does the system ensure that subjective teacher judgments don't determine whether a standards-exceeding but not obviously "substantially" exceeding student gets an A or a B?

## 5 comments:

If it helps any (it probably doesn't), I expect this hurts the teachers' brains as much as it does yours.

Of course most of the children are above average, using the math standards taught today.

Completely incomprehensible (and reprehensible as well). I'm guessing that no one gets a D or F anyway?

This year, in our 4th grade (traditionally the first year of "real grades," as the kids call it), we actually have 3 grading scales:

1. A-F, all expressed in detailed percentages (i.e., A = 93-100%, etc.)
2. I = Initiates effort, M = Meets expectations, W = Working towards expectations, S = Support needed (which all of us think of as A-D, naturally)
3. For specialty classes and the like: O = Outstanding, S = Satisfactory, N = Needs Improvement

My head hurts, too.

It makes more sense to me to grade as follows:

Create tests that measure a lot of easy stuff, a lot of grade level stuff, and a bunch of stuff above grade level.

Take the raw scores, which can range from 1 (you get a point just so the math works) to several hundred (for some genius-type kids), with "grade level" calibrated at 100.

Then you take the natural log of the scores.

So Lenny from "Of Mice and Men" might score 40 on his tests, giving him a grade of 3.689... Joe "trying hard" might get a 70 (4.25), on-grade-level Sally would get a 100 (4.6), and perhaps, if this is a math test, Srinivasa Ramanujan might have scored a 536 as a kid (a mind-boggling 6.28).

If you want more granularity you can choose a different base for the log (2, or 1.5 or something)-- everything crams together if you use log10 though...

4.5-5 would be a reasonable thing to try to achieve on tests (there might be factors that make it difficult, but it is not superhuman).

I haven't thought it out completely, and I imagine things would need to be curved and given to a large number of students to figure out where the middle should lie, but it's a system of nearly infinite granularity that is expressed with numbers less than 10...
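The commenter's scheme above is simple enough to sketch in a few lines of Python. This is just an illustration of the proposal, not anything official; `log_grade` is a name invented here, and the example scores are the ones from the comment.

```python
import math

def log_grade(raw_score, base=math.e):
    """Convert a raw test score (>= 1) to a log-scale grade.

    With the natural log and "grade level" calibrated at a raw score
    of 100, an on-grade-level student lands near 4.6.
    """
    if raw_score < 1:
        raise ValueError("raw scores start at 1 so the math works")
    return math.log(raw_score, base)

# The comment's examples, using the natural log:
for name, raw in [("Lenny", 40), ("Joe", 70), ("Sally", 100), ("Ramanujan", 536)]:
    print(f"{name}: {log_grade(raw):.2f}")

# A smaller base spreads the scale out; log10 crams everything together:
print(f"Sally, base 2:  {log_grade(100, 2):.2f}")   # ~6.64
print(f"Sally, base 10: {log_grade(100, 10):.2f}")  # ~2.00
```

Note how the choice of base acts as the granularity knob the commenter describes: base 2 stretches the same raw scores over a wider range, while base 10 squeezes the whole class between roughly 0 and 3.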

How exactly does this grading system work? Is it subjective or is it based on actual tests? If it's based on actual tests, why not just use test results to determine a grade?

Giving a student an A in Science because they got 19 out of 20 questions right is a commonsense grading method. The whole "meets the standard" or "working towards the standard" business seems so fuzzy. What are students and parents supposed to do with this information?
