This week's Education Week reports that U.S. Secretary of Education Arne Duncan plans to set aside $350 million of the $4.35 billion in discretionary aid in the Race to the Top Fund to improve student assessments:
Testing experts say that money could serve as a down payment for scaling up tests that would better measure students’ critical-thinking skills and improve teacher and student engagement in the assessment process.

Many education experts would like to replace the multiple-choice tests that dominate today's No Child Left Behind testing. Paraphrasing Randy Bennett, a scholar at the Educational Testing Service, Education Week notes:
Such tests... are not ideal for identifying whether students can take multiple pieces of domain-specific knowledge and analyze, integrate, and apply them in unfamiliar contexts.

Furthermore:
Researchers familiar with international benchmarking argue that those critical-thinking skills are precisely the type that will be in demand as the global economy becomes increasingly knowledge-oriented.

Education Week cites two examples of such tests. First, there's the College and Work Readiness Assessment, a computer-based test used by private high schools:
A typical ... question might present examinees with a dossier of materials relating to a child who had a roller-skating accident at school. The materials could include newspaper articles, technical reports about the skates, data about competitors’ products, sales figures, medical reports, and the number of documented accidents. Then, the student would be asked to analyze those materials and write a memo about whether the skates are truly dangerous, and to justify his or her conclusions drawing from the information.

The second example is a recently piloted subset of the 2009 National Assessment of Educational Progress in science, which used "interactive computer tasks" to prompt students:
to engage in the entire process of scientific inquiry, in which they must participate in a simulated experiment, record data, and defend or critique a hypothesis.

While such tests have typically been costly because they must be scored by humans, Education Week cites experts as saying that advances in technology could reduce scoring costs:
The high costs of scoring such a complicated assessment with an almost unlimited number of answers... could be mitigated by advancements in natural-language-processing software—essentially programming that proponents claim can judge written essays as accurately as human readers and reduce, though not eliminate, the need for costly human evaluation.

Even with what is still pie-in-the-sky technology (I've worked in natural language processing!), the proposed new measurements sound dangerously subjective to me, and also highly language-intensive in ways that will disfavor bright, analytically minded kids with language delays.