The largest textbook company in the world has teamed up with the 16th largest school system in the US--and one of the highest-performing ones--to boost students' futures even more by teaching them 21st century skills.
You can read all about this partnership in a paper by Emily Lai, an Associate Research Scientist at Pearson, entitled “Creating Curriculum-Embedded, Performance-Based Assessments for Measuring 21st Century Skills in K-5 Students” and published by the American Educational Research Association.
The partnership between Pearson Publishing and the Montgomery County (Maryland) Public Schools, notes Lai, began in 2010 “with the goal of developing Pearson Forward, a digital curriculum featuring embedded assessment and professional development resources,” centered around 21st century skills.
21st century skills, explains Lai, include the following:
Critical thinking (encompassing analysis, synthesis, and evaluation), creativity (encompassing fluency, flexibility, originality, and elaboration), collaboration, metacognition, motivation, and intellectual risk taking.

These, she assures us, have been operationalized based on a thorough literature review, which appears, from Lai’s bibliography, to be based predominantly on publications in education journals.
So important are these skills, Lai explains, that they will be assessed every 9 weeks during a 3-week period, each one in the course of 1-2 class periods, through Pearson’s Performance Based Assessment (a variety of “embedded assessment,” or assessment that is “seamless with instruction”). Assessment tools include “holistic” rubrics, checklists, and student self-ratings. The 8 assessment tasks (assigned every 9 weeks) include “open-ended or ill-structured tasks,” tasks embedded in “authentic, real-world contexts,” and strategies for “making student thinking and reasoning visible.” The goal of all this assessment? “Tracking progress to predict success in post-secondary education.”
As Lai notes:
Each of the skills targeted in the curriculum entails both cognitive and noncognitive or affective components… Cognitive components of these constructs include knowledge and strategies, whereas noncognitive components include attitudes, traits, and dispositions.

Different assessment tasks target different 21st century skills. A task assessing intellectual risk-taking might look at whether, when given a particular reading task, a student chooses a story that is already familiar, or one that isn’t, since:
choice of unfamiliar story [is] arguably more of an intellectual risk than the choice of a familiar story.

Tasks assessing motivation similarly vary by subject (after all, different kids are more or less motivated in different subjects). Pearson’s task for assessing motivation in reading is:
a task that paired a teacher observation tool designed to capture students’ use of strategic behaviors with a student self-rating tool designed to capture more affective aspects of motivation, such as the student’s interest, self-efficacy, and goal orientation.

Tasks assessing metacognition might include open tasks that
allow students to decide what relevant information to use or how to use the information to solve the problem,” as opposed to “closed” tasks that “are characterized by more teacher control and structure.

Such tasks should also make student thinking and reasoning visible, which is
typically accomplished by embedding some sort of informal teacher-student interview into the assessment.

For example, during a ramp-construction project:
Students were encouraged to share their thinking with teammates as they worked together. We provided a set of interview questions for the teachers to pose to individual students as they worked:
How is it going?
What are you doing right now?
Why did you decide to build the ramp this way?
What is working well about your ramp?
What would you change about your ramp?
Using this tool, teachers could observe the extent to which students were able to share their thinking and explain their ideas to others, both key indicators of metacognition at the Kindergarten level.

A task that measures “creativity” could be a time-limited response to a prompt:
We point out aspects of tasks that should not be varied, such as the time provided to students to respond to prompts (when assessing the creativity indicator of fluency, for example).

The most important 21st century skill, of course, is collaboration, and Lai’s proposals here are commensurately elaborate. A task assessing collaboration might involve:
Ill-structured tasks that cannot be solved by a single, competent group member… Ill-structured problems are those with no clearly defined parameters, no clear solution strategies, and either more than one correct solution, or multiple ways of arriving at an acceptable solution.

Collaboration-assessing tasks might also involve time constraints that make it impossible for one person to complete the task:
For example, to assess collaboration in math, teams of 2nd-grade students were required to design and create a mosaic using multi-colored tiles and then to devise and implement a method for representing the data on tile color by creating a graphical depiction of it (e.g., a bar graph showing the number of tiles of each color used to create the mosaic).

In rating students based on these collaboration tasks, teachers should consider:
the quality of the completed group work project… the student’s ability to work respectfully and productively with others, and the student’s self-reported collaboration skills and contribution to the group.

As well as:
factors related to students’ use of helping behaviors (e.g., communicating respectfully, soliciting diverse opinions).

Citing a 1995 article by N. M. Webb, published in Educational Evaluation and Policy Analysis, Lai writes:
As Webb explains, assessments that occur in group contexts can fulfill several different purposes. For example, teachers may wish to determine how much a student can learn from collaborating with others, whether a group of students can complete a product together, or whether individual students can communicate respectfully with teammates. Group processes that support one goal may not support another goal. For example, if the goal is to measure a student’s ability to learn from collaboration, then group processes such as co-construction of ideas, identification of conflict, giving and receiving elaborated help, and equality of participation should all be encouraged. In contrast, if the goal of group assessment is to determine whether a group can successfully complete a task on time, then group processes that facilitate student learning, such as trying to ensure equal participation among all group members, may be counterproductive. In this case, it will be more efficient to use processes that maximize group productivity, even if they minimize learning opportunities. Such processes might include letting the most competent student in the group perform most of the work.

All of this, of course, is for the sake of the children--as the emphases on assessment (as opposed to instruction and remediation) and on predicting (as opposed to influencing) who will be successful in post-secondary education make abundantly clear.
In other words, in no way is it about multi-million dollar backroom deals between powerful companies and school boards that shut out all meaningful input from parents and traffic in educational buzzwords and doublespeak. And in no way is it about branding half-baked assessment tools as “Pearson Forward Performance Based Assessment” tools and then associating them with a famously high-performing school district whose current reputation lends them credibility (however much their relentless deployment--every nine weeks over a three-week period, 1-2 class periods for each 21st century skill--might help diminish this reputation in the future).