Saturday, November 21, 2015

Explaining answers to easy problems vs. doing mathematically challenging problems

A comment I posted on the Atlantic article that Barry and I wrote engendered a second thread on Dan Meyer’s blog when I reposted it there. What I wrote, in part, was:

The American approach is to build conceptual understanding through time-consuming student-centered discovery of multiple solutions and explanations of relatively simple problems. An internationally more successful approach is to build conceptual understanding through teacher-directed instruction and individualized practice in challenging math problems.
I got a little flak for my sweeping statement about an “American approach,” so I followed up with:
I should clarify what I mean by “American approach”: the approach inspired by national movements like the Common Core and the NCTM standards.
The various objections fell into several categories:

1. The pedagogy I’m calling “American” is rare throughout the U.S.: most classrooms still follow a traditional model.

But even if most students are still sitting in rows with the teacher in front, more and more are using Reform Math textbooks like Everyday Math and Investigations, which solicit multiple solutions and verbal explanations for relatively simple math problems. Even if teachers matter more than textbooks, textbooks can place a ceiling on how challenging the material is. That's why traditional texts that date back to the 1960s and earlier are so much better than today's textbooks: they don't place so low a ceiling on mathematical challenge. Instead, they provide math-expanding opportunities for those who can handle them.

2. International comparisons based on test scores are unfair because Europe is “white” and Chinese students cheat. (Yes, one commenter actually said this, repeatedly).

But being white doesn’t make you good at math; China is only one of several Asian countries I discuss; and the many Chinese (and other Asian) nationals who disproportionately populate the top PhD programs and math-intensive careers here in the U.S. probably didn’t get where they are by cheating on math tests.

3. International comparisons based on performance on the PISA test are unfair because other countries track out their lowest-performing students prior to age 15-16, the age range of students taking the PISA.

I’d be curious to see statistics on how large this effect is; I’ve looked around a bit and found nothing. Presumably our scores, too, are affected by dropouts and no-shows.

4. International comparisons based on the relative mathematical difficulty of high school exit exams are unfair because these don’t tell us how most students actually did on the various problems.

I’d argue that the predominance, on some of these exams, of problems much more challenging than American high school students ever see on any standardized or graduation test tells us something about what kinds of mathematical opportunities students from other countries are getting that their American counterparts may not be getting.

5. In addition to international comparisons being unfair, a comparison within a province of one country of student performance before and after a student-centered discovery-oriented curriculum was introduced is also unfair. Why? Because it ignores what was going on concurrently in the rest of the country at large.

Then what kind of comparison is fair?

6. The Finnish exam and the Chinese Gao Kao are no more difficult than our Common Core-inspired exams.

My impression is that people who believe this haven’t looked closely at the mathematical demands of these tests, and/or believe that applying math to real-world situations and “proving” things using graphs (both common in America's Reform Math and Common Core-inspired exams) is as mathematically challenging as, or more challenging than, the “mere” manipulation of abstract symbols. People with this impression should take a look at the research produced by professional mathematicians and check out the ratio of graphs and “real-world” situations to sequences of abstract symbols.

7. Students at an elite private high school do really well with a discovery-based curriculum.

If I were forced to enact a student-group-centered, discovery-based curriculum somewhere, I’d do it at a highly selective high school whose students were admitted, in part, based on their aptitude for (and therefore their solid foundational knowledge in) math. Such students stand the greatest chance of learning additional math independently and from one another, without too much loss in efficiency compared to what’s possible in more teacher-directed, individualized-problem-solving classrooms.

7 comments:

Paul Bruno said...

In addition to international comparisons being unfair, a comparison within a province of one country of student performance before and after a student-centered discovery-oriented curriculum was introduced is also unfair. Why? Because it ignores what was going on concurrently in the rest of the country at large.

Then what kind of comparison is fair?


I don't think I understand this objection. In the study of Quebec in question (this one, I think: http://www.sciencedirect.com/science/article/pii/S027277571400034X ), the authors used differences-in-differences and changes-in-changes methods precisely to compare Quebec to the rest of Canada. So they don't "fail to consider the performance of other Canadian provinces over the same time."
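For anyone unfamiliar with the method, here is a minimal sketch of the difference-in-differences logic, with invented numbers rather than anything from the paper:

```python
# Toy difference-in-differences (DiD) illustration -- invented numbers,
# not data from the Quebec study. DiD asks: how did Quebec's change
# over time differ from the rest of Canada's change over the same period?

quebec_before, quebec_after = 540.0, 532.0            # hypothetical mean scores
rest_canada_before, rest_canada_after = 530.0, 536.0

quebec_change = quebec_after - quebec_before                   # -8.0
rest_canada_change = rest_canada_after - rest_canada_before    # +6.0

# The rest-of-Canada change stands in for the trend Quebec would have
# followed without the reform, so even a flat or rising raw Quebec score
# can imply a negative effect if the comparison provinces rose faster.
did_estimate = quebec_change - rest_canada_change              # -14.0

print(f"DiD estimate of the reform's effect: {did_estimate:+.1f} points")
```

The actual study layers student-level controls and the changes-in-changes estimator on top of this basic logic, but the core point stands: the rest of Canada is built into the comparison as the counterfactual trend.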

Katharine Beals said...

Hi Paul.

I don't quite get the objection either. But here is the rest of what the commenter wrote on Dan Meyer's blog:

"I find their discussion of the PISA to be particularly problematic. To hear them tell it, Quebec either held steady (which would not support their findings) or declined. However, when you look at the data (http://cmec.ca/Publications/Lists/Publications/Attachments/318/PISA2012_CanadianReport_EN_Web.pdf), specifically Table 1.6 in the link, you notice that PISA scores grew pretty steadily (though by small amounts) over that time, while other provinces declined. That does not build confidence in their case.

"In fact, the fact that they fail to consider the performance of other Canadian provinces over the same time is a warning signal on its own. The lack of a control comparison when one is available suggests an unwillingness (or at least a lack of interest) to account for possible historical threats to validity."

R. Craigen said...

Hi Katharine. I liked your response to #7 especially. You make a very good point. Cognitive scientists speak of a well-established pattern called the Expertise Reversal Effect. Essentially what it says is that discovery-based teaching/learning is ineffective with novices but effective with experts. That's why, for example, we tell PhD students: "Here's your subject matter and the question you must solve. Good luck, because nobody's ever solved it before. Go dig up all the relevant references and become an expert. Then consider how you plan to attack the problem and come back to me, and we'll discuss whether you are likely to make a successful thesis out of this." That's discovery learning of a sort that blows Boaler, Meyer, Mitra, etc. out of the whole discussion. Even when they imagine themselves leading such a thing, they have no realistic idea of how to bring it to fruition. But we do it all the time with PhD candidates, and they are SUCCESSFUL. Why? Because you don't even get INTO the PhD program without establishing your expertise. Then the first thing we do is put you through a barrage of comprehensive exams to test your mastery of that knowledge. Then, and ONLY THEN, do we say, "Okay, now let's get started on your thesis work." (Well, I'm obviously generalizing and glossing, but this is the essential principle of the matter as it pertains to our success rate in producing PhDs.)

When my children were in an elementary school in Fresno, California, they were put in the pull-out GATE (Gifted and Talented Education) program. There they did more open-ended exploration. To me that's a no-brainer: you do that with kids who have mastered the "canon" and are ready to build upon it. Nevertheless, every one of those kids still sat in regular classes and did the timed drills etc. with the rest. Had they not done so, and had their skills fallen behind, their GATE experience would have been a millstone around their educational necks.

Now, I'm no rah-rah "my kids are better than yours" elitist on these matters, and that's not why I bring this up. It is that I am enraged at the tendency of some to argue that, because some program dealing with very talented students with strong backgrounds is able to accomplish something with that demographic, it is somehow an ideal way to teach average students. What is the basis for this argument? I fear it is as simple a logical error as causation reversal: Some seem to believe that open-ended instruction (etc.) CAUSES students to be advanced. Uh, no. The observation that students have advanced abilities or backgrounds, in contrast, does open the door ("cause" is a bit of a strong word) to these possibilities with them. Causation reversal on this point is cargo-cult deception.

Anonymous said...

The edworld has been assigning causation wrongly for decades: self-esteem, Latin, 8th-grade algebra (when it was only honors), foreign languages, honors and AP courses, debate, music, etc. When it was found that kids who had high self-esteem were very successful, the edworld immediately jumped to the conclusion that the former caused the latter. When data showed that kids who took Latin, 8th-grade algebra, etc. did better on a variety of measures (graduation, college, etc.), they jumped on Latin-for-all (a local MS did this) and the like as causative factors for success. In fact, those courses merely served as a proxy variable for the identification of the most able, prepared, and motivated students. Inability to recognize that has led to the placement of kids into courses where they lack the background knowledge and are unable to do the work. No, the same approach does not work for all students.
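The proxy-variable point is easy to see in a toy simulation. A minimal sketch with invented numbers (no real data): a "course" with zero causal effect still shows a large raw advantage for its takers, purely because stronger students were the ones enrolled.

```python
# Toy simulation of the proxy-variable/selection effect -- invented
# numbers, not real data. The "course" has zero causal effect, yet
# course-takers end up with visibly better outcomes.
import random

random.seed(0)
abilities = [random.gauss(100, 15) for _ in range(10_000)]  # latent ability

# Selection: only stronger students enroll (e.g., honors-only algebra).
takers = [a for a in abilities if a > 110]
non_takers = [a for a in abilities if a <= 110]

def outcome(ability):
    # Outcome depends on ability alone; the course contributes nothing.
    return 0.5 * ability + random.gauss(0, 5)

taker_mean = sum(outcome(a) for a in takers) / len(takers)
non_taker_mean = sum(outcome(a) for a in non_takers) / len(non_takers)

# The gap is pure selection, but a raw comparison reads it as "the course works."
print(f"takers: {taker_mean:.1f}  non-takers: {non_taker_mean:.1f}")
```

The raw gap reflects who enrolled, not what the course did, which is exactly the mistake made when Latin or honors algebra is credited with its students' later success.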

Paul Bruno said...

I'm not sure why that commenter wants to cherry-pick PISA scores rather than all of the various other measures employed in the Quebec study, but the raw scores are not relevant anyway since you have to control for various student characteristics that may change between provinces over the course of the intervention. (Especially relevant here because the intervention took place over so many years.) And indeed the authors do just that in their DID/CIC methods. I will grant that the econometrics involved are complicated but these particular "compare to the rest of Canada!" and "PISA!" objections seem to be based in misunderstandings of the authors' methods and the right way to use test scores.

If only fans of reform math were this concerned with rigorous controls and falsification exercises when considering their preferred education research.

Barry Garelick said...

"What is the basis for this argument? I fear it is as simple a logical error as causation reversal: Some seem to believe that open-ended instruction (etc) CAUSES students to be advanced."

I agree with that statement, and add that some also seem to believe that traditional math worked only for a small group of students who happened to be advanced; i.e., the advanced nature of the student CAUSED traditional math to work, and it failed for everyone else. The logic of this breaks down when one stops to define "advanced" and takes a close look at the other factors at work with traditional math. E.g., did the teachers teach it poorly or well? Of those for whom traditional math worked, what was the breakdown of IQs and of the "advanced" nature of these students? For many if not most of the truly "advanced" students, the factual and procedural foundation for their success was obtained through the traditional teaching of math.

In the various online discussions of traditional vs. reform math, no one ever bothers to look at what is happening with the students who manage to make it through to HS calculus and major in a STEM field. Many of these students do a lot of practice, drills, and memorization, either at home or at a learning center, if they're not getting that through school. One has only to look at the Asian countries to see that jukus and similar organizations are providing the foundational skills that make these students excel at what appear to be inquiry-based assignments at school.

Anonymous said...

I had well-taught, traditional math in my small-town 1-12 school in the 50s-60s. Most of the town was poor, but all kids came from respectable, intact, two-parent homes (widowhood aside) and were all well-socialized at school entry (no pre-k or kindergarten available). Of HS classes of 30-36, only 3-4 went to a 4-yr college, and there probably weren't 10 adults with college degrees in town. At the end of 8th grade, however, everyone was decently literate and numerate, could write correctly, and had decent general knowledge. ES teachers taught science, history (including art and music history/appreciation), geography, and civics. In no way was this a high-IQ or highly educated population. The ability to use math for everyday purposes (this was before calculators) came strictly from school (perhaps some flashcard math facts at home, but only for a few kids). We were taught well enough that we could calculate tax, interest, gas mileage, the amount of materials needed for household projects, and the like. Traditional math worked for all.