Monday, December 26, 2022

More educational malpractice: the sad legacy of Everyday Math, II

Here's a follow-up post to my first "sad legacy of Everyday Math" post, in which I concluded by saying that

You can’t blame the mathematical deficiencies of these 4th and 5th graders on their parents: both the private school and the after school program select for parents who care about education. You can’t blame it on the kids: my kids, who clearly wanted to learn, had been admitted [to our after school program] in part based on their behavior.

Picking up from there, the second post proceeded as follows:

You also can't blame it on language problems; these kids are fluent in English. In fact, there's really only one thing outside the Everyday Math curriculum that one can possibly point a finger to, and that is that these immigrant parents (many of them don't speak English) don't realize what many native-born parents already know: namely, that they can't count on the schools to fully educate their children.

So these kids are a case study in what happens when you leave math instruction entirely up to Everyday Math practitioners. And the answers are slowly coming in.

For several of the 5th graders I work with, it turns out that not only do they not know how to borrow across multiple digits; they also don't know their basic addition and subtraction "facts." In other words, they don't automatically know that, say, 5 plus 7 is 12, or that 15 - 8 is 7; instead they count on their fingers.

This got me thinking about addition and subtraction "facts." Back in my day, there was no issue of kids learning these facts as such. Yes, we memorized our multiplication tables. But we never set about deliberately memorizing that 5 plus 7 is 12. Why? Because the frequency of the much-maligned "rote" calculations we did ensured that we, in today's lingo, constructed this knowledge on our own.

Back in my day, a typical third grade arithmetic sheet looked something like this:

[image: traditional third grade arithmetic sheet]
And a typical fourth grade arithmetic sheet looked something like this:

[image: traditional fourth grade arithmetic sheet]
But in Reform Math programs like Everyday Math, such pages filled with calculations are only occasional, and each problem involves a much shorter series of calculations. Here's a set from 4th grade Everyday Math:

[image: 4th grade Everyday Math problem set]
Each multi-digit addition problem amounts to a series of simple addition problems. For example, adding two two-digit numbers involves adding at least two pairs of numbers; three if one is regrouping. Adding three three-digit numbers can involve 8 iterations of simple addition. Some of the problems in the second traditional math sheet involve as many as 17 iterations of simple addition.

In the traditional 4th grade math scenario, we may have had 25 problems per day like those in the first two sheets above, 5 days a week. With Everyday Math, you might get, at best, 25 problems like those in the second two sheets above per week.

Putting it all together, the resulting difference in the amount of practice with basic addition "facts" is quite large.  5 (days) times 25 (problems) times (say, as an average of iterations of simple addition) 10 for the traditional math curriculum versus 1 (day) times 25 (problems) times (average iterations) 3 for the Everyday Math curriculum. Assuming I'm not screwing up my arithmetic, that's 1,250 vs. 75 basic addition calculations per week. No wonder so many of those who are educated exclusively through Everyday Math don't know their "addition facts" by grade 5!
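The arithmetic in that estimate is easy to double-check. Here's a quick sketch in Python using the post's own figures (25 problems per sheet, 10 vs. 3 simple additions per problem, 5 vs. 1 practice days per week):

```python
# Weekly counts of simple one-digit additions, using the post's estimates.
problems_per_sheet = 25

# Traditional curriculum: 5 sheets a week, ~10 simple additions per problem.
traditional_per_week = 5 * problems_per_sheet * 10

# Everyday Math: ~1 sheet a week, ~3 simple additions per problem.
everyday_math_per_week = 1 * problems_per_sheet * 3

print(traditional_per_week)    # 1250
print(everyday_math_per_week)  # 75
print(round(traditional_per_week / everyday_math_per_week, 1))  # 16.7
```

So the 1,250 vs. 75 figure checks out: roughly a 17-fold difference in weekly practice with basic addition facts.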

Ah, but surely their "conceptual understanding" is deeper. Note the calls for "ballpark estimate" at the bottom of each Everyday Math problem, where traditional math simply has you calculate. Stay tuned: in my next post on this topic, I'll discuss the state of conceptual understanding in my Everyday Math mal-educated 5th graders.

Thursday, December 22, 2022

Educational malpractice for the sake of Reform Math

Jo Boaler has been in the news once again, this time as one of the authors of the controversial new California Math framework.

So I thought I'd resurrect this old post, inspired by, and based in part on, an email message I received back in 2013 from James Milgram, an emeritus Professor of Mathematics at Stanford University.

Professor Milgram is known in the education world for his comprehensive critique of a study done by Jo Boaler, an education professor at Stanford, and Megan Staples, then an education professor at Purdue. Boaler and Staples' paper, preprinted in 2005 and published in 2008, is entitled Transforming Students’ Lives through an Equitable Mathematics Approach: The Case of Railside School. Focusing on three California schools, it compares cohorts of students who used either a traditional algebra curriculum or the Reform Math algebra curriculum College Preparatory Mathematics (CPM). According to Boaler and Staples' paper, the Reform Math cohort achieved substantially greater mathematical success than the traditional math cohorts.

In early 2005 a high-ranking official from the U.S. Department of Education asked Professor Milgram to evaluate Boaler and Staples' study. The reason for her request? She was concerned that, if Boaler and Staples' conclusions were correct, the U.S. Department of Education would be obliged, in Milgram's words, "to begin to reconsider much if not all of what they were doing in mathematics education." This would entail an even stronger push by the U.S. educational establishment to implement the Constructivist Reform Math curricula throughout K12 education.

Milgram's evaluation of Boaler and Staples' study resulted in a paper, co-authored with mathematician Wayne Bishop and statistician Paul Clopton, entitled A close examination of Jo Boaler's Railside Report. The paper was accepted for publication in the peer-reviewed journal Education Next, but statements made to Milgram by some of his math education colleagues caused him to become concerned that the paper's publication would, in Milgram's words, make it "impossible for me to work with the community of math educators in this country"--involved as he then was in a number of other math education-related projects. Milgram instead posted the paper to his Stanford website.

This past October a bullet-point response to Milgram's paper, entitled "When Academic Disagreement Becomes Harassment and Persecution," appeared on Boaler's Stanford website. A month ago, Milgram posted his response and alerted me to it. I have his permission to share parts of it here.

Entitled Private Data - The Real Story: A Huge Problem with Education Research, this second paper reviews Milgram et al's earlier critiques and adds several compelling updates. Together, the two papers make a series of highly significant points, all of them backed up with transparent references to data of the sort that Boaler and Staples' own paper completely lacks.

Indeed, among Milgram et al's points is precisely this lack of transparency. Boaler and Staples refuse to divulge their data, in particular data regarding which schools they studied, claiming that agreements with the schools and FERPA (Family Educational Rights and Privacy Act) rules disallow this. But FERPA protects only the school records of individual students, not those of whole schools. More importantly, refusals to divulge such data violate the federal Freedom of Information Act. Boaler's refusal also violates the policies of Stanford University, specifically its stated "commitment to openness in research" and its prohibitions of secrecy, "including limitations on publishability of results."

Second, Milgram et al's examination of the actual data, once they were able to track it down via California's education records, shows that it was distorted in multiple ways.

1. Boaler and Staples' chosen cohorts aren't comparable:

It appears, from state data, that the cohort at Railside [the pseudonym of the Reform Math school] was comprised of students in the top half of the class in mathematics. For Greendale, it appears that the students were grouped between the 35th and 70th percentiles, and that the students at Hilltop were grouped between the 40th and 80th percentiles. [Excerpted from Milgram; boldface mine]

2. Boaler and Staples' testing instruments are flawed:

Our analysis shows that they contain numerous mathematical errors, even more serious imprecisions, and also that the two most important post-tests were at least 3 years below their expected grade levels.  [Excerpted from Milgram; boldface mine]

3. The data comparing test scores on California's standardized tests (STAR) comes from a comparison of test scores from students not involved in Boaler and Staples' study:

The students in the cohorts Boaler was studying should have been in 11th grade, not ninth in 2003! So [this] is not data for the population studied in [Boaler and Staples' paper]. This 2003 ninth grade algebra data is the only time where the Railside students clearly outperformed the students at the other two schools during this period. There is a possibility that they picked the unique data that might strengthen their assertions, rather than make use of the data relevant to their treatment groups. [Excerpted from Milgram; boldface mine]

4. The most relevant actual data yields the opposite conclusion about the Reform Math cohort's mathematical success relative to that of the traditional math cohorts:

  • The most telling data we find is that the mathematics remediation rate for the cohort of Railside students that Boaler was following who entered the California State University system was 61%.
  • This was much higher than the state average of 37%.
  • Greendale's remediation rate was 35% and Hilltop's was 29%.

5. School officials at "Railside" report that the results of the reform math curriculum are even worse than Milgram et al had originally indicated:

A high official in the district where Railside is located called and updated me on the situation there in May, 2010. One of that person's remarks is especially relevant. It was stated that as bad as [Milgram et al's original paper] indicated the situation was at Railside, the school district's internal data actually showed it was even worse. Consequently, they had to step in and change the math curriculum at Railside to a more traditional approach.

Changing the curriculum seems to have had some effect. This year (2012) there was a very large (27 point) increase in Railside's API score and an even larger (28 point) increase for socioeconomically disadvantaged students, where the target had been 7 points in each case.

6. Boaler’s responses to Milgram et al provide no substantiated refutations of any of their key points.

In response to comments on an article on Boaler's critique of Milgram, Boaler states:

"I see in some of the comments people criticizing me for not addressing the detailed criticisms from Milgram/Bishop. I am more than happy to [do] this. [...] I will write my detailed response today and post it to my site."

However, as Milgram notes in his December paper:

As I write this, nearly two months have passed since Boaler's rebuttal was promised, but it has not appeared. Nor is it likely to. The basic reason is that there is every reason to believe [Milgram et al's paper] is not only accurate but, in fact, understates the situation at "Railside" from 2000 - 2005.

In a nutshell: under the mantle of purported FERPA protection, we have hidden and distorted data supporting a continued revolution in K12 math education--a revolution that actual data show to be resulting, among other things, in substantially increased mathematics remediation rates among college students. Ever lower mathematical preparedness; ever greater college debt. Just what our country needs.

Nor is Boaler's Reform Math-supporting "research" unique in its lack of transparency, in its lack of independent verification, and in its unwarranted impact on K12 math practices. As Milgram notes,

This seems to be a very common occurrence within education circles.

For example, the results of a number of papers with enormous effects on curriculum and teaching, such as [Diane Briars and Lauren Resnick's paper "Standards, assessments -- and what else? The essential elements of Standards-based school improvement"] and [J. Riordan and P. Noyce's paper, "The impact of two standards-based mathematics curricula on student achievement in Massachusetts"] have never been independently verified.

Yet, [Briars and Resnick's paper] was the only independent research that demonstrated significant positive results for the Everyday Math program for a number of years. During this period district curriculum developers relied on [Briars and Resnick's paper] to justify choosing the program, and, today, EM is used by almost 20% of our students. Likewise [Riordan and Noyce's paper] was the only research accepted by [the U.S. Department of Education's] What Works Clearinghouse in their initial reports that showed positive effects for the elementary school program "Investigations in Number, Data, and Space," which today is used by almost 10% of our students.

As Milgram notes:

Between one quarter and 30% of our elementary school students is a huge data set. Consequently, if these programs were capable of significantly improving our K-12 student outcomes, we would surely have seen evidence by now.

And to pretend that such evidence exists when it doesn't is nothing short of educational malpractice.

Monday, December 19, 2022

The sad legacy of Everyday Math

[Everyday Math, I gather, is still very much in use, and so I thought it worthwhile to recycle this old post.]

Twice this past week I saw shocking examples of the cumulative effects of Everyday Math. Last Thursday I visited a nearby private school with sliding scale tuition and a diversity of students. For years the school had used Everyday Math, but recently, with the encouragement of a friend and colleague of mine who advises schools on math curricula, they’d begun to use Singapore Math. They’re phasing it in gradually, however, and currently don’t introduce it until 4th grade. For the first few grades, like nearly every other school in Philadelphia, they use Everyday Math.

So the 4th graders I observed had only been using Singapore Math since September. Their teacher was walking them through a topic in the 3rd grade Singapore Math curriculum: how to multiply and reduce fractions. And no one in the class who tried to answer the teacher’s questions got a single answer right. They didn’t know how to find ¼ of 20, and they didn’t know how to reduce 5/20.

The next day I spent my first session of the school year with a group of children of French African immigrant parents who had enrolled them in an after school enrichment program I’m involved with. They were four Everyday Math-educated 5th graders, and I was exploring their mastery of addition and subtraction. Addition went fine: they know how to stack numbers and carry from one digit to the next. Subtraction was another story.

Heartened by their success adding two three-digit numbers, I asked them how to do 1000 - 91. All but one of the students were stumped. Most got the same number: 1011. Two things had stumped them: 0 - 1, which they thought was 1, and how to borrow across more than one digit. So I gave them an easier problem, 100 - 71, and they were equally stumped, again getting answers that were larger than the number they were subtracting from. So I began the tricky process of teaching them how to borrow across more than one digit.
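For readers who want to see exactly what "borrowing across more than one digit" involves, here is a minimal sketch of the standard column algorithm (the function name and details are mine, for illustration only, not part of any curriculum):

```python
def subtract_with_borrowing(a: int, b: int) -> int:
    """Column-by-column subtraction, borrowing from the next digit
    to the left whenever the top digit is too small (assumes a >= b >= 0)."""
    top = [int(d) for d in str(a)][::-1]      # digits, least significant first
    bottom = [int(d) for d in str(b)][::-1]
    bottom += [0] * (len(top) - len(bottom))  # pad with leading zeros

    result = []
    borrow = 0
    for t, u in zip(top, bottom):
        t -= borrow
        if t < u:        # can't subtract in this column:
            t += 10      # borrow 10 from the next column...
            borrow = 1   # ...and remember to repay it there
        else:
            borrow = 0
        result.append(t - u)
    return int(''.join(str(d) for d in result[::-1]))

print(subtract_with_borrowing(1000, 91))  # 909
print(subtract_with_borrowing(100, 71))   # 29
```

Note how 1000 - 91 forces the borrow to ripple across three consecutive zeros; that cascade is precisely the step these 5th graders had never been taught.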

The great thing is that they were hooked. When I asked them whether their answers should be bigger or smaller than the number they were subtracting from, they all answered “smaller.” When I then asked them whether their answers were, in fact, smaller, they looked down at their sheets, and then up at me, and I had their undivided attention. These are good kids: they want to learn. And they like math.

You can’t blame the mathematical deficiencies of these 4th and 5th graders on their parents: both the private school and the after school program select for parents who care about education. You can’t blame it on the kids: my kids, who clearly wanted to learn, had been admitted in part based on their behavior, and in the private school classroom I saw, the students were very well behaved. You can’t blame it on class size: the classes I observed contained between 7 and 12 students. You can’t blame it on the teachers: the teachers I saw seemed well above average in their ability to engage their students in the material at hand.

No, I’m afraid there’s only one thing we can blame here, much as the developers of the Everyday Math monolith would like to claim otherwise.

Friday, December 16, 2022

Where calculators don't help

How much longer would it have taken for the flight to take off if one of the younger flight attendants had taken charge?

Another resurrected post:

Everyday arithmetic: where calculators don't help

What with the proliferating meme that calculators can substitute for most real-world human calculations, and with restaurant bills that increasingly calculate the various tip possibilities for you (15%, 20%, 25%), it's harder and harder to find examples of cases in which, say, there's no good substitute for knowing your multiplication tables.

But on a recent airplane flight, one such case suddenly jumped out at me. Before the flight took off, the flight attendants were asked to verify that all the passengers were on board. There were six seats per row on what seemed to be a maxed-out flight, and one of the older flight attendants knew exactly what to do. Walking down the aisle, she rapidly counted by 6's: 6, 12, 18, 24, etc...

Think about it... There's no faster way to count rows of airplane passengers than to apply your memorized multiplication tables.
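What the flight attendant was doing amounts to generating the multiples of six, one per full row. A trivial sketch (the 30-row cabin below is a made-up figure, not from the actual flight):

```python
def count_passengers(rows: int, seats_per_row: int = 6) -> int:
    """Skip-count full rows the way the flight attendant did:
    6, 12, 18, ... one multiple per row."""
    total = 0
    for _ in range(rows):
        total += seats_per_row
    return total

# The first few numbers she'd say aloud walking the aisle:
print([count_passengers(r) for r in range(1, 5)])  # [6, 12, 18, 24]

# A hypothetical 30-row cabin:
print(count_passengers(30))  # 180, i.e. 30 * 6
```

The point, of course, is that she ran this loop in her head, one row per step, which is only possible if the multiples of six come automatically.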

How much longer would it have taken for the flight to take off, I couldn't help wondering, if one of the younger flight attendants had taken charge?

Tuesday, December 13, 2022

False choices in remediation: "addition and subtraction over and over again" or Marxism and Shakespeare

I suspect the rejection of true remediation has only grown over the years since I wrote this, as schools, bowing to the Common Core standards, increasingly expect nearly all students to engage with the same material based not on academic readiness, but on what year and month they happened to be born in.


Another false choice in remediation: "addition and subtraction over and over again" or Marxism and Shakespeare

A recent New York Times Education Supplement article entitled "Rigorous Schools Put College Dreams Into Practice" showcases Bard College's new "early college high school" for disadvantaged students in Newark.

For the uninitiated (like yours truly until I read this), an early college high school is one that merges high school courses with "some college." As the article explains:

Students can earn both a high school diploma and an associate degree, and some are set on the path to a four-year degree.

The early college high school is also a growing movement:

There are now more than 400 early college high schools across the country — North Carolina has 76 of them — educating an estimated 100,000 students.

Across the country in communities like Newark, the early college high school model is being lauded as a way to provide low-income students with a road map to and through college.

What makes early college high schools different from, say, a college prep track of a regular high school? For one thing, they seem to be specifically geared at students who need "catching up," and they aim to offer an alternative to remediation:

The ethos of early college high schools: catch students up, not by relegating them to the kind of remedial classes required at community colleges but by bombarding them with challenging work. At the Bard school, that means works by Dante, Locke and W.E.B. Du Bois that have populated and enriched the lives of their more affluent peers.

Can Dante really replace traditional remediation? At Bard's Newark branch:

Students say the transition has been tough. Al-Nisa Amin, now a sophomore, remembers slumping over a math problem that first year, crying out of sheer frustration. But she has stuck it out, partly because she is scared of being sent to a zoned high school.

In particular, the article cites students who were getting A's at their zoned high schools now getting D's and F's. And here's what their Shakespeare seminars are like:

Flipping through their Signet Classic paperbacks and scribbling notes, they reviewed the first act of “Twelfth Night,” intuitively understanding that Orsino, Duke of Illyria, had become obsessed with Olivia. When their professor, David Cutts, asked what was going on in Orsino’s heart, several called out matter-of-factly: “Love.” The class then discussed the vagaries of love at first sight, and voted on whether they believed in it. Most didn’t.

Some were confused by the shipwrecked noblewoman Viola and her motives in disguising herself as a servant. “He’s rich, so why is she trying to hide?” one student asked, befuddled. Another hypothesized: “I think she’s interested in him.”

Then there's the Marxism and Postmodernism seminar:

...which on this day involved mulling over a densely written essay by the Marxist political theorist Fredric Jameson on the meaning of self in a postmodern world. Reading aloud a 2006 article in The Economist titled “Post-Modernism Is the New Black,” one student stumbled over “facade,” “anachronistic” and “grandeur” — words that would seem fair game for late high school.

Another student wanted to know: “What’s a phenomenon?” One inquired about the meaning of “sinister.”

The article cited Auschwitz as an example of how the Enlightenment had “given birth” to totalitarianism. Not one of the 10 students knew what Auschwitz was. Debate ensued over whether it was a city in Switzerland, Russia or Poland. Their professor finally interjected: “It’s usually used as the big example of the Holocaust.”

It's important to note that this may not be representative of early college high school seminars in general:

In similar classes in Bard’s New York schools, students’ vocabulary, communication skills and historical knowledge appear noticeably more advanced.

But, as the article notes:

The disparity raises an uncomfortable question. Can students who are so behind be brought up to college level in a few efficient years, even with good teachers and good intentions?

So far, the school has lost 7 of the 36 students who entered in 2011 as first-year college students and 20 of the 87 who entered as high school freshmen.

Najee has repeated one class. Both Miles and Billy have repeated several. More than half the class had to repeat one of the required seminars in a monthlong intensive at the end of the last school year.

The article proceeds to elaborate on the pedagogical philosophy of these early college high schools:

Taken as a group, early college high schools place a premium on teaching rudimentary study skills — how to take notes, how to interact with professors, where the best spot is to sit in a classroom. But the greatest emphasis is on thinking. Students are encouraged to see themselves as participants in an academic world, and as interested in gaining knowledge as in getting good grades. Dr. Ween calls it “joining the debate.”

So far, so good. But, as the next paragraph makes clear, "thinking" and "joining the debate" mean something troublingly specific and Constructivist:

The students at Duplin Early College High School in eastern North Carolina take an applied math class in which they learn about velocity and graphing by building roller coasters out of wire, piping and masking tape. Then they are asked to defend the project. At the Dayton Early College Academy in Ohio, students learn about constitutional law in mock trials. And at Bard, in an environmental science class, students read articles about the effect biofuel is having on corn prices and debate the merits of renewable energy.

These activities do not address the deficits that colleges most often need to remediate: those in essay writing and pre-calculus.

“You cannot pull off an early college high school successfully without fundamentally changing pedagogy,” said Joel Vargas, vice president of Jobs for the Future, a nonprofit organization based in Boston that develops early college high schools. He calls it the opposite of “chalk and talk.”

I agree with Vargas' first statement. But fundamentally changing pedagogy in a way that prepares disadvantaged kids for high school means less project-based learning and more direct instruction and pen-and-paper exercises ("chalk and talk"?).

As articles like this one make clear, the edworld is increasingly convinced that remediation can be bypassed:

Gone is the thinking that students must master all the basics before taking on more challenging work.

“Traditionally, what has happened is that kids who come in below standards are put in a remedial track and they do addition and subtraction over and over again,” said Cecilia Cunningham, executive director of the Middle College National Consortium, a network of more than 30 early college high schools. “They’re bored out of their minds and the message is: ‘You really can’t do this.’ ”

To maintain beliefs like these, the edworld depends on such false dichotomies. Remediation = doing addition and subtraction over and over again = boring kids out of their minds and telling them they're incapable. The only alternative is project-based learning and Postmodernism seminars. It simply doesn't occur to them that remediating a child's academic skills at their Zone of Proximal Development is less likely to bore them and make them feel incapable than forcing them through a seminar on the meaning of self in a postmodern world (even with the implicit threat of having to return to their local high schools if they don't play ball).

Do such tactics nonetheless work? In general:

Studies show that high school students who take classes in which they get both college and high school credit — often referred to as dual-credit courses — fare better academically.

A study last year of more than 30,000 Texas high school graduates found that those who took college-level classes in high school were more likely to have finished college after six years.

Studies like these, however,

aren’t able to determine if it is the type of students drawn to college-level coursework that makes the difference. And no long-term studies have been conducted about early college high school students and college graduation. [Emphasis mine]

A refreshing voice in the edworld wilderness comes from emeritus professor Sandra Stotsky

who notes that there is not any substantial evidence that the model being tried out in Newark will help at-risk students get through four years of college. Dr. Stotsky finds the idea that students should have to go to college to get a good high school education counterintuitive, and has called on educators to refocus their efforts on making high school coursework more challenging.

Otherwise, the opposite may happen--at the college level:

Critics also worry about rushing students through the material and pushing them prematurely onto college campuses, thus dumbing down classes for the other students.

It's interesting that, for all the permeation of the "learning differences" and "differentiated instruction" memes, what current trends in education have most amounted to in practice is a one-size-fits-all curriculum in which the same hands-on, arts-based "multiple learning styles" assignments are inflicted on everyone and neither advancement nor remediation is allowed.


Thursday, December 8, 2022

Is there really no Theory of Mind deficit in autism? Part II: the validity of the standard Theory of Mind measures

Cross-posted at FacilitatedCommunication.org.

In my previous post on Gernsbacher and Yergeau’s 2019 paper, Empirical Failures of the Claim That Autistic People Lack a Theory of Mind, I discussed problems with the authors’ arguments that the original studies that showed Theory of Mind deficits in autism have failed to replicate and been overturned by later studies. As the article continues, the authors embark on a second line of argumentation—this one concerning the inherent validity of the various ToM tests.

Morton A. Gernsbacher, Professor of Psychology, University of Wisconsin

The various ToM tests, Gernsbacher and Yergeau argue, fail to “converge” (that is, agree in their conclusions) on anything that meaningfully distinguishes autism. The ToM tests they consider are:

  • The Strange Stories Test, which measures the ability to deduce the social/emotional reasons for characters’ behaviors in a narrative sequence

  • The Animated Triangles Test, which measures the ability to ascribe emotions to self-propelled, interacting shapes in a short animation

  • The Eyes Test, which measures the ability to recognize emotions from facial information that is restricted to a rectangular region around the eyes

  • The Faux Pas Test, which measures the ability to detect social blunders

  • The false-belief tests, which measure the ability to make inferences about the beliefs held by (or about) individuals who are missing key pieces of information

    • The Sally-Anne “unexpected location change” test, in which Sally doesn’t witness Anne changing the location of her marble

    • The Smarties “unexpected contents” test, in which a candy box contains a pencil rather than candy

This “lack of convergent validity among theory-of-mind tasks,” they conclude, “undermines the core construct validity of theory of mind.”

Gernsbacher and Yergeau cite, in particular, the repeated failure of the Strange Stories test to converge with the Eyes test or the Animated Triangles test, and the repeated failure of the Eyes test to converge with the Faux Pas test. These test pairs, however, measure quite different social phenomena: detecting emotion from eyes; making sense of social behavior in a narrative; mentalizing about animated shapes; and detecting social blunders. Thus, none of these convergence failures is particularly surprising. The most closely related pair, the Strange Stories and Faux Pas tests, is the one whose degree of convergence Gernsbacher and Yergeau do not mention. The lack of convergence of the other pairs may only mean that other variables, like individual variation within the autism spectrum in frequency of eye contact or in verbal skills, also play a role. This doesn’t rule out that each of these tests taps into an underlying social factor that is atypical in autism.

But Gernsbacher and Yergeau go on to make an even more surprising claim: that even within the false-belief tests there is lack of convergence, with different false-belief tasks failing to correlate significantly with one another. The only specific examples of non-convergent false-belief tasks in the studies they cite, however, are (1) first- vs. second-order false-belief tasks and (2) one study in which the Smarties task fails to correlate with the Sally-Anne test. As far as first vs. second-order false-belief tests go, we’ve already seen their non-convergence: first-order false-belief tests correlate with autistic traits (and language development); second-order false-belief tests correlate with working memory/processing abilities.  But that does not undermine the validity of first-order false-belief tests in singling out autism.

As far as the Smarties vs. the Sally-Anne test goes, one study, Hughes (1998), found that performance on the Smarties test was not correlated with the Sally-Anne test. Nonetheless, each of these tests correlated with a different ToM task. The Sally-Anne test, which has to do with the location of hidden objects, correlated with a penny-hiding task, while the Smarties test, which has to do with a deceptive container, correlated with a deception task. Furthermore, as Lind and Bowler (2009) point out, the Sally-Anne task involves a narrative and a relatively simple test question (“Where will Sally look for her marble?”), while the Smarties task does not involve a narrative but does involve a more complex test prompt (“Billy hasn’t seen the box. When he comes in I’ll show him the box just like this and ask what’s in there. What will he say?”). Different skillsets (comprehension of hiding vs. deception; following a narrative vs. parsing a complex question), therefore, may be involved in passing each one, even if each also taps into the same underlying ToM skills. Beyond this one study, studies in general (e.g., Ozonoff, 1991, mentioned in my earlier post, for Smarties; or Kerr and Durkin, 2004, mentioned below, for Sally-Anne) have found both of these tests to be disproportionately difficult for individuals with autism.

Gernsbacher and Yergeau’s other two references for lack of convergence among false-belief tests do not involve autistic subjects. Charman & Campbell (1997) addresses individuals with learning disabilities; Duval et al. (2011), individuals with age-related cognitive decline.

In short, to the extent that there is a lack of convergence between ToM tests as applied to autistic individuals, this lack of convergence does not undermine the tests’ validity as measures that tap into the various aspects of the ToM deficits in autism.

REFERENCES:

Charman, T., & Campbell, A. (1997). Reliability of theory of mind task performance by individuals with a learning disability: A research note. Journal of Child Psychology and Psychiatry, and Allied Disciplines, 38, 725–730. https://doi.org/10.1111/j.1469-7610.1997.tb01699.x

Duval, C., Piolino, P., Bejanin, A., Eustache, F., & Desgranges, B. (2011). Age effects on different components of theory of mind. Consciousness and Cognition, 20, 627–642. https://doi.org/10.1016/j.concog.2010.10.025

Gernsbacher, M. A., & Yergeau, M. (2019). Empirical failures of the claim that autistic people lack a theory of mind. Archives of Scientific Psychology, 7(1), 102–118. https://doi.org/10.1037/arc0000067

Hughes, C. (1998). Executive function in preschoolers: Links with theory of mind and verbal ability. British Journal of Developmental Psychology, 16, 233–253. https://doi.org/10.1111/j.2044-835X.1998.tb00921.x

Kerr, S., & Durkin, K. (2004). Understanding of thought bubbles as mental representations in children with autism: Implications for theory of mind. Journal of Autism and Developmental Disorders, 34, 637–648. https://doi.org/10.1007/s10803-004-5285-z

Lind, S. E., & Bowler, D. M. (2009). Language and theory of mind in autism spectrum disorder: The relationship between complement syntax and false belief task performance. Journal of Autism and Developmental Disorders, 39(6), 929–937. https://doi.org/10.1007/s10803-009-0702-y

Ozonoff, S., Rogers, S. J., & Pennington, B. F. (1991). Asperger’s syndrome: Evidence of an empirical distinction from high-functioning autism. Journal of Child Psychology and Psychiatry, and Allied Disciplines, 32, 1107–1122. https://doi.org/10.1111/j.1469-7610.1991.tb00352.x

Monday, December 5, 2022

A linguist's meditation on youth, age, and the passage of time

Does the passage of time--forward or backward--make things younger or older?

A short time ago, the youngest countries (e.g. Eritrea and South Sudan) came into existence. 

A long time ago, America was one of the youngest countries. 

As time moves forward, America gets older and older and so do we.

Going back in time, as we get younger and younger (and eventually don't exist), we eventually find older and older languages, cultures, civilizations, and historical figures... 

But long ago, many of these things were new...

Saturday, December 3, 2022

Shakespeare for everyone?

A recent class discussion on full inclusion for autistic students led me and my students to the Shakespeare Question. This question is forced upon us by the fact that the Common Core Standards, for all their general vagueness about curricula, require nearly all high school students to read Shakespeare. Only the most intellectually disabled are exempted.

But plenty of non- to minimally-disabled students can't handle the Bard, even with page-by-page glossaries. Some teachers send students over to SparkNotes or No Fear Shakespeare. And then there's...

21st century Shakespeare

[An Out in Left Field post from 2011]

One thing got lost in the discussion following the NY Times' recent exposé of a school district in Arizona--a district that has spent $33 million on laptops, big interactive screens, and so-called educational software, and is asking for $46.3 million for more of the same even as its test scores fall and its music, art, and phys ed classes are cut. That thing is what is happening to Shakespeare:

Amy Furman, a seventh-grade English teacher here, roams among 31 students sitting at their desks or in clumps on the floor. They’re studying Shakespeare’s “As You Like It” — but not in any traditional way.

In this technology-centric classroom, students are bent over laptops, some blogging or building Facebook pages from the perspective of Shakespeare’s characters. One student compiles a song list from the Internet, picking a tune by the rapper Kanye West to express the emotions of Shakespeare’s lovelorn Silvius.

This is a great way to ruin literature in general, but reducing Shakespeare to the psycho/sociological perspectives of his characters is particularly ruinous. Despite claims by some that Shakespeare invented the human, human psychology was not Shakespeare's strong suit. His characters are constantly falling in love--or tumbling into murderous jealousy--at the drop of a hat or handkerchief; switching love interests at the lifting of a disguise; and ridiculously prone to deception (cf., e.g., the "transvestite comedies"), persuasion ("And be it moon, or sun, or what you please: An if you please to call it a rush-candle, Henceforth I vow it shall be so for me"), or (most infamously) forgiveness.

No, what Shakespeare excelled in was--of course--language, and the radiant gems of wisdom into which he worked it. Some of my favorites:

Uneasy lies the head that wears a crown.

 or:

The undiscover'd country, from whose bourn
No traveller returns

or:

Be not afraid of greatness: some are born great, some achieve greatness, and some have greatness thrust upon them.  

What would these look like as status updates on Facebook?