In my last post, I proposed one reason for the popularity of the Telepathy Tapes: how predisposed people are to believe in paranormal phenomena. Here I examine another reason: how predisposed people are to believe that autistic non-speakers can be unlocked via held-up letterboards—that is, via variants of facilitated communication known alternatively as Rapid Prompting Method (RPM) and Spelling to Communicate (S2C).
First, the Telepathy Folks
As far as Telepathy Tapes listeners go, part of this inclination comes from the fact that the podcast provides no explicit indications that the individuals on the Telepathy Tapes are being cued by their facilitators. The podcast is audio only, and so in scenes where nonspeaking autistic individuals type out messages on letterboards, all we have are the verbal descriptions provided by Ky Dickens, who is not only the show’s host, but a fervent believer in FC. And Dickens’ verbal descriptions omit that the letterboards are held up and inevitably shift around while the autistic person’s extended index finger roams around in front of the letter arrays. The show does provide a few videos behind a paywall, but the facilitator cueing in these, as with many other videos of RPM and S2C, has proven too subtle for most naïve viewers.
But that doesn’t fully explain why so many people with no vested interest in FC are apparently so ready to believe—judging, at least, from what we’ve heard from the many Telepathy Tapes enthusiasts. Presented with verbal descriptions of scenarios in which an autistic person points to a number that only the facilitator saw, or to a sequence of letters that labels a picture that only the facilitator saw, a surprisingly large number of Telepathy Tapes listeners have concluded that this is both:
A. a reliable description of what happened, and
B. evidence, not that the facilitator might be influencing the number/letter selection via normal, if subtle, physical mechanisms, but that the facilitator is instead sending a telepathic message that is picked up and acted upon by the autistic non-speaker.
Beyond telepathy believers
As we’ve discussed elsewhere on this blog, you don’t have to believe in telepathy to ignore or dismiss facilitator cueing. But dismissing facilitator cueing entails at least one extraordinary belief: namely, that non-speaking autistic individuals, who typically show few signs of attending to other people, or of comprehending more than a few basic words and phrases, and who typically aren’t included in general education classrooms, have somehow acquired sophisticated vocabularies and literacy skills, worldly knowledge, and academic skills across the entire K12 curriculum. For telepathy believers, the explanation is straightforward: this acquisition happens through telepathy. For everyone else, there are instead a host of FC-friendly education myths that have long dominated the world of K12 education and that, through the salience of K12 education in many people’s lives, also dominate our popular beliefs.
Myth #1: Kids can learn academic skills by osmosis.
Within and beyond the education world, there’s a widespread belief that, just as many non-academic skills can be learned through immersion and incidental learning in the natural environment, the same holds for academic skills. That is, just as typically developing children learn to walk, talk, and build block towers without any explicit instruction, the same, purportedly, goes for reading, writing, and arithmetic. Indeed, there’s an entire pedagogy based on this notion: “child-centered discovery learning.”
For reading, this means immersing children in a “print-rich environment.” For math and science, it means manipulatives (blocks, rods, chips) and child-centered exploration. Teachers are “guides on the side” rather than “sages on the stage,” providing minimal instruction or error correction. (See for example Hirsch, 1996; Ravitch, 2000). While few schools take such notions to extremes, and while learning to read through osmosis has been widely discredited, the general notion that discovery learning is more effective than direct instruction continues to resonate broadly and deeply throughout K12 education and on into the general public.
And it extends, naturally, to the world of FC. Indeed, Douglas Biklen, the person credited with bringing FC to the U.S. from Australia, more or less echoed the proponents of literacy through print-rich environments when he said, by way of explanation for the literacy skills in FC, that:
I think it's rather obvious that the way in which these children learned to read was the way that most of us learned to read-- that is, by being immersed in a language-rich environment. You go into good pre-school classrooms and you'll see words everywhere, labeling objects, labeling pictures. You look at Sesame Street. We're introducing words. We're giving people whole words. We're also introducing them to the alphabet. (Palfreman, 1993).
As in K12 education, this line of thinking extends beyond literacy to other skills and knowledge. FC proponents have claimed that FCed children have learned about current events by listening to NPR (Iversen, 2006); Spanish by listening to their siblings practice at home (Handley, 2021); and physics by overhearing a physics class through a cafeteria wall (personal communication).
In K12, the illusion that students can master material without explicit instruction is sustained by powerful prompts and cues from teachers, often in the form of leading questions. I discussed this phenomenon in an earlier post; we can see it play out in detail, for example, in Emily Hanford’s Sold a Story. This podcast is an exposé of an approach to reading known as “Balanced Literacy” and/or “Three Cueing” that eschews phonics instruction and encourages kids to guess words from context. In Episode 1, a teacher reads a story about two children, Zelda and Ivy, who have run away from home because they didn’t want to eat the cucumber sandwiches their father had made for them. The teacher turns to a page where a word is covered up by a sticky note and prompts the students to use context to guess what it is. The word occurs at a point where Zelda and Ivy are wondering how their parents will react when they realize they’re gone. Here is the excerpt:
Teacher: Do you think that covered word could be the word “miss”?...
Teacher: Could it be the word miss? Because now that they’re gone maybe their parents will miss them?
The teacher asks the kids to think about whether “miss” could be the word using the strategies they’ve been taught.
Teacher: Let’s do our triple check and see. Does it make sense? Does it sound right? How about the last part of our triple check? Does it look right? Let’s uncover the word and see if it looks right?
The teacher lifts up the sticky note and indeed, the word is “miss.”
Teacher: It looks right too. Good job. Very good job.
(Sold a Story, Episode 1 transcript).
The teacher doesn’t seem to recognize what a big clue this is—that is, how many other possibilities there might be: “find,” “scold,” “resent,” etc.—and therefore to what degree she’s essentially told the students the answer and, quite likely, overestimated their word-identification skills.
Precisely this sort of oral prompting pervades—and sustains the illusion of—the more recent variants of FC—Rapid Prompting Method (RPM), Spelling to Communicate (S2C), and Spellers Method—where facilitators frequently direct letter selection with phrases like “up-up-up,” “right next door,” and “get it!”
Myth #2: All students are equally capable: they just need the right environment for learning and the right outlet for demonstrating understanding.
The world of K12 education has become increasingly resistant to the reality that different children have different levels of academic readiness and academic achievement. Instead, large proportions of education professionals, along with large proportions of the general public, embrace pseudoscientific theories (“multiple intelligences”; “learning styles”) that recast differences in skills as differences in styles. These beliefs have continued to spread despite the growing evidence against them (Willingham et al., 2015; Newton & Salvi, 2020). Individuals once viewed as low achievers are now often labeled as “bodily-kinesthetic learners.” This type of learner, purportedly, doesn’t do well in traditional classrooms but will prove quite competent with instruction and activities that incorporate lots of movement (skits, dances, building things, marching around the classroom). Individuals who struggle to read or do math might also be labeled as “visual learners”—purportedly performing perfectly well so long as teachers replace letters and numbers with pictures.
Consistent with these assumptions, assessments are now less about testing specific skills and more about giving students multiple options for “demonstrating understanding.” Specific suggestions include allowing kids to make presentations, posters, or “concept maps” instead of pen-and-paper tests, and providing supports like text-to-speech and speech-to-text (see for example, here). A “visual” student might, for example, retell a story in pictures rather than in words.
The presumption that all students are equally capable, given the appropriate adjustments, echoes a mantra of FC proponents that dates back to Douglas Biklen: Always presume competence. But the similarities don’t end there. In FC, as in education, the apparent (but purportedly not actual) challenges of the population in question are explained by invoking the person’s body. The education world, regarding students who struggle in traditional classrooms, invokes a bodily-kinesthetic learning style; the FC world, regarding minimal speakers with autism, invokes a mind-body disconnect. Finally, just like it’s the teacher’s job to figure out the best way for individual students to demonstrate the understanding that they’re presumed to have somehow acquired, it’s the facilitator’s job, via the letterboard or keyboard, to figure out the motor or regulatory support needed for individual clients to demonstrate the literacy skills and academic knowledge that they, too, are presumed to have somehow acquired.
One final commonality here between the FC world and the edu-world is the notion that the hard work that people used to think was necessary—whether the direct, systematic instruction and “drill and kill” of traditional classrooms or the intensive, one-on-one “discrete trials” of ABA—can be bypassed by methods that simply (1) presume that children are capable of learning on their own and (2) provide appropriate supports and outlets for children to put that learning to use.
Myth #3: Traditional, controlled tests are unreliable and don’t measure what really matters.
In the education world, there has long been a resistance to high-stakes standardized tests that measure student achievement. Particularly vociferous are those most invested in the teaching business: teachers unions and education schools (Phelps, 2023). These individuals make several arguments against using such tests—arguments that resonate across the general public. Among other things, they claim that:
Testing is stressful, and some students perform poorly only because of anxiety. (There is some evidence for lowered performance in those with test anxiety, and some evidence for effective remedies).
Marginalized groups may underperform on standardized tests because the tests are biased against them or because of “stereotype threat.” (There is some empirical support for the latter, and some effective remedies).
Standardized tests can only measure rote skills and “teaching to the test,” not conceptual understanding and higher-order thinking (see also here). (Untrue: standardized tests can effectively measure conceptual understanding and higher-order thinking. This is why, for example, the SAT has been a better predictor of college success than grades are).
There are better ways to measure student achievement—e.g., projects, presentations, portfolios of student work. (These measures are much more subjective and unreliable).
While standardized statewide tests are still routinely administered across the country, the interest groups most resistant to high-stakes testing have effectively eliminated the most informative of such tests: those that most fully, comprehensively, and objectively assess students’ skills across a variety of key academic sub-skills and provide the most information about which educational pedagogies are working and which students have been most ill-served by which pedagogies. The canonical example of such tests is the Iowa Test of Basic Skills (ITBS). The ITBS was once used by schools across the country (I took it multiple times as a kid in Illinois); its popularity has since declined, and it has recently been replaced by new “Common Core-aligned” tests. Used until recently by many in the homeschooling community, the ITBS reported sub-scores in various aspects of reading and math and placed no ceiling on the skills being measured, such that a 4th grader could score at a 6th-grade level in a particular reading sub-skill.
Most of the new Common Core-inspired state tests, in contrast, only report general scores for reading and math, not sub-scores. They also only measure students up to what the state considers to be grade-level standards: standards which many testing experts consider, for most grades, to be set too low. Also reducing the tests’ informational value is the fact that some of the math questions require literacy skills (explaining your answer) and that students can receive partial credit for incorrect answers for which they provided verbal explanations. (Both of these factors artificially lower the scores, relative to other students, of English learners and students with language delays—including students with autism). The tests are further compromised by low security: teachers rather than outside proctors administer the tests, and some large-scale cheating episodes have come to light (see Phelps, 2023).
Also decreasingly informative are the SATs, which many colleges have made optional, and which have been redesigned to measure fewer skills with less precision. Many of the math problems allow calculators, and few require complex algebraic operations. The analogies and vocabulary sections are gone, as are questions that ask students to synthesize long passages. Passages now consist of 1-2 short paragraphs, often accompanied by charts and graphs, followed by a single question that often is more about the chart than the paragraph(s). The passages (and graphics) are no longer drawn from the works of professional writers but instead are written by test-makers; as a result they’re often hard to make sense of, not because the writing is sophisticated, but because they’re poorly written (or designed). Test-takers no longer lose points for guessing, so guessing rates have gone up, adding even more noise to the signal.
Meanwhile, one of the most popular early reading assessments in use in K12 schools, the Fountas and Pinnell Benchmark Assessment, is so poor at distinguishing skilled from struggling readers as to be equivalent to a coin toss.
As for those who want to promote a particular pedagogical approach as “evidence-based,” in lieu of standardized testing that might indicate objective effects on learning outcomes, we have anecdotal reports from classrooms: subjective accounts of high levels of student and teacher engagement, interviews with teachers, annotations of student work, and/or researchers’ field notes. “Lived experience” substitutes for objective testing; anecdotes for evidence.
And if the education world needs one more reason to dismiss objective tests, Telepathy Tapes host Ky Dickens obliges. On her “resources” page she claims, falsely, that “nothing in education can truly be empirically validated because every student is inherently unique.”
Which takes us back to FC. FC proponents, just like their counterparts in the education world, have successfully suppressed informative testing. While the “Don’t test” mantra dates back to Douglas Biklen and the 1990s, there were, in that decade, a number of FC practitioners who nonetheless willingly participated in objective tests. But those tests consistently established that the facilitators were the ones controlling the FCed messages. What came next was a host of arguments against authorship tests that parallel the education world’s arguments against academic tests:
Test anxiety purportedly impedes the FCed person’s ability to type messages, particularly in the hostile environment that purportedly results from skeptical examiners.
Test performance is further undermined by stereotype threat: that is, by negative stereotypes about the abilities of minimally speaking individuals with autism (Jaswal et al., 2020). (Unlike in educational testing, there is no evidence that either anxiety or stereotype threat affects authorship testing).
Authorship tests are insulting and violate the dictum to Always presume competence. (Apparently standardized education tests aren’t as insulting or unethical. Many FCed individuals take such tests—with the help of their facilitators).
There are alternative ways to assess authorship that are purportedly more reliable, like comparing the writing styles of the FCed individual and their facilitator(s), or looking at whether their pointing rhythms suggest an awareness of English spelling patterns, or mounting an eye-tracking computer on their heads and recording whether they look to letters before they point to them (Jaswal et al., 2020; Jaswal et al., 2024; Nicoli et al., 2023). (See here, here, and here for critiques).
Or better yet: there’s lived experience. FC-generated accounts attributed to FCed individuals recount their experiences with FC and explain how it’s really them producing the FCed messages. Videos or live observations of FCed individuals typing purportedly establish beyond a reasonable doubt that they aren’t being cued by the assistant who is always within auditory or visual cueing range.
As for other types of standardized tests—cognitive tests, academic tests—none of these, purportedly, should be conducted on any minimally speaking autistic individual except through FC. That’s because all such tests require some sort of physical response (pointing to pictures; arranging shapes; filling in bubbles), and so the purported mind-body disconnect makes these tests hopelessly unreliable.
The hostility in both the world of FC and the world of education towards objective, well-controlled, informative testing underscores what’s so powerful about such tests: they are the brass tacks that everything comes down to. They are, in all areas of life, what separates the science from the pseudoscience and exposes the clinical quacks and methodological cracks for who and what they are—whether in K12 education, in minimally-speaking autism, or on podcasts about telepathy.
REFERENCES
Handley, J. B., & Handley, J. (2021). Underestimated: An autism miracle. Skyhorse.
Hanford, E. (2022). Sold a Story: How Teaching Kids to Read Went So Wrong. [Podcast]. APM Reports.
Hirsch, E.D. (1996). The Schools We Need and Why We Don't Have Them. New York: Doubleday.
Iversen, P. (2006). Strange son. Riverhead.
Jaswal, V. K., Wayne, A., & Golino, H. (2020). Eye-tracking reveals agency in assisted autistic communication. Scientific Reports, 10(1), 7882. https://doi.org/10.1038/s41598-020-64553-9 PMID:32398782
Jaswal, V. K., Lampi, A. J., & Stockwell, K. M. (2024). Literacy in nonspeaking autistic people. Autism, 0(0). https://doi.org/10.1177/13623613241230709
Newton, P., & Salvi, A. (2020). How Common Is Belief in the Learning Styles Neuromyth, and Does It Matter? A Pragmatic Systematic Review. Frontiers in Education. https://doi.org/10.3389/feduc.2020.602451
Palfreman, J. (Director). (1993). Prisoners of Silence [Documentary]. PBS.
Phelps, R. (2023). The Malfunction of US Education Policy. Lanham, MD: Rowman & Littlefield.
Ravitch, D. (2000). Left Back: A Century of Failed School Reforms. New York: Simon and Schuster.
Willingham, D. T., Hughes, E. M., & Dobolyi, D. G. (2015). The Scientific Status of Learning Styles Theories. Teaching of Psychology, 42(3), 266-271. https://doi.org/10.1177/0098628315589505