Wednesday, November 5, 2025

Can Jaswal’s “HoloBoards” substitute for letterboards? Part I

Over the past couple of years, S2C proponent Vikram Jaswal, Professor of Psychology at the University of Virginia and the father of an S2C user, and Diwakar Krishnamurthy, Professor of Electrical and Computer Engineering at the University of Calgary and the father of an RPM user, have co-authored several papers on the development of virtual tools that enable S2C users to select virtual letters rather than pointing to letters on physical letterboards. Like Jaswal’s other recent papers, each of these begins with purported justifications for S2C as a valid communication method, and each reports instances of allegedly independent communication by S2C users. Like Jaswal’s other papers, therefore, these papers are worth reviewing in detail. In today’s post, I’ll discuss the first part of one of them: Alabood et al. (2024).

Published in the proceedings of the CHI (Human Factors in Computing Systems) conference with Jaswal as its fourth listed author and Krishnamurthy as its fifth, this paper discusses the development and preliminary study of a virtual letterboard, or what the authors call a “hologram” or “HoloBoard.” Instead of having a facilitator, or what the authors call a “Communication and Regulation Partner” or “CRP,” hover next to them and hold up the letterboard in their faces, S2C users don a virtual reality headset that projects a virtual letterboard, or “HoloBoard,” that floats in front of them and follows them wherever they turn their heads. Purportedly, this is an improvement on physical S2C that gives users more autonomy and privacy.

Deploying the circular reasoning so typical of S2C supporters, the article begins with variations on the usual pro-RPM/S2C claims and cites RPM/S2C-generated testimonials as its sources. In particular, it cites testimonials attributed to Dan Bergmann, Elizabeth Bonker, Naoki Higashida, and Ido Kedar, and references, as well, Edlyn Peña’s compendium of FC/RPM/S2C-generated testimonials.

In this first post, I’ll discuss these preliminary claims. These are worth fleshing out in detail, partly because some of what’s said here is new, and partly because there’s been some new research that adds to the evidence against these claims. In a follow-up post, I’ll turn to the actual study.

Claim 1: “Lack of speech is sometimes conflated with lack of ability to think.”

The source for this is an entire book, namely the memoir attributed to RPM user Ido Kedar (Ido in Autismland). And while it’s nice to see this perennial claim softened with “sometimes,” it’s hard to believe, in our Deaf-culture-aware, Stephen Hawking-informed society, that more than a handful of highly uninformed people are guilty of conflating speech with thought. Nor have I seen any references to people actually doing so.

However, what Jaswal et al. may actually have in mind here is that, in autism in particular, people like me have stated that lack of spoken language tends to indicate deficits in a very specific cognitive skill: comprehension of language. And for this, there is actual evidence, most recently in an article by Chen et al. (2024). Examining a symptom database of 1,579 minimally speaking autistic children aged 5-18 years, and using the terms “receptive language” for “comprehension of language” and “expressive language” for speaking ability, Chen et al. found that:

·        The 1,579 children “demonstrated significantly lower receptive language compared to the norms on standardized language assessment and parent report measures.”

·        “[T]heir receptive language gap widened with age.”

·        “[O]nly about 25%... demonstrated significantly better receptive language relative to their minimal expressive levels.”

·        “[M]otor skills were the most significant predictor of greater receptive-expressive discrepancy”—i.e., the 25% with language comprehension skills that were significantly better than their minimal speaking skills had better motor skills than the rest of the non-speakers.

All this is highly problematic for S2C proponents. That’s because their two chief arguments for the legitimacy of S2C are (1) that minimal speakers have intact language comprehension and (2) that minimal speakers have such severe deficits in motor skills that they’re unable to point to letters without someone holding up a letterboard in front of their faces and prompting them.

Claim 2: “the cognitive abilities of nonspeakers are routinely underestimated” such that they are “often segregated into special classrooms where teaching of basic life skills are [sic] prioritized over academic instruction.”

This, too, is a crucial claim for S2C proponents: S2C-generated output indicates that non-speakers, assuming they’re the authors of that output, have above-average academic skills. But the authors’ source for this claim, Courchesne et al. (2015), doesn’t support it. Courchesne et al. don’t address academics; what they found was that minimally speaking autistic children performed better on cognitive measures that don’t require language skills than on cognitive measures that do. An example of the latter is the WISC-IV, a standard IQ test that involves a fair amount of language in both prompts and tasks. The three non-linguistically-demanding cognitive measures that Courchesne et al. looked at were the Raven’s Colored Progressive Matrices board form (RCPM), which measures visual pattern recognition; the Children’s Embedded Figures Test (CEFT), which measures the ability to find hidden shapes in a complex image; and a visual search task, which measures the ability to scan a visual scene for a specific target object or feature. Academic achievement, of course, requires much more than these three visual capabilities.

Furthermore, even though the three visual tests make minimal language demands, the results for non-speaking autistics were significantly worse than for typicals: the 26 (out of 30) minimally-speaking autistic participants who completed the RCPM, for example, had an average raw score of 18.61 out of 36; their non-autistic counterparts, in contrast, had an average raw score of 28.5. Worse still, how well a minimal speaker did was positively correlated with their language skills—most likely because, even in autism, nonverbal cognitive skills correlate with language ability (see Chen et al., 2024), which in turn, in autism, is correlated with speaking ability (more on that below). As the authors report, “autistic children's RCPM performance differed according to their reported spoken language level” with “autistic children using two-word phrases perform[ing] better… than those using no words at all.”

Returning to the authors’ claims about the inappropriate segregation of non-speakers into classrooms “where teaching of basic life skills are [sic] prioritized over academic instruction,” strong performance on visual tests isn’t enough for academic success. Academic instruction, even in math, is highly verbal; most academic tasks are highly verbal. To access academic instruction and perform academic tasks, you need to have the same skills that are required for, and measured by, the WISC-IV and other verbally-mediated, verbally demanding tests.

Claim 3: “[M]ost nonspeakers are never provided an effective language-based alternative to speech.”

Here the authors simply claim, without any citations, that standard AAC (Augmentative and Alternative Communication) devices are deficient. In particular, they state that “the vocabulary available to a user is chosen by someone else” and claim that “there is no way for an autistic person to express a concept that has not already been programmed into their AAC device.” Jaswal has made this claim repeatedly, persistently unaware that most AAC devices have keyboard options that allow typing. Used in typing mode, an AAC device is just like a letterboard: the only difference is that no one is holding it up in front of the user and prompting and cueing their letter selections.

The authors also state that “Few individuals advance beyond the requesting stage”—assuming that this must be the fault of the device rather than a consequence of the well-known socio-communicative challenges that have defined autism since it was first identified eight decades ago by Leo Kanner. Non-autistic users of AAC tools—deaf children with sign language, individuals with mobility impairments—regularly advance beyond the requesting stage to a whole range of communicative acts.

Claim 4: Non-speakers need “months or years” of training by CRPs in how to “isolate and point at specific letters on the letterboard.”

This is why, the authors explain, CRPs have to start with “partial” letterboards with larger letters (allowing CRPs to decide which third of the alphabet each next letter comes from—yet another way to control messages in the earliest stages when their clients aren’t as susceptible to cues).

But there’s no evidence that autistic kids lack the motor skills to point. In fact, pointing doesn’t appear on any of the standard motor skills evaluations, which indicates that it simply isn’t a specific motor issue for anyone—any more than other simple gestures like waving your hand. The evidence, rather, is that, to the extent that autistic kids don’t point (many do; just less often than typical children do), it’s because they don’t understand the communicative function of pointing. That it’s the social aspects of pointing rather than the motor aspects of pointing that challenge autistic kids is also supported by the fact that the least frequent sort of pointing in autism is pointing for more purely social purposes (pointing things out to people in order to share attention), as opposed to pointing for instrumental purposes (pointing to express a request). Diminished pointing in autism, that is, is consistent with the socio-communicative challenges that have defined autism since it was first identified eight decades ago by Leo Kanner; no additional challenges need to be posited to explain it (cf. Occam’s Razor).

When it comes to pointing to letters, however, the most likely challenge is literacy: pointing to the “correct” letters to generate messages entails understanding the meanings, and knowing the spellings, of the words you want to type. Given what we know about comprehension in minimal speakers (see above), it’s unlikely that S2C spellers have the ability to independently and consistently point to “correct” letters in the often highly linguistically sophisticated messages that they’re supposedly generating.

And what minimal speakers undergo during all those “months or years” of training by their CRPs in how to “isolate and point at specific letters on the letterboard” is most likely not the acquisition of the motor skills required for pointing (which they’ve almost certainly already acquired on their own) but, rather, a behaviorist conditioning to select letters based on the prompts and (unwitting) cues of their CRPs.

Claim 5: The alleged motor difficulty of pointing to letters imposes cognitive demands so great that “the cognitive demands of the questions they are asked” have to start out minimal

As a result, the authors claim:

[E]arly lessons focus on spelling and closed-ended comprehension questions (e.g., “The first steam engines were used to pump what out of mines?”), and later stages incorporate open-ended questions (e.g., “Why are trains so much more common in some countries than others?”)

In fact, there’s no evidence that pointing to letters makes cognitive demands so great that it’s hard to answer open-ended questions (see above). Rather, the more relevant difference between questions like “The first steam engines were used to pump what out of mines?” and questions like “Why are trains so much more common in some countries than others?” is the number of letters one needs to point at to answer them. For the first question, 5 letters suffice (w-a-t-e-r); for the second question, many more are needed, even for a super-short, succinct response like “smaller densely populated countries are more suitable for trains,” which contains 56 letters. Getting an S2C “novice” to point to 5 letters in a row is a lot easier than getting them to point to 56 letters in a row. Even obtaining the compliance needed for lengthy periods of letter pointing—which, according to Jaswal’s earlier eye-tracking paper (Jaswal et al., 2020), proceeds at about one second per letter even after years of practice—presumably takes time.
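Incidentally, the letter counts above are easy to verify. Here’s a minimal Python sketch of my own (not anything from the paper) that counts only the alphabetic characters, since spaces and punctuation aren’t pointed to on a letterboard:

    # Count only the characters that require a point on the letterboard:
    # letters, not spaces or punctuation.
    def letters_to_point(message: str) -> int:
        return sum(ch.isalpha() for ch in message)

    print(letters_to_point("water"))  # 5
    print(letters_to_point(
        "smaller densely populated countries are more suitable for trains"
    ))  # 56

At one second per letter, that’s roughly a full minute of sustained, compliant pointing for even this one short answer.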

Claim 6: The alleged need by minimally-speaking individuals for attentional, emotional, and sensory support while they point to letters

The authors describe the roles of the CRP as “monitor[ing] the nonspeaker’s attention to ensure they are focused on the task at hand” and “prompting the [nonspeaker] to redirect their attention to the letterboard.” Elaborating, they state:

CRPs believe that the micro-movements they make while holding a physical letterboard might aid in maintaining the speller’s attention. For example, when they notice a nonspeaker’s focus waning, the CRP might rapidly remove the letterboard from the field of view and then reintroduce it, thereby refreshing the nonspeaker’s attention.

But wandering attention and wandering away from the letterboard are more consistent with boredom: with being made to point at letters without understanding what those letters spell—which, in turn, is consistent with what we know about profound autism and with what Chen et al. (2024) report about comprehension (see above).

The authors also claim that CRPs “provide regulatory support and encouragement as appropriate,” claiming that

“Since many nonspeakers have sensory needs that can compel them into near-constant motion” [their source for this is the FC-generated memoir The Reason I Jump] “the CRP may reposition the letterboard as needed so that it always remains in the subject’s field of view.”

As a side note, this last point suggests that repositioning the letterboard occurs only when it’s out of the subject’s field of view; in fact, a trained CRP testified in court that he repositions the board only when his client types three letters that “don’t make sense,” and many videos of RPM/S2C show repositioning occurring under many other circumstances, often for no obvious reason other than that the client was about to hit the wrong letter.

Indeed, all of these purported roles—prompting attention, encouraging, repositioning the letterboard—are opportunities for (unwitting) verbal cueing that directs letter selection.

Claim 7: Some RPM/S2C users have achieved independence

The authors’ source for this is once again an entire book: once again, Ido Kedar’s entire memoir—as if a memoir that’s been attributed to someone who’s been subjected to RPM/S2C is proof of independent typing, and as if the testimony within such a memoir is a reliable source on RPM’s validity. In the few public videos of Kedar, there’s no evidence of independent, spontaneous, prompt-free typing.

Claim 8: What’s holding the many other RPM/S2C users back from independent typing is the need for regular practice with “trained professional[s],” who are costly and scarce.

In particular, learning to type independently, the authors claim, requires “regular opportunities for practicing the required skills (e.g., coordinating gaze and pointing to letters),” which in turn requires supervision and feedback by those costly and scarce “trained professional[s].”

Of course, the alleged need for years of practice and guidance from scarce, expensive professionals in order to point independently to letters depends on the claim that pointing is motorically challenging in autism, which, in turn, is totally unsubstantiated (see above).

Unsubstantiated though this claim is, it’s one of the bases for the study of the communication tool reported in this article, to which we now turn.

The purpose of the study, “to train nonspeakers in holographic spelling,” implicitly assumes that the entire issue for nonspeakers who point to letters to communicate is one of sensorimotor environment. Somehow, whether nonspeakers who can “isolate single letters and spell full words” on a physical letterboard can do the same thing with a virtual letterboard is an open question. Indeed, as far as the researchers are concerned, it’s “an ambitious goal” for letterboard users, particularly given:

§  the lack of “haptic feedback” (touch sensation) associated with touching the physical letterboard

§  the need to wear a head-mounted device

§  the need to interact with holograms

§  the presence of unfamiliar researchers

§  the unfamiliar environment of the research study

...as if all these environmental changes are enough to interfere with the ability to point to letters to spell words.

The authors, furthermore, cite studies reporting that autistic people have difficulties generalizing skills learned in one context to another. But none of the studies they cite—or others on this topic—note difficulties with generalizing something as straightforward as isolating single letters and spelling words from one environmental context to another. It’s one thing to have trouble generalizing the meaning of a word like “doctor” from one medical setting to another, or generalizing conversation skills learned in a speech therapy session to real-world conversational settings: both of these generalization difficulties are common in autism. It’s quite another thing to have trouble generalizing spelling from one context to another. Indeed, anecdotes of hyperlexic autistic kids report no difficulties at all in this department: hyperlexic autistic kids have been observed spelling words in all sorts of contexts, and without any explicit instruction: from refrigerator letters to sidewalk chalk, to letters they form out of playdough.

Indeed, what generalization difficulties there are in the context of going from standard S2C to holographic S2C are probably quite different. One difficulty is the altered context of CRP cueing: different environments mean different CRP cues. With holographic S2C, the most obvious difference, as we’ll see, is that the letter array is typically no longer held up by the CRP.

The other generalization difficulty has to do with letter position. For those who are conditioned to spell letter sequences without understanding what they’re spelling, one of the most salient things is letter position. In cases where letters’ positions are held constant relative to other letters, as they are on S2C letterboards, one of the most likely learned elements of spelling is the sequence of movements around the letter array. To spell the word CAT on an S2C letterboard, for example, one might learn to first go to the top row, towards the left end of the board, for “c,” then all the way to the left for “a,” and then down to the second-to-last row and far right for “t.” In addition to learning the positions for common words, one might also learn the positions for common letter sequences like “t-h” (far right of the second-to-last row followed by left-center of the second row).
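To make the point concrete, here’s a minimal Python sketch; the four-row alphabetical layout is a hypothetical stand-in of my own, not any particular board sold by S2C providers. On a fixed layout, every word reduces to a fixed itinerary of board positions, and a learner can acquire that itinerary without knowing what the word means:

    # Hypothetical four-row alphabetical layout, for illustration only;
    # actual S2C letterboards vary in their exact arrangements.
    ROWS = ["ABCDEFG", "HIJKLMN", "OPQRST", "UVWXYZ"]

    # Map each letter to its (row, column) position on the board.
    POSITION = {letter: (row, col)
                for row, letters in enumerate(ROWS)
                for col, letter in enumerate(letters)}

    def motor_itinerary(word: str) -> list:
        """The fixed sequence of board positions that spells `word`."""
        return [POSITION[letter] for letter in word.upper()]

    # On this layout, CAT is always the same three pointing movements:
    print(motor_itinerary("CAT"))  # [(0, 2), (0, 0), (2, 5)]

Shuffle the layout and every memorized itinerary breaks, while a speller who is genuinely choosing letters by meaning and spelling simply searches out the letters in their new locations.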

Given this, it’s telling that the researchers meticulously replicated in virtual space the physical letterboards the kids are used to, letting them (or their CRPs) choose from “a variety of virtual letterboards that resemble popular physical models used by the community,” allowing “a nonspeaker to choose the one they are most familiar with.” That is, while the researchers play up concerns about “haptic feedback” and wearing a headset (which surely is quite annoying), they fail to mention, and yet still carefully hold constant, the one key factor a change in which would disrupt performance unless letter selection is driven by actual comprehension: namely, the positions of the letters on the letterboard. If you’re intentionally spelling the word CAT to talk about actual cats, your finger will go to the correct letters, no matter where they appear in an array, even if it takes some searching. But if CAT, for you, is simply a series of points to letters in certain positions, you’re much more likely to be thrown by a change in letter positions.

In my next post, I’ll pick up here and take a closer look at the actual study.

I’ll close here by noting that Jaswal et al. discuss this paper in an article for IEEE Spectrum, a magazine published by the Institute of Electrical and Electronics Engineers, where they repeat many of the same faulty and problematic claims about non-speaking autism.

 

REFERENCES

Alabood, L., Dow, T., Feeley, K. B., Jaswal, V. K., & Krishnamurthy, D. (2024). From Letterboards to Holograms: Advancing Assistive Technology for Nonspeaking Autistic Individuals with the HoloBoard. CHI '24: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, Article 71, 1–18. https://doi.org/10.1145/3613904.3642626

Chen, Y., Siles, B., & Tager-Flusberg, H. (2024). Receptive language and receptive-expressive discrepancy in minimally verbal autistic children and adolescents. Autism Research, 17(2), 381–394. https://doi.org/10.1002/aur.3079

Courchesne, V., Meilleur, A. A., Poulin-Lord, M. P., Dawson, M., & Soulières, I. (2015). Autistic children at risk of being underestimated: School-based pilot study of a strength-informed assessment. Molecular Autism, 6, 12. https://doi.org/10.1186/s13229-015-0006-3

Jaswal, V. K., Wayne, A., & Golino, H. (2020). Eye-tracking reveals agency in assisted autistic communication. Scientific Reports, 10(1), 7882. https://doi.org/10.1038/s41598-020-64553-9

Sunday, October 19, 2025

University of Minnesota Center for Genome Engineering on “Celebrating Emelia: Science for All”

Continuing my closer look at some of the news items from our recent news roundup, I now turn to a June 26th article on the University of Minnesota Center for Genome Engineering website entitled Celebrating Emelia: Science for All.

Emelia, 11 years old, is described as “bright, bold, and brilliant,” and as having “a rare genetic mutation known as DDX3X.” Steph Kennelly, the author of this article and also the program manager for the Center for Genome Engineering, claims that DDX3X

is associated with non-speaking autism [true] involving a disconnect between her brain and body [false]. The condition is often described in medical literature as being linked to intellectual disabilities and developmental delays [true]—but Emelia challenges that definition...

She challenges it through S2C, or what Kennelly simply calls “spelling”:

[Emelia] has learned to communicate using a method called spelling—pointing to or typing letters one at a time to form words, thoughts, and ideas. And what Emelia has to say is nothing short of extraordinary.

Completely unmentioned throughout this article is the role of the “communication partner” in “spelling”—i.e., in sitting or standing within auditory, visual, and/or tactile cueing range of the “speller” and, typically, holding up the letterboard and (inevitably) moving it around.

What Emelia has to say through S2C/“spelling” is so extraordinary that, as Kennelly notes, she’s featured “in Episode 7 of the popular podcast The Telepathy Tapes.” But Emelia’s presence on the Telepathy Tapes, naturally, is due, not to the overall extraordinariness of what she allegedly has to say, but, more specifically, to her alleged telepathic powers. The non-paranormal explanation for Emelia’s telepathic powers—her purported ability to type things that only her communication partner knows—is that her communication partner is the one controlling the letters that Emelia points to.

Emelia purportedly also knows multiple languages. Her communication partner reports that “She in one sentence went from Spanish to English to Portuguese” and that she also knows Hebrew and hieroglyphics. Telepathy Tapes producer Ky Dickens proposes that she picked up the latter from the “realm of fundamental consciousness,” where there is no language, and where, therefore, “all communication would be by telepathy.” Dickens also proposes that Emelia has precognition, “meaning she can predict things.”

But Kennelly doesn’t seem to want to discuss Emelia’s paranormal powers here on the University of Minnesota Center for Genome Engineering website. Instead, she writes:

Through this episode, we learned about [Emelia’s] passion for science, and how her diagnosis has led her to take a deep interest in genetics. Her dream? To one day become a genetic researcher or clinician.

In the Telepathy Tapes podcast, the only source on Emelia’s alleged interests is her mother, and in her mother’s words, those interests sound a bit more circumscribed:

She wants to be a genetics doctor. She knows the challenge is that a lot of kids are going through being nonverbal and she wants to help them.

As Kennelly reports:

We were so inspired that we invited Emelia and her family to spend a day in the Moriarity/Webber lab at the Center for Genome Engineering. During her visit, she met scientists, explored interactive models, attended presentations, and toured the lab spaces where we study the very conditions that shape lives like hers. Her questions were thoughtful. Her insights were meaningful.

During her visit, PhD research assistant Ella Eaton shared her work on developing a base editor therapy for a founder mutation of SCID-A, a rare immune disorder. While Emelia may have seemed quiet at first glance, she was deeply engaged in the discussion. She asked thoughtful clarifying questions—just like any aspiring young scientist would.

She wanted to know how the Cas9 enzyme finds the right spot in the genome, and quickly grasped how the 20-base sequence of the guide RNA leads the enzyme to its target. She also asked whether someone could have the SCID-A mutation without symptoms, prompting a conversation about recessive genetic conditions and how carrier status works. These are advanced concepts for anyone—let alone an 11-year-old—and yet, Emelia was right there with us, absorbing, questioning, learning.

Our conversation extended to the genetics of DDX3X and the observation that many autistic individuals experience chronic gastrointestinal issues. This led to a discussion on the brain-gut connection and the future of research into the gut microbiome and barrier function. Emelia may not yet have all the scientific vocabulary, but her questions are already aligned with some of the most pressing research themes of our time.

All this would be beyond even a very intelligent 11-year-old, not to mention one with DDX3X, in which the rate of significant, co-occurring intellectual disability is, according to a recent, comprehensive 2023 study, extremely high. This study found that, out of a sample of 101 cases, 94% had an intellectual disability, with intellectual functioning ranging from average (in only 1 of the 101 cases) to severely impaired.

But somehow such findings are either unknown to, or don’t raise any questions at, the University of Minnesota Center for Genome Engineering lab. Though Kennelly makes no mention of a communication partner, the more plausible explanation for the insights attributed to Emelia in the lab—and the only non-paranormal explanation for her raison d'être on the Telepathy Tapes podcast—is that an adult, non-disabled communication partner is controlling her messages.

Only at one point does Kennelly’s narrative turn from the S2C-generated output to Emelia’s unimpeded behavior:

One of our favorite moments came when Emelia got to pipette for the first time. True to form, she used it to squirt us with water, giggling with delight. It was the perfect reminder that behind all her brilliance, Emelia is also just a kid who loves to have fun.

How she is able to do this despite the above-alleged “disconnect between her brain and body” is left unexplained. But here, at least, Emelia is clearly being herself.

Kennelly insists that

Emelia is a powerful example of how much potential lies beyond conventional expectations.

To this, she adds the usual neurodiversity-promoting bromides:

As a scientific community, we have a responsibility, and an opportunity, to do better. We must celebrate and nurture all forms of intelligence, creating inclusive spaces where neurodivergent voices are not only heard but truly valued and uplifted.

And she concludes with a quote from one of the S2C messages generated by Emelia:

“Sharing what I know is one of my biggest challenges. And I know a lot! Please don’t underestimate me.”

To this Kennelly responds:

We hear you, Emelia. Loud and clear.

Unfortunately, I’m not sure they do.

REFERENCES

Levy, T., et al. (2023). DDX3X Syndrome: Summary of Findings and Recommendations for Evaluation and Care. Pediatric Neurology, 138, 87–94.

 

Wednesday, October 8, 2025

Our communication rights paper is out!

Despite all the pressure from self-styled neurodiversity and disability rights advocates who support facilitated communication, Rapid Prompting Method, and Spelling to Communicate, a reputable journal dared to publish it.

You can access the entire article here.

Thursday, October 2, 2025

Jefferson Public Radio on “Medford psychiatrist’s research into autism and telepathy sparks debate over communication”

Continuing my closer look at some of the news items from our recent news roundup, I now turn to a May 2nd piece on Jefferson Public Radio entitled Medford psychiatrist’s research into autism and telepathy sparks debate over communication. This news item showcases Johns Hopkins-trained psychiatrist Dr. Diane Powell, whose name should be familiar to anyone who has listened to the Telepathy Tapes as that podcast’s star, pro-telepathy scientist.

While the article’s focus is on Powell and on how she came to believe that non-speakers are telepathic, it is also, necessarily, an article about RPM/S2C. That’s because a video-taped RPM/S2C session that Dr. Powell has “watched countless times” is part of what convinced her—or so journalist Justin Higginbottom suggests. As he describes it:

On the screen is Haley, a nonverbal child with autism, sitting next to a therapist. Across the room, her mother lifts a card off the table, showing the image on it — a triangle — to the camera, but not to Haley.

The therapist, holding an electronic letter board that resembles an iPad, asks Haley to spell what’s on the card.

Slowly, Haley taps out the answer.

She presumably types out “triangle,” which is correct.

“You can see the therapist is not moving this device,” Powell noted.

Haley repeats the feat over and over again, slowly typing out words that match the image on the cards.

Like the January 10th Disability Scoop piece (see here), this piece does acknowledge the controversy surrounding RPM/S2C:

The debate comes down to that therapist in Powell’s video holding the letterboard while Haley spells out words. Supporters say it’s a breakthrough for many with autism. Critics say it’s a red flag.

And Higginbottom even goes so far as to speak, at length, with one of the directors of the American Speech-Language-Hearing Association (ASHA)—something that very few journalists covering RPM/S2C in the popular media have bothered to do:

“Spelling is not the issue. It's doing it independently,” said Jaime Van Echo, associate director of clinical issues at the American Speech-Language-Hearing Association (ASHA).

The organization is generally against communication practices involving heavy assistance from another person. That could be someone touching the nonverbal individual’s arm, prompting them in certain ways or even holding a letterboard in front of them.

“The main difference here is that… they're holding the board, which means that the other person, who's the independent communicator, is communicating with the support of another person,” Van Echo said. “And what ASHA really does strive for is independent communication.”

Someone using a letter board that’s lying on a table is perfectly fine, according to Van Echo. But problems may arise, according to ASHA, when someone else is involved in the conversation.

It might seem like a small distinction. But the group has gone as far as saying another assisted practice, called the Rapid Prompting Method, “effectively strips people of their human right to independent communication.” The organization has spoken out against teaching a similar method, called Spelling to Communicate, in schools.

But then Higginbottom, who is clearly striving for “balance,” returns to the pro-RPM/S2C side:

But proponents of these assisted methods argue many with severe autism struggle with motor skills or focus. Without assistance, they say, the opportunity to communicate may be lost.

This is something he could have run past Van Echo at ASHA; if he had, he would have learned that claims about difficulties with motor skills or focus don’t validate FC/RPM/S2C.

Worse, Higginbottom takes on faith a claim made in Heyworth et al.’s highly problematic Presuming Autistic Communication Competence and Reframing Facilitated Communication article:

A 2022 Frontiers in Psychology article cited more than 100 peer-reviewed studies confirming people with autism are the ones communicating while using assisted methods.

If you read that article carefully, you see that it simply states that “peer-reviewed studies confirming autistic or disabled authorship of FC messages number over a hundred from the 1990s to the present” and then cites an article by Cardinal and Falvey (2014). If you chase down that footnote, you find that Cardinal and Falvey, in turn, state, without any supporting citations, that:

Since the inception of FC, well over 100 qualitative articles have been published in professional peer-reviewed journals, as compared with around 40 for quantitative studies.

One of my colleagues tried contacting both Cardinal and Falvey and asking them for a reference; she received no response.

Extraordinary claims like these require extraordinary evidence, and in Cardinal and Falvey’s citations, which number fewer than 80, there are, at most, 13 studies that could possibly qualify as peer-reviewed studies finding some weak support for (and not coming close to confirming) autistic or disabled authorship of FC messages. In general, as Scott Lilienfeld has said, there is an inverse relationship between study rigor and support for FC/RPM/S2C.

Apparently not realizing this, Higginbottom goes on to cite the highly problematic pro-S2C organization International Association for Spelling as Communication (I-ASC) and its citation of the highly problematic Jaswal et al. (2020) eye-tracking study.

The International Association for Spelling as Communication cites a study that tracked the eye movements of non-verbal individuals. It found that they appeared to focus on the correct letters before selecting them — proof that people with autism, not their aides, are the ones communicating.

No, “appearing to focus on the correct letters before selecting them” is not “proof” that the people being subjected to FC/RPM/S2C are the ones communicating (for details, see critiques here and here).

But then Higginbottom returns to the FC/RPM/S2C critics:

But other researchers disagree. A 2001 meta-analysis published in the Journal of Autism and Developmental Disorders reviewed studies on a related method known as Facilitated Communication. It found the facilitators, rather than the nonverbal individuals, were controlling communication.

In the end, though, “balance” wins out:

So, it’s safe to say that the science around this debate isn’t settled.

Actually, the science is quite settled, and all those instances of alleged telepathy that both Dr. Powell and the Telepathy Tapes podcast have compiled for us, the non-paranormal explanation for which is facilitator control, only settle it further.

Worse, the article captions one of its pictures with this:

Some have found success in using letter boards to communicate with people with severe autism.

Technically, “find success” is ambiguous: the success could be an illusion. But this is also a stand-out sentence: one that is sure to grab the attention of, and stick in the minds of, the many people who won’t read the full article.

Returning to Powell and telepathy, Higginbottom does note that:

Although the Telepathy Tapes and Powell’s work have led to more parents learning about their method or similar systems, advocates think being associated with mind reading is hurting their efforts to gain wider acceptance for assisted communication.

In that last excerpt, Higginbottom links to the statement on telepathy by the International Association for Spelling as Communication (I-ASC). For those who are interested, here’s what Vosseller & Co have to say about telepathy:

I-ASC does not integrate or endorse telepathy or other such personal beliefs as part of S2C. Introducing such concepts into S2C compromises the integrity [sic] and credibility of this rigorously defined [sic] methodology.

Recent public discussions linking telepathy to nonspeaking individuals and S2C risk creating misunderstandings about the practice. I-ASC emphasizes that telepathy is outside the scope of S2C and unrelated to its science-based [sic] approach.

One can only hope that telepathy is hurting I-ASC’s efforts to gain wider acceptance.

But Higginbottom ends with a different sort of hope, or “hope.” In a concluding section entitled “Hope for Parents,” he tells us about a non-speaking girl who, at age 10, “gained a limited ability to speak.” She, too, is apparently telepathic, but her telepathy has expressed itself through speech; not S2C. Her mother reports that she was once:

deciding whether to pack a grapefruit in her daughter’s lunch, and Ginny — unprompted — yelled “grapefruit” from another room.

Yet, for all this, the mother wants to switch from speech to S2C:

But she has trouble communicating with her daughter — even with the mind reading. That’s why she wants to teach her to use a letterboard.

Higginbottom continues:

As Dr. Powell explains, the beliefs that “her daughter is in there” and that she has “far more capacity than people give [her] credit for”—beliefs that Powell appears to share—are “why her research resonates with so many parents.”

Indeed—as with so much of other autism quackery, so, too, with S2C and telepathy.

REFERENCES

Beals, K. (2021, May 12). A Recent Eye-Tracking Study Fails to Reveal Agency in Assisted Autistic Communication. Evidence-Based Communication Assessment and Intervention. https://doi.org/10.1080/17489539.2021.1918890

Cardinal, D., & Falvey, M. (2014). The Maturing of Facilitated Communication: A Means Toward Independent Communication. Research and Practice for Persons with Severe Disabilities, 39(3), 189–194. https://doi.org/10.1177/1540796914555581

Heyworth, M., Chan, T., & Lawson, W. (2022). Presuming autistic communication competence and reframing facilitated communication. Frontiers in Psychology.

Jaswal, V. K., Wayne, A., & Golino, H. (2020). Eye-tracking reveals agency in assisted autistic communication. Scientific Reports, 10(1), 7882. https://doi.org/10.1038/s41598-020-64553-9

Vyse, S. (2020, May 20). Of Eye Movements and Autism: The Latest Chapter in a Continuing Controversy. Skeptical Inquirer.

 

Thursday, September 18, 2025

Cincinnati local news and NPR Weekend Edition on Jakob Jordan

Continuing my closer look at some of the news items from our recent news roundup, I now turn to three items that focus on the son of Cincinnati radio personality Jenn Jordan. The first is an April 10th interview on WLWT News 5 Today (a news station out of Cincinnati) entitled Son of Cincinnati radio personality defying the odds during Autism Acceptance Month.

The focus of this interview is Jenn Jordan’s non-speaking, autistic son, Jakob, who is described as “using his platform to defy the odds.” But even though he’s right there, sitting next to his mother, Jakob’s platform does not include this interview. Instead, Jenn is the one being interviewed. Jakob’s “platform” turns out to be S2C, described here as “communicat[ing] through spelling.” During the interview, Jakob is almost entirely silent. It’s hard to tell whether he’s even attending: he rarely directs his eyes towards the interviewer or his mother. And, for reasons that we learn only later in the interview, no S2C occurs here, nor are there any letterboards in sight.

Referred to mostly in the third person, Jakob is described as having been “diagnosed with autism and apraxia.” And while there is little doubt about the autism diagnosis, the basis for the apraxia diagnosis remains unclear. According to Jenn Jordan:

The apraxia causes a brain-body disconnect. And so the body doesn’t do what the mind wants it to, including speaking, including trying new foods. Refusing to try anything new.

There are various types of apraxia with varying diagnostic symptoms, but none of them fit Jordan’s description. Being a minimal speaker doesn’t suffice. Nor does apraxia involve not being able to try new things. Finally, the notion that Jakob has some sort of brain-body disconnect is belied by the 10-second clip we see of him, 2 minutes into this segment, performing a series of complex dance movements in synchrony with the people around him. In addition, at the end of the video, we hear Jakob utter words that sound like “What date?,” which his mother interprets as Jakob intentionally asking when the movie he’s starring in will come out (more on that below). Even Jakob’s speech, minimal though it may be, appears to be under intentional control by Jakob’s brain.

Nonetheless, in an eerie echo of Isaiah’s story in Disability Scoop (see my previous post on that), Jakob’s S2C-generated messages informed his mother that:

he was tired of eating the same old stuff, he wanted to try new stuff and he told us exactly how to help him. Since then he has tried over 160 new foods.

It’s interesting to see, both in this news item and the earlier one on Isaiah, a purported apraxia/mind-body disconnect being used to explain, not just Category A of the diagnostic symptoms of autism (the communication challenges), but also Category B (the restricted interests). Though this is the first time I’ve seen restricted interests re-analyzed as a motor disorder, it’s completely consistent with the decades-long goal of FC proponents to redefine all of autism as a motor disorder.

Jenn Jordan goes on to explain that “If you want different results you have to do things differently,” and to recount how, after Julie Sando from Autistically Inclined (on our list of S2C-providing organizations) introduced S2C to Jakob:

It became very clear very quickly that the intellectual disability he had been diagnosed with was completely a mislabeling.

And thus, yet again, we see the deeply ableist, anti-intellectual disability sentiments that underpin parents’ and proponents’ beliefs in S2C.

As for Jakob, he doesn’t appear responsive even when the interviewer addresses him by name—until his mother responds by turning to him and making physical contact. She seems to do this reflexively each time the interviewer addresses her son directly. Why doesn’t she instead hold up a letterboard for him and facilitate out messages? Eventually we learn the answer:

This is so very hard for him because there is so much in here that he wants to say and someday he’ll be able to do it with me. We’re working on it.

The reason there’s no letterboard or S2C here is that Jakob is unable to do it with Jenn; or, perhaps more accurately, Jenn is unable to do it with Jakob. Why she didn’t bring along Jakob’s communication partner is left unclear.

Jenn’s inability to do S2C with Jakob recalls the Talia Zimmerman story (see here), in which we learn that those subjected to RPM/S2C “are far more proficient at spelling when they’re working with their professional partners than with parents, siblings and others.” As I noted in that post, in comparison to most facilitators, who typically have months or years of experience cueing multiple people, parents are comparative novices.

Towards the end of the interview, we learn from Jenn Jordan that Jakob was

recently contacted by a Hollywood casting director who saw his Cards by Jakob page on Facebook and asked if he would be interested in auditioning for a film.

Ultimately, Jakob landed a leading role, and a subsequent news segment tells us more about this—namely, a June 5th segment on a much more prestigious platform: NPR Weekend Edition.

This segment, entitled Two nonverbal actors star in a new opera — with an assist from AI, includes not just Jakob, but a second non-speaking actor with cerebral palsy. And rather than spelling out the details of how, precisely, Jakob communicates, it focuses instead on

  • the novel elements of the technology that converts Jakob’s “spelling” into naturalistic voiced output, and
  • the supposed input regarding the technology by “the non-verbal community”

In other words, this NPR report completely leaves out the fact that Jakob and other members of the non-verbal community are being facilitated, and their communications likely authored, by their FC/RPM/S2C communication partners.

It was a five-year process. For the first two years, accessibility designer Lauren Race queried the non-verbal community about what they wanted from a new speaking device. She asked, "What do you want with this thing? What's wrong with it? What do you love? What do you hate?"

"There's this mantra that everybody in this community uses," said Prestini, "which is 'nothing about us without us.'"

Or, rather, “nothing about us without our facilitators.”

Or, rather, “nothing about the personas concocted by the facilitators without the facilitators who concoct those personas.”

Throughout the piece, remarks about technological developments and the input of non-verbal people continue to distract from the communication-rights-violating elephant in the room. One caption reads:

Jakob Jordan tries out the technology that allows him to add emotional content to his speech by speeding it up, slowing it down, and adding pauses.

We also hear the usual pro-FC/RPM/S2C conflation of FC/RPM/S2C and AAC (Augmentative and Alternative Communication, which is evidence-based and not facilitator-controlled). Here, the conflation is especially easy, given that the role of the S2C communication partner is completely omitted:

Some nonverbal people, like Jordan, use an augmentative and alternative communication device, or AAC device, to talk. They type words into the device and a voice reads them.

But the voice sounds emotionally flat — like a robot. The team fed the AI some of the natural sounds both Jordan and Zioueche made, and it created voices for them.

Inevitably, as well, we get the usual S2C-generated testimonials:

“As someone who was not able to fully communicate for the first 22 years of my life, it is mind-blowing to be in an opera and to be here sharing on NPR.”

And:

“When I first heard the sound of my voice come to life, a new realization was born," he said through the new device. "Dream the bigger dreams, you know, the ones you dismiss and hide away because they seem impossible."

It’s because of articles like this (and this and this) that I stopped donating to NPR. I now direct that money to the Internet Archive, which has kept alive an older FC-critical documentary from more journalistically responsible times of yore.

The last piece about Jakob takes us back to Cincinnati, this time to Local 12 News. This segment, dated July 3rd, is entitled ‘My world has opened up’: Son of local radio host changes life through tech, therapy.

Once again we learn about Jakob’s brain-body disconnect:

He was using his device to share his favorite foods during a recent trip to Las Vegas to see his favorite band, New Kids on the Block. Two years ago, such communication would have been impossible for Jakob Jordan, who struggled with a disconnect between his body and brain.

But this time we learn more about Jakob’s first S2C session, which features, as so many of these sessions do, the typing out of a sophisticated vocabulary word:

after discovering that he could spell and read, his mother collaborated with speech therapist Julie Sando to unlock his potential. The first word Jakob Jordan typed was "analogy," and he understood its meaning.

As I noted earlier, one of the main goals of the first lesson is to get the parent coming back for more, and one of the best ways to do that is to dazzle them by “unlocking” sophisticated knowledge or vocabulary that they had no idea their child had acquired.

For me, the most impressive things about Jakob are neither his purported vocabulary and literacy skills, nor the fact that he has an S2C-enabled role in a musical. Rather, what impresses me are the clip of Jakob dancing at about two minutes into the first segment, which I watched several times, finding it quite endearing; and the pictures on the cards credited to him in Cards by Jakob, which I’m assuming are his own authentic work. It seems reasonable to presume that Jakob has the competence to control his body, and it looks like he has artistic talent and enjoys using it. He also looks like a calm, happy person.

Let’s celebrate all that—and, most importantly, celebrate Jakob for who he really is.

Saturday, September 6, 2025

Twitter suspensions and their aftermath

Every so often my Twitter suspension comes up on social media, with various detractors of mine voicing different theories about why I was suspended from Twitter. 

FC/RPM/S2C proponents seem to think I was suspended for “bullying” remarks and/or “violent threats” against autistic individuals and/or autistic advocates. Crazy as this sounds, I have screenshots that show FC/RPM/S2C proponents actually saying this. (In the interest of basic decency, however, I'm keeping those screenshots private).

Structured Word Inquiry proponents prefer to think I was suspended because I called SWI a “cult” run by a shadowy and now deceased man in France with no formal linguistic credentials--a characterization of the man in question that, ironically, the person who first made this claim against me now agrees with. (I have screenshots to prove this as well but, again in the interest of basic decency, I'll likewise keep those private).

Progressive math proponents may think I was suspended for harassing them and finding fault with “social justice math.” (Who knows? It's not always so easy to see what people say beyond your back.)

And so on...

As for Twitter itself (this may have changed on X), once it suspends an account, it feeds such impressions by one’s sundry detractors with canned messages about the account’s suspension that are automatically sent to anyone who “reported” the account for any reason. People, naturally, report Twitter accounts for all sorts of reasons, many of which have nothing to do with Twitter’s rules. Some people, for example, prefer to eliminate their critics rather than debate them.

Unless you’re sufficiently famous, however (e.g., the past and present President of the U.S.), Twitter’s/X’s suspension decisions aren’t made by human beings mulling over tweets, but by AI bots programmed to look for certain keywords and phrases (“kill”, “smash”, “vaccine injury”). That’s because human moderation is costly, and because the number of human moderators needed to review every possibly bullying/threatening/dangerously misinformative tweet is astronomical. Keyword-based moderation, of course, is about as linguistically crude as it gets, and the result is that many suspensions are senseless.
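To illustrate just how crude: here’s a toy Python sketch of keyword-based flagging (the term list and example tweets are invented for illustration; I have no inside knowledge of Twitter’s actual filters):

    # Toy keyword filter: flag any tweet containing a listed term.
    FLAGGED_TERMS = {"kill", "smash", "vaccine injury"}

    def is_flagged(tweet: str) -> bool:
        text = tweet.lower()
        return any(term in text for term in FLAGGED_TERMS)

    # A harmless idiom gets flagged...
    print(is_flagged("This commute is killing me"))       # True
    # ...while hostility that avoids the keywords sails through.
    print(is_flagged("You'll regret writing that post"))  # False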

Twitter tattlers, of course, prefer to think that Twitter suspended their opponents for good reasons–i.e., because they reported them for bad behavior. But there aren’t enough people on Twitter’s staff for human review of more than a tiny fraction of what’s reported. And if Twitter’s bots were to automatically suspend everyone who gets reported by someone else, Twitter would eventually amount to little more than cat videos. 

None of these considerations–assuming they even occur to them–stop certain Twitter tattlers and their allies from proclaiming, without evidence, that their detractors have made bullying or threatening remarks on Twitter or elsewhere. And though one might challenge them, as I have, to find a single bullying or threatening remark in anything one has ever written anywhere, the sort of people who prefer to resolve disagreements by tattling, blocking, and suspending may not be the sort of people who think that accusations (or autism interventions, or reading instruction, or math curricula) should be supported with actual evidence.

As for my particular situation, I've known for several years now that my suspension was triggered by more than just certain key words. There was an actual human being involved (I have screenshots of that, too--it's quite the smoking gun). But because this person is near and dear to me, I have made a promise to that person not to disclose that person's actions publicly. The person in question, meanwhile, has expressed remorse for their actions (which also include twice deleting all the blog posts on this site) and has been making it up to me ever since.

Wednesday, September 3, 2025

University of Toronto Magazine on “When Words Won’t Cooperate”

In this post, I’ll focus on a second news item from my recent news roundup: a January 23rd article in the University of Toronto Magazine entitled When Words Won’t Cooperate.

In this article, journalist Alison Motluk focuses on how a neuroscientist in the University of Toronto psychology department, Morgan Barense, “aims to crack the mystery of non-speaking autism.” The mystery, apparently, has to do with how much spoken language non-speaking autistic individuals understand. But this is something for which non-neurological measuring tools already exist (e.g., oral prompts and pictures to point to in response). And those non-neurological measuring tools have found language comprehension in non-speaking autism to be quite low, especially in those with motor skills impairments (Chen et al., 2024). (In Chen et al., motor skills were measured by something called the DCDQ, which rates complex motor skills like throwing a ball, as opposed to simple ones like pointing, which, contrary to the claims of FC/RPM/S2C proponents, doesn’t appear to be impaired in non-speaking autism.) Motor skills impairments, in turn, have long been the excuse for FC/RPM/S2C.

Motluk opens with a description of an autistic boy named Isaiah Grewal. At age 2, Isaiah was not only non-speaking, but also not “responding normally when people spoke to him.” This suggests that Isaiah’s language challenges included not just speech but also comprehension—just as Chen et al. would predict. Indeed, even at age 10, when Isaiah was still not speaking, “His parents couldn’t tell from his reactions whether he understood what they were saying.”

But, like many individuals with autism, including individuals with low comprehension skills, Isaiah showed signs of hyperlexia. That is, he would “use foam letters or fridge magnets to spell things out — words like ‘contents’ and ‘bonus material’ that he’d seen when watching a Baby Einstein DVD.” And this hyperlexia of his seems to be what caused his mother, sensing that “there was more cognitive ability in him than was being tapped,” to try out S2C on him when he was 13.

Some of the messages attributed to Isaiah through S2C are those we’ve seen repeatedly (see also my previous post): being able to communicate amounts to “freedom from prison,” and the whole time he was in that prison he wanted people to know “That I’m in here.”

But among the first messages attributed to Isaiah are a few that were far less typical. They included, for example, messages about restaurant food like this one: “I want to eat off a menu like a normal teen.” Since “new foods had always upset him,” this message was enough to “stun” his parents. Via S2C, Isaiah purportedly explained what was actually going on. As Motluk puts it:

[H]e’d wanted to eat the new foods, but he didn’t have the motor control to do it. The same muscles that made it impossible for him to speak, he told them, made it impossible for him to eat those things.

That’s something I’d never heard before, so I looked up the research. One review of extant studies finds a disjunction between speech-motor challenges and oral-motor challenges; another (a scoping review) finds a connection and reports that oral-motor challenges can occur in autism. However, Isaiah’s ability to eat “fries and nuggets and chocolate” for years, but not “new foods,” doesn’t quite sound like an oral-motor challenge.

Furthermore, the purported solution to Isaiah’s purported oral-motor challenge is oddly simple. Via S2C, he purportedly advises his parents on how to get him to eat new things: “Push it into my mouth again. Chop it into squares. Say chew, chew, chew in a rhythm.”

We don’t learn how this advice played out. Did his parents actually push food into his mouth? How did he react? All we learn is that, as a result of this advice, they were able to celebrate his 18th birthday at a fancy restaurant, where he “ordered lobster mac and cheese, from the menu.” How the chewing went is left unsaid. Mac and cheese is another common choice among those with limited food preferences, and it’s at least as chewable as fries, nuggets, and chocolate are—though the lobster, perhaps, adds a bit of a twist.

One thing that’s alarming about S2C is how the messages facilitated out of its victims are often at odds with what those victims actually appear to prefer, and how the assumption that they can’t control their bodies leads their handlers to let S2C-generated messages trump what their behavior and body language communicate. In Isaiah’s case, this happens not just with food, but with music. In an eerie echo of Anna Stubblefield’s infamous facilitation of Derrick Johnson (where an alleged preference for classical music also emerged), the article reports that “once he [Isaiah] was able to communicate using the letter board” he revealed that he “liked classical music and jazz... but not rock or pop.”

These issues, however, don’t appear to worry Dr. Barense, the neuroscientist who’s studying Isaiah. From her, we instead hear the usual talking points:

·        The straw-man caricatures of FC/RPM/S2C skeptics. “People assume, she says, that if a person can’t speak, they must be intellectually impaired.” Does she think people assume this of deaf people? Or of Stephen Hawking?

·        The alleged apraxia. Barense “believes that many autistic people who don’t speak may be hindered not by problems of intellect but motor control,” specifically “apraxia.” The word “believes” is appropriate here: Barense continues the long tradition of citing no evidence for apraxia in non-speakers (there isn’t any). She also doesn’t seem to be aware of the Chen et al. study and what it says about how problems with motor control in non-speakers correlate with comprehension deficits.

·        The circular reasoning. Barense uses S2C-generated messages as evidence for the “apraxia” that is, in turn, evidence for S2C:

Some non-speakers who have been able to describe what’s going on inside say it’s like being stuck in the body of a drunk toddler, says Barense. They don’t know why they’re suddenly vocalizing Mickey Mouse or talking about Thomas the Tank Engine or running around frantically. They don’t want to be doing these things, they say, but their bodies are like runaway trains.

·        The notion that there are no non-neurological ways to measure cognition in non-speakers: “there are no reliable ways to estimate comprehension, language ability and intellect” because all of these “require motor output... that some people simply may not have available to them.” But all that’s required for tests of language comprehension and intellect is pointing to or picking up pictures (for language) or cards that complete patterns (for intellect), and there’s no evidence that non-speakers with autism have difficulty either with pointing to things or with picking things up. Many routinely and successfully do one, if not both, of these. For further discussion of what the actual issue with pointing in profound autism is, see my last post.

Dr. Barense, however, thinks she has an answer, which just happens to align with her general area of expertise as a neuroscientist:

Using fMRI, she and her team will look for complex patterns of brain activity that reflect high-level comprehension but do not require motor output. For instance, as a person listens to a complicated story, the researchers can track the signal in their brain as that story is unfolding. When there’s a twist in the plot, or a disruption of the narrative, they can see how the brain signal changes in response.

This sort of “signal change” strikes me as highly indeterminate, especially when compared to the standard, non-neurological measures of comprehension in autism (see again Chen et al., 2024). Signal change, that is, could easily be generated by changes in vocal prosody (the melody, rhythm, and volume of speech), as opposed to actual comprehension of word meanings. But a sufficiently biased researcher may have no difficulty interpreting it that way—and publishing articles that report such findings. In this regard, Barense may well qualify. As she puts it, “I have a strong prediction that we will find evidence of intact comprehension. I just don’t see how it could be otherwise.”

Motluk reports that:

Barense has so far completed a baseline magnetic resonance imaging (MRI) scan of the structure of Isaiah’s brain. Next will come scans of the brain in the process of completing intellectual tasks (known as “functional” MRI, or fMRI).

One of the challenges is that MRI scanning requires a subject to be still. And many autistic people have a lot of uncontrolled movements. Isaiah was able to be still for the 40 minutes of the scan only because of his years of motor training, says Barense.

I shudder to think what this was like for Isaiah—and at the likelihood that his consent for this was obtained through S2C and, therefore, wasn’t his.

Barense, apparently, “has applied for a grant to find ways for software to adjust for a subject’s movements” so that other S2C victims won’t have to be still when these procedures are inflicted on them. She is also “collaborating with a team from Johns Hopkins” to “use a recently developed neuroimaging technique to study motor activity in the brains of non-speaking autistic people, including Isaiah.”

Given what’s ahead for non-consenting S2C victims in terms of medical procedures like these, I find it ironic that Barense claims to feel that “it’s important to really listen to what non-speakers are telling us about their experiences and to allow them to inform the science.” How about starting with a message-passing test that would establish who is actually doing the communicating?

Just like Barense, reporter Alison Motluk also assumes that Isaiah’s S2C-generated messages are his own.

I asked what autism felt like to him. Via keyboard, he answered, “Like swimming underwater 24-7 because everything feels hard to control.” I asked what he and his friends talk about when they get together online. “We mostly trash talk,” he responded. Then, later, after I’d stopped laughing, he said, “We just like to hang out in the same space and eat pizza.”

“After I’d stopped laughing”—this isn’t the first time I’ve noticed a rather low bar for humor for facilitated kids. Is “presuming competence” turning into a “soft bigotry of low expectations”? But, superficially speaking, Isaiah has met high expectations:

Isaiah has an undergraduate certificate in professional communications from the Harvard Extension School [making him the second FCed individual we know of to enroll in this school—see here] and currently holds a graduate fellowship through Stony Brook University in New York.

(See also our new list of colleges and universities that have admitted FC/RPM/S2C-using students.)

The piece ends with a poem, allegedly written by Isaiah.

When I first came across this article (it was forwarded to me by one of my fellow FC critics, Evan Oxman), I found Dr. Barense on Bluesky. She had posted about the article there, and I posted a comment. This resulted in a rather long but cordial exchange (Barense was probably a bit more cordial than I was), which I’ve reproduced in its entirety below.

MB [This is her original post]: “I am in here!” It's a sentiment I've seen expressed time and time again from non-speaking autistic individuals who were thought to be unable to express their thoughts - but ultimately gained access to communication. Hearing this call, my research is expanding in some new directions.

It seems increasingly clear that motor control challenges are a key obstacle to their communication. It's not that they have nothing to say - it's that they have difficulty saying it. With neuroimaging, I hope that we can bypass these motor challenges and better assess their cognition.

KB: Have any of these non-speakers been assessed for apraxia? There are existing explanations for lack of speech in level 3 autism based on diagnostic symptoms. (Occam's razor). Since it isn't possible to dx speech apraxia in non-speaking autism, perhaps this is something that neuroimaging could assess.

MB: I agree, diagnosing apraxia is notoriously hard for those who cannot speak and I hope that neuroimaging can help here. But motor deficits are key associated features supporting an autism diagnosis in the DSM-5, and so most autistic individuals have motor deficits in their diagnostic profile already.

KB: Imaging for apraxia shld be your 1st step: you wouldn't want to bypass a challenge that turns out not to exist. Motor difficulties (=/= apraxia) are optional in the DSM; the social challenges are not. Nor do the motor challenges explain the language challenges: www.thetransmitter.org/spectrum/mot...

MB: I think the relationship between motor challenges and language challenges is very much up for debate, with a lot of work showing a tight coupling between the two. But we are absolutely looking at brain mechanisms of motor processing in this group - stay tuned.

I'll also say that given the modularity of brain function, it's entirely possible that there could be a vast disconnect between the ability to speak and the ability to understand. If one is profoundly apraxic, they would not be able to demonstrate understanding with any reliable form of behaviour.

KB: For sure there are ppl who can understand but not speak. In autism there's a tendency in the opposite direction. Many studies show a coupling of language acquisition (receptive & expressive) & degree of orienting to social stimuli-and (commensurate w/ this) low receptive language in profound autism.

MB: But if the primary underlying deficit is motor, one would also observe such coupling. Motor difficulties would prevent typical social behaviour and lead to an underestimation of receptive language. In some cases, this might be the simplest explanation (Occam's razor). That's our testable hypothesis.

KB: Much research finds reduced orienting to social stimuli in infants as young as 2 months who are later dxed w/ autism. Are you proposing that reduced social orienting is the result of motor deficits? That seems unlikely, but regardless, reduced social orienting massively derails language acquisition.

MB: Reduced social orienting could *absolutely* result from an abnormal sensorimotor system. This behaviour requires that one (1) process perceptual info about the other person and (2) move in response. Either could be derailed by mechanisms that have nothing to do with high-level social processes.

KB: OK, I thought we were talking about motor, not sensory processing also. Regardless, processing of perceptual social information about other people is a prerequisite for language acquisition. Impaired processing (as we see in early infancy in autism) massively derails acquisition of receptive lang.

MB: It's hard to talk about motor without talking about sensory, given that they are right next to each other in the brain and the execution of any motor plan requires sensory info (this is why the term sensorimotor is so often used). But issues here will derail social behaviour, which will derail lang.

KB: Exactly. It is indeed hard to talk about motor without talking about sensory. But the reverse doesn't seem to hold: the lack of preferential attention to social stimuli in 2+ month old infants later dxed with autism doesn't seem likely to have a motor component or motor-based explanation.

MB: I'd have to see that study, but they might not respond to social stimuli b/c they weren't getting the right (sensory) info needed to move their body appropriately (motor). Or maybe they had the info but couldn't mount the typical response. Or both. Or neither. Hard to disentangle without brain data.

KB: A motor-based explanation would have to somehow explain why there's attention to non-social stimuli but not to social stimuli. That would be quite a stretch. Here's one study showing such differential (social vs. nonsocial) attention: Maestro et al. (2002). doi.org/10.1097/0000... There are others.

MB: I don't think it's necessarily a stretch. It's well established that the motor system builds internal models of action that can serve as templates to predict and interpret the actions of others (who move in more complex and unpredictable ways than non-social stimuli).

If these models are off b/c the motor system is abnormal, social behaviour will take the greatest hit. At any rate - let me do the studies and get back to you! We need more neuroimaging data in nonspeakers so we can understand the genesis of various profiles and tailor appropriate supports.

KB: Great! Important to note that it's not just social behavior, but social learning, incl. language acquisition, incl. receptive lang., that will take a hit--a huge hit. And worth questioning is the degree to which a "motor map" guides automatic orienting to social stimuli in infants < 6 months old.

[I haven’t heard from Dr. Barense since.]

 

REFERENCES:

Chen, Y., Siles, B., & Tager-Flusberg, H. (2024). Receptive language and receptive-expressive discrepancy in minimally verbal autistic children and adolescents. Autism Research, 17(2), 381–394. https://doi.org/10.1002/aur.3079