Over the past couple of years, S2C proponent Vikram Jaswal, Professor of Psychology at the University of Virginia and the father of an S2C user, and Diwakar Krishnamurthy, Professor of Electrical and Computer Engineering at the University of Calgary and the father of an RPM user, have co-authored several papers on the development of virtual tools that enable S2C users to select virtual letters rather than point to letters on physical letterboards. Like Jaswal’s other recent papers, each of these begins with purported justifications for S2C as a valid communication method, and each reports instances of allegedly independent communication by S2C users. Like Jaswal’s other papers, therefore, these papers are worth reviewing in detail. In today’s post, I’ll discuss the first part of one of them: Alabood et al. (2024).
Published in the proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, with Jaswal as its fourth listed author and Krishnamurthy as its fifth, this paper discusses the development and
preliminary study of a virtual letterboard, or what the authors call a
“hologram” or “HoloBoard.” Instead of having a facilitator, or what the authors
call a “Communication and Regulation Partner” or “CRP,” hover next to them and
hold up the letterboard in their faces, S2C users don a virtual reality headset
that projects a virtual letterboard, or “HoloBoard,” in front of them, one that follows them around wherever they turn their heads. Purportedly, this is an
improvement on physical S2C and gives users more autonomy and privacy.
Deploying the circular reasoning so typical of S2C supporters,
the article begins with variations on the usual pro-RPM/S2C claims and cites RPM/S2C-generated
testimonials as its sources. In particular, it cites testimonials attributed to
Dan Bergmann, Elizabeth Bonker, Naoki Higashida, and Ido Kedar, as well as Edlyn Peña’s compendium of FC/RPM/S2C-generated testimonials.
In this first post, I’ll discuss these preliminary claims.
These are worth fleshing out in detail, partly because some of what’s said here
is new, and partly because there’s been some new research that adds to the
evidence against these claims. In a follow-up post, I’ll turn to the actual
study.
Claim 1: “Lack of
speech is sometimes conflated with lack of ability to think.”
The source for this is an
entire book, namely the memoir attributed to RPM user Ido Kedar (Ido in
Autismland). And while it’s nice to see this perennial claim softened with
“sometimes,” it’s hard to believe, in our Deaf-culture-aware, Stephen
Hawking-informed society, that more than a handful of highly uninformed people
are guilty of conflating speech with thought. Nor have I seen any references to people actually doing
so.
However, what Jaswal et al. may actually have in mind here
is that, in autism in particular, people
like me have stated that lack of spoken language tends to indicate deficits in a
very specific cognitive skill: comprehension
of language. And for this, there is actual evidence, most recently in an
article by Chen et al. (2024). Examining a symptom database of 1,579 minimally
speaking autistic children aged 5-18 years, and using the terms “receptive
language” for “comprehension of language” and “expressive language” for
speaking ability, Chen et al. found that:
· The 1,579 children “demonstrated significantly lower receptive language compared to the norms on standardized language assessment and parent report measures.”
· “[T]heir receptive language gap widened with age.”
· “[O]nly about 25%... demonstrated significantly better receptive language relative to their minimal expressive levels.”
· “[M]otor skills were the most significant predictor of greater receptive-expressive discrepancy”—i.e., the 25% with language comprehension skills that were significantly better than their minimal speaking skills had better motor skills than the rest of the non-speakers.
All this is highly problematic for S2C proponents. That’s
because their two chief arguments for the legitimacy of S2C are (1) that minimal speakers have intact language comprehension and (2) that minimal
speakers have such severe deficits in motor skills that they’re unable to point
to letters without someone holding up a
letterboard in front of their faces and prompting them.
Claim 2: “the cognitive abilities of nonspeakers are
routinely underestimated” such that they are “often segregated into special
classrooms where teaching of basic life skills are [sic] prioritized over
academic instruction.”
This, too, is a crucial claim for S2C proponents:
S2C-generated output indicates that non-speakers, assuming they’re the authors
of that output, have above-average academic skills. But the authors’ source for
this claim, Courchesne et al. (2015), doesn’t support it. Courchesne et al. don’t
address academics; what they found was that minimally speaking autistic
children performed better on cognitive measures that don’t require language
skills as compared with cognitive measures that do require language skills. An
example of the latter is the WISC-IV, a standard IQ test that involves a fair amount
of language in both prompts and tasks. The three non-linguistically-demanding
cognitive measures that Courchesne et al. looked at were the Raven’s Colored
Progressive Matrices board form (RCPM), which measures visual pattern
recognition, the Children’s Embedded Figures Test (CEFT), which measures the
ability to find hidden shapes in a complex image, and a visual search task,
which measures the ability to scan a visual scene to find a specific target
object or feature. Academic achievement, of course, requires much more than
these three visual capabilities.
Furthermore, even though the three visual tests make minimal
language demands, the results for non-speaking autistics were significantly worse
than for typicals: the 26 (out of 30) minimally-speaking autistic participants
who completed the RCPM, for example, had an average raw score of 18.61 out of
36; their non-autistic counterparts, in contrast, had an average raw score of
28.5. Worse still, how well a minimal speaker did was positively correlated with
their language skills—most likely because, even in autism, nonverbal cognitive
skills correlate with language ability (see Chen et al., 2024), which in turn,
in autism, is correlated with speaking ability (more on that below). As the
authors report, “autistic children's RCPM performance differed according to
their reported spoken language level” with “autistic children using two-word
phrases perform[ing] better… than those using no words at all.”
Returning to the authors’ claims about the inappropriate segregation of non-speakers into classrooms “where teaching of basic life skills are [sic] prioritized over academic instruction”: strong performance on visual tests
isn’t enough for academic success. Academic instruction, even in math, is
highly verbal; most academic tasks are highly verbal. To access academic
instruction and perform academic tasks, you need to have the same skills that
are required for, and measured by, the WISC-IV and other verbally-mediated,
verbally demanding tests.
Claim 3: “[M]ost
nonspeakers are never provided an effective language-based alternative to
speech.”
Here the authors simply claim, without any citations, that
standard AAC (Augmentative and Alternative Communication) devices are
deficient. In particular, they state that “the vocabulary available to a user
is chosen by someone else” and claim “there is no way for an autistic person to
express a concept that has not already been programmed into their AAC device.”
Jaswal has made this claim repeatedly, persistently unaware that most AAC
devices have keyboard options that allow typing. Used in typing mode, an AAC
device is just like a letterboard: the only difference is that no one is
holding it up in front of the user and prompting and cueing their letter
selections.
The authors also state that “Few individuals advance beyond
the requesting stage”—assuming that this must be the fault of the device rather
than a consequence of the well-known socio-communicative challenges that have
defined autism since it was first identified eight decades ago by Leo Kanner.
Non-autistic users of AAC tools—deaf children with sign language, individuals
with mobility impairments—regularly advance beyond the requesting stage to a
whole range of communicative acts.
Claim 4: Non-speakers
need “months or years” of training by CRPs in how to “isolate and point at
specific letters on the letterboard.”
This is why, the authors explain, CRPs have to start with
“partial” letterboards with larger letters (allowing CRPs to decide which third
of the alphabet each successive letter comes from—yet another way to control messages
in the earliest stages when their clients aren’t as susceptible to cues).
But there’s no evidence that autistic kids lack the motor
skills to point. In fact, pointing doesn’t appear on any of the standard motor
skills evaluations, which indicates that it simply isn’t a specific motor issue
for anyone—any more than other simple gestures like waving your hand. The
evidence, rather, is that, to the extent that autistic kids don’t point (many
do; just less often than typical children do), it’s because they don’t
understand the communicative function of pointing. That it’s the social aspects
of pointing rather than the motor aspects of pointing that challenge autistic
kids is also supported by the fact that the least frequent sort of pointing in
autism is pointing for more purely social purposes (pointing things out to
people in order to share attention), as opposed to pointing for instrumental
purposes (pointing to express a request). Diminished pointing in autism, that
is, is consistent with the socio-communicative challenges that have defined
autism since it was first identified eight decades ago by Leo Kanner; no
additional challenges need to be posited to explain it (cf. Occam’s Razor).
When it comes to pointing to letters, however, the most
likely challenge is literacy: pointing to the “correct” letters to generate
messages entails understanding the meanings, and knowing the spellings, of the
words you want to type. Given what we know about comprehension in minimal
speakers (see above), it’s unlikely that S2C spellers have the ability to independently
and consistently point to “correct” letters in the often highly linguistically
sophisticated messages that they’re supposedly generating.
And what minimal speakers undergo during all those “months
or years” of training by their CRPs in how to “isolate and point at specific
letters on the letterboard” is most likely not the acquisition of the motor
skills required for pointing (which they’ve almost certainly already acquired
on their own) but, rather, behaviorist conditioning to select letters based
on the prompts and (unwitting) cues of their CRPs.
Claim 5: The alleged
motor difficulty of pointing to letters makes cognitive demands that are so
great that “the cognitive demands of the questions they are asked” have to
start out as minimal
As a result, the authors claim:
[E]arly lessons focus on spelling
and closed-ended comprehension questions
(e.g., "The first steam engines were used to pump what out of
mines?"), and later stages incorporate open-ended questions (e.g.,
"Why are trains so much more common in some countries than others?”)
In fact, there’s no evidence that pointing to letters makes
cognitive demands so great that it’s hard to answer open-ended questions (see
above). Rather, the more relevant difference between questions like "The
first steam engines were used to pump what out of mines?” and questions like “Why
are trains so much more common in some countries than others?” is the number of
letters one needs to point at to answer them. For the first question, 5 letters
suffice (w-a-t-e-r); for the second question, many more are needed, even for a
super-short, succinct response like “smaller densely populated countries are
more suitable for trains,” which contains 56 letters. Getting an S2C “novice”
to point to 5 letters in a row is a lot easier than getting them to point to 56
letters in a row. Even obtaining the compliance needed for lengthy periods of
letter pointing—which, according to Jaswal’s earlier eye-tracking paper (Jaswal, 2020), proceeds at about one second per letter even after years of practice—presumably takes time.
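To make the arithmetic concrete, here’s a trivial sketch (in Python; the two answer strings and the roughly one-second-per-letter rate come from the discussion above, and the function name is mine):

```python
# Back-of-the-envelope arithmetic for the two example answers above,
# assuming the ~1 second per letter reported in Jaswal et al. (2020).

def letters_to_point(answer: str) -> int:
    """Count only the characters that require a pointing motion (letters)."""
    return sum(1 for ch in answer if ch.isalpha())

closed_ended = "water"
open_ended = "smaller densely populated countries are more suitable for trains"

for answer in (closed_ended, open_ended):
    n = letters_to_point(answer)
    print(f"{n} letters -> roughly {n} seconds of sustained, compliant pointing")

# Output:
# 5 letters -> roughly 5 seconds of sustained, compliant pointing
# 56 letters -> roughly 56 seconds of sustained, compliant pointing
```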
Claim 6: The alleged
need by minimally-speaking individuals for attentional, emotional, and sensory
support while they point to letters
The authors describe the roles of the CRP as “monitor[ing]
the nonspeaker’s attention to ensure they are focused on the task at hand” and
“prompting the [nonspeaker] to redirect their attention to the letterboard.”
Elaborating, they state:
CRPs believe that the
micro-movements they make while holding a physical letterboard might aid in
maintaining the speller’s attention. For example, when they notice a
nonspeaker’s focus waning, the CRP might rapidly remove the letterboard from
the field of view and then reintroduce it, thereby refreshing the nonspeaker’s
attention.
But wandering attention and wandering away from the
letterboard are more consistent with boredom: with being made to point at
letters without understanding what those letters spell—which, in turn, is
consistent with what we know about profound autism and with what Chen et al.
(2024) report about comprehension (see above).
The authors also claim that CRPs “provide regulatory support
and encouragement as appropriate,” claiming that
“Since many nonspeakers have sensory needs that can compel them into near-constant motion” [their source for this is the FC-generated memoir “The Reason I Jump”] “the CRP may reposition the letterboard as needed so that it always remains in the subject’s field of view.”
As a side note, this last point suggests that repositioning the letterboard occurs only when it’s out of the subject’s field of view; in fact, a trained CRP testified
in court that he repositions the board only
when his client types three letters that “don’t make sense,” and many
videos of RPM/S2C show repositioning occurring under many other circumstances,
often for no obvious reason other than that the client was about to hit the
wrong letter.
Indeed, all of these purported roles—prompting attention,
encouraging, repositioning the letterboard—are opportunities for (unwitting)
verbal cueing that directs letter selection.
Claim 7: Some RPM/S2C
users have achieved independence
The authors’ source for this is, once again, an entire book: Ido Kedar’s memoir—as if a memoir that’s been attributed to someone who’s been subjected to RPM/S2C is proof of independent typing, and
as if the testimony within such a memoir is a reliable source on RPM’s
validity. In the few public videos of Kedar, there’s no evidence of
independent, spontaneous, prompt-free typing.
Claim 8: What’s
holding the many other RPM/S2C users back from independent typing is the need
for regular practice with “trained professional[s],” who are costly and scarce.
In particular, learning to type independently, the authors
claim, requires “regular opportunities for practicing the required skills
(e.g., coordinating gaze and pointing to letters),” which in turn requires
supervision and feedback by those costly and scarce “trained professional[s].”
Of course, the alleged need for years of practice and
guidance from scarce, expensive professionals in order to point independently to letters depends on the claim that pointing
is motorically challenging in autism, which, in turn, is totally
unsubstantiated (see above).
Unsubstantiated though this claim is, it’s one of the bases
for the study of the communication tool reported in this article, to which we
now turn.
The purpose of the study, “to train nonspeakers in
holographic spelling,” implicitly assumes that the entire issue for nonspeakers
who point to letters to communicate is a matter of the sensorimotor environment. Somehow,
whether nonspeakers who can “isolate single letters and spell full words” on a
physical letterboard can do the same thing with a virtual letterboard is an
open question. Indeed, as far as the researchers are concerned, it’s “an
ambitious goal” for letterboard users, particularly given:
§ the lack of “haptic feedback” (the sensation of touch) associated with touching the physical letterboard
§ the need to wear a head-mounted device
§ the need to interact with holograms
§ the presence of unfamiliar researchers
§ the unfamiliar environment of the research study
...as if all these environmental changes are enough to interfere with the ability to point to letters to spell words.
The authors, furthermore, cite studies reporting that autistic
people have difficulties generalizing skills learned in one context to another.
But none of the studies they cite—or others on this topic—note difficulties
with generalizing something as straightforward as isolating single letters and
spelling words from one environmental context to another. It’s one thing to
have trouble generalizing the meaning of a word like “doctor” from one medical
setting to another, or generalizing conversation skills learned in a speech
therapy session to real-world conversational settings: both of these
generalization difficulties are common in autism. It’s quite another thing to
have trouble generalizing spelling from one context to another. Indeed,
anecdotes of hyperlexic autistic kids report no difficulties at all in this
department: hyperlexic autistic kids have been observed spelling words in all
sorts of contexts, and without any explicit instruction: from refrigerator
letters to sidewalk chalk, to letters they form out of playdough.
Indeed, what generalization difficulties there are in the
context of going from standard S2C to holographic S2C are probably quite
different. One difficulty is the altered context of CRP cueing: different
environments mean different CRP cues. With holographic S2C, the most obvious difference,
as we’ll see, is that the letter array is typically no longer held up by the CRP.
The other generalization difficulty has to do with letter
position. For those who are conditioned to spell letter sequences without
understanding what they’re spelling, one of the most salient things is letter
position. In cases where letters’ positions are held constant relative to other
letters, as they are on S2C letterboards, one of the most likely learned
elements of spelling is the sequence of movements around the letter array. To spell the word CAT on an S2C letterboard, for example, one might learn to first go to the top row and towards the left end of the board for “c,” then all the way to the left for “a,” and then down to the second-to-last row and far right for “t.” In addition to learning the positions for common words, one might also learn the positions for common letter sequences like “t-h” (far right of the second-to-last row followed by left-center of the second row).
Given this, it’s telling that the researchers meticulously
replicated in virtual space the physical letterboards the kids are used to,
letting them (or their CRPs) choose from “a variety of virtual letterboards
that resemble popular physical models used by the community,” allowing “a
nonspeaker to choose the one they are most familiar with.” That is, while the
researchers play up concerns about “haptic feedback” and wearing a headset
(which surely is quite annoying),
they fail to mention, and yet still carefully control, one key factor that is likely to generalize only if letter selection is driven by actual comprehension: the positions of the letters on the letterboard. If you’re
intentionally spelling the word CAT to talk about actual cats, your finger will
go to the correct letters, no matter where they appear in an array, even if it
takes some searching. But if CAT, for you, is simply a series of points to
letters in certain positions, you’re much more likely to be thrown by a change
in letter positions.
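Here’s a minimal sketch of the difference between the two strategies (in Python; the alphabetical grid and the scrambled alternative are hypothetical layouts invented for illustration, not any particular commercial letterboard):

```python
# A hypothetical alphabetical letterboard, represented as rows of letters.
# (Illustrative only: real S2C boards vary in dimensions and layout.)
FAMILIAR_BOARD = ["ABCDEF",
                  "GHIJKL",
                  "MNOPQR",
                  "STUVWX",
                  "YZ"]

# The same 26 letters rearranged: same shape, different positions.
SCRAMBLED_BOARD = ["QWERTY",
                   "UIOPAS",
                   "DFGHJK",
                   "LZXCVB",
                   "NM"]

def letter_positions(board):
    """Map each letter to its (row, column) coordinate on a given board."""
    return {ch: (r, c) for r, row in enumerate(board) for c, ch in enumerate(row)}

# Comprehension-driven spelling: store the word, search whatever board is present.
def spell_by_word(word, board):
    pos = letter_positions(board)
    return [pos[ch] for ch in word]

# Position-conditioned "spelling": replay a memorized coordinate sequence.
memorized_cat = spell_by_word("CAT", FAMILIAR_BOARD)   # [(0, 2), (0, 0), (3, 1)]

# On the scrambled board, the comprehension-driven speller still finds C, A, T...
print(spell_by_word("CAT", SCRAMBLED_BOARD))           # [(3, 3), (1, 4), (0, 4)]

# ...while the memorized coordinates now land on entirely different letters.
coord_to_letter = {p: ch for ch, p in letter_positions(SCRAMBLED_BOARD).items()}
print("".join(coord_to_letter[p] for p in memorized_cat))  # prints "EQZ", not "CAT"
```

If letter positions had varied between the physical and virtual boards, successful spelling on the new layout would have been evidence of comprehension-driven selection; replicating the familiar layouts leaves the position-conditioning explanation untested.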
In my next post, I’ll pick up here and take a closer look at
the actual study.
I’ll close here by noting that Jaswal et al. discuss this
paper in an article
for IEEE Spectrum, a magazine published by the Institute of Electrical and
Electronics Engineers, where they repeat many of the same faulty and
problematic claims about non-speaking autism.
REFERENCES
Alabood, L., Dow, T., Feeley, K. B., Jaswal, V. K., & Krishnamurthy, D. (2024). From letterboards to holograms: Advancing assistive technology for nonspeaking autistic individuals with the HoloBoard. CHI ’24: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, Article 71, 1–18. https://doi.org/10.1145/3613904.3642626
Chen, Y., Siles, B., & Tager-Flusberg, H. (2024). Receptive language and receptive-expressive discrepancy in minimally verbal autistic children and adolescents. Autism Research, 17(2), 381–394. https://doi.org/10.1002/aur.3079
Courchesne, V., Meilleur, A. A., Poulin-Lord, M. P., Dawson, M., & Soulières, I. (2015). Autistic children at risk of being underestimated: School-based pilot study of a strength-informed assessment. Molecular Autism, 6, 12. https://doi.org/10.1186/s13229-015-0006-3
Jaswal, V. K., Wayne, A., & Golino, H. (2020). Eye-tracking reveals agency in assisted autistic communication. Scientific Reports, 10(1), 7882. https://doi.org/10.1038/s41598-020-64553-9
