Over the past couple of years, S2C proponent Vikram Jaswal, Professor of Psychology at the University of Virginia and the father of an S2C user, and Diwakar Krishnamurthy, Professor of Electrical and Computer Engineering at the University of Calgary and the father of an RPM user, have co-authored several papers on the development of virtual tools that enable S2C users to select virtual letters rather than point to letters on physical letterboards. Like Jaswal’s other recent papers, each of these begins with purported justifications for S2C as a valid communication method, and each reports instances of allegedly independent communication by S2C users. Like Jaswal’s other papers, therefore, these papers are worth reviewing in detail. In three previous posts, I discussed Alabood et al. (2024), a study involving a virtual letterboard, or HoloBoard, and Alabood et al. (2025), a study involving the use of virtual letter cubes to spell words.
In my last post, I discussed an additional study in Jaswal’s 2023-2025 virtual reality oeuvre that is relevant to us here at facilitatedcommunication.org: the HoloGaze study (Nazari et al., 2024). In this post, I turn to an earlier study, the HoloLens lessons study (Shahidi et al., 2023). What makes all these studies relevant to facilitated communication is that, like all the other Jaswal studies we’ve reviewed:
· They involve language and literacy (some of
Jaswal et al.’s other studies instead involve visual games)
·         They appear to show levels of language comprehension that are generally not found in minimally speaking autism, particularly in those with significant motor impairments (see Chen et al., 2024). (Minimal speakers with autism who have significant motor impairments are, per Jaswal et al., representative of the population their studies are targeting.)
The HoloLens lessons study (Shahidi et
al., 2023), with Jaswal and Krishnamurthy listed, respectively, as
its fourth and fifth authors, reports on two studies of another virtual reality
tool used with minimal speakers with autism. Using the HoloLens 2, the authors developed a “virtual lesson” that presents academic material (verbal and pictographic) in a virtual environment and then tests students on that material through multiple-choice questions.
The HoloLens paper repeats many of the
same questionable claims about motor difficulties and regulation challenges in
non-speaking autism found in their earlier studies; I won’t repeat them here. The
authors then report on a pilot study using the HoloLens. It involved just one
participant (a 19-year-old male who
communicates via a letterboard and a “communication partner”) and a virtual
lesson about the history of bicycles:
The
lesson had 13 questions, 2 questions were two-option questions (one spelling
and one comprehension), 6 questions were three-option questions (1
comprehension and 5 spelling a full word questions), and 5 of them were
four-option questions (all comprehension).
Despite the fact that the multiple-choice questions ranged from two options (a 50% chance of success by guessing alone) to four, with the plurality offering three, the participant performed rather poorly:
The
participant took 38 attempts to answer the 13 questions. He had correct
responses for 9 of the questions, and thus, 29 of his attempts were
unsuccessful.
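A quick back-of-the-envelope calculation (mine, not the authors’): assuming one independent, equiprobable guess per question, blind guessing alone would be expected to yield about 2 × (1/2) + 6 × (1/3) + 5 × (1/4) ≈ 4.25 correct first attempts out of 13. That is the baseline against which 9 correct answers spread across 38 attempts should be judged.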
The authors make no mention of any prompting/cueing on the part of the participant’s communication partner. At the very least, the communication partner couldn’t have provided any cues via a held-up letterboard: there was no physical letterboard involved in the HoloLens lesson. Might this have been the reason for the participant’s poor performance? What the authors say next is highly suggestive:
Interestingly,
on four occasions the participant was asked to answer the question using the
physical letterboard immediately after they selected a wrong option in the HoloLens
2 system. In all these instances, the participant selected the correct option
on the physical board.
What do the authors conclude from this
highly suggestive contrast? Not what many of us would conclude. Instead they
write:
Hence,
it was clear that there were aspects of our design that the participant was not
comfortable with.
The authors don’t consider the more obvious explanation: namely, that what was really going on was a series of inadvertent tests of facilitator control, something akin to message-passing tests. As the authors’ account makes clear, the participant picked out incorrect answers when there was no letterboard for their facilitator to hold up, and then picked out correct answers as soon as there was.
Via the held-up letterboard, the participant answered “follow-up questions” to help the researchers “better understand how our design needed to be modified.” Among other things, the S2C-generated feedback included the message that
it
was hard to handle questions that presented more than three choices since the
limited field-of-view of the HoloLens 2 meant that more than three buttons
could not be seen simultaneously with a horizontal layout.
The follow-up study, accordingly,
avoided multiple-choice questions with more than three options. Of course, they
could instead have made the buttons small enough to allow more than three
options. Why they opted against this obvious alternative goes unexplained.
The follow-up study involved “five
nonspeaking autistic participants between 9 and 24 years of age” who “were
clients of the same communication partner.” In other words, like the
participant in the pilot study, they were all S2C users.
This study also used the bicycle history lesson, but now there were “9 questions: 6 with two options only, and 3 with three options.” In other words, this study gave participants an even higher chance of randomly selecting correct answers than the first study did. Furthermore, most of the questions were about spelling rather than comprehension:
Five
of these questions involve choosing a letter to spell an entire word
("Hobby") in sequential order and one question asked for the first
letter of the word "Bicycle" (B). The other three questions were
comprehension questions related to the content of the lesson.
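Another back-of-the-envelope check (again mine, not the authors’): with 6 two-option and 3 three-option questions, and assuming one independent, equiprobable guess per question, blind guessing would be expected to produce about 6 × (1/2) + 3 × (1/3) = 4 correct first attempts out of 9, or roughly 44%, compared with roughly 33% (about 4.25 out of 13) for the pilot study’s mix of questions. Guessing alone, in other words, could take participants further in the follow-up study than in the pilot.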
In addition, their communication
partners were allowed to prompt them:
As
a form of encouragement, the communication partner intervened with verbal
prompts, e.g., "go for it", "you almost got it", "just
a bit slower", and "withdraw your finger", whenever the
participants had repeated trouble with any given interaction.
These are the kinds of prompts that,
on their own, without accompanying board movements, can be enough to direct
those being subjected to them towards the correct choices—especially when there
are only two or three options. Nonetheless, the results, while better than in
the pilot study, still showed high error rates:
Of
the 36 questions posed, participants were correct on the first attempt for 24
questions, correct on the second attempt for 7 questions, and did not respond
correctly even on the second attempt for 5 questions.
As far as the authors are concerned,
however, these results aren’t a reflection of spelling and comprehension
challenges but, rather, of the way the participants moved their hands when
interacting with the virtual environment:
Based
on our thematic analysis, there were four different causes. Overshooting (6 out
of 12) was the most frequent reason for the failed first attempt. With
overshooting, a participant’s hand movements toward a particular button were
too fast to be recognized by the device. Following an overshoot, participants
typically triggered the wrong button. Another category of incorrect response
involved atypical gestures. For example, S3 tried to press a button with all
fingers and this was not recognized by the device... The third category we
coded was accidental trigger. For example, S2 had their hand extended and
moving even before the buttons appeared. When the buttons appeared, the device
registered the participant’s moving fingers resulting in an unintended press.
The "Other" category represents cases where the participants
genuinely seemed to press the wrong button. There was one instance each of
atypical gesture (S3) and accidental trigger (S2) in their second attempts.
Curiously, the authors don’t tell us how many instances of “Other” there were. But even if the rate of truly erroneous selections was as low as the authors suggest, the combination of choosing from just two or three options and facilitator cues like “you almost got it” and “withdraw your finger” is probably enough to induce correct responses.
Like the HoloGaze study, the HoloLens lessons study inadvertently shows us how errors decrease when facilitators have control. In the HoloGaze study, the contrast was between eye-gaze selection of virtual letters and the usual S2C-generated letter selection: it’s much easier to cue where someone points their index finger on a held-up letterboard than to cue where someone points their eyes in a virtual environment. In the HoloLens lessons studies, the contrast is between the pilot study, in which no verbal prompts were reported, and the follow-up study, in which the authors reported four examples of verbal prompts, some of them quite explicit.
Furthermore, both studies primarily assessed spelling rather than
language comprehension—and the first HoloLens lessons study, tantalizingly,
excludes the results from its one comprehension exercise: the optional open-ended
questions activity.
Returning to two studies we reviewed earlier in this series, we see a similar contrast in error rates under conditions of high vs. low facilitator influence. In the LetterBox study (Alabood et al., 2025), where facilitators had no access to the virtual environment, participants did not do well, and their successes could be explained by the familiarity of the questions. In the HoloBoard study (Alabood et al., 2024), where the facilitators interacted with their clients in the virtual environment but did not hold the board, participants did better, though not as well as they did in the held-up letterboard versions of S2C that they were used to. Clearly, most of them weren’t ready to graduate to S2C’s stationary-letters stage: the final stage, in which, after users “graduate” to held-up keyboards, the keyboards are placed on stationary surfaces. There are a few S2Ced individuals who do graduate to this stage, typing out messages with only the verbal and gestural (and occasional tactile) cues of their ever-present, ever-hovering facilitators to guide them.
On that note, we should keep in mind
something the Telepathy Tapes has shown all of us (except for those of us who prefer
to believe in telepathy): facilitators can completely control messages without touching
people or holding up the letterboards for them.
If Jaswal et al. had wanted to rule
out facilitator control (or, for that matter, telepathy), they could have
conducted a low-tech message-passing experiment before developing any of this
fancy equipment and inflicting it on vulnerable individuals who may be
prevented from giving genuine consent. As aversive as Jaswal et al. have
claimed (without evidence) that message-passing tests are to non-speakers with
autism, I’m guessing that these tests are far less aversive to them than
wearing headsets is: especially headsets that project virtual letters and
virtual lessons in front of them wherever they turn, and especially when they
are essentially being forced to consent to these indignities through S2C. Not to mention the fact that message-passing
tests offer FC/RPM/S2Ced individuals one possibility that none of these other
experiments offer: the possibility, should their “caregivers” accept the likely
S2C-invalidating results of such tests, of being truly liberated from their
facilitators—which is, after all, Jaswal et al.’s stated goal.
Finally, where is all this heading? To quote again from the HoloBoard study (Alabood et al., 2024):
§ “Head tracking could be exploited to
ensure the virtual letterboard remains in the nonspeaker’s field of view even
when they move.”
§ CRPs will continue to be present,
“dedicat[ing] their focus to other aspects of supporting a user, such as
promoting attention and regulation,” even when the HoloBoard user might wish to
“engage in private conversations with a third person.” (They propose to explore
ways that “would allow a CRP to support their nonspeaker while allowing the
nonspeaker” during these private conversations.)
§ Or possibly CRPs would be replaced by
a virtual CRP: “a personalized virtual CRP within the virtual environment. The
virtual CRP would emulate the behaviour and appearance of a user’s human CRP to
provide attentional and regulatory support.” But the virtual CRP may essentially
replicate the sorts of message-controlling cues done by the real-world CRP:
“Machine Learning (ML) techniques could be used to train the virtual CRP based
on observations from a user’s real-world interactions with their human CRP.”
While the authors don’t mention
that the “virtual CRPs” might learn to mimic the human CRPs’ prompts and board
movements (part of how CRPs unwittingly control letter selections), in an article about this paper in IEEE Spectrum, a magazine published by
the Institute of Electrical and Electronics Engineers, Jaswal et al. write that
“This virtual assistant [which now has a name: “ViC”]... can demonstrate motor
movements as a user is learning to spell with the HoloBoard, and also offers
verbal prompts and encouragement during a training session.”
§ Finally, in what are even more
powerful venues for message control, “[l]arge Language Models (LLMs) could be
integrated to reduce the effort needed to communicate thereby reducing user
fatigue. For example, such a system would allow the user to produce elaborate
responses by providing just a succinct prompt to an LLM.”
As I wrote
earlier, many of us predicted these last two items would be next on Jaswal’s
agenda. In other words:
§  Machine learning that allows S2C’s message-controlling prompts and cues to be taken over by machines and safely hidden away, within their obscure, machine-learned neural networks, from FC critics and others concerned about the communication rights of autistic non-speakers
§  LLMs that elaborate the short messages authored by actual or virtual CRPs into messages that are even more filled with predictable blather and bromides, and even more removed from what minimal speakers actually want to communicate, than FC/RPM/S2C-generated output already is. (Both items are sketched below.)
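To make these last two items concrete, here is a deliberately toy sketch. It is mine, not the authors’: every function, field, and data value in it is hypothetical, and call_llm is a stand-in for whatever model such a system might actually use. The first half shows how a virtual CRP trained on logs of a user’s sessions with a human CRP would simply encode whatever cue-to-letter contingencies appear in those logs; the second half shows how little of an LLM-“elaborated” response would originate with the nonspeaking user.

# Toy sketch only: hypothetical names and data, not drawn from any of the papers reviewed here.
from collections import Counter, defaultdict

# Part 1: a "virtual CRP" trained on observations of a human CRP.
# Hypothetical session log: (cue the human CRP gave, letter the user then selected).
session_log = [
    ("you almost got it", "H"),
    ("go for it", "O"),
    ("just a bit slower", "B"),
    ("you almost got it", "H"),
    ("withdraw your finger", "B"),
]

# "Training" here amounts to recording which letter followed each cue.
cue_to_letters = defaultdict(Counter)
for cue, letter in session_log:
    cue_to_letters[cue][letter] += 1

def virtual_crp_cue(target_letter):
    # Emit the cue that, in the training data, most often preceded the target letter.
    best_cue, best_count = None, 0
    for cue, letters in cue_to_letters.items():
        if letters[target_letter] > best_count:
            best_cue, best_count = cue, letters[target_letter]
    return best_cue  # e.g., virtual_crp_cue("H") returns "you almost got it"

# Part 2: LLM "elaboration" of a succinct prompt.
def call_llm(prompt):
    # Placeholder for a call to a real large language model.
    raise NotImplementedError

def elaborate(user_input):
    # The user's entire contribution is the brief input; every other word comes from the model.
    return call_llm("Expand this brief input into an elaborate, polished response: " + user_input)

However sophisticated the real systems turn out to be, the principle is the same: whatever cueing the human CRP supplied is what the machine learns to reproduce, and whatever the user did not spell is what the LLM supplies.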
The HoloBoard, the HoloLens, the
HoloGaze, the LetterBox: all of it rings so... hollow. And it’s painful to
think of how all the financial and intellectual capital that went into these
projects might have been spent to improve, rather than to diminish, the fragile lives of minimal speakers with autism.
REFERENCES
Alabood, L., Nazari, A., Dow, T., Alabood, S., Jaswal, V. K., & Krishnamurthy, D. (2025). Grab-and-Release Spelling in XR: A Feasibility Study for Nonspeaking Autistic People Using Video-Passthrough Devices. DIS ’25: Proceedings of the 2025 ACM Designing Interactive Systems Conference, 81–102. https://doi.org/10.1145/3715336.3735719
Alabood, L., Dow, T., Feeley, K. B., Jaswal, V. K., & Krishnamurthy, D. (2024). From Letterboards to Holograms: Advancing Assistive Technology for Nonspeaking Autistic Individuals with the HoloBoard. CHI ’24: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, Article 71, 1–18. https://doi.org/10.1145/3613904.3642626
Chen, Y., Siles, B., & Tager-Flusberg, H. (2024). Receptive language and receptive-expressive discrepancy in minimally verbal autistic children and adolescents. Autism Research, 17(2), 381–394. https://doi.org/10.1002/aur.3079
Nazari, A., Krishnamurthy, D., Jaswal,
V. K., Rathbun, M. K., & Alabood, L. (2024). Evaluating Gaze Interactions
within AR for Nonspeaking Autistic Users. 1–11. https://doi.org/10.1145/3641825.3687743
Shahidi, A., Alabood, L., Kaufman, K. M., Jaswal, V. K., Krishnamurthy, D., & Wang, M. (2023). AR-based educational software for nonspeaking autistic people – A feasibility study. 2023 IEEE International Symposium on Mixed and Augmented Reality.
