Friday, December 19, 2025

Can the HoloGaze liberate S2Ced individuals from their facilitators?

Over the past couple of years, S2C proponent Vikram Jaswal, Professor of Psychology at the University of Virginia and the father of an S2C user, and Diwakar Krishnamurthy, Professor of Electrical and Computer Engineering at the University of Calgary and the father of an RPM user, have co-authored several papers on the development of virtual tools that enable S2C users to select virtual letters rather than point to letters on physical letterboards. Like Jaswal’s other recent papers, each of these begins with purported justifications for S2C as a valid communication method, and each reports instances of allegedly independent communication by S2C users. Like Jaswal’s other papers, therefore, these papers are worth reviewing in detail. In my last three posts, I discussed Alabood et al. (2024), a study involving a virtual letterboard, or HoloBoard, and Alabood et al. (2025), a study involving the use of virtual letter cubes to spell words.

In my next two posts, I’ll discuss two additional studies in Jaswal’s 2023-2025 virtual reality oeuvre that are relevant to us here at facilitatedcommunication.org: Nazari et al. (2024), a study involving the “HoloGaze,” and Shahidi et al. (2023), a study involving the “HoloLens.” What makes these studies relevant is that, like all the other Jaswal studies we’ve reviewed:

·       They involve language and literacy (some of Jaswal et al.’s other studies instead involve visual games)

·       They appear to show levels of language comprehension that are generally not found in minimally speaking autism, particularly in those with significant motor impairments (see Chen et al., 2024).

(Minimal speakers with autism and significant motor impairments being representative, per Jaswal et al., of the population that their studies are targeting.)

In this post, I’ll focus on the HoloGaze study (Nazari et al., 2024), a paper in which Krishnamurthy and Jaswal are listed, respectively, as the second and third authors. This paper focuses on a virtual reality technology, complete with headsets, that allows users to select letters and spell words by shifting their gaze to different letters on a virtual display. Users can select a particular letter either by sustaining their gaze on that letter or by pushing a button while looking at it. The HoloGaze study differs from the HoloBoard and LetterBox studies in one important way: for the most part, it doesn’t ask its non-speaking autistic participants to do anything other than select letters and spell words, as opposed to tasks that require actual comprehension of language. (It’s worth noting that a significant proportion of autistic individuals are hyperlexic and able to spell words that they don’t understand.)
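
For readers unfamiliar with gaze-based selection, the mechanism described here amounts to a standard dwell-or-confirm scheme. Below is a minimal sketch of that logic in Python (my own illustration, using hypothetical names and a one-second dwell threshold; it is not the authors’ implementation):

    # Minimal sketch of dwell-or-button letter selection (hypothetical; not the
    # authors' code). Assumes the headset reports which letter tile the user's
    # gaze currently falls on and whether a confirm button is pressed.
    DWELL_THRESHOLD = 1.0  # seconds of sustained gaze needed to select a letter

    class GazeSelector:
        def __init__(self):
            self.current_letter = None  # letter tile the gaze is resting on
            self.dwell_time = 0.0       # how long the gaze has stayed there

        def update(self, gazed_letter, button_pressed, dt):
            """Call once per frame; returns a letter when one is selected."""
            if gazed_letter != self.current_letter:
                # Gaze moved to a different tile: restart the dwell timer.
                self.current_letter = gazed_letter
                self.dwell_time = 0.0
            elif gazed_letter is not None:
                self.dwell_time += dt
            # Selection happens by sustained gaze OR by a button press while gazing.
            if gazed_letter is not None and (
                self.dwell_time >= DWELL_THRESHOLD or button_pressed
            ):
                self.dwell_time = 0.0
                return gazed_letter
            return None

The point of the sketch is simply that the selection logic itself is trivial; what it presupposes is that the user already knows which letter to look at.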

Like the other two papers, the HoloGaze paper opens with the same unsubstantiated/false claims about minimal speakers with autism, beginning with the purported attentional, sensory, and motor issues:

[N]onspeaking autistic people have significant attentional, sensory, and motor challenges..., making it difficult for them to write by hand or type in conventional ways. For example, many nonspeaking autistic people have difficulty with the fine motor control needed to use a pen or pencil.

As in the LetterBox paper, the authors’ sources for these claims are (1) an entire book attributed to someone who has been subjected to facilitated communication (Higashida, 2013), and (2) a meta-analysis of 41 studies on motor coordination in autism, none of which included the skills involved in pointing to letters on a letterboard (Fournier, 2010). To give Jaswal et al. the benefit of the doubt, when they mention typing in “conventional ways,” they may mean ten-finger typing. Ten-finger typing, as compared with typing with an index finger, does involve significant motor coordination. But, to the extent that the authors’ goal here is to motivate an examination of eye-gaze technology, they haven’t managed to explain why, for minimal speakers with autism, eye pointing would be superior to index-finger pointing.

Furthermore, the authors make no mention of the one specific motor issue, sometimes alleged in non-speaking autism (see Handley & Handley, 2021), that might be relevant here: ocular apraxia, an impairment in the ability to move one’s eyes in a desired direction. Ocular apraxia would have been relevant to the HoloGaze study because the study it reports involved a training phase in which non-speaking autistics learned how to select letters via eye gaze. If there’s no eye gaze impairment in autism, and if the non-speakers already know how to pick out letters on demand, then why do the participants in this study require anything more than a brief explanation of how to use the system?

Beyond their claims about attentional, sensory, and motor challenges in autism, the authors claim, once again citing the memoir attributed to Higashida (Higashida, 2013), that the tendency of non-speaking autistics to have trouble sitting still while being prompted by their facilitators to point to letters on the letterboard is the result of “regulatory” issues:

They may also be in constant motion (which seems to serve a regulatory function... making training to use a keyboard while remaining seated difficult [)].

Left out of this discussion is a more likely possibility: boredom with a task that these individuals probably find meaningless, given the significant comprehension problems in non-speaking autism, which are particularly severe in those with significant motor impairments (see Chen et al., 2024).

Next, the authors claim that:

Some nonspeaking autistic people have learned to communicate by typing, which has allowed them to graduate from college and to write award-winning poetry.

Their sources are a National Public Radio piece about RPM user Elizabeth Bonker’s valedictory speech at her graduation from Rollins College (in which she stood at the podium while a pre-recorded speech, attributed to her, was broadcast), and an autobiographical piece attributed to David James Savarese, better known as Deej.

Despite the purported communicative successes of those who “communicate by typing,” the authors note that

the process by which they learned to type was lengthy and expensive, and often requires the ongoing support of another person.

The most likely explanation for these hurdles is that those subjected to “communication by typing,” aka Spelling to Communicate (S2C), depend on facilitator prompts and cues to determine which letters to select; for Jaswal et al., these hurdles are instead a reason to develop virtual reality technologies like the HoloGaze.

Importantly, however, the HoloGaze

allows a caregiver to join an AR session to train an autistic individual in gaze-based interactions as appropriate.

(“Caregiver” is used in this paper to denote the facilitator, or what S2C proponents call a “Communication and Regulation Partner.”)

Those familiar with evidence-based AAC devices might wonder, given that there already exist AAC devices that allow eye-gaze selections or “eye typing,” what the point is of this new tool. But the authors, acknowledging that such technologies already exist, claim that VR tools offer “unique additional benefits.” They cite:

·       The “wider context” in which these devices can be used: “e.g., not just at a special education classroom but also for personal use cases.”

·        “mobility”

·       The “3-dimensional environment shared between educators and students,” which “can facilitate the training process for those who require extensive training.”

These strike me as pretty weak arguments. Standard AAC devices are mobile and can be used in a broad set of contexts. And why is “extensive training” necessary? Believing, as Jaswal does, that his non-speaking participants already know how to identify letters and point to them to spell words, and that non-speaking autistic individuals, like people in general, look at the letters that they point to (Jaswal et al., 2020), it’s unclear why these participants would need “extensive training” in selecting virtual letters via the HoloGaze.

So who are these participants? In all the other studies we’ve reviewed so far, Jaswal et al.’s participants are explicitly described as individuals who point to letters on letterboards with the support of communication partners (the hallmarks of Spelling to Communicate and Rapid Prompting Method). This study differs: its inclusion criteria do not mention communication partners, only “experience in communicating using a physical letterboard.” Participants therefore could have included those who communicate independently: without someone hovering, holding up the letterboard, prompting, or otherwise cueing them and controlling their letter selections. And yet the identities of every single person thanked in the paper’s acknowledgments suggest otherwise. In order of mention, they are: a neuropsychologist who believes in S2C and is doing her own experiments on S2C-ed individuals, an RPM user, the mother of an S2C user, an S2C practitioner, another S2C practitioner, and S2C promoter and practitioner Elizabeth Vosseller.

Let’s turn, now, to the actual experiment. After an initial “tolerance and calibration” phase, participants underwent a “training phase” in which they first learned to select flashing tiles and then flashing letters from arrays of tiles/letters in which only the target item flashed. If they selected the flashing item, it turned green and was surrounded by a “bounding box” that indicated “successful eye gaze engagement.” Also providing cues were the “caregivers”:

The person assisting the user could also observe the tile’s colour (because they were also wearing a device), enabling them to provide verbal prompts to guide the user’s attention if necessary. 

If the participants really had the language comprehension skills that Jaswal et al. regularly attribute to them, why couldn’t they just be told, verbally, how the system worked? All they needed to know, in order to use the system correctly, was this: “Direct your eyes to the flashing letter. Then either look at it for one second, or push this button while you look at it.” One has to wonder whether, at some level of consciousness or sub-consciousness, Jaswal suspected that his participants didn’t have the kinds of comprehension skills that he has long attributed to the broader non-speaking population to which they belong.

The flashing letters and the facilitators’ “attentional prompts and cues” continued into the “Assisted Spelling” part of the testing phase, where the participants had to spell actual words by selecting letters from a virtual letterboard. The researchers dictated a set of three-letter words (JET, DRY, EVE, FAN, GUM, RUG, IVY), and the target letters flashed one by one, in order, as they were selected. Besides the flashing, prompting, and facilitator cueing, participants received one additional cue:

To increase their visual load gradually, participants did not see the full letterboard at the beginning of the testing phase. Instead, only the letters in the first word were presented. After the first word (and after all subsequent words), the additional letters required to spell the next word were added.

To justify this, the authors cite feedback from their “nonspeaking autistic consultant”:

This design was suggested by our nonspeaking autistic consultant to reduce visual clutter initially as the participant learned the affordances of a new interface.

This feedback was almost certainly generated by the facilitators/CRPs rather than by the non-speakers themselves.

Following the “Assisted Spelling Phase” was the “Unassisted Spelling Phase.” This involved four-letter words (ARCH, BALL, DUCK, EARL, FALL, GIFT, HOPE), without the target letter flashing. It’s unclear whether the facilitators were still allowed to prompt and cue, but in any case it’s a lot harder to detect and cue people’s eye gaze than their extended index fingers.

Curiously, there was a fair amount of attrition at each stage, from the “tolerance and calibration” phase to the “training” phases to the “testing” phases:

Twelve of the 13 participants who tolerated the device attempted the testing phases that involved spelling... Half of those who tried the testing phases (6 of 12) completed both the phase where the letters flashed in sequence ("assisted") and the phase where the letters did not flash in sequence ("unassisted").

In other words, less than half the participants made it through the whole study—short though it was. And yet, the authors professed to be impressed:

This is a remarkable number given this was their first experience using eye gaze interactions using a head-mounted AR device.

As for the actual results of those who completed at least one of the testing phases, the authors report two measures: correct interactions per minute (which presumably means “correct letter selections per minute”) and error rate. Mean correct interactions per minute decreased over time from 13.49 to 10.53, which the authors claim reflects increased complexity (more letters to choose from; the shift from assisted to unassisted spelling; the shift from three-letter words to four-letter words). At that rate, the unassisted spelling averaged between 5 and 6 seconds per correct letter: a surprisingly slow pace for anyone who actually knows how to spell the given words.
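
As a back-of-the-envelope check (my own arithmetic, not a figure reported in the paper), here is how those per-minute rates translate into seconds per correct letter:

    # Converting the reported rates (correct interactions per minute) into
    # seconds per correct letter selection (my arithmetic, not the authors').
    for rate in (13.49, 10.53):
        print(f"{rate} correct selections/min = {60 / rate:.1f} s per correct letter")
    # 13.49/min works out to about 4.4 s per letter; 10.53/min to about 5.7 s.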

Meanwhile, the error rate, “surprisingly” according to the authors, improved “from 0.42 (range: 0.07 - 0.79) to 0.39 (range: 0.08 - 0.63).” In other words, participants selected the correct letters, on average, only about 3/5 of the time, ranging (across the 6 participants) from near-complete success to an improvement, at the lowest end, from selecting the correct letter only about 1/5 of the time to selecting it a bit more than 1/3 of the time.
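
Again as my own arithmetic, and following the reading of “error rate” as the fraction of incorrect selections, here is how those figures translate into proportions of correct selections:

    # Translating the reported error rates into correct-selection rates,
    # reading "error rate" as the fraction of incorrect selections:
    # accuracy = 1 - error rate (my arithmetic, not the authors').
    for label, err in (("mean, before", 0.42), ("mean, after", 0.39),
                       ("weakest participant, before", 0.79),
                       ("weakest participant, after", 0.63)):
        print(f"{label}: error {err:.2f} -> about {1 - err:.0%} correct")
    # Mean accuracy is roughly 58-61%; the weakest performer improves from
    # about 21% to about 37% correct.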

While the official spelling tasks only involved three- and four-letter words, some participants, “if they had time and interest,” were asked to engage in one activity that actually required comprehension: “answer[ing] five questions with one word answers on the virtual board.” However, the authors tell us:

This data is not reported here because these tasks were completed by only a subset of participants and because of space limitations.

We can only wonder what the excluded data would have suggested to their readers about language and literacy skills in non-speaking autism.

REFERENCES

Alabood, L., Nazari, A., Dow, T., Alabood, S., Jaswal, V. K., & Krishnamurthy, D. (2025). Grab-and-release spelling in XR: A feasibility study for nonspeaking autistic people using video-passthrough devices. DIS '25: Proceedings of the 2025 ACM Designing Interactive Systems Conference, 81–102. https://doi.org/10.1145/3715336.3735719

Alabood, L., Dow, T., Feeley, K. B., Jaswal, V. K., & Krishnamurthy, D. (2024). From letterboards to holograms: Advancing assistive technology for nonspeaking autistic individuals with the HoloBoard. CHI '24: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, Article 71, 1–18. https://doi.org/10.1145/3613904.3642626

Chen, Y., Siles, B., & Tager-Flusberg, H. (2024). Receptive language and receptive-expressive discrepancy in minimally verbal autistic children and adolescents. Autism Research, 17(2), 381–394. https://doi.org/10.1002/aur.3079

Fournier, K. A., Hass, C. J., Naik, S. K., Lodha, N., & Cauraugh, J. H. (2010). Motor coordination in autism spectrum disorders: A synthesis and meta-analysis. Journal of Autism and Developmental Disorders, 40(10), 1227–1240. https://doi.org/10.1007/s10803-010-0981-3

Handley, J. B., & Handley, J. (2021). Underestimated: An autism miracle. Skyhorse.

Higashida, N. (2013). The reason I jump: The inner voice of a thirteen-year-old boy with autism. Knopf Canada.

Jaswal, V. K., Wayne, A., & Golino, H. (2020). Eye-tracking reveals agency in assisted autistic communication. Scientific Reports, 10(1), 7882. https://doi.org/10.1038/s41598-020-64553-9

Nazari, A., Krishnamurthy, D., Jaswal, V. K., Rathbun, M. K., & Alabood, L. (2024). Evaluating Gaze Interactions within AR for Nonspeaking Autistic Users. 1–11. https://doi.org/10.1145/3641825.3687743

Shahidi, A., Alabood, L., Kaufman, K. M., Jaswal, V. K., Krishnamurthy, D., & Wang, M. (2023). AR-based educational software for nonspeaking autistic people – A feasibility study. 2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR).

 

 

Thursday, December 4, 2025

Can Jaswal’s “LetterBoxes” substitute for letterboards?

Over the past couple of years, S2C proponent Vikram Jaswal, Professor of Psychology at the University of Virginia and the father of an S2C user, and Diwakar Krishnamurthy, Professor of Electrical and Computer Engineering at the University of Calgary and the father of an RPM user, have co-authored several papers on the development of virtual tools that enable S2C users to select virtual letters rather than point to letters on physical letterboards. Like Jaswal’s other recent papers, each of these begins with purported justifications for S2C as a valid communication method, and each reports instances of allegedly independent communication by S2C users. Like Jaswal’s other papers, therefore, these papers are worth reviewing in detail. In my last two posts, I discussed Alabood et al. (2024), a study involving a virtual letterboard, or HoloBoard. In this post, I turn to Alabood et al. (2025), a study involving the use of virtual letter cubes to spell words, in which Jaswal and Krishnamurthy are listed, respectively, as the fifth and sixth authors.

The LetterBox study is supposed to address one of the concerns that purportedly arose during the HoloBoard study. I say “purportedly” because this concern was based, in part, on the S2C-generated output of the participants, which most likely was controlled by their facilitators, or, to use the authors’ sneaky terminology, their “caregivers.” (In the HoloBoard paper, the authors used the term “communication and regulation partner,” or CRP.) The concern in question was that “accidental swiping on virtual letterboards” resulted in unintended letter selections. For users “who may engage in impulsive or repetitive tapping patterns, or users who experience more motor control challenges,” the authors implemented the LetterBox, a “grab-and-release interaction paradigm... designed to promote more deliberate selections of letters, thereby reducing errors.”

This claim about motor control challenges in selecting letters is based on the notion, popular with FC/RPM/S2C-proponents like Jaswal, that pointing is a motor-control challenge in non-speaking autism (see my post on their earlier paper). In particular, the authors claim that

·       “developing the motor and attentional skills required for typing is an arduous process, often taking years of practice and intensive caregiver support”

·       “some nonspeakers experience motor control challenges so significant that they may not develop the skills required for typing or letter-pointing.” Tellingly, their sources for this include FC-promoter Elizabeth Torres, a book attributed to someone who has been subjected to FC (Higashida, 2013), and a meta-analysis of 41 studies on motor coordination in autism, none of which examined pointing (Fournier, 2010).

·       “the grab-snap-release method offers a promising alternative to tapping”

·       “grab-and-release requires less fine motor coordination.”

As I noted earlier, however, pointing doesn’t appear on any of the checklists in standard motor skills evaluations. This indicates that it simply isn’t a specific motor issue for anyone—any more than other simple gestures like waving your hand. Manipulating objects, on the other hand, is more motorically complicated and appears regularly on such checklists. In other words, the authors have it precisely backwards.

The objects to be grabbed and manipulated in the LetterBox experiment were “virtual cubes (8 cm on each side), each with a letter of the alphabet printed on it; and a spelling area where users arrange selected cubes in the correct order on a set of spelling bases to form words.” These virtual cubes are presented in three different virtual environments: augmented reality (AR), mixed reality (MR), and virtual reality (VR).
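
To make the interaction concrete, here is a minimal sketch of the grab-and-release spelling task as I understand it from the paper’s description (hypothetical names and my own simplification, not the authors’ implementation). The placement check anticipates the feedback rule described further down, in which an incorrect placement simply fails to open the next spelling slot:

    # Minimal sketch of a LetterBox-style task: a picking area of lettered cubes
    # and a row of spelling bases that must be filled, in order, to form the
    # target word. Hypothetical; not the authors' implementation.
    import random

    def make_picking_area(target_word, extra_letters=""):
        """Scrambled cubes for the target word, optionally padded with distractors."""
        cubes = list(target_word + extra_letters)
        random.shuffle(cubes)
        return cubes

    def can_place(cube, next_index, target_word):
        """True if the cube may occupy the next empty spelling base; an incorrect
        cube simply doesn't open the next slot and must be returned."""
        return cube == target_word[next_index]

    # Example: unscrambling FISH
    target = "FISH"
    print("picking area:", make_picking_area(target))
    print("can 'F' go first?", can_place("F", 0, target))  # True
    print("can 'S' go first?", can_place("S", 0, target))  # False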

Despite its shift from virtual HoloBoards to virtual letter cubes, the LetterBox study largely resembles the HoloBoard study.

First, there’s its use of FC/RPM/S2C-generated information as sources for some of its claims. The paper cites Naoki Higashida, Elizabeth Bonker, Hari Srinivasan, and Edlyn Peña’s compendium of FC/RPM/S2C-generated testimonials. On the purported motor issues in non-speaking autism, it also cites FC-promoter Elizabeth Torres.

FC/RPM/S2C-generated information is also the basis for the input they get from their participants. The authors tell us that

·       “LetterBox was designed in collaboration with the nonspeaking autistic community (nonspeakers, educators, and therapists),” the first of these groups consisting of “three nonspeaking autistic individuals who use letterboards or keyboards.”

·       “Our consultants indicated that being able to see a familiar individual in these more immersive environments [AR, MR, and VR] could prevent anxiety.”

·       All participants tolerated the device throughout their session, with only one requesting a brief break midway through.

·       “Several participants expressed interest in continuing to use the device even after their session had concluded, suggesting a positive user experience.”

·       One purportedly stated: "It felt good to have this available to me... Can you give me more info on how I can get started at home? Really excited to be part of this, thanks." 

As I noted in my discussion of the HoloBoard paper, proclaiming this kind of participatory research allows the researchers to check off that box that so many in disability studies fields now prioritize, i.e., making sure your research is “inclusive” or “participatory”; in other words, mindful of the “nothing about us without us” mantra of the disability rights movement. One of the perverse effects of this new priority is that it’s incentivized those who do research on minimally-speaking autism to include non-speaking autistics in the only way that seems (to the more deluded or denialist of these researchers, anyway) to work: namely, through FC/RPM/S2C-generated output. And what this means, more broadly, for any research on non-speaking autistics is that a specific subset of non-speakers, namely those subjected to FC/RPM/S2C, will be the ones recruited to provide “feedback” on everything from experimental design to future directions for research—feedback that is most likely coming, ironically, from their non-autistic, non-clinically-trained CRPs.

As in the HoloBoard study, the authors play up the motor and cognitive demands of their virtual interface, suggesting that spelling words through the novel medium of grabbing and releasing virtual letter cubes involves an “interplay between cognitive and motor demands.” This, they say, explains “the slower reaction times for incorrect selections and the effect of increasing [the number of letter cube] options.” As I noted earlier, however, anecdotal reports about hyperlexic autistic kids describe no difficulty at all in transferring spelling skills to novel contexts: such kids have been observed spelling words, without any explicit instruction, in all sorts of media, from refrigerator letters to sidewalk chalk to letters they form out of playdough.

As he does in all of his papers, Jaswal also makes false claims about standard AAC devices falling short in the communicative options they offer non-speakers:

While useful for requesting items, such systems fall short of enabling fully expressive communication, as users are constrained by preselected images and words.

As I noted earlier, Jaswal appears to be persistently unaware that most AAC devices allow customization of images to reflect users’ needs and interests and have keyboard options that allow typing. Used in typing mode, an AAC device is just like a letterboard: the only difference is that no one is holding it up in front of the user and prompting and cueing their letter selections.  The authors’ implication that standard AAC devices are only useful for requesting items, meanwhile, is a warping of the empirical evidence. Empirical studies do find that minimal speakers with autism mostly use AAC devices to make requests. But Jaswal et al. are assuming here, as they did in their HoloBoard paper, that this is the fault of the device rather than a consequence of the well-known socio-communicative challenges that have defined autism since it was first identified eight decades ago by Leo Kanner. Non-autistic users of AAC tools—deaf children with sign language, individuals with mobility impairments—regularly advance beyond the requesting stages to a whole range of communicative acts.

Finally, as in their HoloBoard paper, the authors state that “Some nonspeakers have learned to communicate by typing on a keyboard or tablet, a process that took many years and required consistent opportunities to practice the necessary motor skills.” And while there are occasional reports of non-speakers who truly do type independently, and while, like all of us who learned to type, years of consistent practice are needed to acquire “the necessary motor skills,” truly independent typing in truly minimally-speaking autism (i.e., not people who can/could speak well enough to hold a conversation) appears to be extremely rare. Furthermore, none of the few truly minimally speaking autistic individuals who type completely independently are people who started out using FC/RPM/S2C. The authors, however, suggest otherwise: their sources for the claim that “some nonspeakers have learned to communicate by typing on a keyboard or tablet” are two individuals who have been subjected to RPM. One citation is of Elizabeth Bonker’s Rollins College valedictory address, in which you can see Bonker standing at the podium while a pre-recorded speech, attributed to her, is broadcast. The other is of a Psychology Today piece attributed to Hari Srinivasan, entitled “Dignity Remains Elusive for Many Disabled People” and subtitled “A Personal Perspective: I often feel like a charity case.”

The authors do note that:

In some cases, nonspeakers learned to type with assistance from another person, a teaching method that has generated controversy because of the potential that the assistant could influence what the nonspeaker types.

But rather than citing any of the many studies that show just how powerful such influence is, and how it goes far beyond what most people might think of as influence to actual message control, they simply cite the American Speech-Language-Hearing Association’s Position Statement on Rapid Prompting Method, and then repurpose it as a segue to the virtual LetterBox:

If it were possible to provide automated support to a nonspeaker as they learned the motor skills required to type, this concern would no longer apply.

As with the HoloBoard study, the results of the LetterBox study might initially seem impressive:

·       Participants only got support from their “caregivers” and feedback from the virtual environment in the training stages, where they were asked to unscramble words. Caregiver support included verbal prompts and what the authors call “mirroring”: indicating the correct letter with a physical letterboard. Virtual feedback after incorrect letter placement involved not providing a new slot for the next letter, thus prompting the user to “return the cube to the picking platform and select a different one.”

·       Even in the training stages, “[s]everal participants... demonstrated strong independence,” with no verbal prompts or mirroring, no incorrect responses, and “with [one participant] completing 45 [of their] correct interactions entirely independently.” Several additional participants “performed well, completing most interactions independently, though they occasionally required prompts or cues.”

·       Caregivers were present but limited in what prompts and cues they could provide. While the caregivers “remain[] visible to the user[s] via a dynamic passthrough window,” unlike in the HoloBoard study they “see the letters on the picking platform in a different order than the user.” In addition, “when the participant grabs and moves a letter, this movement is not synchronized for the caregiver.” Presumably, then, the caregivers, however much they prompted and mirrored, would be more limited than typical CRPs are in their ability to guide participants to specific letters. Furthermore, the authors suggest that the caregivers were only allowed to use generic prompts like “‘Keep going,’ ‘What’s your next letter?’ or ‘Remember to open your hand when grabbing.’”

·       In the experiment’s test phase, participants faced the more challenging task of spelling words using the entire alphabet, as opposed to unscrambling scrambled letters into words.

·       In the testing phase, “[p]articipants achieved a mean spelling accuracy of  90.91%. In the open-spelling task, 14 participants provided answers, often independently and with minimal support.”

·       The 19 participants “completed spelling tasks with a mean accuracy of 90.91%. Most increased interaction speed across immersion levels, and 14 participants provided one-word answers to open-ended questions using LetterBox, often independently.”

·       “6 participants... achieved perfect accuracy across all three immersion levels”

But as with the HoloBoard study, the results, on closer inspection, are much less impressive—particularly as evidence for authentic message authorship. First, while some participants were highly successful during training, “[o]ther participants required multiple attempts to be successful” and “showed a greater reliance on assistance.” One had “11 mirrored interactions and 8 prompted responses;” another “required 5 mirrored and 8 prompted responses;” several “recorded a few incorrect responses (between 3 − 5 per participant).”

Given that all the training involved was unscrambling the 3-6 virtual letter cubes that spell common words like BED, FISH, HOUSE, and GARDEN after the researcher said the word aloud, this is arguably somewhat surprising. The participants, furthermore, were all S2C users, and so had many months, or years, of practice pointing to letter sequences that spell common words. As the authors disclose, the participants had “at least one year of experience using a letterboard or keyboard to ensure they had sufficient spelling proficiency to complete the study tasks.” Of the 19 participants, 1 had just under one year’s experience; most had at least three; one had been at it for 10 years.

Furthermore, the tasks were primarily spelling tasks, even in the test condition, where participants were no longer unscrambling letters but choosing from a full alphabet of letter cubes. In that phase, they had to answer five specific questions. Besides “Can you spell your name,” the questions were randomly selected from the following:

·       “What is your favourite colour?"

·       "What is your favourite meal?"

·       "What is your favourite hobby?"

·       "What is your favourite movie?"

·       "What is your favourite book?"

·       "What is your favourite song?"

And this didn’t go as well as the unscrambling tasks did:

Of the 19 participants, 15 provided at least one valid response to the open-ended questions. Among them, 8 answered all 5 questions, 6 answered 4 questions, and 1 answered only 1 question.

Furthermore, it’s telling that the researchers chose questions whose answers can only be assessed for validity rather than actual correctness. That is, except for “Can you spell your name,” something that many minimal speakers with autism learn to do independently of S2C, it’s hard to know whether an answer is correct (how do we verify that “blue” is a participant’s favorite color?) as opposed to valid (“blue” is at least a valid answer to a “what color” question). What would have happened if participants had been asked, say, “How many legs does a bird have?” or “Which is heavier, a balloon or a bowling ball?”

As for the 14 who answered all, or nearly all, of the 5 questions correctly, I don’t profess to be able to explain how they did this. But this experiment, as designed, doesn’t rule out that some or all of them did so simply by having learned, over years of answering questions both inside and outside of S2C contexts, (1) the associations between certain key words (“color,” “meal,” “hobby,” etc.) spoken within a certain common frame (“what is your favorite X”), and (2) certain specific letter sequences. Especially given the rote memorization skills in autism, and the significant comprehension deficits in non-speaking autism, particularly in those with motor impairments (Chen et al., 2024), this would appear to be the more likely route to some of the correct answers that were obtained, as opposed to intentional responding to the semantic content of the questions asked.

As an aside, this paper, unlike the HoloBoard paper, contained some interesting details about the speed of the letter selections. The average selection time during the unscrambling activities was nearly 5½ seconds per letter, with time increasing when there were more options to choose from; participants were fastest when only one letter remained. This strikes me as extraordinarily slow. The average selection time during the five questions in the test phase was similar, though the authors make much of the fact that it improved from the first question to the fifth, which they claim “reflect[s] continued improvement in efficiency and comfort with grab-and-release as the session progressed.”
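
For a sense of what that per-letter rate means at the word level, here is a quick back-of-the-envelope calculation (my arithmetic, not the authors’):

    # At roughly 5.5 seconds per letter selection, even short words take a long
    # time to spell (my arithmetic, based on the paper's reported rate).
    SECONDS_PER_LETTER = 5.5
    for word in ("BED", "FISH", "GARDEN"):
        print(f"{word}: about {len(word) * SECONDS_PER_LETTER:.1f} seconds")
    # BED: ~16.5 s; FISH: ~22 s; GARDEN: ~33 s, i.e., over half a minute for a
    # six-letter word.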

I’m not sure what this extremely slow letter selection process means, but I’m pretty sure it isn’t a reflection of the alleged motor difficulties the authors have asserted without empirical support, and that it isn’t good news for those who wish to make claims about intact language comprehension and literacy in S2C-ed individuals.

REFERENCES

Alabood, L., Nazari, A., Dow, T., Alabood, S., Jaswal, V. K., & Krishnamurthy, D. (2025). Grab-and-release spelling in XR: A feasibility study for nonspeaking autistic people using video-passthrough devices. DIS '25: Proceedings of the 2025 ACM Designing Interactive Systems Conference, 81–102. https://doi.org/10.1145/3715336.3735719

Chen, Y., Siles, B., & Tager-Flusberg, H. (2024). Receptive language and receptive-expressive discrepancy in minimally verbal autistic children and adolescents. Autism Research, 17(2), 381–394. https://doi.org/10.1002/aur.3079

Fournier, K. A., Hass, C. J., Naik, S. K., Lodha, N., & Cauraugh, J. H. (2010). Motor coordination in autism spectrum disorders: A synthesis and meta-analysis. Journal of Autism and Developmental Disorders, 40(10), 1227–1240. https://doi.org/10.1007/s10803-010-0981-3

Higashida, N. (2013). The reason I jump: The inner voice of a thirteen-year-old boy with autism. Knopf Canada.

Torres, E. B., Brincker, M., Isenhower, R. W., Yanovich, P., Stigler, K. A., Nurnberger, J. I., Metaxas, D. N., & José, J. V. (2013). Autism: The micro-movement perspective. Frontiers in Integrative Neuroscience, 7, 32. https://doi.org/10.3389/fnint.2013.00032