Thursday, December 4, 2025

Can Jaswal’s “LetterBoxes” substitute for letterboards?

Over the past couple of years, S2C proponent Vikram Jaswal, Professor of Psychology at the University of Virginia and the father of an S2C user, and Diwakar Krishnamurthy, Professor of Electrical and Computer Engineering at the University of Calgary and the father of an RPM user, have co-authored several papers on the development of virtual tools that enable S2C users to select virtual letters rather than pointing to letters on physical letterboards. Like Jaswal’s other recent papers, each of these begins with purported justifications for S2C as a valid communication method, and each reports instances of allegedly independent communication by S2C users. Like Jaswal’s other papers, therefore, these papers are worth reviewing in detail. In my last two posts, I discussed Alabood et al. (2024), a study involving a virtual letterboard, or HoloBoard. In this post, I turn to Alabood et al. (2025), a study involving the use of virtual letter cubes to spell words, in which Jaswal and Krishnamurthy are listed, respectively, as the fifth and sixth authors.

The LetterBox study is supposed to address one of the concerns that purportedly arose during the HoloBoard study. I say “purportedly” because this concern was based, in part, on the S2C-generated output of the participants, which most likely was controlled by their facilitators, or, to use the authors’ sneaky terminology, their “caregivers.” (In the HoloBoard paper, the authors used the term “communication and regulation partner,” or CRP.) The concern in question was that “accidental swiping on virtual letterboards” resulted in unintended letter selections. For “[users] who may engage in impulsive or repetitive tapping patterns, or users who experience more motor control challenges,” the authors implemented the LetterBox, a “grab-and-release interaction paradigm... designed to promote more deliberate selections of letters, thereby reducing errors.”

This claim about motor control challenges in selecting letters is based on the notion, popular with FC/RPM/S2C-proponents like Jaswal, that pointing is a motor-control challenge in non-speaking autism (see my post on their earlier paper). In particular, the authors claim that

·       “developing the motor and attentional skills required for typing is an arduous process, often taking years of practice and intensive caregiver support”

·       “some nonspeakers experience motor control challenges so significant that they may not develop the skills required for typing or letter-pointing.” Tellingly, their sources for this include FC-promoter Elizabeth Torres, a book attributed to someone who has been subjected to FC (Higashida, 2013), and a meta-analysis of 41 studies on motor coordination in autism, none of which examined pointing (Fournier, 2010).

·       “the grab-snap-release method offers a promising alternative to tapping”

·       “grab-and-release requires less fine motor coordination.”

As I noted earlier, however, pointing doesn’t appear on any of the checklists in standard motor skills evaluations. This indicates that it simply isn’t a specific motor issue for anyone—any more than other simple gestures like waving your hand. Manipulating objects, on the other hand, is more motorically complicated and appears regularly on such checklists. In other words, the authors have it precisely backwards.

The objects to be grabbed and manipulated in the LetterBox experiment were “virtual cubes (8 cm on each side), each with a letter of the alphabet printed on it; and a spelling area where users arrange selected cubes in the correct order on a set of spelling bases to form words.” These virtual cubes are presented in three different virtual environments: augmented reality (AR), mixed reality (MR), and virtual reality (VR).
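To make the grab-and-release paradigm concrete, here is a minimal sketch, in Python, of the gating difference between tapping and grab-and-release. It is my own illustration, not code or detail from the paper: the LetterCube class, the handler names, and the spelling-base check are all assumptions.

import dataclasses

@dataclasses.dataclass
class LetterCube:
    letter: str
    held: bool = False

def on_tap(cube, transcript):
    # Tap paradigm: any contact, including an accidental swipe across
    # the board, immediately registers the letter.
    transcript.append(cube.letter)

def on_grab(cube):
    # Grab paradigm, step 1: contact only picks the cube up.
    cube.held = True

def on_release(cube, over_spelling_base, transcript):
    # Grab paradigm, step 2: the letter registers only if the held cube
    # is released over a spelling base; an accidental drop registers nothing.
    if cube.held and over_spelling_base:
        transcript.append(cube.letter)
    cube.held = False

The point of the extra step is that a selection requires a sustained, targeted action (grab, carry, release in the right place) rather than a single instant of contact.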

Despite its shift from virtual HoloBoards to virtual letter cubes, the LetterBox study largely resembles the HoloBoard study.

First, there’s its use of FC/RPM/S2C-generated information as sources for some of its claims. The paper cites Naoki Higashida, Elizabeth Bonker, Hari Srinivasan, and Edlyn Peña’s compendium of FC/RPM/S2C-generated testimonials. On the purported motor issues in non-speaking autism, it also cites FC-promoter Elizabeth Torres.

FC/RPM/S2C-generated information is also the basis for the input they get from their participants. The authors tell us that

·       “LetterBox was designed in collaboration with the nonspeaking autistic community (nonspeakers, educators, and therapists),” the first of these groups consisting of “three nonspeaking autistic individuals who use letterboards or keyboards.”

·       “Our consultants indicated that being able to see a familiar individual in these more immersive environments [AR, MR, and VR] could prevent anxiety.”

·       All participants tolerated the device throughout their session, with only one requesting a brief break midway through.

·       “Several participants expressed interest in continuing to use the device even after their session had concluded, suggesting a positive user experience.”

·       One purportedly stated: “It felt good to have this available to me... Can you give me more info on how I can get started at home? Really excited to be part of this, thanks.”

As I noted in my discussion of the HoloBoard paper, proclaiming this kind of participatory research allows the researchers to check off that box that so many in disability studies fields now prioritize, i.e., making sure your research is “inclusive” or “participatory”; in other words, mindful of the “nothing about us without us” mantra of the disability rights movement. One of the perverse effects of this new priority is that it’s incentivized those who do research on minimally-speaking autism to include non-speaking autistics in the only way that seems (to the more deluded or denialist of these researchers, anyway) to work: namely, through FC/RPM/S2C-generated output. And what this means, more broadly, for any research on non-speaking autistics is that a specific subset of non-speakers, namely those subjected to FC/RPM/S2C, will be the ones recruited to provide “feedback” on everything from experimental design to future directions for research—feedback that is most likely coming, ironically, from their non-autistic, non-clinically-trained CRPs.

As in the HoloBoard study, the authors play up the motor and cognitive demands of their virtual interface, suggesting that spelling words through the novel medium of grabbing and releasing virtual letter cubes involves an “interplay between cognitive and motor demands.” This, they say, explains “the slower reaction times for incorrect selections and the effect of increasing [the number of letter cube] options.” As I noted earlier, anecdotes of hyperlexic autistic kids report no difficulties at all transferring spelling skills to novel contexts: such kids have been observed spelling words in all sorts of media, and without any explicit instruction: from refrigerator letters to sidewalk chalk, to letters they form out of playdough.

As he does in all of his papers, Jaswal also makes false claims about standard AAC devices falling short in the communicative options they offer non-speakers:

While useful for requesting items, such systems fall short of enabling fully expressive communication, as users are constrained by preselected images and words.

As I noted earlier, Jaswal appears to be persistently unaware that most AAC devices allow customization of images to reflect users’ needs and interests and have keyboard options that allow typing. Used in typing mode, an AAC device is just like a letterboard: the only difference is that no one is holding it up in front of the user and prompting and cueing their letter selections. The authors’ implication that standard AAC devices are only useful for requesting items, meanwhile, is a warping of the empirical evidence. Empirical studies do find that minimal speakers with autism mostly use AAC devices to make requests. But Jaswal et al. are assuming here, as they did in their HoloBoard paper, that this is the fault of the device rather than a consequence of the well-known socio-communicative challenges that have defined autism since it was first identified eight decades ago by Leo Kanner. Non-autistic users of AAC tools—deaf children with sign language, individuals with mobility impairments—regularly advance beyond the requesting stages to a whole range of communicative acts.

Finally, as in their HoloBoard paper, the authors state that “Some nonspeakers have learned to communicate by typing on a keyboard or tablet, a process that took many years and required consistent opportunities to practice the necessary motor skills.” There are occasional reports of non-speakers who truly do type independently, and, as with everyone who learns to type, years of consistent practice are needed to acquire “the necessary motor skills.” But truly independent typing in truly minimally-speaking autism (i.e., not people who can/could speak well enough to hold a conversation) appears to be extremely rare. Furthermore, none of the few truly minimally speaking autistic individuals who type completely independently are people who started out using FC/RPM/S2C. However, the authors suggest otherwise: their sources for the claim that “some nonspeakers have learned to communicate by typing on a keyboard or tablet” are two individuals who have been subjected to RPM. One citation is of Elizabeth Bonker’s pre-recorded Rollins College valedictory speech, in which you can see Bonker standing at the podium while a pre-recorded speech, attributed to her, is broadcast. The other is of a piece attributed to Hari Srinivasan in Psychology Today entitled “Dignity Remains Elusive for Many Disabled People,” subtitled “A Personal Perspective: I often feel like a charity case.”

The authors do note that:

In some cases, nonspeakers learned to type with assistance from another person, a teaching method that has generated controversy because of the potential that the assistant could influence what the nonspeaker types.

But rather than citing any of the many studies that show just how powerful such influence is, and how it goes far beyond what most people might think of as influence to actual message control, they simply cite the American Speech-Language-Hearing Association’s Position Statement on Rapid Prompting Method—and then repurpose it into a segue to the virtual LetterBox:

If it were possible to provide automated support to a nonspeaker as they learned the motor skills required to type, this concern would no longer apply.

As with the HoloBoard study, the results of the LetterBox study might initially seem impressive:

·       Participants received support from their “caregivers” and feedback from the virtual environment only in the training stages, where they were asked to unscramble words. Caregiver support included verbal prompts and what the authors call “mirroring”: indicating the correct letter with a physical letterboard. Virtual feedback after incorrect letter placement involved not providing a new slot for the next letter, thus prompting the user to “return the cube to the picking platform and select a different one.”

·       Even in the training stages, “[s]everal participants... demonstrated strong independence,” with no verbal prompts or mirroring, with no incorrect responses, and “with [one participant] completing 45 [of their] correct interactions entirely independently.” Several additional participants “performed well, completing most interactions independently, though they occasionally required prompts or cues.”

·       Caregivers were present but limited in what prompts and cues they could provide. While the caregivers “remain[] visible to the user[s] via a dynamic passthrough window,” unlike in the HoloBoard study they “see the letters on the picking platform in a different order than the user.” In addition, “when the participant grabs and moves a letter, this movement is not synchronized for the caregiver.” Presumably, then, the caregivers, however much they prompted and mirrored, would be more limited than typical CRPs are in their ability to guide participants to specific letters (see the sketch after this list). Furthermore, the authors suggest that the caregivers were only allowed to use generic prompts like “‘Keep going,’ ‘What’s your next letter?’ or ‘Remember to open your hand when grabbing.’”

·       In the experiment’s test phase, participants faced the more challenging task of spelling words by choosing from the entire alphabet, as opposed to unscrambling a given set of letters into words.

·       In the testing phase, “[p]articipants achieved a mean spelling accuracy of 90.91%. In the open-spelling task, 14 participants provided answers, often independently and with minimal support.”

·       The 19 participants “completed spelling tasks with a mean accuracy of 90.91%. Most increased interaction speed across immersion levels, and 14 participants provided one-word answers to open-ended questions using LetterBox, often independently.”

·       “6 participants... achieved perfect accuracy across all three immersion levels”
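Here is the sketch promised in the caregiver bullet above: a minimal illustration, under my own assumptions rather than from the paper’s code, of how a de-synchronized caregiver view blocks position-based cueing. If each viewer sees the picking platform’s letters in an independently shuffled order, a caregiver’s glance or gesture toward a position on the platform cannot reliably single out a particular letter in the user’s view.

import random
import string

def platform_layout(seed):
    # One viewer's private ordering of the 26 letter cubes.
    rng = random.Random(seed)
    letters = list(string.ascii_uppercase)
    rng.shuffle(letters)
    return letters

user_view = platform_layout(seed=1)
caregiver_view = platform_layout(seed=2)

# Position 0 holds a different letter for each viewer, so pointing or
# glancing at "the first cube" means different things to each of them.
print(user_view[0], caregiver_view[0])

Note, though, that this blocks only position-based cueing: verbal prompts, and cues about when to grab or release, remain fully available.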

But as with the HoloBoard study, the results, on closer inspection, are much less impressive—particularly as evidence for authentic message authorship. First, while some participants were highly successful during training, “[o]ther participants required multiple attempts to be successful” and “showed a greater reliance on assistance.” One had “11 mirrored interactions and 8 prompted responses;” another “required 5 mirrored and 8 prompted responses;” several “recorded a few incorrect responses (between 3 − 5 per participant).”

Given that all the training involved was spelling common three-to-six-letter words like BED, FISH, HOUSE, and GARDEN (unscrambling the three to six virtual letter cubes that spell each word after the researcher said it aloud), this is arguably somewhat surprising. The participants, furthermore, were all S2C users, and so had many months, or years, of practice pointing to letter sequences that spell common words. As the authors disclose, the participants had “at least one year of experience using a letterboard or keyboard to ensure they had sufficient spelling proficiency to complete the study tasks.” Of the 19 participants, one had just under one year’s experience; most had at least three; one had been at it for 10 years.

Furthermore, the tasks were primarily spelling tasks—even in the test condition, where they were no longer unscrambling letters, but choosing from a full alphabet of letter cubes. In that phase, they had to answer five specific questions. Besides “Can you spell your name,” the questions were randomly selected from the following:

·       “What is your favourite colour?”

·       “What is your favourite meal?”

·       “What is your favourite hobby?”

·       “What is your favourite movie?”

·       “What is your favourite book?”

·       “What is your favourite song?”

And this didn’t go as well as the unscrambling tasks did:

Of the 19 participants, 15 provided at least one valid response to the open-ended questions. Among them, 8 answered all 5 questions, 6 answered 4 questions, and 1 answered only 1 question.

Furthermore, it’s telling that the researchers chose questions whose answers can only be assessed for validity rather than actual correctness. That is, except for “Can you spell your name,” which is a question that many minimal speakers with autism learn to answer independently of S2C, it’s hard to know whether an answer is correct (how do we verify that “blue” is a participant’s favorite color?) as opposed to valid (“blue” is at least a valid answer to a “what color” question). What would have happened if participants had been asked, say, “How many legs does a bird have,” or “Which is heavier, a balloon or a bowling ball?”

As for the 14 who answered all, or nearly all, of the 5 questions correctly, I don’t profess to be able to explain how they did this. But this experiment, as designed, doesn’t rule out that some or all of them did so simply by having learned, over years of answering questions both inside and outside of S2C contexts, (1) the associations between certain key words (“color,” “meal,” “hobby,” etc.) spoken within a certain common frame (“what is your favorite X”) and (2) certain specific letter sequences. Especially given the rote memorization skills in autism, and the significant comprehension deficits in non-speaking autism, particularly in those with motor impairments (Chen et al., 2024), this would appear to be the more likely route to some of the correct answers that were obtained—as opposed to intentional responding to the semantic content of the questions they were asked.

As an aside, this paper, unlike the HoloBoard paper, contained some interesting details about the speed of the letter selections. The average selection time during the unscrambling activities was nearly 5½ seconds per letter, with time increasing when there were more options to choose from; participants were fastest when only one letter remained. This strikes me as extraordinarily slow. The average selection time during the five questions in the test phase was similar, though the authors make much of the fact that it improved from the first question to the fifth, in what they claim “reflect[s] continued improvement in efficiency and comfort with grab-and-release as the session progressed.”
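For a sense of just how slow this is, here is the back-of-the-envelope arithmetic, using the paper’s figure of about 5.5 seconds per letter and the standard five-letters-per-word convention for typing speed; the comparison to typical typists is my own approximate benchmark, not anything from the paper.

seconds_per_letter = 5.5                      # reported average, unscrambling tasks
letters_per_minute = 60 / seconds_per_letter  # about 10.9 letters per minute
words_per_minute = letters_per_minute / 5     # about 2.2 words per minute

A typical adult typist manages something on the order of 40 words per minute; even slow hunt-and-peck typing is several times faster than 2.2 words per minute.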

I’m not sure what this extremely slow letter selection process means, but I’m pretty sure it isn’t a reflection of the alleged motor difficulties the authors have asserted without empirical support, and that it isn’t good news for those who wish to make claims about intact language comprehension and literacy in S2C-ed individuals.

REFERENCES

Alabood, L., Nazari, A., Dow, T., Alabood, S., Jaswal, V. K., & Krishnamurthy, D. (2025). Grab-and-Release Spelling in XR: A Feasibility Study for Nonspeaking Autistic People Using Video-Passthrough Devices. DIS '25: Proceedings of the 2025 ACM Designing Interactive Systems Conference, 81–102. https://doi.org/10.1145/3715336.3735719

Chen, Y., Siles, B., & Tager-Flusberg, H. (2024). Receptive language and receptive-expressive discrepancy in minimally verbal autistic children and adolescents. Autism Research, 17(2), 381–394. https://doi.org/10.1002/aur.3079

Fournier, K. A., Hass, C. J., Naik, S. K., Lodha, N., & Cauraugh, J. H. (2010). Motor coordination in autism spectrum disorders: A synthesis and meta-analysis. Journal of Autism and Developmental Disorders, 40(10), 1227–1240. https://doi.org/10.1007/s10803-010-0981-3

Higashida, N. (2013). The reason I jump: The inner voice of a thirteen-year-old boy with autism. Knopf Canada.

Torres, E. B., Brincker, M., Isenhower, R. W., Yanovich, P., Stigler, K. A., Nurnberger, J. I., Metaxas, D. N., & José, J. V. (2013). Autism: The micro-movement perspective. Frontiers in Integrative Neuroscience, 7, 32. https://doi.org/10.3389/fnint.2013.00032

 

Tuesday, November 18, 2025

Can Jaswal’s “HoloBoards” substitute for letterboards? Part II

This post picks up where I left off in my discussion of a recent paper co-authored by Vikram Jaswal, Professor of Psychology at the University of Virginia and the father of an S2C user, and Diwakar Krishnamurthy, Professor of Electrical and Computer Engineering at the University of Calgary and the father of an RPM user. This paper, Alabood et al. 2024 (Jaswal and Krishnamurthy are listed, respectively, as its fourth and fifth authors), discusses the development and preliminary study of a virtual letterboard, or what the authors call a “hologram” or “HoloBoard.” Here, instead of having a facilitator, or what the authors call a “Communication and Regulation Partner” or “CRP,” hover next to them and hold up the letterboard in their faces, S2C users don a virtual reality headset that projects a virtual letterboard, or “HoloBoard,” in front of them that follows them around wherever they turn their heads. Purportedly, this is an improvement on physical S2C and gives users more autonomy and privacy.

In my previous post, I left off with the authors’ conviction that the challenges that autistic individuals have in generalizing a skill learned in one environment to other environments, together with the environmental differences between pointing to letters on physical letterboards vs. holographic ones, mean that S2C-ed individuals need to go through explicit training in order to transfer the skill of pointing to letters on a letterboard to pointing to letters on HoloBoards. I questioned this assumption, in part, by noting that the generalization difficulties in autism don’t include difficulty spelling words in one environment vs. another, and that:

hyperlexic autistic kids have been observed spelling words in all sorts of contexts, and without any explicit instruction: from refrigerator letters to sidewalk chalk, to letters they form out of playdough.

Given the design of the HoloBoard, there’s good reason to think that transitioning from typing on a letterboard to typing on a HoloBoard is much simpler than transitioning from refrigerator letters to playdough. As I noted in my earlier post, the researchers meticulously replicated in virtual space the physical letterboards the kids are used to, down to the precise arrangement of the letters. Indeed, given that the HoloBoards so closely replicated the physical letterboards, it’s hardly surprising that the training, in fact, went quickly—surprised though the authors profess to be about this.

The training involved going back and forth between a physical letterboard and a virtual one, first selecting an individual letter (as requested by the researcher), and then spelling an entire word, first with visual cues (the target letters would pulse), then without them. Choosing words with which they assumed the participants would be familiar—CAT, DOG, FISH, TIGER, TURTLE—the authors report that “16 of the 23 participants completed the training module with their average training time being less than 10 minutes.”

And they found all this impressive, even given the extremely slow but improving typing speed:

Although the speed with which they spelled on the virtual letterboard was, on average, about two thirds as fast as on the physical letterboard, that most participants got faster at using the virtual letterboard across phases is impressive given that they were adjusting to a new training environment and were asked to do increasingly demanding spelling tasks.

And they found it reassuring that “a number of” their participants “explained,” via S2C-generated output, “that with further practice they ‘could get really good at it.’”

Feedback provided via S2C, of course, is more likely to be coming from the CRPs than from the participants themselves. And this leads to an alternative account for why the participants’ typing on the HoloBoard was slower than their typing on the letterboard: they had to get used to a different set of facilitator cues (more on that below). With further practice, they might indeed “get really good at it.”

This S2C-generated comment isn’t the only instance in which the participants “provided feedback” to the researchers. Arguably, the biggest moment for participant feedback was before the study actually began, when prospective participants decided whether to give consent to participate in it. This particular study, on one hand, might seem short and innocuous: all it had participants do was point to letters on physical and virtual letter arrays to spell a small set of short words. However, inasmuch as the comprehension deficits in non-speaking autism (see my previous post) mean that letter pointing in S2C is relatively meaningless for those subjected to it, the same goes for the letter pointing activities in the experiment.

Furthermore, there are two key ways in which interacting with the HoloBoard is potentially more aversive than interacting with a CRP. First, participants had to wear a headset the entire time; second, they had a virtual letterboard constantly hovering in their lines of sight, whichever way they turned. Many people might find this uncomfortable, particularly those with the sensory issues that often accompany autism. Indeed, the authors report that “3 [participants] could not tolerate the device long enough to engage with the application, likely due to their sensory sensitivities;” that “[t]hree participants found the experience to be overstimulating but managed to complete a few phases by taking short breaks,” and that one “indicated” partway through the trials “that they were too tired to continue.” Another person for whom the HoloBoard didn’t work out, curiously, was someone whose caregiver indicated that they interacted with the physical letterboard via peripheral vision. Since the HoloBoard is programmed to remain in front of the user’s face, it doesn’t lend itself to peripheral viewing—as unlikely as it is that anyone can distinguish letters through peripheral vision (see Janyce’s post on this subject).

So how did participants give consent for this possibly aversive study? Through S2C-generated output: i.e., output that may well have been totally under the control of their CRPs. Given all the other things that Institutional Review Boards (IRBs) get worked up about when deciding whether to approve clinical research, it’s ironic that FCed consent procedures don’t raise red flags. It would seem that, for all their concerns about safeguarding people with disabilities, the programs that certify IRB members are ignorant of some of the greatest threats to some of the most vulnerable of such people.

Participants also purportedly provided feedback to the researchers about the design of the HoloBoard: “HoloBoard was a result of more than a year of consultations with nonspeakers” (and with their CRPs and others). Specific suggestions attributed to the participants included “interest in the visuals and sound effects,” with one participant purportedly spelling “what I most liked was the sounds that went when pressing and clicking [the buttons].” Reporting such inputs allows the researchers to check off that box that so many in disability studies fields now prioritize, i.e., making sure your research is “inclusive” or “participatory”; in other words, mindful of the “nothing about us without us” mantra of the disability rights movement. One of the perverse effects of this new priority is that it’s incentivized those who do research on minimally-speaking autism to include non-speaking autistics in the only way that seems (to the more deluded or denialist of these researchers, anyway) to work: namely, through FC/RPM/S2C-generated output. And what this means, more broadly, for any research on non-speaking autistics is that a specific subset of non-speakers, namely those subjected to FC/RPM/S2C, will be the ones recruited to provide “feedback” on everything from experimental design to future directions for research—feedback that is most likely coming, ironically, from their non-autistic, non-clinically-trained CRPs.

Even more perversely, some of the feedback that the participants in this study purportedly provided could be used to motivate the subjecting of growing numbers of minimally-speaking autistics to virtual letterboards:

·        It was “cool because [their] CRP doesn’t have to be with [them]” but “make[s] it harder for [the CRP] to prompt and regulate [them].”

·        “[I]t felt amazing to be independent. I loved how easy the letters were to access. I am thinking it will get easier with practice.”

·        “I think it was nice to not have to get a CRP [to hold the board], that I was the only one who could use it.”

·        “I liked many things: in particular, I like the inclusion and the independence from mom”

·        [It could] “improve the education standard for. . . nonspeakers.”

·        “[T]he headset on made the distractions seem to really disappear. . . it made me greatly focused, having my visual field more restricted.”

All this output, generated as it was through letter selections (it’s unclear how much of it was generated through held-up letterboards and how much of it through interactions with the HoloBoard), may well have been controlled by the CRPs.

Let’s turn, now, to the findings.

At first glance, they might seem impressive. To begin with, the HoloBoard environment eliminated one major opportunity for CRP influence. In RPM/S2C, it’s up to the CRPs to determine when a letter selection has been made: they decide which letters to call out and transcribe, which allows them to ignore certain letter selections and to call out different letters than those that were actually selected (something often evident in videos of S2C). In the HoloBoard environment, it’s the HoloBoard that reads out and transcribes the letters.
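To spell out the architectural difference (my own abstraction, not the authors’ code): in S2C, a human sits between the letter selection and the transcript and can skip or substitute letters, while in the HoloBoard the collision handler itself appends the letter, removing human discretion at that one step.

def s2c_transcribe(letter_touched, letter_called_out, transcript):
    # The CRP decides what to call out and write down; it need not
    # match the letter the nonspeaker actually touched.
    transcript.append(letter_called_out)

def holoboard_on_touch(letter_touched, transcript):
    # The system logs exactly the letter whose collider was touched.
    transcript.append(letter_touched)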

Another way in which CRPs regularly influence letter selection is through board movements, including pulling the board away and then repositioning it. But the researchers report that, though the HoloBoard environment allowed them to do these things, the CRPs in the experiment generally chose not to:

We observed only one CRP who initially moved the virtual letterboard to make room for the physical letterboard. Other CRPs simply left the virtual letterboard in the same place even when the participant was touching a letter on the physical board.

In addition, of the 16 (out of 23) participants who completed the study, the majority performed well in the testing phase, where they had to spell words that were spoken out loud to them, and which were mostly not words they had trained on in the study’s training phase. The authors report that:

§  “73% of participants were able to complete Phase 5 with success rates of 100%, while 10% scored between 63% to 71% in that phase. The total average Phase 5 success rate for those who completed that phase is 95.8%.”

§  “when participants were allowed to continue interacting with the virtual letterboard after the formal session, 14 participants spelled lengthy sentences, sent emails via the application, or offered their feedback on our system solely using the virtual letterboard.”

§  14 participants “quickly learned to self-correct typos using the backspace and space bar.”

§  “many participants were able to perform optional, more advanced tasks such as providing independent full sentence feedback on our system using solely the virtual letterboard” (despite the fact that they couldn’t select letters independently with the physical letterboard)

§  5 participants were able to use HoloBoard in “solo mode” in which the CRPs removed their headsets and couldn’t interact with the virtual environment. In solo mode, they “spelled short answers independently... when asked to do so (e.g., spell your name, home city, parents’ names, favorite movie, age).”

But on closer inspection, these results are not as impressive as they first appear. First of all, in the “multiplayer mode” that was used during the training and test phases, the CRPs had access to the virtual reality and could “see the virtual letterboard” and “gauge what letters the nonspeaker is attempting to interact with.”  Based on what they saw, the CRPs could, and did, deploy what the authors call “verbal prompts” and “cues,” and this extended into the testing phase. As examples of prompts, the authors give "resist the mental loop," "scan the board," or "find your next letter”; as examples of cues, they give “repeating the word that needed to be spelled, verbally guiding the participant to the location of the next letter, or spelling out the next letter.” Such prompts and cues occur regularly in S2C and often suffice, on their own, to influence letter selection—even if they aren’t as explicit as prompts that provide verbal guidance to a letter’s location or that spell out the next letter.

Indeed, there are numerous published videos of FC/RPM/S2C where the typing is done on a stationary device, with the facilitator merely providing occasional verbal prompts and cues, along with body language cues like moving their torso, head, or hand in the direction of the next letter, or giving a subtle non-verbal signal when a wandering index finger has arrived at the correct letter (see, e.g., here). Some of these published videos come from the Telepathy Tapes podcast, where the non-speaker types out a number or word that only the CRP was shown, such that the only explanation, other than CRP control through subtle verbal and/or gestural cues, is telepathy (see Janyce’s analysis of this one). Telepathy is a topic that Jaswal has thus far avoided discussing. Nor do he and his co-authors discuss the possibility that the CRPs might be providing gestural cues in addition to verbal ones to their participants. There is nothing in the HoloBoard environment to rule this out.

Instead, the authors simply stress that there was less prompting and cueing in the test phase.

The number of cues dropped to 5 in phase 5. Participants needed prompts for 11.5% of interactions (a total of 27 interactions)... In the testing phase (Phase 5), we observed only 7 prompts in total.

Now let’s look at what the test phase required participants to do. As the authors explain:

[P]articipants were required to spell five words (their first name, PIG, DUCK, CAMEL, DONKEY, spoken aloud by the researcher), one at a time, solely on the virtual letterboard and without visual cues or mirroring. In this phase, we expected the words they were asked to spell would be familiar, but only one (DUCK) had appeared in a previous phase.

One thing that’s striking here is just how short these verbal items are compared to the length of most S2C-generated messages. The authors have suggested that the reason for this relates to the purported cognitive demands of learning to use the HoloBoard, and that “communicating [on the HoloBoard] in a more generative manner” (which presumably includes longer sentences) might be something that would be achieved with “further training.” But assuming that the participants’ literacy skills are genuinely intact, you’d think, once they learn to type any letters on the board, longer, “more generative” sentences would come along automatically for the ride.

Second, some of the words/letter sequences were familiar. Two of the five short words the participants were asked to spell were their first name and the word DUCK, which had also appeared in the training. Given that the participants “had an average of 5.26 years (range: 2.25 to 13.33 years) of using the letterboard,” some of the other words, as well as some of the letter sequences (particularly “c-k”) may have been familiar as well.

As an aside, it’s quite telling that, despite the participants’ multi-year experience with S2C, the “CRPs reported that none of our participants could reliably spell on a physical letterboard unless it is held by a CRP.”  This, along with everything else, suggests they were highly dependent on facilitator cues.

Another important consideration is the difference between spelling and understanding. Participants were only asked to spell words; not to use them in meaningful ways. Nor did the verbal prompts to spell specific words require much in the way of comprehension: with the exception of the prompt to spell one’s first name, these were simple dictation exercises in which the word to be spelled was spoken out loud. As for the prompt for the participant’s name, this common question may have been a highly familiar prompt with a highly conditioned response.

Taken together—the simple familiar words, the minimal comprehension demands, and the opportunities for the kinds of CRP cueing that can completely control letter selection (cf. the Telepathy Tapes)—the fact that “73% of participants were able to complete [the test phase] with success rates of 100%, while 10% scored between 63% to 71% in that phase” isn’t convincing evidence for the HoloBoard, as deployed in this experiment, as an authentic way for non-speaking autistic individuals to communicate.

The seemingly more convincing HoloBoard interactions occurred only after the actual experiment was over: namely, when the 14 participants “spelled lengthy sentences, sent emails via the application, or offered their feedback on our system solely using the virtual letterboard” and “were able to perform optional, more advanced tasks such as providing independent full sentence feedback on our system using solely the virtual letterboard.” Since these post-experimental claims are anecdotal, they only carry so much weight—particularly as the authors don’t provide any details about either the message content or its communicative context, or about what the CRPs may have been providing by way of verbal prompts, cues, and gestures.

The only time the participants’ interactions with the HoloBoard were definitively unprompted was when they occurred in “solo mode” (where the CRPs removed their headsets and couldn’t interact with the virtual environment). Only five participants used the letterboard in solo mode, and here all the authors report is that they “spelled short answers independently... when asked to do so (e.g., spell your name, home city, parents’ names, favorite movie, age).” Given

§  the many years of practice that these participants had with S2C (an average of 5.26 years, ranging from 2.25 to 13.33 years)

§  the likelihood that these were highly familiar-sounding questions (even if not fully understood) to which the participants had frequently spelled out answers

§  the well-attested rote memorization skills in autism

§  the significant comprehension deficits that researchers have found in non-speaking autism (see my previous post)

...it’s much more likely that these results reflect memorized letter patterns rather than intentional, independent communication—as well as the CRPs’ notions of what their clients’ favorite movies were, as opposed to their clients’ actual cinematic preferences. If the authors had really wanted to establish whether the HoloBoard users were authentically communicating their own messages, they could have conducted message-passing tests. Both in eschewing message-passing tests, and in presenting results that reflect memorized letter patterns rather than intentional, independent communication, Jaswal et al.’s paper resembles his two previous papers on letter pointing by non-speaking individuals with autism (Jaswal et al., 2020; Jaswal et al. 2024).

But surely this is not Jaswal’s last paper, and the future directions the authors propose for this latest technology, based both on their questionable results and the problematic, S2C-generated feedback they obtained from their participants, are alarming. Motivated by the fact that the CRPs didn’t need to hold the HoloBoard, which suggests to the authors that it “may have affordances that facilitate more independent typing,” the authors seem to be suggesting a full-scale replacement of traditional S2C with HoloBoard-based communication that combines the worst elements of each:

§  “Head tracking could be exploited to ensure the virtual letterboard remains in the nonspeaker’s field of view even when they move.”

§  CRPs will continue to be present, “dedicat[ing] their focus to other aspects of supporting a user, such as promoting attention and regulation,” even when the HoloBoard user might wish to “engage in private conversations with a third person.” (They propose to explore ways that “would allow a CRP to support their nonspeaker while allowing the nonspeaker” to engage in such private conversations.)

§  Or possibly CRPs would be replaced by a virtual CRP: “a personalized virtual CRP within the virtual environment. The virtual CRP would emulate the behaviour and appearance of a user’s human CRP to provide attentional and regulatory support.” But the virtual CRP may essentially replicate the sorts of message-controlling cues done by the real-world CRP: “Machine Learning (ML) techniques could be used to train the virtual CRP based on observations from a user’s real-world interactions with their human CRP.” While the authors don’t mention that the “virtual CRPs” might learn to mimic the human CRPs’ prompts and board movements (part of how CRPs unwittingly control letter selections), in an article about this paper in IEEE Spectrum, a magazine published by the Institute of Electrical and Electronics Engineers, Jaswal et al. write that “This virtual assistant [which now has a name: “ViC”]... can demonstrate motor movements as a user is learning to spell with the HoloBoard, and also offers verbal prompts and encouragement during a training session.”

§  Finally, in what are even more powerful venues for message control, “[l]arge Language Models (LLMs) could be integrated to reduce the effort needed to communicate thereby reducing user fatigue. For example, such a system would allow the user to produce elaborate responses by providing just a succinct prompt to an LLM.”
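To see what this last proposal would mean for authorship, consider the pipeline it implies. The following is a sketch under my own assumptions; llm_generate is a hypothetical stand-in for whatever text-generation API such a system would call, not a real library function.

def llm_generate(prompt):
    # Hypothetical stand-in for a text-generation API call.
    raise NotImplementedError("plug in any LLM API here")

def elaborate(user_seed):
    # Of the words in the final message, only the succinct seed comes
    # from the (purported) user; everything else is supplied by the model.
    prompt = "Expand this one-word answer into a full, eloquent reply: " + user_seed
    return llm_generate(prompt)

Feed elaborate() a single word like “pizza” and it returns several fluent sentences that the nonspeaker neither composed nor verified; and the seed itself, of course, may already be facilitator-controlled.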

Many of us predicted these last two items would next be on Jaswal’s agenda. In other words:

§  Machine learning that allows S2C’s message-controlling prompts and cues to be taken over by machines and safely hidden away from FC critics, and from others concerned about the communication rights of autistic non-speakers, within obscure, machine-learned neural networks.

§  LLMs that elaborate the short messages authored by actual or virtual CRPs into messages that are even more filled with predictable blather and bromides, and even more removed from what minimal speakers actually want to communicate, than FC/RPM/S2C-generated output is.

REFERENCES

Alabood, L., Dow, T., Feeley, K. B., Jaswal, V. K., & Krishnamurthy, D. (2024). From Letterboards to Holograms: Advancing Assistive Technology for Nonspeaking Autistic Individuals with the HoloBoard. CHI '24: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, Article 71, 1–18. https://doi.org/10.1145/3613904.3642626

Jaswal, V. K., Wayne, A., & Golino, H. (2020). Eye-tracking reveals agency in assisted autistic communication. Scientific Reports, 10(1), 7882. https://doi.org/10.1038/s41598-020-64553-9

Jaswal, V. K., Lampi, A. J., & Stockwell, K. M. (2024). Literacy in nonspeaking autistic people. Autism, 28(10), 2503–2514. https://doi.org/10.1177/13623613241230709

 

Wednesday, November 5, 2025

Can Jaswal’s “HoloBoards” substitute for letterboards? Part I

Over the past couple of years, S2C proponent Vikram Jaswal, Professor of Psychology at the University of Virginia and the father of an S2C user, and Diwakar Krishnamurthy, Professor of Electrical and Computer Engineering at the University of Calgary and the father of an RPM user, have co-authored several papers on the development of virtual tools that enable S2C users to select virtual letters rather than pointing to letters on physical letterboards. Like Jaswal’s other recent papers, each of these begins with purported justifications for S2C as a valid communication method, and each reports instances of allegedly independent communication by S2C users. Like Jaswal’s other papers, therefore, these papers are worth reviewing in detail. In today’s post, I’ll discuss the first part of one of them: Alabood et al. (2024).

Published in the proceedings of the CHI Conference on Human Factors in Computing Systems, with Jaswal as its fourth listed author and Krishnamurthy as its fifth, this paper discusses the development and preliminary study of a virtual letterboard, or what the authors call a “hologram” or “HoloBoard.” Instead of having a facilitator, or what the authors call a “Communication and Regulation Partner” or “CRP,” hover next to them and hold up the letterboard in their faces, S2C users don a virtual reality headset that projects a virtual letterboard, or “HoloBoard,” in front of them that follows them around wherever they turn their heads. Purportedly, this is an improvement on physical S2C and gives users more autonomy and privacy.

Deploying the circular reasoning so typical of S2C supporters, the article begins with variations on the usual pro-RPM/S2C claims and cites RPM/S2C-generated testimonials as its sources. In particular, it cites testimonials attributed to Dan Bergmann, Elizabeth Bonker, Naoki Higashida, and Ido Kedar, and references, as well, Edlyn Peña’s compendium of FC/RPM/S2C-generated testimonials.

In this first post, I’ll discuss these preliminary claims. These are worth fleshing out in detail, partly because some of what’s said here is new, and partly because there’s been some new research that adds to the evidence against these claims. In a follow-up post, I’ll turn to the actual study.

Claim 1: “Lack of speech is sometimes conflated with lack of ability to think.”

The source for this is an entire book, namely the memoir attributed to RPM user Ido Kedar (Ido in Autismland). And while it’s nice to see this perennial claim softened with “sometimes,” it’s hard to believe, in our Deaf-culture-aware, Stephen Hawking-informed society, that more than a handful of highly uninformed people are guilty of conflating speech with thought. Nor have I seen any references to people actually doing so.

However, what Jaswal et al. may actually have in mind here is that, in autism in particular, people like me have stated that lack of spoken language tends to indicate deficits in a very specific cognitive skill: comprehension of language. And for this, there is actual evidence, most recently in an article by Chen et al. (2024). Examining a symptom database of 1,579 minimally speaking autistic children aged 5-18 years, and using the terms “receptive language” for “comprehension of language” and “expressive language” for speaking ability, Chen et al. found that:

·        The 1,579 children “demonstrated significantly lower receptive language compared to the norms on standardized language assessment and parent report measures.”

·        “[T]heir receptive language gap widened with age.”

·        “[O]nly about 25%... demonstrated significantly better receptive language relative to their minimal expressive levels.”

·        “[M]otor skills were the most significant predictor of greater receptive-expressive discrepancy”—i.e., the 25% with language comprehension skills that were significantly better than their minimal speaking skills had better motor skills than the rest of the non-speakers.

All this is highly problematic for S2C proponents. That’s because their two chief arguments for the legitimacy of S2C are (1) that minimal speakers have intact language comprehension and (2) that minimal speakers have such severe deficits in motor skills that they’re unable to point to letters without someone holding up a letterboard in front of their faces and prompting them.

Claim 2: “the cognitive abilities of nonspeakers are routinely underestimated” such that they are “often segregated into special classrooms where teaching of basic life skills are [sic] prioritized over academic instruction.”

This, too, is a crucial claim for S2C proponents: S2C-generated output indicates that non-speakers, assuming they’re the authors of that output, have above-average academic skills. But the authors’ source for this claim, Courchesne et al. (2015) doesn’t support it. Courchesne et al. don’t address academics; what they found was that minimally speaking autistic children performed better on cognitive measures that don’t require language skills as compared with cognitive measures that do require language skills. An example of the latter is the WISC-IV, a standard IQ test that involves a fair amount of language in both prompts and tasks. The three non-linguistically-demanding cognitive measures that Courchesne et al. looked at were the Raven’s Colored Progressive Matrices board form (RCPM), which measures visual pattern recognition, the Children’s Embedded Figures Test (CEFT), which measures the ability to find hidden shapes in a complex image, and a visual search task, which measures the ability to scan a visual scene to find a specific target object or feature. Academic achievement, of course, requires much more than these three visual capabilities.

Furthermore, even though the three visual tests make minimal language demands, the results for non-speaking autistics were significantly worse than for typicals: the 26 (out of 30) minimally-speaking autistic participants who completed the RCPM, for example, had an average raw score of 18.61 out of 36; their non-autistic counterparts, in contrast, had an average raw score of 28.5. Worse still, how well a minimal speaker did was positively correlated with their language skills—most likely because, even in autism, nonverbal cognitive skills correlate with language ability (see Chen et al, 2024), which in turn, in autism, is correlated with speaking ability (more on that below). As the authors report, “autistic children's RCPM performance differed according to their reported spoken language level” with “autistic children using two-word phrases perform[ing] better… than those using no words at all.”

Returning to the authors’ claims about the inappropriate segregation of non-speakers into classrooms “where teaching of basic life skills are [sic] prioritized over academic instruction,” strong performance on visual tests isn’t enough for academic success. Academic instruction, even in math, is highly verbal; most academic tasks are highly verbal. To access academic instruction and perform academic tasks, you need to have the same skills that are required for, and measured by, the WISC-IV and other verbally-mediated, verbally demanding tests.

Claim 3: “[M]ost nonspeakers are never provided an effective language-based alternative to speech.”

Here the authors simply claim, without any citations, that standard AAC (Augmentative and Alternative Communication) devices are deficient. In particular, they state that “the vocabulary available to a user is chosen by someone else” and claim “there is no way for an autistic person to express a concept that has not already been programmed into their AAC device.” Jaswal has made this claim repeatedly, persistently unaware that most AAC devices have keyboard options that allow typing. Used in typing mode, an AAC device is just like a letterboard: the only difference is that no one is holding it up in front of the user and prompting and cueing their letter selections.

The authors also state that “Few individuals advance beyond the requesting stage”—assuming that this must be the fault of the device rather than a consequence of the well-known socio-communicative challenges that have defined autism since it was first identified eight decades ago by Leo Kanner. Non-autistic users of AAC tools—deaf children with sign language, individuals with mobility impairments—regularly advance beyond the requesting stages to a whole range of communicative acts.

Claim 4: Non-speakers need “months or years” of training by CRPs in how to “isolate and point at specific letters on the letterboard.”

This is why, the authors explain, CRPs have to start with “partial” letterboards with larger letters (allowing CRPs to decide which third of the alphabet each next letter comes from—yet another way to control messages in the earliest stages when their clients aren’t as susceptible to cues).

But there’s no evidence that autistic kids lack the motor skills to point. In fact, pointing doesn’t appear on any of the standard motor skills evaluations, which indicates that it simply isn’t a specific motor issue for anyone—any more than other simple gestures like waving your hand. The evidence, rather, is that, to the extent that autistic kids don’t point (many do; just less often than typical children do), it’s because they don’t understand the communicative function of pointing. That it’s the social aspects of pointing rather than the motor aspects of pointing that challenge autistic kids is also supported by the fact that the least frequent sort of pointing in autism is pointing for more purely social purposes (pointing things out to people in order to share attention), as opposed to pointing for instrumental purposes (pointing to express a request). Diminished pointing in autism, that is, is consistent with the socio-communicative challenges that have defined autism since it was first identified eight decades ago by Leo Kanner; no additional challenges need to be posited to explain it (cf. Occam’s Razor).

When it comes to pointing to letters, however, the most likely challenge is literacy: pointing to the “correct” letters to generate messages entails understanding the meanings, and knowing the spellings, of the words you want to type. Given what we know about comprehension in minimal speakers (see above), it’s unlikely that S2C spellers have the ability to independently and consistently point to “correct” letters in the often highly linguistically sophisticated messages that they’re supposedly generating.

And what minimal speakers undergo during all those “months or years” of training by their CRPs in how to “isolate and point at specific letters on the letterboard” is most likely not the acquisition of the motor skills required for pointing (which they’ve almost certainly already acquired on their own) but, rather, a behaviorist conditioning to select letters based on the prompts and (unwitting) cues of their CRPs.

Claim 5: The alleged motor difficulty of pointing to letters is so cognitively demanding that “the cognitive demands of the questions they are asked” have to start out as minimal

As a result, the authors claim:

[E]arly lessons focus on spelling and closed-ended comprehension questions (e.g., "The first steam engines were used to pump what out of mines?"), and later stages incorporate open-ended questions (e.g., "Why are trains so much more common in some countries than others?”)

In fact, there’s no evidence that pointing to letters makes cognitive demands so great that it’s hard to answer open-ended questions (see above). Rather, the more relevant difference between questions like "The first steam engines were used to pump what out of mines?” and questions like “Why are trains so much more common in some countries than others?” is the number of letters one needs to point at to answer them. For the first question, 5 letters suffice (w-a-t-e-r); for the second question, many more are needed, even for a super-short, succinct response like “smaller densely populated countries are more suitable for trains,” which contains 56 letters. Getting an S2C “novice” to point to 5 letters in a row is a lot easier than getting them to point to 56 letters in a row. Even obtaining the compliance needed for lengthy periods of letter pointing—which, according to Jaswal’s earlier eyetracking paper (Jaswal, 2020), is at about one second per letter even after years of practice—presumably takes time.
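As a trivial check on that arithmetic (a sketch of my own, not anything from the paper):

def letter_count(s):
    # Count alphabetic characters only, ignoring spaces.
    return sum(ch.isalpha() for ch in s)

letter_count("water")                                                             # 5
letter_count("smaller densely populated countries are more suitable for trains")  # 56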

Claim 6: The alleged need by minimally-speaking individuals for attentional, emotional, and sensory support while they point to letters

The authors describe the roles of the CRP as “monitor[ing] the nonspeaker’s attention to ensure they are focused on the task at hand” and “prompting the [nonspeaker] to redirect their attention to the letterboard.” Elaborating, they state:

CRPs believe that the micro-movements they make while holding a physical letterboard might aid in maintaining the speller’s attention. For example, when they notice a nonspeaker’s focus waning, the CRP might rapidly remove the letterboard from the field of view and then reintroduce it, thereby refreshing the nonspeaker’s attention.

But wandering attention and wandering away from the letterboard are more consistent with boredom: with being made to point at letters without understanding what those letters spell—which, in turn, is consistent with what we know about profound autism and with what Chen et al. (2024) report about comprehension (see above).

The authors also claim that CRPs “provide regulatory support and encouragement as appropriate,” adding that

“Since many nonspeakers have sensory needs that can compel them into near-constant motion” [their source for this is the FC-generated memoir “The Reason I Jump”], “the CRP may reposition the letterboard as needed so that it always remains in the subject’s field of view.”

As a side note, this last point suggests that repositioning the letterboard occurs only when it’s out of the subject’s field of view; in fact, a trained CRP testified in court that he repositions the board only when his client types three letters that “don’t make sense,” and many videos of RPM/S2C show repositioning occurring under many other circumstances, often for no obvious reason other than that the client was about to hit the wrong letter.

Indeed, all of these purported roles—prompting attention, encouraging, repositioning the letterboard—are opportunities for (unwitting) verbal cueing that directs letter selection.

Claim 7: Some RPM/S2C users have achieved independence

The authors’ source for this is once again an entire book: Ido Kedar’s memoir—as if a memoir attributed to someone who’s been subjected to RPM/S2C is proof of independent typing, and as if the testimony within such a memoir were a reliable source on RPM’s validity. In the few public videos of Kedar, there’s no evidence of independent, spontaneous, prompt-free typing.

Claim 8: What’s holding the many other RPM/S2C users back from independent typing is the need for regular practice with “trained professional[s],” who are costly and scarce

In particular, learning to type independently, the authors claim, requires “regular opportunities for practicing the required skills (e.g., coordinating gaze and pointing to letters),” which in turn requires supervision and feedback by those costly and scarce “trained professional[s].”

Of course, the alleged need for years of practice and guidance from scarce, expensive professionals in order to point independently to letters depends on the claim that pointing is motorically challenging in autism, which, in turn, is totally unsubstantiated (see above).

Unsubstantiated though this claim is, it’s one of the bases for the study of the communication tool reported in this article, to which we now turn.

The purpose of the study, “to train nonspeakers in holographic spelling,” implicitly assumes that the entire issue for nonspeakers who point to letters to communicate is one of sensorimotor environment. Somehow, it’s an open question whether nonspeakers who can “isolate single letters and spell full words” on a physical letterboard can do the same thing with a virtual letterboard. Indeed, as far as the researchers are concerned, this is “an ambitious goal” for letterboard users, particularly given:

§  the lack of “haptic feedback” (touching sensation) associated with touching the physical letterboard

§  the need to wear a head-mounted device

§  the need to interact with holograms

§  the presence of unfamiliar researchers

§  the unfamiliar environment of the research study

…as if all these environmental changes were enough to interfere with the ability to point to letters to spell words.

The authors, furthermore, cite studies reporting that autistic people have difficulties generalizing skills learned in one context to another. But none of the studies they cite—or others on this topic—note difficulties with generalizing something as straightforward as isolating single letters and spelling words from one environmental context to another. It’s one thing to have trouble generalizing the meaning of a word like “doctor” from one medical setting to another, or generalizing conversation skills learned in a speech therapy session to real-world conversational settings: both of these generalization difficulties are common in autism. It’s quite another thing to have trouble generalizing spelling from one context to another. Indeed, anecdotes about hyperlexic autistic kids report no difficulties at all in this department: such kids have been observed spelling words in all sorts of contexts, without any explicit instruction, from refrigerator letters to sidewalk chalk to letters they form out of playdough.

Indeed, what generalization difficulties there are in going from standard S2C to holographic S2C are probably quite different from those the authors cite. One difficulty is the altered context of CRP cueing: different environments mean different CRP cues. With holographic S2C, the most obvious difference, as we’ll see, is that the letter array is typically no longer held up by the CRP.

The other generalization difficulty has to do with letter position. For those who are conditioned to spell letter sequences without understanding what they’re spelling, one of the most salient cues is letter position. In cases where letters’ positions are held constant relative to other letters, as they are on S2C letterboards, what’s most likely learned in “spelling” is the sequence of movements around the letter array. To spell the word CAT on an S2C letterboard, for example, one might learn to first go to the top row, towards the left end of the board, for “c,” then all the way to the left for “a,” and then down to the second-to-last row and far right for “t.” In addition to learning the positions for common words, one might also learn the positions for common letter sequences like “t-h” (far right of the second-to-last row followed by left-center of the second row).
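
To make the point concrete, here’s a minimal Python sketch. Both board layouts below are hypothetical, chosen purely for illustration (real letterboards vary by vendor): the idea is simply that a word corresponds to a fixed sequence of board positions, so any change in layout changes the entire motor routine even though the letters themselves don’t change.

```python
# Minimal sketch: if "spelling" is a memorized sequence of positions rather
# than a series of letter choices, any change in layout changes the whole
# motor routine. Both layouts below are hypothetical, for illustration only.

def positions(rows):
    """Map each letter to its (row, column) slot on a board."""
    return {ch: (r, c) for r, row in enumerate(rows) for c, ch in enumerate(row)}

familiar   = positions(["ABCDEF", "GHIJKL", "MNOPQR", "STUVWX", "YZ"])
rearranged = positions(["QWERTY", "UIOPAS", "DFGHJK", "LZXCVB", "NM"])

word = "CAT"
print([familiar[ch] for ch in word])    # [(0, 2), (0, 0), (3, 1)]
print([rearranged[ch] for ch in word])  # an entirely different movement path

# A speller who understands "CAT" finds the right letters in either layout,
# even if it takes some searching; a speller conditioned on positions can
# reproduce only the first path.
```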

Given this, it’s telling that the researchers meticulously replicated in virtual space the physical letterboards the kids are used to, letting them (or their CRPs) choose from “a variety of virtual letterboards that resemble popular physical models used by the community,” allowing “a nonspeaker to choose the one they are most familiar with.” That is, while the researchers play up concerns about “haptic feedback” and wearing a headset (which surely is quite annoying), they fail to mention, and yet still carefully control, one key factor that is likely to generalize only if letter selection is driven by actual comprehension: the positions of the letters on the letterboard. If you’re intentionally spelling the word CAT to talk about actual cats, your finger will go to the correct letters no matter where they appear in an array, even if it takes some searching. But if CAT, for you, is simply a series of points to letters in certain positions, you’re much more likely to be thrown by a change in letter positions.

In my next post, I’ll pick up here and take a closer look at the actual study.

I’ll close here by noting that Jaswal et al. discuss this paper in an article for IEEE Spectrum, a magazine published by the Institute of Electrical and Electronics Engineers, where they repeat many of the same faulty and problematic claims about non-speaking autism.


REFERENCES

Alabood, L., Dow, T., Feeley, K. B., Jaswal, V. K., & Krishnamurthy, D. (2024). From letterboards to holograms: Advancing assistive technology for nonspeaking autistic individuals with the HoloBoard. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI ’24), Article 71, 1–18. https://doi.org/10.1145/3613904.3642626

Chen, Y., Siles, B., & Tager-Flusberg, H. (2024). Receptive language and receptive-expressive discrepancy in minimally verbal autistic children and adolescents. Autism Research, 17(2), 381–394. https://doi.org/10.1002/aur.3079

Courchesne, V., Meilleur, A. A., Poulin-Lord, M. P., Dawson, M., & Soulières, I. (2015). Autistic children at risk of being underestimated: School-based pilot study of a strength-informed assessment. Molecular Autism, 6, 12. https://doi.org/10.1186/s13229-015-0006-3

Jaswal, V. K., Wayne, A., & Golino, H. (2020). Eye-tracking reveals agency in assisted autistic communication. Scientific Reports, 10(1), 7882. https://doi.org/10.1038/s41598-020-64553-9