Wednesday, May 1, 2024

Virtual facilitation: a means to independence or yet another high-tech distraction?

(Cross-posted at FacilitatedCommunication.org).

Vikram Jaswal’s latest, FC-promoting paper—Dr. Jaswal is actually fourth author on this one—reports on the development of a virtual reality (VR) environment for the variant of FC known as Spelling to Communicate (S2C). In this VR environment, a virtual letterboard appears to move in front of a person who wears a VR headset. Via machine learning, Jaswal and colleagues have trained the VR program to move the virtual board in front of the headset wearer in ways that mimic how a human facilitator (known in the S2C world as a “communication and regulation partner” or CRP) moves the letterboard around in front of the person they’re facilitating. The most obvious consequence of this training is that wherever the helmeted person turns, the letterboard follows.
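(In its crudest form, that behavior is easy to picture. The toy sketch below is mine, not the paper’s: a board pinned a fixed distance in front of the headset, wherever it points. The actual system, as described, learns a subtler, facilitator-mimicking placement policy.)

```python
import numpy as np

def naive_board_placement(head_position, head_forward, distance=0.4):
    """Toy placement rule: keep a virtual letterboard a fixed distance
    in front of the headset, wherever the wearer turns.

    Purely illustrative; the system described in the paper instead
    learns its placement policy from recordings of a human facilitator.
    """
    head_position = np.asarray(head_position, dtype=float)
    head_forward = np.asarray(head_forward, dtype=float)
    # Normalize the gaze direction, then offset the board along it.
    head_forward = head_forward / np.linalg.norm(head_forward)
    return head_position + distance * head_forward
```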

In some ways, this paper is just like Jaswal’s other pro-S2C papers. It explores S2C through fancy technology and complex computations while evading direct, low-tech ways to test S2C’s validity, and its sources are either unreliable or don’t say what the authors claim they say (more on that shortly).

In other ways, this is not just another pro-S2C Jaswal paper. Indeed, it says nothing new about the validity of S2C, but instead:

  • spells out (as it were) a bit more than we see elsewhere about S2C proponents’ rationale for how and why the CRPs move the letterboards around

  • raises questions about what Jaswal, the father of an S2C user, actually believes

  • raises the possibility that, depending on how this VR project moves forward, some evidence about the validity of S2C could possibly emerge (possibly).

But before we get to the new stuff, let’s dispense with the old.

This paper, rather than saying anything new about S2C’s validity, assumes from the get-go that S2C is valid. Indeed, in order for there to be any motivation at all for this machine-learning and virtual-reality-building venture, which includes equations like θ = cos⁻¹(2⟨q₁, q₂⟩² − 1), S2C has to be valid.
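(That equation, for what it’s worth, appears to be nothing more exotic than the standard angular distance between two unit quaternions, i.e., a measure of how far apart two 3D orientations are. Here is a minimal sketch, assuming that reading:)

```python
import numpy as np

def rotation_angle(q1, q2):
    """Angle (in radians) between the rotations encoded by unit
    quaternions q1 and q2, via theta = arccos(2*<q1, q2>**2 - 1).
    An illustrative sketch, not the authors' code.
    """
    dot = np.dot(np.asarray(q1, dtype=float), np.asarray(q2, dtype=float))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return np.arccos(np.clip(2.0 * dot**2 - 1.0, -1.0, 1.0))

# Identical orientations give 0; a quarter turn about one axis gives pi/2.
identity = np.array([1.0, 0.0, 0.0, 0.0])
quarter_turn = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0, 0.0])
print(rotation_angle(identity, quarter_turn))  # ~1.5708, i.e., pi/2
```

However sophisticated the math, though, the venture’s motivation presupposes S2C’s validity.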

That’s because the purported motivation for this venture stems from S2C-generated messages. These, in turn, are valid only if S2C is valid:

[N]onspeakers have expressed to us a desire to transition towards a more independent communication method that relies less on a human assistant, which would provide them with more autonomy and privacy.

Since we have no idea if S2C is valid—because no one involved in using S2C will agree to participate in a simple message-passing experiment—we have no idea if those subjected to S2C actually have the above desires. I suspect, though, that the purported communications, however inauthentic they may be, still express kernels of truth. Those subjected to S2C probably would prefer communicating independently to having people thrust letterboards in their faces and repeatedly prompt them. But I’m guessing that their preferences are to communicate via body language, speech, and/or typing (many do speak, as Jaswal himself has acknowledged, and many routinely type when navigating YouTube) rather than to wear VR helmets and find VR letterboards in front of their faces wherever they turn their heads.

But S2C supporters have dismissed these independent forms of communication (especially speech and body movements) as somehow unintentional, insisting, instead, that those subjected to S2C have fine motor challenges that require held-up letterboards. Jaswal is no exception:

As many members of this population have limited fine motor skills, the CRP ensures that the letterboard is positioned in a way that accommodates a user’s unique motor skills profile.

Supporting this claim is one of Jaswal et al.’s unreliable and inaccurate citations: a paper by Fournier et al. (2010). Fournier et al. focus mostly on gross motor phenomena—gait, posture, gestures, and arm-wave stereotypy (arm flapping)—and say nothing about pointing. Indeed, as we’ve discussed here, difficulty with pointing doesn’t come up in any of the actual research on motor difficulties in autism.

Relatedly, Jaswal and colleagues claim that “nonspeakers’ hand movements can become more restrictive as they tire from letterboard interactions.” This sounds plausible, but it also sounds more like mental fatigue and/or growing emotional disengagement than like problems with motor control per se. In any case, the authors offer no supporting citations.

Inaccurate/unreliable citations appear in a half dozen additional claims:

  • The claim that most minimally verbal people are “not provided with effective alternatives to speech.” The most obvious interpretation of this statement is that people are neglecting to provide minimally verbal individuals with alternative ways to communicate. The authors cite three papers in support:

    • Jaswal’s eye-tracking study, which asserts the same claim but does not back it up except by citing another paper, one by Brignell et al.

    • The Brignell et al. paper, which is about the limited evidence that the alternatives to speech provided to individuals with autism have led to improvements in their communication.

    • Tager-Flusberg and Kasari, who focus on issues of teaching, noting both the lack of progress that some children make in acquiring language despite intensive interventions, and the fact that AAC devices, unlike PECS (Picture Exchange Communication System), lack specific teaching protocols.

    None of these observations supports the notion that people have neglected to provide minimally verbal people with effective alternatives to speech. In fact, many minimally verbal people communicate effectively using AAC devices.

  • The claim that several non-speakers have learned to successfully communicate by pointing to letters. Here the authors cite reports of three FC users: Elizabeth Bonker (who is subjected to the Rapid Prompting Method); Naoki Higashida (another held-up letterboard user); and David James Savarese (who is subjected to touch-based FC). None of these individuals have had their communications validated by controlled testing (message-passing tests).

  • The claim that “a number of nonspeakers have learned to type independently after commencing letterboard based training.” Here the authors cite videos from the pro-FC website United for Communication Choice. Only two of these show unedited (and therefore potentially reliable) footage. In both, a helper sits within cueing range, stares at the keyboard, provides occasional oral prompts, and subtly shifts her body while the person slowly selects letters with an extended index finger, such that subtle letter cueing—the sort of inadvertent cueing that can evolve over multiple years of being subjected to FC—cannot be ruled out. True independence, of course, includes being able to execute the skill in question without a designated helper sitting next to you, prompting you, and staring at your output the whole time you’re producing it. (In the second of these videos, interestingly, we see that the person can independently and spontaneously pronounce words, which raises the question of why he’s being prompted to hunt and peck out, over dozens of seconds, what it takes him only about one second to speak).

  • The claim that the constant movement of some of those who are subjected to S2C, which might be a sign of restlessness with the procedure and/or a desire to end it, is instead “to cope with their sensory sensitivities.” The only citation for this claim is Naoki Higashida (see above).

  • The claim that non-verbal individuals “have reported enjoying using” virtual reality helmets. Here the only citations are two previous papers that include Jaswal as a co-author, but the bigger issue is that these reports could only have occurred through some form of FC, which makes them of doubtful validity.

  • The claim that an alternative, independent way to pick out letters—namely via eye gaze, aka “eye typing”—“may be challenging for nonspeakers because of the atypical eye movements that are frequently associated with the condition.” Here the authors cite Little (2018), a paper that says very little about atypical eye movements. Little focuses mostly on atypical visual processing and on diminished following of other people’s eye gaze (a component of joint attention, well known to be impaired in autism). Little cites just a couple of studies that discuss actual issues with eye movement as such, and concludes: “While these studies provide some evidence of eye movement problems in ASD and its potential as a biomarker, there are inconsistencies in findings.”

  • The claim that some individuals are able to identify letters through peripheral vision. The authors once again cite Little (2018). But Little says nothing anywhere about peripheral vision. As Janyce discusses here, peripheral vision simply isn’t acute enough for picking out letters.

Now to the three things that distinguish this paper from Jaswal’s other pro-S2C papers. First, as I noted above, this paper spells out more of the rationale, according to S2C proponents, for the CRPs’ (communication and regulation partners’, or facilitators’) movements of the letterboards. These board movements, purportedly, are designed to adjust to the movements of the facilitated person. And each facilitated person, apparently, has a specific “motor and movement profile,” of which there are, purportedly, three common varieties. As we read in the description of the machine learning sessions:

the CRP who has 10 years of experience working with nonspeaking autistic individuals, emulated [later in the paper, the word “mimic” is used instead] three different common motor and movement profiles she typically encounters in her practice when teaching nonspeaking people to use a letterboard to communicate.

Second, this paper, more than Jaswal’s other papers, raises questions about what Jaswal actually believes. That’s because the virtual facilitation process, as described, appears to be essentially message-blind. The virtual board movements don’t seem to involve any letter-specific movements—i.e., they don’t seem designed to cue specific letters. And the VR doesn’t seem to involve any stored information about which messages should be typed out in which situations—i.e., it doesn’t seem to “know” what the “correct” answers are. Instead, the basis for the machine learning of the virtual letterboard movements, as described by the authors, appears to involve only data about the physical configuration of the user’s body:

The positions and rotations of the user’s head, index fingertip, palm, and the physical board are collected. This data is used to train a BC Machine Learning (ML) model that learns a unique placement policy (of the physical letterboard) personalized to the user’s unique motor skills profile and movements.
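(If that description is accurate, the training setup would look, in spirit, something like the sketch below. This is my hypothetical reconstruction, not the authors’ code; the feature layout, model, and data are stand-ins. The point to notice is what’s absent: nothing in the inputs or outputs encodes letters, words, or messages.)

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical behavior-cloning (BC) setup, reconstructed from the paper's
# description. Each training frame pairs the user's body configuration
# with the board pose the human CRP chose at that moment.
#
# Features (13 values): head position (3) + head rotation quaternion (4)
#                       + fingertip position (3) + palm position (3)
# Targets (7 values):   board position (3) + board rotation quaternion (4)
rng = np.random.default_rng(0)
body_poses = rng.normal(size=(5000, 13))   # stand-in for recorded pose data
board_poses = rng.normal(size=(5000, 7))   # stand-in for the CRP's placements

placement_policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=200)
placement_policy.fit(body_poses, board_poses)

# At run time, the learned policy sees only the body configuration.
# Nothing tells it which letter would be "correct" next, which is why a
# policy like this is, in principle, message-blind.
next_board_pose = placement_policy.predict(body_poses[:1])
```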

Were such a virtual facilitation—based, that is, only on the physical configuration of the user’s body, and free of any additional facilitation by humans—tried out on an S2C user, it could amount to something close to a message-passing test. If the virtually facilitated user types out only nonsense sequences and/or words and phrases that they’re already able to type out on their own, this would effectively invalidate their S2C-mediated communications. But if the virtually facilitated user types out spontaneous, contextually appropriate messages similar in style, sophistication, and content to what they produce through S2C, this would suggest that their S2C-mediated communications might be authentically theirs.

Does Jaswal, the father of an S2C user, really want such a quasi-message-passing scenario to see the light of day? Is it possible—despite his resistance to rigorous message-passing tests, despite his constant detours around simple, straightforward experiments into obscure thickets of technology, engineering, and mathematics, despite his faulty experimental designs, faulty citations, and faulty conclusions—that Jaswal actually believes that S2C is valid?

But then there’s this question: what are the next steps before Jaswal considers this VR replacement of human CRPs ready for prime time?

Some of these next steps appear to maintain the purely physical basis for the virtual facilitation:

Future research will expand our evaluation to include more nonspeaking participants. We will explore models that consider additional environmental features such as a user’s orientation relative to the room or the presence of other individuals in the space

But what Jaswal et al. leave unanswered is how the final product is going to address what they claim is another essential role of the CRP, beyond simply adjusting for the user’s physical behavior:

The CRP also offers attentional and regulatory support, redirecting the user’s attention to the letterboard when necessary and encouraging them to complete their thoughts.

So is the final VR product going to involve VR facilitation that somehow redirects attention, provides regulatory support, and encourages completion of thoughts? Or will there still be a human involved: someone who supplements the VR’s physical adjustments with these potentially letter-cueing, message-controlling facilitator behaviors?

One thing is certain: whatever’s coming next will be a can of worms—in ways that we probably can’t fully imagine.

REFERENCES

Beals, K. (2021). A recent eye-tracking study fails to reveal agency in assisted autistic communication. Evidence-Based Communication Assessment and Intervention, 15(1), 46–51. https://doi.org/10.1080/17489539.2021.1918890

Brignell, A., Chenausky, K. V., Song, H., Zhu, J., Suo, C., & Morgan, A. T. (2018). Communication interventions for autism spectrum disorder in minimally verbal children. Cochrane Database of Systematic Reviews, 11, CD012324. https://doi.org/10.1002/14651858.CD012324.pub2

Fournier, K. A., Hass, C. J., Naik, S. K., Lodha, N., & Cauraugh, J. H. (2010). Motor coordination in autism spectrum disorders: A synthesis and meta-analysis. Journal of Autism and Developmental Disorders, 40(10), 1227–1240. https://doi.org/10.1007/s10803-010-0981-3

Little, J. (2018). Vision in children with autism spectrum disorder: A critical review. Clinical and Experimental Optometry, 101(4), 504–513. https://doi.org/10.1111/cxo.12651

Nazari, A., Alabood, L., Feeley, K. B., Jaswal, V. K., & Krishnamurthy, D. (2024). Personalizing an AR-based communication system for nonspeaking autistic users. Proceedings of the 29th International Conference on Intelligent User Interfaces, 731–741.

Tager-Flusberg, H., & Kasari, C. (2013). Minimally verbal school-aged children with autism spectrum disorder: The neglected end of the spectrum. Autism Research, 6(6), 468–478. https://doi.org/10.1002/aur.1329
