Tuesday, January 24, 2023

Where is AI where we really need it?

With the help of my readers, I've learned about several typos in my recent books that went undetected by me, my editors, and my early readers. They also went undetected by Microsoft Word and Grammarly. (I have not found Grammarly helpful for style, but it is useful for catching some typos.)

But what about ChatGPT? Surely a technology that can mimic human texts so convincingly that professors are turning to AI detection tools (to determine whether their students actually wrote their papers themselves) should be able to take an existing paper and detect all its typos.

The issue is that proofreading entails text processing, while text generation, especially when it comes to AI, does not. For text generation, AI can rely solely on statistics and pattern recognition--specifically, the patterns and frequencies of certain words or structures in the gigantic amounts of data it's been fed. It can then use these patterns and frequencies to generate text that is similar to the texts it was trained on--in particular, those texts that, based on their patterns, are most statistically relevant to the prompt.
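To make the "patterns and frequencies" idea concrete, here is a minimal sketch--a toy example of my own, nothing like how any actual chatbot is built--of a program that strings words together purely from word-pair frequencies, with no understanding involved:

```python
import random
from collections import defaultdict

# Tiny stand-in for the gigantic training data a real system would use.
corpus = (
    "careful reading of good writing rewards careful reading . "
    "good writing comes from careful reading of good writing ."
).split()

# Record which words tend to follow which (simple bigram statistics).
followers = defaultdict(list)
for word, next_word in zip(corpus, corpus[1:]):
    followers[word].append(next_word)

def generate(start, length=10):
    """Build a sentence by repeatedly sampling a statistically plausible next word."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("careful"))
# Possible output: "careful reading of good writing comes from careful reading of good"
```

The output can look fluent enough, even though the program "knows" nothing at all about reading, writing, or anything else.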

No need, here, for AI to actually understand anything. Nor does it appear to, as we see, for example, in Catherine's recent posts.

Most text processing, in contrast, cannot rely on statistics and pattern recognition (the one apparent exception being the text processing that goes into detecting the likelihood that a paper was generated by AI). Most text processing, that is, involves something more akin to comprehension. To summarize a text, AI needs to understand its main points; to simplify it, AI needs to understand it well enough to paraphrase; and to detect many common writing errors--e.g., whether "or" should have been "of," or vice versa ("careful reading of good writing" vs. "careful reading or good writing")--comprehension is crucial. Comprehension, in turn, involves:

1. Word-, sentence-, paragraph-, and text-level semantics and pragmatics (including subtle forms of negation like "mixed up with," as well as irony).

2. Knowledge of how linguistic meaning maps onto real-world phenomena, including sensory experience. For this, we may need ambient robots that can process not just visual information (parsing the world into a 3D model of shapes and colors), but also auditory information (including phonological information), and tactile and chemical information.

Not surprisingly, AI seems nowhere near able to scale what some in the field have called the "barrier of meaning."

The irony is that proofreading, summarizing, and simplifying are much more useful than text generation, which probably raises more problems than it solves.

More useful but so much more difficult--at least for AI.

