Wednesday, September 6, 2023

Should professors embrace ChatGPT and other LLMs?

Ethan Mollick, a professor at the University of Pennsylvania's Wharton School, thinks we should. In fact, he requires it, at least for certain assignments. Importantly, he also requires students to fact-check everything ChatGPT writes and holds them responsible for any errors they don't catch. He gives students guidelines on how to use Chat, specifically on how to ask it questions that produce useful responses. "More elaborate and specific prompts work better," he notes. And he offers suggestions on how to deal with the fact that Chat runs out of memory after about 3,000 words.
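
For the memory limit in particular, the standard workaround (I don't know whether it's the one Mollick recommends) is chunking: split a long text into pieces small enough to fit in the model's working memory, and restate the full task with each piece. Here's a minimal Python sketch of that idea; ask_llm is a hypothetical placeholder for whatever chat client you happen to use, and the fact-checking prompt is just an illustration, not Mollick's actual wording.

```python
# Minimal sketch of chunking around an LLM's ~3,000-word memory limit.
# ask_llm is a hypothetical stand-in for a real chat client.

def ask_llm(prompt: str) -> str:
    """Hypothetical call to a chat model; swap in your actual client."""
    raise NotImplementedError

def fact_check_draft(draft: str, words_per_chunk: int = 2500) -> list[str]:
    """Send a long draft to the model one chunk at a time.

    The full instructions are repeated with every chunk, because the
    model won't remember them from one chunk to the next.
    """
    words = draft.split()
    chunks = [
        " ".join(words[i : i + words_per_chunk])
        for i in range(0, len(words), words_per_chunk)
    ]
    responses = []
    for n, chunk in enumerate(chunks, start=1):
        # An elaborate, specific prompt, per Mollick's advice.
        prompt = (
            f"You are reviewing part {n} of {len(chunks)} of a student essay. "
            "List every factual claim in this part that should be "
            f"independently verified, one claim per line:\n\n{chunk}"
        )
        responses.append(ask_llm(prompt))
    return responses
```

The redundancy is deliberate: repeating the instructions with each chunk costs a few words but keeps the model from silently dropping the task partway through.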

Tellingly, the subtitle of the article that describes his approach is "A professor at the University of Pennsylvania embraces AI use in all his classes and has seen an increase in student success rates."

To some extent, this isn't surprising. Catherine Johnson and I have been looking at lots of output from ChatGPT and other LLMs and have found that most sentences are highly readable and well-connected. LLMs probably produce better prose than many people do, especially as a first draft (and I'm guessing that few people these days bother to go back and revise, especially when they have autocorrect and Grammarly catching many of their most egregious errors).

But LLMs are bad at other aspects of writing. All the sentences are about the same length and rotate through just a few sentence structures. Same with the paragraphs. The vocabulary tends toward the abstract, and the default tone is anodyne, wishy-washy, and preachy. Even if you suspend disbelief, you never feel like a real person is talking to you, anticipating your assumptions and questions and addressing them at just the right moment.

Mollick does note that "[p]roducing good AI-written material is not actually trivial." But his focus isn't on punching up the prose to make it more interesting and conversational; it's on tweaking the prompts that generate the output. Post-production editing, for Mollick, appears to be limited to fact-checking the content.

As Mollick notes, "as a tool to jumpstart your own writing, multiply your productivity, and to help overcome the inertia associated with staring at a blank page, it is amazing." But is that how most people are using it? What if it's used not just as a jumpstarter and brainstorming aid, but also as an articulator, organizer, and synthesizer?

The article asks: "Will the chatbots' technical proficiency make learning certain skills for humans obsolete?" But it doesn't ask the follow-up question: will chatbots reduce people's motivation to go through what is perhaps the only process for learning certain skills--thinking skills, thought-articulation skills, thought-organizing skills, thought-connecting skills--namely, the process of putting one's thoughts in writing, from scratch, and synthesizing them into something new and interesting?
