By SHANNON O. WELLS
Since its introduction by OpenAI last fall, ChatGPT, an artificial intelligence-based language chatbot, has generated a storm of discussion and controversy across college and university campuses. The prospect that the no-longer-futuristic digital tool can simulate knowledge and generate coherent scholarly papers has put many seasoned academics on guard — and feeling more than a bit skeptical.
During “ChatGPT Implications and Applications” on March 1, one of two recent Pitt campus forums on the topic, April Dukes, faculty program director in the Swanson School of Engineering’s Education Research Center, suggested a proactive, bull-by-the-horns approach to the potentially game-changing technological incursion.
“My advice would be to try it out so that you can see what its capabilities are,” she said. “Take some example homework or assignment question, see what comes out, see how you can use it. … It can save you time if you have to write very mundane responses or emails or feedback. So by using it yourself, you can kind of see the value in it. And then when you have those discussions with your students on sort of the more ethical ways to use it, then you can speak from experiences.
“It creates an opportunity to talk to students about academic integrity and what resources are allowed not only within academia,” she added, “but also when you go into your fields: What are the resources you’re going to have at hand, and how to best use those resources so that you’re positioned to succeed in your career path.”
Launched last November by OpenAI, a San Francisco-based artificial intelligence (AI) research laboratory, ChatGPT quickly rose to prominence for its ability to generate detailed, articulate — though not always accurate or relevant — responses and narratives by scouring seemingly infinite online data sources. In addition to generating student essays, the chatbot is capable of writing and debugging computer programs, composing music, writing poetry and song lyrics, and answering test questions. Since its auspicious debut, instructors and students alike have strategized and puzzled over the role ChatGPT should play in learning and academia.
Two recent campus forums focused specifically on the rapidly accelerating influence of ChatGPT and artificial intelligence in higher education: “The Ethics & Regulation of Generative AI” on Feb. 27 and “ChatGPT Implications and Applications” on March 1. In the latter discussion, Mark DiMauro, visiting assistant professor of English literature and multimedia and digital culture at Pitt–Johnstown, emphasized the importance of keeping the rise of ChatGPT and similar emerging tools in perspective.
“I think the idea that there’s going to be this radical sea change, an entire upending of the educational system as we know it, I think that’s hyperbolic for the most part,” he said. “I think we’re going to see a lot of small-scale transformations. I think we’re going to have to, as educators, modify the way we address things, the way we present (lessons) and so forth. But to be honest, part of what is required of us as educators to begin with is that flexibility, so I don’t particularly see that kind of drastic alteration coming that I think a lot of people are worried and concerned about.
“The invention of the calculator did not destroy mathematics. The invention of photography did not just destroy painting or visual art,” DiMauro observed. “We’re kind of in that same ballpark, only now, that new technological impetus comes from composition, and I think it’s more so that we’re staggered and amazed that we’ve reached this point.”
DiMauro and Dukes were joined on the panel by LeTriece Calhoun, visiting lecturer in the Dietrich School’s English department, and Jeffrey Wisniewski, the Hillman Library’s director of communications and web services. Alan Lesgold, professor emeritus of education, psychology and intelligent systems, served as moderator.
Essentially concurring with DiMauro’s stay-calm approach, Calhoun likened the negative reaction to ChatGPT’s rise to the response that greeted the now-ubiquitous Wikipedia when it emerged as a go-to online informational resource in the early 2000s.
“You know, it was the same handwringing, it was the same alarm bells in the same areas around plagiarism and (questions of) will there ever be the possibility of trusting information anymore,” she said. “I think a lot of that was the result of not understanding how the platform around Wikipedia worked.”
Now that “we all use it” and the user-interactive Wikipedia has essentially supplanted what Calhoun referred to as “white men sitting around the table and deciding what is worthy of being in the (pre-digital) encyclopedia,” she noted that academia not only survived that sea change, but also that “we’re better for the existence of Wikipedia, and I think it’s going to be a roughly similar thing with generative AI.”
Lesgold recounted a recent exercise in which he asked ChatGPT to generate six references in the field of technical training and technology that he said turned out to be “bogus,” along with links to incorrect articles. This current lack of dependability, he predicted, will soon lead to further technology that allows students to verify sources that ChatGPT generates for them. “And that cycle is going to continue for a while,” he noted. “That’s going to be part of the tumult.”
Wisniewski said all the issues and concerns the panel discussion raised — particularly ChatGPT’s sometimes loose relationship with true facts, as Lesgold illustrated — point to the urgency of engaging students in “AI literacy.”
“Through the years we’ve talked about all these different sorts of literacy: there’s literacy with print information, and then there was literacy with digital information, and now we’re going to have to have that same conversation and do some work around digital literacy,” Wisniewski said. “So talking about these things, that ChatGPT is perfectly willing and able to lie seamlessly. If you didn’t know, you wouldn’t know. And it’s a function of … what it’s been trained on. So all of these things, I think there’s going to be a need for conversation and training and awareness-raising around just … being literate in this type of tool.”
While acknowledging he’s “all on board” with the use of ChatGPT and other emerging AI software technology, DiMauro said educators — rather than engaging in “gotcha” games with students they discover using it — should shift to subtler strategies to balance generative AI technology with traditional classroom expectations. This includes shifting the nature of assignments to confront the “weaknesses” of algorithmic communications provided by AI.
“Research is the foundation of good writing. We already grade (for) research quality, source thoroughness, source vetting, that kind of stuff in classes like composition,” he said. “So continuing to require that is already an Achilles heel of this kind of algorithmic communication. So there’s no real issue there.”
While it may seem like the sudden ubiquity of ChatGPT puts younger, more digitally steeped students at an advantage over educators with more traditional approaches, Calhoun emphasized the opportunity its emergence presents for students and instructors to learn — and find common ground — together.
“Play with it with your students in the classroom as well because that helps demystify it,” she said. “It also helps place yourself as an explorer of this new sort of tool and technology. And it helps to be like, ‘I’m in this with you. I’m going through all of my questions along with you. Let’s figure it out together.’ So I would say, play — just have fun with it.”
Shannon O. Wells is a writer for the University Times. Reach him at email@example.com.