Faculty, cyber law experts discuss generative AI’s ethics, regulations, hidden benefits

By SHANNON O. WELLS

Some of the noise and handwringing generated by the recent emergence of ChatGPT, a controversial artificial intelligence-based language chatbot developed by OpenAI, may overshadow the program’s benefits to academia.

MORE TALKS PLANNED

The topic of generative AI will be addressed during at least two more events this semester.

Pitt Senate’s Spring 2023 Plenary Session: 11:45 a.m.-2 p.m. April 4, William Pitt Union Assembly Room. The theme of the annual plenary is “Unsettled: Frames for Examining Generative Artificial Intelligence.” See related story.

“AI Assurance Policies: A Discussion with Assistant Secretary of Commerce Alan Davidson”: Noon, April 11, Conference Room A, University Club. Pitt Cyber will host Davidson, administrator of the National Telecommunications and Information Administration, followed by a panel and audience discussion about an assurance ecosystem that will advance AI systems that are safe, effective, legal and otherwise trustworthy. RSVP here to attend in person or by Zoom.

In a Feb. 27 panel on “The Ethics & Regulation of Generative AI,” Ravit Dotan, a post-doctoral research associate at the Center for Governance and Markets, spoke of her own English-language barriers and how ChatGPT could become a reliable interpreter for some of her classroom communications and projects.

“I am a person whose first language is not English. Also, I’m stuck with this accent. That makes people sometimes treat me as stupid. It is just a fact,” she said. “Also, sometimes my English is not idiomatic or even grammatically correct … With generative AI, of course, we could create text, also audio. And there’s an opportunity here for people who don’t have this language as their first language. They face those kinds of barriers, right? It could take (them) away. It could make it less so.”

Dotan also sees benefits for her students, particularly those of different nationalities, who could use the tool to project their thoughts, ideas and personalities with more confidence and sense of authority.

“(ChatGPT) is a tool that could help in equalizing ways of communication and helping more kinds of people sound authoritative,” she said, before adding that there’s another, more problematic side of that coin. “On the other hand, when ChatGPT says nonsense, it sounds just as authoritative — and it says nonsense a lot. And so I’m hoping that, with time, the hype will go down a little bit and people will start being more critical and notice (the) nonsense that this machine is speaking out. And I hope it could increase that bias to that authoritative sound, so that more kinds of people would get this way of speaking.”

Two recent campus forums focused specifically on the rapidly accelerating influence of ChatGPT and generative artificial intelligence in higher education. “The Ethics & Regulation of Generative AI” on Feb. 27 was followed by “ChatGPT Implications and Applications” on March 1 (see related story).

Hosted by Jennifer Brick Murtazashvili, director of the Center for Governance and Markets and professor in Pitt’s Graduate School of Public and International Affairs, the panel for the Feb. 27 forum included Dotan; Annette Vee, associate professor of English in the Dietrich School of Arts & Sciences and director of the Composition Program; and David Hickton, founding director of Pitt’s Institute for Cyber Law, Policy, and Security.

Picking up on Dotan’s ambivalent assessment of ChatGPT’s initial impacts and influences, Murtazashvili noted the program’s risks go deeper than making bad information sound impressive.

“I think one of the bigger issues that we’re seeing pop up recently deal with security issues and cybercrime,” she said. Remarking that it’s “amazing” how much code ChatGPT knows, Murtazashvili asked Hickton about how much and what kind of generative AI regulation is needed right now, given the technology’s “complexity.”

“I don’t think we doubt that in the short term, as a result of this new technology, we’re going to have more crime,” he acknowledged. “We’re going to have more sophisticated crime, and it’s going to be harder to identify the perpetrators. Because all cybercrime is about attribution. And it’ll be easier without identity tools for cyber criminals to hide with a ChatGPT. It’ll be easier for a less sophisticated criminal to become more sophisticated. So those are the threats in the short term in the cybercrime area.”

Talk of regulating technology, of course, often generates instant backlash and faces “the risk of being accused of trying to slow the advent of technology,” Hickton said. “And people will make arguments: ‘This is really no different than when we went from math tables to calculators, or when we went from slide rules to calculators. When we went from linear research to Google search. This is just an enhanced version of Google search.’ And those are very good arguments. But we don’t have to slow technology if we can recognize the distinction between hard regulation and soft laws and voluntary compliance guidelines.”

Launched last November by OpenAI, a San Francisco-based artificial intelligence (AI) research laboratory, ChatGPT quickly rose to prominence for its ability to generate detailed, articulate — though not always accurate or relevant — responses and narratives by scouring seemingly infinite online data sources. In addition to generating student essays, the chatbot is capable of writing and debugging computer programs, composing music, writing poetry and song lyrics, and answering test questions. Just this week, OpenAI launched GPT-4, which has the ability to analyze images and mimic human speech, according to the Washington Post.

Since ChatGPT’s auspicious debut, instructors and students alike have strategized and puzzled over the role it should play in learning and academia.

The good news regarding regulation, Hickton noted, is that OpenAI already is discussing an “identity tool,” and 10 developers of generative AI are collaborating to create a set of guidelines.

“Ultimately, we may end up with a regulatory scheme which is sort of hard law,” he said. “But we can work toward voluntary compliance and tools in the short term which can sort of ease the entry of this new technology.”

Hickton said he would like to see standards developed based on a sense of awareness, transparency and accountability.

“When we did the work at Pitt Cyber previously, looking at the use of decision-making algorithms by government components, that’s what we were in pursuit of there,” he said. “And generally when there’s a new technology which arrives on the scene, there are no standards. And so we’re left with either the metaphor of the Wild West, which is frequently used, or the one I like to say, ‘We’re building the plane as we fly.’”

Hickton noted that the “early promise for regulation” comes from Europe, where a draft proposal for an AI Act classifies risk into the categories of unacceptable, high-risk and other.

“It’s a growing standard that could become law,” he said. “I think that if you insist on transparency, and you bring a spotlight to every application in use, and you were to quickly classify them, we could spend time on what is unacceptable and what is high risk. And that may be sufficient for now. It’s not going to be the long-term solution, but for now, it would be good.”

Praising Hickton’s emphasis on “humanists” being at the core of regulations related to ChatGPT and other generative technologies, Annette Vee said, “I’m a humanist and thinking about the ways that people actually use the technologies and how they might (subvert them). I mean, of course, people are going to try to trick the technology.”

Vee said she and her son have experimented with making ChatGPT say things it’s not “supposed” to say, “because it’s interesting to do.”

“And that’s a very human impulse from the model, to try to get it to do things even if you’re not looking for it to produce hate speech or whatever — you’re looking to ‘prod the model,’” she explained. “And so if there’s a high-risk situation, you haven’t produced text of course, but then you can scale that in different ways and it can get disseminated in multiple (ways) … Once the person has that, how it leaves from there is a whole different ballgame, and then you get models (where) you can just download it and then run it on your own computer and then disseminate it … I’m genuinely interested in how you would actually regulate those.”

Noting consumer concerns about what happens to personal information gathered by popular services like Netflix, Hickton said that while many in the U.S. are concerned about what the government is doing, there’s now an increasing appreciation for what big tech is doing. “And I’ve been a voice for, ‘Let’s have a responsible, not a combative conversation about that.’ ”

Likening the generative AI situation to the abuses that led to the Dodd-Frank consumer protection act and its sweeping financial regulation following the Great Recession of 2007-08, Hickton noted that “regulations like that don’t arise organically out of the ground, they arise because of abuse. They arise because we played a casino with banking systems. They arise because we had vice presidents at certain banks selling credit cards to people that didn’t exist …

“And so I’ve defended regulatory abuse in debates like that,” he added. “But a better way to look at regulation, and I think this is a unique opportunity when you talk about generative AI, is to do it proactively, as opposed to reactively. We now know with people like those assembled in this room and online what some of the issues are right now, before people are harmed. And we could do something right now. That’s why I am so encouraged by the collaboration of the 10 (industry leaders) who’ve come forward with voluntary guidelines.”

Shannon O. Wells is a writer for the University Times. Reach him at shannonw@pitt.edu.
