White House official seeks input on AI system policies during Pitt visit

By SHANNON O. WELLS

Spurred by the recent emergence of the generative-text software ChatGPT, artificial intelligence technology has become the proverbial poster child for good-vs.-evil cultural conundrums.

It’s been a featured topic of discussion at multiple committee meetings around Pitt and was the focus of the spring 2023 Senate Plenary on April 4. The stakes of the AI debate were raised even higher on April 11, when policy experts from the White House came to campus for a Pitt Cyber panel discussion. “AI Accountability Policies: A Discussion with Biden Administration Officials,” held in the University Club Conference Room, featured Alan Davidson, assistant secretary of commerce for communications and information and head of the National Telecommunications and Information Administration (NTIA). He wasted no time getting to the heart (or, more appropriately, the artificial mind) of the matter.

“We’re delighted we’re here today, because we all see the benefits that responsible AI and innovation will bring, and we want that innovation to happen safely. But we’re concerned that it’s not happening today,” he said. “President Biden spoke about this tension just last week at a meeting of the President’s Council of Advisors on Science and Technology, and he said, ‘AI can help deal with some very difficult challenges, like disease and climate change. But we also have to address the potential risks to our society, to our economy and to our national security.’

“The president is right,” Davidson continued. “We need to capture AI’s benefits without falling prey to the risks that are emerging. That’s why we want to make sure that we’re creating an environment to enable trustworthy AI.”

To that end, Davidson is seeking input from faculty at Pitt and other institutions of higher learning to help shape national policy guiding and governing AI’s development and use in daily American life.

“We’re here to advance federal policy to make sure we get it right,” he said. “AI systems should operate safely. They should protect rights. We want and can have an ecosystem that meets those needs.”

The NTIA is launching a request for comments on AI accountability and on what policies can support the development of AI audits, assessments, certifications and other tools that can create earned trust in AI systems, he said.

“Accountability mechanisms for AI can help assure that an AI system is trustworthy. In the past, policy was needed to make sure that happened in the financial sector, and it may be necessary in AI as well,” he said. “But real accountability means that entities bear responsibility for what they put out into the world. And the measures we’re looking for are part of that accountability ecosystem.”

Davidson was joined at the forum by Alan Mislove, assistant director for data and democracy at the White House Office of Science and Technology Policy, and a panel of industry experts including Nat Beuse of Aurora Innovation (self-driving vehicles); Ellen P. Goodman, Rutgers University law professor and NTIA senior advisor for algorithmic justice; Deb Raji of the Mozilla Foundation; and Ben Winters, senior counsel at the Electronic Privacy Information Center.

As AI systems become more ubiquitous in areas from learning and academia to medicine, transportation and various forms of employment, Davidson said some of the questions that need to be addressed include:

  • Are they safe and effective in achieving their stated outcomes?

  • Do they have discriminatory outcomes or reflect unacceptable levels of bias?

  • Do they respect individual privacy?

  • Do they promote misinformation or disinformation?

“AI systems are often opaque, and it can be difficult to know whether they perform as they claim,” he said. “Accountability policies will help us shine a light on these systems and verify whether they are safe, effective, responsible and lawful.”

Davidson touted the advantages of AI and machine learning, including their role in developing the COVID-19 vaccine and in diagnostic technology that benefits the visually impaired, as well as their promise in tackling even bigger-picture issues such as climate change.

“Deployed properly, they will create new economic opportunities. … And we are still in the early days of development of these systems,” he said. “It’s clear they’re going to be game-changing across many sectors, but it is also becoming clear (there is) cause for concern (about) the consequences and the potential harms from AI system use” regarding privacy, security and safety, and potential bias and discrimination, not to mention “democracy, and implications for our jobs, our economy, the future of work.”

Some of the negative consequences he noted of AI’s growing ubiquity include hidden biases in mortgage-approval algorithms that lead to higher loan-denial rates for communities of color; algorithmic hiring tools that screen for personality traits in ways that may not comply with the Americans with Disabilities Act; and AI systems used to create fake audio and video that “deceive the public and hurt individuals.”

“These examples are probably just the tip of the iceberg,” Davidson said. “I think we have an intuition that there is much more to come, and research shows that it’s true.”

Despite all that, Davidson said he’s “optimistic” about the future of AI. “And part of the reason for that is because policymakers and the public — all of you are paying a lot of attention …”

In 2021, more than 130 AI-related bills were introduced in state legislatures, he said. “That’s a huge difference from the early days of, say, social media, or cloud computing or even the internet, when people really were not paying attention. … I am particularly encouraged by being here in places like this, at Pitt, where we see so many faculty members, so many students engaged and thinking about these issues. You are the reason that I am optimistic, because we’re thinking about this. And it’s still early days.”

Here is a sampling of comments and questions from the panel discussion that followed Davidson’s remarks:

Is AI “trustworthy”?

“When we say that an AI system is trustworthy, or worthy of trust, it means that the system has certain characteristics,” Ellen Goodman said. “These have been defined in federal space, both in NIST documents and in the White House OSTP’s Blueprint for an AI Bill of Rights, as roughly ‘safe, equitable, democracy-enhancing and respecting of human and civil rights, and also transparent and explainable.’

“We’re talking here about how people can be assured, in fact, that a system is trustworthy before a system is deployed, and then on an ongoing basis,” she added. “So that’s what we mean by accountability. And finally, when we use the word ‘policies,’ we mean policies that are regulatory, self-regulatory or created through incentives and subsidies.”

The meaning and usefulness of “audits”

“Audits have, sort of, two components. That’s how I’ve been describing it,” Deb Raji said. “One is the evaluation component, where we’re trying to articulate a set of expectations we have once we release the model out into the world, and measure the actual performance and behavior of the model compared to what we expect.” We would expect an employment algorithm “to be legally non-discriminatory, and we have ways of articulating that expectation. And we also have ways of measuring the actual behavior of the model compared to that expectation,” she added.

“It’s not enough just to measure the discrepancy between our expectations for the system and how it’s actually performing. It’s really important for that measurement to be meaningful in, let’s say, a legal process (where) someone wants to raise an anti-discrimination claim, for that measure to be meaningful in that kind of context as well.

“I think those two components are really what an audit is about,” Raji said. “You’re trying to get a concrete measurement of how well the system is performing, relative to expectations. And you’re trying to make sure that that measurement is meaningful. It’s part of a broader process of accountability.”
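Raji’s two-part framing lends itself to a concrete illustration. Below is a minimal sketch, in Python, of what the evaluation component might look like for the employment example she cites: it states a fairness expectation (here, the EEOC’s “four-fifths” rule of thumb for disparate impact) and measures a model’s decisions against it. The data, function names and scenario are hypothetical assumptions for illustration, not anything presented at the panel.

```python
# Minimal sketch of the "evaluation" component of an AI audit, as Raji
# describes it: articulate an expectation, then measure the model's
# actual behavior against it. All names and data here are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """Fraction of positive (e.g., 'hire') decisions per demographic group.

    `decisions` is a list of (group, hired) pairs, with `hired` a bool.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, hired in decisions:
        totals[group] += 1
        positives[group] += int(hired)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group selection rate divided by the highest.

    The EEOC's "four-fifths" rule of thumb treats a ratio below 0.8
    as possible evidence of adverse impact worth investigating.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical audit sample: model decisions labeled by applicant group.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]

rates = selection_rates(sample)
ratio = disparate_impact_ratio(rates)
print(rates)                                   # roughly {'A': 0.67, 'B': 0.33}
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50
print("meets expectation" if ratio >= 0.8 else "flags for further review")
```

The second component Raji describes, making the measurement meaningful, is why a sketch like this would tie its threshold to an established legal guideline rather than an arbitrary number.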

Shannon O. Wells is a writer for the University Times. Reach him at shannonw@pitt.edu.

