The final Dean’s Discussion of the fall semester explored the use of artificial intelligence across many platforms and what its impact might be. These panels, three each semester, were developed by Dean Doug Elmendorf as a way to engage Kennedy School faculty on important issues. The goal is to bring faculty experts into interactive conversations with each other and the audience.
This panel, featuring Dan Levy, Sharad Goel, and Bruce Schneier, embraced that goal with a lively discussion full of collegial disagreements, challenges, and counterpoints. But before the introductions were even made, Levy asked the audience to scan a QR code displayed on the screen in the discussion venue, which led to a one-question poll: Will AI be beneficial to humanity?
More on the poll results later.
Moderator Sarah Wald, adjunct lecturer in public policy and senior policy advisor and chief of staff to Dean Doug Elmendorf, began by asking for a basic definition of artificial intelligence. All three panelists agreed there is no simple answer. “I don’t think anybody has a real idea what’s going to happen in 10 or 20 years,” Goel, professor of public policy, said. “All this is pure speculation.”
Schneier, adjunct lecturer in public policy, offered a starting point. “When we use the term AI, we’re thinking about computers doing cognitive tasks that we normally think of as the exclusive purview of human beings.”
The promise of AI, Schneier continued, is that it can do these human things without the limitations of a person: “When you think about AI, think about speed, scale, scope, sophistication.” As an example, he envisioned a mental health crisis and too few human therapists to help. “If AI can do therapy in a manner similar to a human-trained professional,” he said, “that difference in scale could have a huge impact.”
From homework assignments to lengthy legal briefs, AI-generated content could save time and money.
“I’m not sure all those are positive things,” countered Goel. “It is hard to separate the positive and negative,” Schneier agreed.
Goel noted that AI had the potential to take human performance to a superhuman level. “I do think this is one of the great promises of this technology; a lot of the problems that we think of as basically being unsolvable right now can be addressed,” he said.
For his part, Levy, senior lecturer in public policy, wanted to focus on short-term promises. “I think the potential to strengthen your capacity to learn as a human being is immense,” he said. “You can have what amounts to human-like conversations with a tool that responds to your questions and your comments.” The potential is there to personalize learning to meet you where you are, he continued.
Moving to perils, Levy expanded this thinking. “My main concern is that for us to learn, our brain has to engage and process, and I think there’s a way of using these tools where our brain doesn’t engage and process information in ways that are conducive to real learning.” He sees generative AI as a help and a hindrance. “I think the potential for learning is huge, but the potential of using the tool to shortcut learning is also huge.”
Goel sees danger in three areas: “off-label” harms such as disinformation, and biological and computer viruses; “on-label” limitations, like false information and biases; and large-scale social disruptions stemming from an even greater addiction to our devices or children learning to read and write in a whole new way.
However, his biggest concern is an existential one: “People,” he said. “It is unclear what is going to happen in this unregulated space.”
“It’s very much like science fiction here,” Schneier replied. “Science fiction uses the future to talk about the present, and we are afraid of the corporations that are controlling AI.” He also worries about AI and the effects on democracy, generating policy, political fundraising, messaging, and lobbying. “Again, these are all things humans can do, but at a speed, scale, scope and sophistication that possibly humans can’t.”
The computer science community, Goel pointed out, is split down the middle on the existential risks of AI.
“So what about regulation?” Wald asked.
“We are terrible in the United States at regulating technology,” admitted Schneier. “We do regulate tech that kills people, like pharmaceuticals and airplanes. Computers are going to move into that category soon.”
“I think we will have regulation on specific applications of AI,” said Goel. “The larger risks, it’s unclear to me what can be done.”
Levy’s concern was that the United States can regulate itself but not the rest of the world. “It just takes one bad actor to cause a lot of damage.”
The perils were not so worrisome to the audience, as seen in the results of the poll taken at the start of the discussion:
Will AI be beneficial to humanity?
- 3% strongly disagreed
- 14% disagreed
- 28% were neutral
- 38% agreed
- 17% strongly agreed
“I think it is interesting that only 3% strongly disagreed with this statement,” said Levy. Perhaps encouraged by the trust in the room for computer-aided discussion, Levy made a final request. “Can I propose a way of doing the Q&A that I think is better than the old traditional way?”
With that, another QR code appeared on the screen so the audience could open the site and enter questions. Without a hand being raised or a word spoken, questions were sent via audience members’ phones and appeared on the screen—proof of just how comfortable we have become with technology in our usual human spaces.
—
Banner photo: (from left) Sarah Wald, Dan Levy, Sharad Goel, and Bruce Schneier
Photos by Jessica Scranton