AI tools are fast becoming pervasive in medicine and other high-stakes fields, far outpacing efforts to prepare the workforce to use them. Marvin Slepian, director of the Arizona Center for Accelerated Biomedical Innovation, is responding with experiential learning that cracks open the black box of AI while immersing students in its uses.
In one experiment, student teams rely on different resources for information: ChatGPT, Google and library collections. “The hypothesis is that the mechanism of retrieval – the detailed prompts
of ChatGPT versus the simple query structures of Google versus sweat-of-the-brow library research – will generate different results,” Slepian says. “But what do those results have in common, what gets left out and how do they compare in terms of accuracy?”
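In code, that comparison can reduce to set operations over the claims each team surfaces. The sketch below is purely illustrative and not part of the course protocol: the claim sets and the overlap_report helper are assumptions, standing in for findings the teams would distill into short, normalized statements.

```python
# Illustrative only: compare the claims surfaced by each retrieval method.
# The sets below are stand-ins, not real course data.

def overlap_report(claims_by_source: dict[str, set[str]]) -> None:
    """Print what every method found and what each method left out."""
    shared = set.intersection(*claims_by_source.values())
    union = set.union(*claims_by_source.values())
    print(f"Found by every method: {len(shared)} of {len(union)} claims")
    for name, claims in claims_by_source.items():
        missed = sorted(union - claims)
        print(f"{name} left out: {missed or 'nothing'}")

overlap_report({
    "ChatGPT": {"claim A", "claim B", "claim C"},
    "Google":  {"claim A", "claim B", "claim D"},
    "Library": {"claim A", "claim C", "claim D"},
})
```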
Another experiment investigates the risk of AI amplifying misinformation. Students probe the threshold at which repeatedly feeding ChatGPT wrong information leads it to return falsehoods as fact, producing the outputs dubbed "hallucinations" in the world of AI.
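A probe in that spirit can be automated. This is a minimal sketch, assuming the OpenAI Python client (openai >= 1.0); the false claim, model choice, and round limit are illustrative assumptions, not Slepian's actual protocol.

```python
# Hypothetical sketch: repeatedly assert a false claim, then check whether
# the model starts repeating it as fact. Assumes the OpenAI Python client
# (openai >= 1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
FALSE_CLAIM = "Aspirin was first synthesized in 1950."  # deliberately wrong

messages = []
for round_num in range(1, 11):
    # Inject the misinformation one more time each round, then ask.
    messages.append({"role": "user", "content": FALSE_CLAIM})
    messages.append({"role": "user",
                     "content": "When was aspirin first synthesized?"})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=messages,
    ).choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    if "1950" in reply:  # crude check for the falsehood surfacing
        print(f"Falsehood echoed after {round_num} repetition(s)")
        break
```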
Slepian’s students are forerunners in defining the parameters of those vulnerabilities while also proposing ways to make AI outputs more reliable, for example by annotating outputs with their sources or by enabling crowd-sourced metadata about quality and accuracy.
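Paired together, those two proposals might look like the following data structure, an entirely illustrative sketch in which the AnnotatedAnswer class and its field names are assumptions rather than anything the students have published.

```python
# Hypothetical sketch of an annotated AI answer: the output text is paired
# with its claimed sources and with crowd-sourced accuracy ratings.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class AnnotatedAnswer:
    text: str
    sources: list[str]                                # citations backing the answer
    ratings: list[int] = field(default_factory=list)  # 1-5 crowd scores

    def rate(self, score: int) -> None:
        """Record one reader's accuracy rating (1 = wrong, 5 = accurate)."""
        self.ratings.append(max(1, min(5, score)))

    @property
    def quality(self) -> float | None:
        """Mean crowd rating, or None if no one has rated yet."""
        return mean(self.ratings) if self.ratings else None

answer = AnnotatedAnswer(
    text="Aspirin was first synthesized in 1897.",
    sources=["https://en.wikipedia.org/wiki/Aspirin"],
)
answer.rate(5)
answer.rate(4)
print(answer.quality)  # 4.5
```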
“We have to be fast and agile,” Slepian says. “We can’t wait years to put together a grant to study these things.” As a physician-researcher, he sees incredible promise in AI tools but notes they’re being adopted at breakneck speed, despite still being poorly understood. “We need guidelines for these technologies, and we need them now.”
Undergraduates Jordan Rodriguez and Katelyn Rohrer are among Slepian’s researchers who presented new data on ChatGPT at the Biomedical Engineering Society conference in October 2023.