An article in The New Yorker earlier this year spelled out how conferences serve as the de facto checkpoint for ethical scrutiny of artificial intelligence (AI) applications.
“In computer science, the main outlets for peer-reviewed research are not journals but conferences,” reads the first line in a recent article in The New Yorker, “where accepted papers are presented in the form of talks or posters.”
There can be no mistaking the critical role conferences play in the field of AI, which is rife with potential ethical dilemmas that are anything but academic, according to The New Yorker story, “Who Should Stop Unethical A.I.?” Concerns about AI’s possible impacts on society tend to fall into one of four categories: technologies that could easily be weaponized against populations, including facial recognition, location tracking, and surveillance; technologies, such as Speech2Face, that may “harden people into categories that don’t fit well,” such as gender or sexual orientation; automated weapons research; and tools for creating fake news, voices, or images.
“There are few agreed-upon standards for ruling AI research out of bounds,” the author of The New Yorker article, science writer Matthew Hutson, wrote. But conference reviewers, ordinary computer scientists deciding whether to accept or reject papers based on intellectual merit, “were really serving as the one and only source for pushing back on a lot of practices which are considered controversial in research,” Katie Shilton, an information scientist at the University of Maryland and chair of the Association for Computing Machinery’s Special Interest Group on Computer-Human Interaction, told Hutson.
But there are shortcomings to this approach, since “lots and lots of folks in computer science have not been trained in research ethics,” Shilton said. On top of that, “it’s one thing to point out when a researcher is researching wrong,” Hutson wrote. As Shilton told him: “It is much harder to say, ‘This line of research shouldn’t exist.’”
Although Hutson didn’t specifically say that the loss of face-to-face events (AI gatherings have gone digital during COVID) has resulted in a slip in the focus on ethics, he did say this: “The shadow of suspicion that now falls over much of A.I. research feels different in person than it does online. In early 2018, I attended Artificial Intelligence, Ethics, and Society, a conference in New Orleans, where a researcher presented a model that uses police data to guess whether a crime was gang related. (I covered the event for Science.) The presenter took pointed questions from the audience about the possible unintended consequences of his research — could suspects be mislabeled as gang members? — before declaring, in exasperation, that he was just ‘a researcher.’ Wrong answer. … An audience member stormed out, reciting, in a German accent, a song about the Nazi rocket scientist Wernher von Braun: ‘Once the rockets are up, who cares where they come down?’”
Online, a criticism typed into a chat box or a rant on social media would likely have had a far less dramatic impact.
EARN CMP CREDIT
Earn one clock hour of certification by visiting the Convene CMP Series web page to answer questions about information contained in Convene’s May-June cover story, “Real-Life Examples Show the Importance of In-Person Events.” The Certified Meeting Professional (CMP) is a registered trademark of the Events Industry Council.
Michelle Russell is editor in chief of Convene.