This year’s jointly held hacker conferences, Black Hat (for cybersecurity professionals) and Def Con, took place the first week of August in Las Vegas and were “significantly rejiggered” in response to the rapid spread of the Delta variant, according to The Washington Post. In-person attendance was down to about one-fourth of a typical pre-COVID edition, which drew tens of thousands of participants to each event.
Jeff Moss, the founder of both conferences, tweeted in late July about the difficulty of planning for the in-person events during the ongoing pandemic. “I can’t remember any year as complex and stressful as this one,” he tweeted.
In response to the spike in COVID-19 cases, all in-person Def Con attendees were required to show proof of vaccination and wear masks. The event, run entirely by volunteers, also offered an online version. Def Con, held since 1993, features hacking contests that can focus on everything from creating the longest Wi-Fi connection to finding the most effective way to cool a beer in the Nevada heat, according to Wikipedia.
As part of this year’s AI Village — a “community of hackers and data scientists working to educate the world on the use and abuse of artificial intelligence in security and privacy,” as the group says on its website — researchers competed in the tech industry’s first-ever algorithmic bias bounty contest, sponsored by Twitter. The hackers discovered that a Twitter image-cropping algorithm, found by users in 2020 to be biased against Black people, was also coded with implicit bias against other groups. (Watch a video about the contest results below.)
Researchers found evidence that the now-discontinued algorithm, which automatically edited photos to focus on people’s faces, also cropped out older people, Muslims, and people with disabilities, NBC News reported.
Twitter invited researchers to find new ways to prove that the image-cropping algorithm was inadvertently coded to be biased against particular groups of people. The competition was open from July 30-Aug. 6, and the winners were announced during the event, held Aug. 5-8 at the Paris Hotel and Bally’s Hotel in Las Vegas.
Bias in AI
The problematic Twitter algorithm isn’t the only instance of societal biases seeping into artificial intelligence trained by existing data sources.
Jamie Condliffe, who writes the This Week in Tech column for The New York Times, quoted Sandra Wachter, an associate professor in law and A.I. ethics at Oxford University, who said, “What algorithms are doing is giving you a look in the mirror. They reflect the inequalities of our society.”
Humans are inherently biased, Condliffe wrote, “and that can seep into the way they frame the analysis that underlies their code.” He was reviewing another Times story about bias being discovered in other automated systems, including:
- The algorithm that sets credit limits for Apple’s credit card may give higher limits to men than to women.
- AI services from Google and Amazon both failed to recognize the word “hers” as a pronoun, but correctly identified “his.”
AI programs that learn from users’ behavior almost invariably introduce some kind of unintended bias, Parham Aarabi, a professor at the University of Toronto and director of its Applied AI Group, told NBC News. It was Aarabi’s team’s submission, which took second place, that recognized the Twitter algorithm was biased against individuals with white or gray hair. The more artificially lightened their hair was, the less likely the algorithm was to choose them, NBC News reported.
Another contest submission found that the artificial intelligence had learned to crop out people who wore head coverings, like a Muslim hijab, and people who used wheelchairs. The competition winner found that the more a face is altered to look slimmer, younger, or lighter in skin tone, the more likely Twitter’s algorithm would be to highlight that face.
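The contest entries described above share a common shape: alter one attribute of a face (hair color, apparent age, skin tone), re-score the image with the cropping model, and see whether the score consistently moves. Twitter’s actual saliency model is not public, so the sketch below uses a hypothetical, toy scoring function purely to illustrate the perturbation-and-compare idea; the function names and numbers are invented for illustration.

```python
# Hedged sketch of a perturbation-based bias test. "score_fn" stands in
# for a model's saliency score (higher = more likely to be kept in a crop).
# Twitter's real model is not public; toy_score below is a stand-in that
# simply favors brighter images, mimicking the kind of skew the
# contestants measured.

def saliency_bias_delta(score_fn, original, altered):
    """Return the change in saliency score after altering an image.

    A consistently positive delta across many image pairs suggests the
    model favors the altered attribute (e.g., lighter skin tone).
    """
    return score_fn(altered) - score_fn(original)

def toy_score(brightness):
    # Toy model: the image is reduced to one brightness value in [0, 1],
    # and the "model" scores brighter images higher.
    return brightness

# Hypothetical (original, lightened) brightness pairs for three images.
pairs = [(0.40, 0.55), (0.50, 0.70), (0.45, 0.60)]
deltas = [saliency_bias_delta(toy_score, orig, alt) for orig, alt in pairs]
mean_delta = sum(deltas) / len(deltas)
print(f"mean saliency delta after lightening: {mean_delta:+.2f}")
```

In a real audit, `score_fn` would wrap the deployed model and the pairs would be many carefully controlled image edits, but the decision rule is the same: a one-sided shift in the deltas is evidence of bias toward the altered attribute.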
With unintended bias being coded into automated systems dealing with banking, real estate, health care, law enforcement, and even the applications on our smartphones, detecting and correcting it is essential. As Condliffe wrote in The Times, computer scientists are researching ways to spot and remove bias in data, and to make algorithms better able to explain their decisions.
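One simple, widely used first check for the kind of bias described above is to compare how often an automated system selects members of each group, sometimes called demographic parity. The sketch below is a minimal illustration; the group labels and outcomes are made-up audit data, not results from any real system.

```python
# Minimal demographic-parity check: compare selection rates across groups.
# Here 1 = the system kept a person's face in the crop, 0 = cropped out.
# All data below is hypothetical, for illustration only.

def selection_rates(outcomes):
    """Map each group to its selection rate (selected / total)."""
    return {group: sum(results) / len(results)
            for group, results in outcomes.items()}

def parity_gap(outcomes):
    """Largest difference in selection rate between any two groups.

    A gap near 0 is consistent with parity; a large gap flags the
    system for closer review.
    """
    rates = selection_rates(outcomes).values()
    return max(rates) - min(rates)

audit = {
    "group_a": [1, 1, 1, 0, 1],  # kept 4 of 5 times
    "group_b": [1, 0, 0, 1, 0],  # kept 2 of 5 times
}
print(f"parity gap: {parity_gap(audit):.2f}")
```

A check this coarse cannot prove an algorithm is fair, but it is cheap to run continuously and flags the kind of disparity the Twitter contest surfaced.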
The National Institute of Standards and Technology (NIST) has conducted studies into AI bias, including its Face Recognition Vendor Test (FRVT) program, which in 2019 evaluated 189 software algorithms from 99 developers and reached “alarming conclusions,” according to weforum.org.
NIST is currently seeking outside comments on A Proposal for Identifying and Managing Bias in Artificial Intelligence, a document that proposes ways to identify and manage biases in AI. The institute extended the deadline until Sept. 10, 2021. The study’s authors will use the public’s responses to help shape the agenda of several collaborative virtual events NIST will hold in coming months, according to the website.
Twitter Algorithmic Bias Bounty Challenge Results
Curt Wagner is digital editor of Convene.