Microsoft’s improved Face API more accurately recognizes a range of skin tones
In a blog post today, Microsoft announced an update to Face API that improves the facial recognition platform’s ability to recognize gender across different skin tones, a longstanding challenge for computer vision platforms.
With the improvements, the Redmond company said, it was able to reduce error rates by up to 20 times for men and women with darker skin tones, and by 9 times for all women.
For years, researchers have demonstrated facial ID systems’ susceptibility to ethnic bias. A 2011 study found that algorithms developed in China, Japan, and South Korea had more trouble recognizing Caucasian faces than East Asian faces, and a separate study showed that widely deployed facial recognition tech from security vendors performed 5 to 10 percent worse on African American faces.
To tackle the problem, researchers at Microsoft revised and expanded Face API’s training and benchmark datasets and collected new data across skin tones, genders, and ages. They also worked with experts in artificial intelligence (AI) fairness to improve the precision of the algorithm’s gender classifier.
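Microsoft has not published the revised datasets or its annotation schema, but the underlying first step, auditing how training data is distributed across demographic subgroups before rebalancing it, is easy to illustrate. Below is a minimal Python sketch; the attribute names, group labels, and sample records are all hypothetical.

```python
from collections import Counter

def audit_composition(samples, keys=("skin_tone", "gender", "age_band")):
    """Count training samples per demographic subgroup.

    `samples` is a list of dicts carrying demographic annotations.
    The attribute names and labels are illustrative, not Microsoft's
    actual schema.
    """
    counts = Counter(tuple(s[k] for k in keys) for s in samples)
    total = len(samples)
    for group, n in sorted(counts.items()):
        print(f"{group}: {n} samples ({100 * n / total:.1f}%)")
    return counts

# Hypothetical annotated training set; a real audit would run over the
# full dataset and flag under-represented subgroups for new collection.
samples = [
    {"skin_tone": "darker", "gender": "female", "age_band": "18-30"},
    {"skin_tone": "lighter", "gender": "male", "age_band": "31-50"},
    {"skin_tone": "lighter", "gender": "female", "age_band": "18-30"},
]
audit_composition(samples)
```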
“We had conversations about different ways to detect bias and operationalize fairness,” Hanna Wallach, senior researcher at Microsoft’s New York research lab, said in a statement. “We talked about data collection efforts to diversify the training data. We talked about different strategies to internally test our systems before we deploy them.”
The enhanced Face API technology is just the start of a company-wide effort to minimize bias in AI. Microsoft is developing tools that help engineers identify blind spots in training data that might result in algorithms with high gender classification error rates. The company is also establishing best practices for detecting and mitigating unfairness in the course of AI systems development, according to the blog post.
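The blind-spot tooling itself has not been described in detail; the analysis it implies is disaggregated evaluation, i.e., computing a classifier’s error rate per subgroup rather than as a single aggregate number. Here is a sketch under that assumption, with hypothetical evaluation records and an arbitrary 2x flagging threshold:

```python
from collections import defaultdict

def error_rates_by_group(records, flag_factor=2.0):
    """Report per-group error rates for a gender classifier and flag
    groups whose error rate exceeds `flag_factor` times the overall rate.

    Each record is (subgroup_label, true_gender, predicted_gender);
    the subgroup labels and the 2x threshold are illustrative choices.
    """
    errors, totals = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        errors[group] += int(truth != pred)
    overall = sum(errors.values()) / sum(totals.values())
    for group in sorted(totals):
        rate = errors[group] / totals[group]
        flag = "  <-- potential blind spot" if rate > flag_factor * overall else ""
        print(f"{group}: {rate:.1%} error (overall {overall:.1%}){flag}")

# Hypothetical records: the darker/female subgroup is misclassified far
# more often than the aggregate error rate alone would reveal.
records = [
    ("darker/female", "female", "male"),
    ("darker/female", "female", "male"),
    ("darker/female", "female", "female"),
    ("lighter/male", "male", "male"),
    ("lighter/male", "male", "male"),
    ("lighter/male", "male", "male"),
    ("lighter/female", "female", "female"),
    ("lighter/female", "female", "female"),
    ("lighter/female", "female", "female"),
]
error_rates_by_group(records)
```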
More concretely, Microsoft’s Bing team is collaborating with ethics experts to explore ways to surface search results that reflect “the active discussion in boardrooms, throughout academia and on social media about the dearth of female CEOs.” Microsoft notes that less than 5 percent of Fortune 500 CEOs are women and that web search results for “CEO” largely turn up images of men.
“If we are training machine learning systems to mimic decisions made in a biased society, using data generated by that society, then those systems will necessarily reproduce its biases,” Wallach said. “This is an opportunity to really think about what values we are reflecting in our systems, and whether they are the values we want to be reflecting in our systems.”
Microsoft isn’t the only company attempting to minimize algorithmic bias. In May, Facebook announced Fairness Flow, which automatically warns if an algorithm is making an unfair judgment about a person based on his or her race, gender, or age. Recent studies from IBM’s Watson and Cloud Platforms group have also focused on mitigating bias in AI models, specifically as they relate to facial recognition.
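Fairness Flow’s internals are likewise not public. The behavior the announcement describes, automatically warning when a model’s decisions diverge across protected groups, can be approximated with a demographic-parity check; the 80 percent threshold below is borrowed from the common “four-fifths” rule, not from Facebook.

```python
import warnings

def parity_warning(decisions_by_group, min_ratio=0.8):
    """Warn when any group's positive-decision rate falls below
    `min_ratio` of the best-off group's rate.

    `decisions_by_group` maps a group label to a list of 0/1 model
    decisions. The metric and threshold are illustrative, not
    Fairness Flow's actual logic.
    """
    rates = {g: sum(d) / len(d) for g, d in decisions_by_group.items()}
    best = max(rates.values())
    for group, rate in rates.items():
        if best > 0 and rate / best < min_ratio:
            warnings.warn(
                f"group {group!r}: positive rate {rate:.1%} is under "
                f"{min_ratio:.0%} of the best group's {best:.1%}"
            )

# Hypothetical decisions: group_b's 25% positive rate is well under
# 80% of group_a's 75%, so the check emits a warning.
parity_warning({
    "group_a": [1, 1, 0, 1],
    "group_b": [1, 0, 0, 0],
})
```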
Article editor: 小柳