
Researchers At Boston University Develop Framework To Improve Artificial Intelligence Fairness

Artificial Intelligence News

As Artificial Intelligence becomes more capable and proficient at addressing complex tasks, it can manifest gender and racial bias, raising concern over its use in critical domains such as deciding whose loan gets approved, who is qualified for a job, and who walks free versus who stays in prison. However, new research from Boston University scientists illustrates how difficult it is to evaluate fairness in AI algorithms and proposes a framework for identifying and mitigating unfair behavior in automated decisions.

The research paper, titled “From Soft Classifiers to Hard Decisions: How fair can we be?”, was presented last week at the Association for Computing Machinery conference on Fairness, Accountability, and Transparency (ACM FAT*). The Boston University work builds on efforts made in recent years to document and ease algorithmic bias. Most of those efforts center on evaluating how automated systems affect different groups of people and whether they treat those groups equally. One challenge in this field is that the rules and metrics for examining AI fairness are not clear cut. Adam Smith, Professor of Computer Science and Engineering at Boston University and co-author of the paper, stated that one of the things that caught the team's attention early on was that in several settings there are different notions of fairness that all look reasonable but are fundamentally incompatible: you simply cannot have a system that satisfies all of them.

For their research, the Boston University scientists used data from the well-known 2016 ProPublica investigation into COMPAS, an automated recidivism risk assessment tool. ProPublica concluded that COMPAS exhibited bias against African American defendants, assigning them higher risk scores and contributing to harsher outcomes.
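The tension between different fairness notions can be made concrete with a small numerical example. The Python sketch below is not from the paper; the scores, outcomes, and threshold are invented purely for illustration. It computes two widely used group-level checks, the false positive rate under a decision threshold and the observed reoffense rate among high-scored cases (a calibration check), and shows that the two can tell conflicting stories across groups.

```python
# Illustrative sketch (not the authors' code): two common fairness checks
# applied to hypothetical risk scores for two groups. The data are made up
# purely to show that the metrics measure different things.

def false_positive_rate(records, threshold):
    """Share of non-reoffenders labeled high risk at the given threshold."""
    negatives = [r for r in records if not r["reoffended"]]
    flagged = [r for r in negatives if r["score"] >= threshold]
    return len(flagged) / len(negatives) if negatives else 0.0

def calibration(records, lo, hi):
    """Observed reoffense rate among records scored in [lo, hi)."""
    bucket = [r for r in records if lo <= r["score"] < hi]
    if not bucket:
        return 0.0
    return sum(r["reoffended"] for r in bucket) / len(bucket)

# Hypothetical scored defendants for two groups: (risk score, reoffended).
group_a = [{"score": s, "reoffended": o} for s, o in
           [(0.9, True), (0.8, True), (0.7, False), (0.6, False), (0.3, False)]]
group_b = [{"score": s, "reoffended": o} for s, o in
           [(0.9, True), (0.6, True), (0.5, False), (0.2, False), (0.1, False)]]

THRESHOLD = 0.65
for name, group in [("A", group_a), ("B", group_b)]:
    print(f"group {name}: FPR={false_positive_rate(group, THRESHOLD):.2f}, "
          f"reoffense rate in [0.5, 1.0)={calibration(group, 0.5, 1.0):.2f}")
```

In this invented data, group A ends up with the higher false positive rate (0.33 versus 0.00) even though its high-score bucket has, if anything, a lower observed reoffense rate (0.50 versus 0.67), which mirrors the kind of disagreement between reasonable-looking fairness criteria that the researchers describe.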

Most automated decision-making systems are composed of two components. The first is a calibrated scoring classifier, which assigns a number between zero and one to each specific case. The system typically uses Machine Learning to examine various data points and produce an output that best fits the patterns it has previously seen. In the case of recidivism systems, the output of the scoring component is the probability that a defendant will commit another crime if released from jail. The second component is a binary post-processing step that transforms the risk score into a yes-or-no output. In recidivism systems, the post-processing component decides whether the defendant stays in jail or walks free, based on thresholds and ranges.
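A minimal sketch of this two-component structure might look like the following. The feature names, weights, and threshold are invented for illustration; in a real system the scorer would be a trained, calibrated Machine Learning model rather than a hand-written rule.

```python
def score(case):
    """Toy stand-in for the calibrated scoring classifier: maps a case to a
    risk score in [0, 1] from a couple of made-up features."""
    raw = 0.08 * case["prior_arrests"] + 0.5 * (1.0 if case["age"] < 25 else 0.0)
    return min(1.0, max(0.0, raw))

def post_process(risk_score, threshold=0.5):
    """Binary post-processing: turn the soft score into a hard yes/no decision."""
    return "detain" if risk_score >= threshold else "release"

# Hypothetical cases run through the two-stage pipeline.
cases = [
    {"id": 1, "prior_arrests": 0, "age": 40},
    {"id": 2, "prior_arrests": 4, "age": 22},
]
for case in cases:
    s = score(case)
    print(f"case {case['id']}: score={s:.2f}, decision={post_process(s)}")
```

As the article notes, it is this second step that turns soft scores into consequential decisions, so the choice of thresholds and ranges is where much of the fairness question lives.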