Artificial intelligence (AI) systems have advanced dramatically in recent years and are now deployed in many real-world applications that touch our daily lives. Despite their remarkable performance, these technologies inherently carry the risk of amplifying the societal biases present in their training data. This can lead to unfair decisions based on sensitive demographic attributes (e.g., gender), as well as the unintentional generation of insulting outputs (e.g., tagging a person as an animal). This project develops a hybrid AI system that includes humans in the decision process (human-in-the-loop) in order to ensure decisions that are robust, unbiased, and fair. The results will enable intelligent machines to seamlessly integrate with human experts to: 1) identify various types of biases in the model predictions and 2) learn to mimic the behavior of human experts and take implicit societal factors into consideration when making automatic decisions.
The technical approach is based on developing a human-machine hybrid intelligence framework that allows human experts to censor and guide an AI agent in order to identify harmful decisions and correct biases. Specifically, the team will build a bias diagnosis module with a censor model that predicts whether a decision is fair. When the censor model is uncertain, it will request judgments from a human expert under an active imitation learning framework. The feedback from the bias diagnosis module will be used to improve the AI system and to correct the bias exhibited in its predictions. The approaches will be applied to various natural language processing and computer vision applications, including entity co-reference resolution (e.g., AI thinks a female pronoun is less likely to refer to a leader) and object detection in images (e.g., AI cannot identify a tie worn by a woman).
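The uncertainty-gated querying described above can be illustrated with a minimal sketch. All names here (`censor_uncertainty`, `active_fairness_loop`, the `fair`/`score` fields, and the expert oracle) are hypothetical stand-ins, not the project's actual implementation: a censor model scores each decision for fairness, and only decisions whose score is close to the decision boundary are deferred to the human expert.

```python
def censor_uncertainty(prob_fair):
    """Simple margin-based uncertainty: 1.0 when the censor model's
    fairness probability sits at 0.5, 0.0 at the extremes."""
    return 1.0 - 2.0 * abs(prob_fair - 0.5)

def expert_label(decision):
    """Hypothetical human-expert oracle; in practice this would be an
    interactive query to a human annotator."""
    return decision["fair"]

def active_fairness_loop(decisions, censor, threshold=0.6):
    """Label each decision as fair/unfair, querying the human expert
    only when the censor model is too uncertain to decide on its own."""
    labels, n_queries = [], 0
    for d in decisions:
        p = censor(d)  # censor model's probability that d is fair
        if censor_uncertainty(p) > threshold:
            labels.append(expert_label(d))  # defer to the human expert
            n_queries += 1
        else:
            labels.append(p >= 0.5)  # trust the censor model
    return labels, n_queries

# Illustrative run on synthetic decisions: only the borderline case
# (score 0.52) triggers a query to the expert.
decisions = [
    {"score": 0.95, "fair": True},
    {"score": 0.52, "fair": False},
    {"score": 0.10, "fair": False},
]
labels, n_queries = active_fairness_loop(decisions, lambda d: d["score"])
```

In an active imitation learning setting, the expert's answers would also be fed back as training data so the censor model gradually needs fewer queries; that feedback step is omitted here for brevity.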
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.