Network learning and mining play a pivotal role across many disciplines, including computer science, physics, social science, management, neuroscience, civil engineering, and e-commerce. Decades of research in this area have produced a wealth of theories, algorithms, and open-source systems for answering who/what questions. For example, who is the most influential user in a social network? Which items should be recommended to a given user on an e-commerce platform? Which Twitter posts are likely to go viral? Which users can be grouped into the same online community? Which financial transactions between users look suspicious? State-of-the-art techniques for answering these questions have been widely adopted in real-world applications, often with strong empirical performance as well as solid theoretical foundations. Despite this remarkable progress, a fundamental question remains largely open: how can the results and the process of network learning be made explainable, transparent, and fair? Answering this question would improve the interpretability, transparency, and fairness of a variety of high-impact applications built on network learning, including social network analysis, neuroscience, team science and management, intelligent transportation systems, critical infrastructure, and blockchain networks.

This project shifts the focus of network learning from answering who and what to answering how and why. It develops computational theories, algorithms, and prototype systems that form three key pillars of fair network learning. The first pillar (interpretation) focuses on explaining network learning results and processes to end users, who are often not machine learning experts. In particular, the project develops theory and metrics to quantify the quality of explanations for network learning and, building on these metrics, brings explainability to network learning algorithms by carefully balancing model fidelity against model interpretability. The second pillar (auditing) makes the network learning process transparent to end users by demonstrating how the learning results of a given algorithm relate to the underlying network structure. In particular, the project develops a new fairness measure that accommodates the non-independent-and-identically-distributed (non-IID) nature of network data and, based on this measure, an algorithmic framework to audit a variety of network learning algorithms; a simplified sketch of such an audit appears below. The third pillar (de-biasing) explores how to mitigate potential biases to ensure fair network learning. Underpinning these pillars is a human-in-the-loop visual analytics framework that supports users in identifying and mitigating bias in network learning. By assimilating the research outcomes into the courses and summer programs the research team has developed, this project trains students to value the spirit of fairness.
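The abstract does not define the project's fairness measure. As a purely illustrative sketch of the kind of audit the second pillar describes, the snippet below computes a classical statistical-parity gap over node-level predictions; the function name and data are hypothetical, and the project's actual measure would additionally account for the non-IID graph structure rather than treating nodes as independent samples.

```python
import numpy as np

def statistical_parity_gap(y_pred, groups):
    """Hypothetical audit metric: absolute difference in positive-prediction
    rates between two node groups (classical statistical parity).

    y_pred : array-like of binary predictions (0/1), one per node
    groups : array-like of group labels (0/1), one per node
    """
    y_pred = np.asarray(y_pred)
    groups = np.asarray(groups)
    rate_a = y_pred[groups == 0].mean()  # positive rate in group 0
    rate_b = y_pred[groups == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

# Toy example: audit the output of a hypothetical node classifier.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_gap(y_pred, groups))  # |0.75 - 0.25| = 0.5
```

A gap near zero suggests the two groups receive positive predictions at similar rates; a graph-aware measure of the kind the project proposes would further relate this disparity to the underlying network structure.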

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Budget Start: 2020-01-01
Budget End: 2022-12-31
Fiscal Year: 2019
Total Cost: $601,592
Name: University of Illinois Urbana-Champaign
City: Champaign
State: IL
Country: United States
Zip Code: 61820