Information Retrieval (IR) systems play a central role in how people access information. For example, intelligent search engines are widely used in Web-based services such as web search, product search, and job search. Recently, the growing scale of data and the complexity of black-box models have made modern IR systems less transparent to users. As more and more people rely on IR systems to guide their daily lives and decision making, there has been a growing need for explainable search results, both for technical communities and for the general public, so that users understand why certain results are provided. Meanwhile, governmental agencies are demanding that IR systems provide not only high-quality results but also reasonable justifications, so as to enhance the trustworthiness of these systems. This project focuses on developing algorithms and frameworks to improve the scrutability, explainability, and transparency of modern IR systems. It will inspire large-scale academic-industry collaboration, benefiting billions of users by facilitating the development of reliable and explainable information access services.
This project will develop general and reusable frameworks for scrutable and explainable IR. Research will proceed along two directions. The first direction aims at developing new retrieval models for model-intrinsic explanation. This includes transparent inference processes and decision boundaries for retrieval actions, scrutable functions that support result exploration with user feedback, and traceable information flow that distinguishes the contributions of model inputs. The second direction aims at building an analytical and simulation framework for model-agnostic explanation. This includes post-hoc explanation systems that draw on external knowledge, and a simulation framework over black-box retrieval models that produces explainable outputs. Beyond the model-intrinsic and model-agnostic approaches, the project will also design crowd-sourcing tasks and systematic metrics to compare the effectiveness of intrinsic and agnostic explanations. The research outcomes will include multiple public benchmark datasets and evaluation platforms for explainable IR, which will support sustainable and reproducible future studies in the research community.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.