This SBIR Phase I project aims to design and develop a collaborative and interpretable machine learning platform that enables key machine learning stakeholders to work together to deliver trusted machine learning and artificial intelligence capabilities. The project addresses a critical commercial and societal problem: a lack of trust caused by the inability to provide meaningful interpretation, explanation, and collaborative oversight of machine-generated results. This problem has become one of the biggest challenges to broader adoption of Machine Learning (ML) and Artificial Intelligence (AI), especially in highly regulated industries, where a reasonable degree of traceability, auditability, and rationale for how machine algorithms arrived at their outcomes and predictions is necessary and mandated by law. The project aims to initially address specific and critical use cases in healthcare and insurance, with a plan to expand to other sectors such as financial services, pharmaceuticals, and the self-driving automotive industry. The project intends to capitalize on AI-driven growth in the economy by becoming the ML platform of choice in key regulated market verticals while providing safeguards against the negative impacts of incorrect ML and AI predictions.
The project's key innovation is combining machine learning's ability to process data at scale and find patterns that are difficult for humans to detect with human collaboration, cognition, and oversight, in order to achieve transparency and trust in ML outcomes. The project aims to advance and commercialize past and ongoing scientific research in Interpretable ML (IML) and Explainable AI (XAI), and to significantly speed adoption of this research by applying it to critical business use cases in specific industry verticals. The project provides a novel collaborative interface that evaluates, augments, and applies explanations to specific business use cases in highly regulated industries. It also uses an innovative hybrid machine and human-in-the-loop design that makes ML interactions more meaningful and places a human at the center as the authority for evaluating and overseeing ML explanations before they are used in business decision-making. In future phases, the project also aims to provide continuous closed-loop feedback and improvement of ML model interpretability using its gold-standard explanations knowledge base.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.