The broader impact/commercial potential of this Small Business Innovation Research (SBIR) Phase I project will usher in new Artificial Intelligence/Machine Learning (AI/ML) products that deliver high accuracy with explainability. Without the rationale behind predictions, decision makers cannot trust and effectively use AI/ML solutions. The research and development conducted through this project is expected to lead to faster, more accurate detection of anomalous interactions, with appropriate explanations, and to recommend effective controls that 1) eliminate billions of dollars of fraud, waste, and abuse (FWA) in health insurance markets; 2) lower costs and improve the quality and speed of health care delivery to consumers; and 3) promote new markets in the Personalized Health and Smart Health sectors for emerging medical Internet-of-Things (IoT) devices and systems, enabling economic growth. The results of this research are expected to enable the discovery of medical anomalies while advancing the detection of new types of FWA. The boost in detection accuracy with explanation is expected to save hundreds of millions of dollars. Societal impacts include reduced costs to consumers and taxpayers through better FWA control and improved health outcomes through early medical IoT anomaly detection. More broadly, the system is expected to detect possible opioid or substance abuse cohorts and under- or over-medication, and to provide advance alerts for community health anomalies.
The proposed project will extend and generalize a novel machine learning method to solve the fraud, waste, and abuse (FWA) problem in health insurance, coupled with an explanatory capability that provides the rationale behind predictions and operationalized in a distributed, parallel computing framework for scaling. The technical problem is how to combine relations between entities (e.g., doctors) with their attributes (e.g., a doctor's prescription history). This project advances the state of the art by combining relations between rows in the training data (e.g., doctors) with standard machine learning to improve prediction accuracy while facilitating local explanation. The method uses network information to fill in the gaps of entity information alone, and vice versa, while facilitating explanation for each test case. This method is expected to significantly improve the ability to detect FWA and pave the way for multibillion-dollar savings, to flag IoT-based medical anomalies in advance to improve health outcomes, and to build decision makers' trust in the predictions through the explanations provided. The team intends to deliver not only the accuracy boost with explainability but a fully operational system with an automated data pipeline and a parallel, distributed algorithmic processing framework that can be deployed on a SaaS basis or as an enterprise solution.
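The abstract does not disclose the specific algorithm. As a minimal illustrative sketch of the general idea only, one could augment each entity's attribute features with features derived from a relational network before training a standard classifier whose feature weights support explanation. Everything here is hypothetical: the referral network, the attribute columns, and the choice of degree/PageRank features and gradient boosting are illustrative assumptions, not the project's actual method.

```python
# Hypothetical sketch: combine network-derived features with per-entity
# attributes in a standard ML model. Not the project's actual algorithm.
import numpy as np
import networkx as nx
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic attribute data: one row per doctor (e.g., prescription history).
n = 200
X_attr = rng.normal(size=(n, 3))

# Synthetic relational network among the same doctors (e.g., shared patients).
G = nx.gnp_random_graph(n, 0.05, seed=0)

# Network features (degree, PageRank) fill gaps the attributes alone miss.
pr = nx.pagerank(G)
X_net = np.array([[G.degree(i), pr[i]] for i in range(n)])

# Combine the relational and attribute views into one feature matrix.
X = np.hstack([X_attr, X_net])
y = (X_attr[:, 0] + X_net[:, 0] / 10 > 0.5).astype(int)  # toy FWA label

clf = GradientBoostingClassifier().fit(X, y)
print(clf.feature_importances_)  # per-feature weights can inform explanations
```

In this toy setup the classifier sees both views at once, so its feature importances (or a per-prediction attribution method applied on top) can indicate whether an individual flag was driven by the entity's own attributes or by its network position.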
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.