The opioid epidemic in the United States has been traced in part to a 1980 letter published in the prestigious New England Journal of Medicine reporting that synthetic opioids are not addictive. A belated citation analysis led the journal to append a warning that the letter has been "heavily and uncritically cited as evidence that addiction is rare with opioid therapy." This epidemic is but one example of how unreliable and uncritically cited scientific claims can affect public health: industry studies report that a substantial fraction of biomedical findings cannot be independently verified. Yet there is no publicly available resource or indicator for determining how reliable a scientific claim is without becoming an expert on the subject or retaining one. The total citation count, the most commonly used measure, is inherently a poor proxy for research quality because confirming and refuting citations are counted as equal, and the prestige of a journal is no guarantee that a claim published there is true. The lack of indicators for the veracity of reported claims costs the public, businesses, and governments billions of dollars per year.

We have developed a prototype that automatically classifies statements citing a scientific claim into three classes: those that provide supporting evidence, those that provide contradicting evidence, and those that merely mention the claim. This unique capability enables scite users to analyze the reliability of scientific claims at unprecedented scale and speed, helping them make better-informed decisions. The prototype has attracted potential customers among top biotechnology and pharmaceutical companies, research institutions, academia, and academic publishers.
We propose to conduct research that will refine scite into a minimum viable product (MVP) by improving the prototype's efficiency and accuracy until they reach feasible milestones, and that will refine our product-market fit in our beachhead market, academic publishing, whose influence on the integrity and reliability of research is difficult to overstate.

Public Health Relevance

We propose to develop a platform for evaluating the reliability of scientific claims. Our deep learning model, combined with a network of experts, automatically classifies citations as supporting, contradicting, or merely mentioning a claim, allowing users to easily assess the veracity of scientific articles and, by extension, of the researchers who produce them. By introducing a system that identifies how a research article has been cited, not just how many times, we can evaluate research more effectively than traditional citation-counting approaches, thus helping to improve public health by identifying and promoting reliable research and by increasing the return on public and private investment in research.
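The funded system is a deep learning model, but the three-way labeling task it performs can be made concrete with a toy sketch. The cue phrases, function name, and example sentences below are hypothetical illustrations only, not the proposed method:

```python
# Toy illustration of the three-class citation labeling task
# (supporting / contradicting / mentioning). A naive cue-phrase
# heuristic stands in for the actual deep learning classifier.

SUPPORT_CUES = ("consistent with", "confirms", "in agreement with", "replicates")
CONTRAST_CUES = ("contrary to", "contradicts", "fails to replicate", "in contrast to")

def classify_citation(statement: str) -> str:
    """Label a citing statement as 'supporting', 'contradicting', or 'mentioning'."""
    text = statement.lower()
    if any(cue in text for cue in CONTRAST_CUES):
        return "contradicting"
    if any(cue in text for cue in SUPPORT_CUES):
        return "supporting"
    # Default: the claim is cited without evaluative language.
    return "mentioning"

examples = [
    "Our results are consistent with the findings of Smith et al. [12].",
    "Contrary to [12], we observed no dose-dependent effect.",
    "Opioid addiction rates have been studied previously [12].",
]
for sentence in examples:
    print(classify_citation(sentence))
```

The point of the sketch is the label set and its asymmetry with raw citation counts: two of the three citing statements above would count identically in a total-citations metric even though one supports and one contradicts the cited claim.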

Agency
National Institutes of Health (NIH)
Institute
National Institute on Drug Abuse (NIDA)
Type
Small Business Innovation Research Grants (SBIR) - Phase II (R44)
Project #
4R44DA050155-02
Application #
10136941
Study Section
Special Emphasis Panel (ZDA1)
Program Officer
Arudchandran, Ramachandran Nmn
Project Start
2019-09-30
Project End
2022-04-30
Budget Start
2020-05-15
Budget End
2021-04-30
Support Year
2
Fiscal Year
2020
Total Cost
Indirect Cost
Name
Scite, Inc.
Department
Type
DUNS #
081268696
City
Brooklyn
State
NY
Country
United States
Zip Code
11249