Kids and young adults increasingly rely on electronic devices as their main source of entertainment. During this screen time, users watch movies and videos and play online games on a multitude of platforms. While many platforms offer parental controls, these app-specific filters have been found to be unreliable and easy to deceive. Exposure to objectionable content has been documented to correlate with violent behavior, early initiation of sexual activity, and alcohol use among teenagers, and with anxiety and fear among children. Therefore, there is an urgent need to develop Artificial Intelligence methods that can help detect objectionable content.
Developing this technology requires new research infrastructure in the form of an extensive repository of consistently labeled movie and video content, which does not currently exist. Through the mini-workshop series, the research team will lay the groundwork for designing and creating this new repository of objectionable content. The resulting infrastructure will help advance technology that contributes to the crucial goal of providing a safer online space for young users.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.