This project explores computer vision techniques aimed at exploiting compromising reflections associated with data input on mobile electronic devices such as smartphones. The ubiquity of these personal communication devices and their growing role in data manipulation tasks make unintended visual emanations an exploitable liability for data security. Nevertheless, there are still gaps in our understanding of both the limitations of these techniques and the availability of effective mitigation mechanisms. The goal of this work is to help close this conceptual gap.

The study builds upon recent state-of-the-art techniques for automatic reconstruction of typed input from compromising reflections, comprising robust keystroke event detection and classification mechanisms coupled with natural language processing modules. This paradigm is both effective and amenable to low-cost implementation on commodity devices. Given these developments, threats are no longer restricted to controlled settings with specialized equipment, but extend to highly flexible, potentially impromptu attacks. The project develops advanced cross-platform data input transcription prototypes used within a threat validation framework. This framework characterizes both the operational limitations of each threat scenario (e.g., imaging resolution, scene illumination, computational requirements) and the performance characteristics (e.g., robustness, accuracy) of the different vulnerability exploitation mechanisms. Moreover, the results of the analysis of diverse threat scenarios are being used to identify and, where possible, develop appropriate mitigation mechanisms.
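To make the transcription pipeline concrete, below is a minimal sketch of its three stages: motion-based keystroke event detection, per-event key classification, and a language-model correction pass. It assumes pre-stabilized grayscale frames of the on-screen keyboard; the function names, the key-layout table KEY_CENTERS, and all thresholds are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

# Hypothetical key layout: maps each key to its (row, col) center within the
# stabilized keyboard image; a real attack would calibrate this per device.
KEY_CENTERS = {"a": (60, 20), "s": (60, 40), "d": (60, 60)}

def detect_keystroke_events(frames, thresh=8.0):
    """Flag frame transitions whose motion energy peaks locally.

    A key press produces a brief burst of finger motion over the keyboard
    region; we approximate it with the mean absolute difference between
    consecutive frames and keep local maxima that exceed a threshold.
    """
    energy = np.array([np.mean(np.abs(frames[i].astype(np.float32)
                                      - frames[i - 1].astype(np.float32)))
                       for i in range(1, len(frames))])
    return [j for j in range(1, len(energy) - 1)
            if energy[j] > thresh
            and energy[j] >= energy[j - 1]
            and energy[j] >= energy[j + 1]]

def classify_key(frame, prev_frame):
    """Guess the pressed key from the centroid of the inter-frame motion."""
    diff = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))
    ys, xs = np.nonzero(diff > diff.mean() + 2.0 * diff.std())
    if xs.size == 0:
        return None
    cy, cx = ys.mean(), xs.mean()
    # Nearest key center wins; a real system would use a trained classifier.
    return min(KEY_CENTERS, key=lambda k: (KEY_CENTERS[k][0] - cy) ** 2
                                          + (KEY_CENTERS[k][1] - cx) ** 2)

def correct_with_language_model(chars, dictionary=("sad", "add", "ads")):
    """Toy language-model pass: snap the raw character string to the
    nearest same-length dictionary word by Hamming distance."""
    raw = "".join(c for c in chars if c)
    candidates = [w for w in dictionary if len(w) == len(raw)]
    if not candidates:
        return raw
    return min(candidates, key=lambda w: sum(a != b for a, b in zip(w, raw)))

# Usage sketch (event j marks the transition frames[j] -> frames[j + 1]):
#   events = detect_keystroke_events(frames)
#   chars  = [classify_key(frames[j + 1], frames[j]) for j in events]
#   text   = correct_with_language_model(chars)
```

A real system would replace the motion-centroid heuristic with a trained key-press classifier and the toy dictionary with an n-gram language model, but the overall control flow is the same.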

Project Report

The research in this effort spanned two main areas: the analysis of compromising reflections from mobile devices such as smartphones, and a security analysis of video CAPTCHAs to identify their vulnerabilities to automated attacks.

Under the first area, we analyzed the privacy leaks that result from ubiquitous smartphone use and identified significant leaks caused by reflections of the device's screen in, for example, the user's eyeball or glasses. The detailed results are published in:

- R. Raguram, A. M. White, Y. Xu, J.-M. Frahm, P. Georgel, and F. Monrose, "On the Privacy Risks of Virtual Keyboards: Automatic Reconstruction of Typed Input from Compromising Reflections," IEEE Transactions on Dependable and Secure Computing, 2013.
- Y. Xu, J. Heinly, A. M. White, F. Monrose, and J.-M. Frahm, "Seeing Double: Reconstructing Obscured Typed Input from Repeated Compromising Reflections," Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security, 2013.

Under the second area, we investigated the security of video CAPTCHAs and demonstrated that a large class of these CAPTCHAs is fundamentally broken; a sketch of the style of motion analysis involved follows the reference list below. For this effort we collaborated with Carleton University on the usability evaluation of some of the proposed countermeasures. This research led to two accepted papers:

- Y. Xu, G. Reynaga, S. Chiasson, J.-M. Frahm, F. Monrose, and P. van Oorschot, "Security and Usability Challenges of Moving-Object CAPTCHAs," USENIX Security, 2012.
- Y. Xu, G. Reynaga, S. Chiasson, J.-M. Frahm, F. Monrose, and P. van Oorschot, "Security Analysis and Related Usability of Motion-Based CAPTCHAs: Decoding Codewords in Motion," IEEE Transactions on Dependable and Secure Computing, 2013.
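To illustrate why a broad class of moving-object CAPTCHAs is vulnerable, the following is a minimal sketch, not the published attack, of the core observation: the codeword moves coherently across frames while the background does not, so accumulating optical-flow evidence over the video segments the characters for standard OCR. It assumes OpenCV (cv2) is available; the function name and thresholds are illustrative.

```python
import cv2
import numpy as np

def segment_moving_codeword(frames, mag_thresh=1.0):
    """Accumulate optical-flow magnitude across a CAPTCHA video.

    Pixels belonging to the moving codeword register motion in most frame
    pairs, while background pixels rarely do, so thresholding the
    accumulator yields a character mask suitable for downstream OCR.
    """
    acc = np.zeros(frames[0].shape[:2], dtype=np.float32)
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for f in frames[1:]:
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)      # per-pixel motion magnitude
        acc += (mag > mag_thresh).astype(np.float32)
        prev = gray
    # Keep pixels that moved in at least half of the frame pairs.
    return ((acc >= 0.5 * (len(frames) - 1)) * 255).astype(np.uint8)
```

The resulting mask can then be fed to an off-the-shelf character recognizer; the motion that makes the CAPTCHA hard for static OCR is precisely what segments it.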

Project Details

Budget Start: 2011-08-01
Budget End: 2014-07-31
Fiscal Year: 2011
Total Cost: $151,749
Institution: University of North Carolina Chapel Hill
City: Chapel Hill
State: NC
Country: United States
Zip Code: 27599