In this project, the PI will investigate how to apply mobile interaction data to automatically improve the usability and accessibility of mobile user interfaces. The research will identify user abilities from their behaviors, leading to mobile user interfaces that are more accessible to diverse user communities (e.g., veterans) in a variety of environments, including cases of situationally-induced impairments. In particular, the PI will explore the challenges of data-driven adaptive interface layouts informed by user behavior and visual attention when the user is actually mobile. The work will involve student researchers from under-represented groups currently advised by the PI, and will be evaluated with diverse populations engaged in realistic but varied activities. This will allow the PI to release software and research findings that enable designers to build interfaces that properly adapt to the abilities of members of the target communities. The research ties directly into an educational plan to develop a student response tool for lecture-style user interface courses that allows students to create wireframe interfaces, design typefaces, and draw visualizations during class on touchscreen and mobile devices, which can be displayed on the room screen for discussion and peer feedback. The tool will be iteratively refined in a user interface course with a diverse student population, and deployed in a course outside of computer science at the PI's institution as well as in user interface courses at other universities.

User interfaces on mobile devices are not one-size-fits-all, nor should they be. Users' abilities may differ, or the situational context of the environment can introduce new challenges. By their very nature, mobile devices are used in many different environments and with different postures. For example, users may hold a tablet in both hands with the screen in landscape orientation to read in bed, occasionally swiping to a new page; at other times, they may be pushing a stroller while gripping their phone with one hand to navigate a map application. Unlike desktop computers, smartphones and tablets accept touch input and sense both motion and orientation, and data from these interactions can be captured by websites and apps to identify specific user abilities and context. Over time, user interaction data collected at scale will enable personalization of the interface, for example by reshaping touch targets to compensate for a user's habit of typically tapping to the right of a target, by relocating important buttons to more accessible locations on the screen, or by inferring an ideal text size from the zoom level a user typically applies. The work therefore comprises three research objectives. The first objective is to investigate how to passively capture touch and motion data from mobile devices, to compute metrics representing user habits and mistakes as they perform touch interactions, and to determine the environmental context of the user from motion and touch behaviors. The second objective is to incorporate data from the orientation and touch sensors to train an eye-tracking model that uses the front-facing camera to detect the user's attention. The third objective, informed by findings from the first two, is to improve the usability and accessibility of existing interfaces, e.g., by adjusting the hittable area of targets, the text size, and the interface layout.
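To make the touch-target adaptation concrete, the following is a minimal sketch, assuming a web context and standard browser APIs (PointerEvent, getBoundingClientRect), of how a page might passively record the offset between each tap and the center of its intended target, then pad a target's hittable area in the direction of the user's habitual offset. All names here (TapOffsetModel, adjustHitArea, the data-adaptive-target attribute) are hypothetical illustrations, not part of the project's released software.

    // Passively log tap offsets relative to intended targets, then use the
    // running mean offset to grow each target's effective hit area.
    interface Offset { dx: number; dy: number; }

    class TapOffsetModel {
      private sumX = 0;
      private sumY = 0;
      private count = 0;

      // Record the offset between a tap and the center of the element it landed on.
      record(tapX: number, tapY: number, target: HTMLElement): void {
        const rect = target.getBoundingClientRect();
        this.sumX += tapX - (rect.left + rect.width / 2);
        this.sumY += tapY - (rect.top + rect.height / 2);
        this.count += 1;
      }

      // Mean systematic offset; a positive dx means the user tends to tap right of center.
      meanOffset(): Offset {
        if (this.count === 0) return { dx: 0, dy: 0 };
        return { dx: this.sumX / this.count, dy: this.sumY / this.count };
      }
    }

    const model = new TapOffsetModel();

    // Passive capture: listen for taps on elements marked as adaptive targets.
    document.addEventListener("pointerdown", (e: PointerEvent) => {
      const target = (e.target as HTMLElement | null)
        ?.closest<HTMLElement>("[data-adaptive-target]");
      if (target) model.record(e.clientX, e.clientY, target);
    });

    // Adaptation: pad the hittable area asymmetrically in the direction of the
    // user's habitual offset (one of several possible strategies), capped at 24px.
    function adjustHitArea(target: HTMLElement, model: TapOffsetModel): void {
      const { dx, dy } = model.meanOffset();
      target.style.paddingRight = dx > 0 ? `${Math.min(dx, 24)}px` : "";
      target.style.paddingLeft = dx < 0 ? `${Math.min(-dx, 24)}px` : "";
      target.style.paddingBottom = dy > 0 ? `${Math.min(dy, 24)}px` : "";
      target.style.paddingTop = dy < 0 ? `${Math.min(-dy, 24)}px` : "";
    }

In practice such adjustments would be applied conservatively and validated against the usability and accessibility metrics developed under the first objective; the sketch only shows where the passively captured touch signal could enter the adaptation loop.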

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 1552663
Program Officer: Ephraim Glinert
Budget Start: 2016-02-01
Budget End: 2022-01-31
Fiscal Year: 2015
Total Cost: $517,235
Name: Brown University
City: Providence
State: RI
Country: United States
Zip Code: 02912