The driving vision of this NSF CAREER project is a smartphone that can see, hear, and feel, and can therefore serve its user continuously. Smartphones have embraced a variety of sensors: cameras, microphones, accelerometers, GPS, and more. An emerging, important category of smartphone applications uses these sensors to learn about the physical world and the human user, often in the background without user engagement. However, existing and emerging smartphone platforms are fundamentally flawed for such sensing applications: they adopt a centralized processing model that invokes the increasingly powerful central processor for every task, even very simple ones such as sensor data processing. This model leads to unacceptable battery lifetime when a smartphone senses frequently.
To address this fundamental flaw, this project aims to reinvent the smartphone platform around a heterogeneous, distributed processing model that incorporates weak processors for simple, frequent tasks. The research has three objectives: (i) relieve developers of the burden of the heterogeneous, distributed processing model through runtime and compiler support; (ii) support efficient, secure execution of third-party applications that use heterogeneous, distributed resources; and (iii) provide an optimized design and realization of the envisioned heterogeneous smartphone hardware, at both the board and chip levels. While the project focuses on smartphone-like systems, its results are expected to apply to general embedded systems with heterogeneous, distributed resources. The project also provides a multidisciplinary platform for its educational objectives: developing systems and experimental components for the mobile embedded computing curriculum, involving undergraduate students in publishable research, and promoting science and engineering studies to high-school students and to groups underrepresented in these areas.