Wireless Internet Center for Advanced Technology (WICAT) Proposal #1127960

This proposal seeks funding for the Wireless Internet Center for Advanced Technology (WICAT) sites at the Polytechnic Institute of NYU (lead), Auburn University, the University of Virginia, and the Virginia Polytechnic Institute and State University. Funding requests for fundamental research are authorized by an NSF-approved solicitation, NSF 10-601, which invites I/UCRCs to submit proposals for support of industry-defined fundamental research.

Increasing demand for video services on handheld and other wireless devices has made efficient exchange of video data over wireless links a necessity. However, continual changes in user demand, network traffic, and hardware in the wireless environment pose significant research challenges to achieving this goal. The proposed research addresses these challenges through a three-pronged effort to explore wireless technologies able to adapt to changing communication system states. Cisco Systems will provide a test bed environment for the proposed work, including system surveillance and cloud computing infrastructure.

Efficient and seamless use of video on wireless platforms in the emerging cloud computing environment will have major economic impact across virtually all sectors. Ubiquitous implementation and widespread adoption of these services will also have societal impact through the communication they enable. The proposed work has the potential to inform and define approaches to achieving this video content integration, and to strengthen collaborative ties and link resources among the four participating center sites, furthering both research and education in this critical area.

Project Report

According to a recent study by Cisco Systems, Inc., data traffic over wireless networks is expected to increase by a factor of 66 by 2013. Much of the increase in future wireless data traffic will be video related, driven by the compelling desire of mobile users for ubiquitous access to multimedia content. Such a drastic increase in video traffic will significantly stress the capacity of existing and future wireless networks. While new wireless network architectures and technologies are being developed to meet this "grand challenge", it is also important to revisit existing wireless networks to maximize their potential in carrying real-time video data. Coding, routing, and wireless transmission power can be optimized with respect to an individual user's preferences regarding basic video tradeoffs, such as frame rate versus resolution. Such optimization is complicated considerably when user preferences depend on the content of the video and are thus subject to change at the application layer. In many scenarios, such as emergency response and military settings where a user may frequently switch tasks or objectives, the dependency of preference on content is itself subject to change. This may raise a need to dynamically elicit or infer user preferences, and then interpret content relative to those preferences, as the basis for delivery system adaptation.

The goal of this project is to develop methods for managing delivery quality adaptively, responding to changes in preferences for content inferred from user feedback. The basic technical approach is to infer user preferences using machine learning techniques and automated analysis of video content. Broadly, project activity focused on the development of machine learning techniques for automated activity recognition. In many situations, recognition of activity involving humans, objects, or their interaction is key to understanding the relevance of sensor collections, and ideally one would like to be able to assess such activity directly in video. The past two years have seen much progress from the deep learning community in activity recognition in video. Research in this project took a different approach, focusing on finer-grained assessment of activities. This is inherently a harder problem, but it is offset by the use of high-level features such as human skeletal joints and surface points on objects. These features can in principle be extracted from video using techniques that have appeared in the literature; alternatively, they can be obtained from depth-sensing cameras such as the Microsoft Kinect.

The broad impact of the project is the development of a solid technical foundation to support automated understanding of activity in video (RGB or depth). This foundation integrates recent progress in a variety of fields, including automated target recognition, machine learning, human factors, and computer vision.
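As a concrete illustration of the feedback-driven adaptation described above, the sketch below shows one minimal way such a loop could be structured: a small set of candidate delivery configurations (trading frame rate against resolution) is treated as arms of an epsilon-greedy selection policy, and per-segment user feedback updates the estimated preference for each configuration. The configuration names, feedback scores, and update rule are illustrative assumptions for this sketch, not the project's actual algorithm.

```python
import random

# Candidate delivery configurations trading frame rate against resolution.
# These labels and values are hypothetical placeholders.
CONFIGS = [
    {"name": "high_fps_low_res", "fps": 60, "height": 480},
    {"name": "balanced",         "fps": 30, "height": 720},
    {"name": "low_fps_high_res", "fps": 15, "height": 1080},
]


class PreferenceLearner:
    """Epsilon-greedy selection over delivery configurations,
    updated from per-segment user feedback scores."""

    def __init__(self, configs, epsilon=0.1):
        self.configs = configs
        self.epsilon = epsilon
        self.counts = [0] * len(configs)
        self.values = [0.0] * len(configs)  # running mean feedback per config

    def choose(self):
        # Occasionally explore; otherwise pick the configuration with the
        # highest estimated user satisfaction so far.
        if random.random() < self.epsilon:
            return random.randrange(len(self.configs))
        return max(range(len(self.configs)), key=lambda i: self.values[i])

    def update(self, index, feedback):
        # Incremental mean of observed feedback (e.g., a 0-1 satisfaction score).
        self.counts[index] += 1
        self.values[index] += (feedback - self.values[index]) / self.counts[index]


if __name__ == "__main__":
    learner = PreferenceLearner(CONFIGS)
    for segment in range(200):
        i = learner.choose()
        cfg = CONFIGS[i]
        # Simulated feedback: this hypothetical user prefers higher resolution.
        feedback = 0.9 if cfg["height"] >= 1080 else 0.4
        feedback += random.uniform(-0.05, 0.05)
        learner.update(i, feedback)
    best = max(range(len(CONFIGS)), key=lambda i: learner.values[i])
    print("Inferred preferred configuration:", CONFIGS[best]["name"])
```

In a deployed system, the feedback signal might come from explicit ratings or implicit cues, and the selected configuration would drive encoder and transmission settings per segment; the project's actual methods additionally condition this inference on automated analysis of the video content itself.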

Agency: National Science Foundation (NSF)
Institute: Division of Industrial Innovation and Partnerships (IIP)
Type: Standard Grant (Standard)
Application #: 1127984
Program Officer: Lawrence A. Hornak
Project Start:
Project End:
Budget Start: 2011-08-15
Budget End: 2013-07-31
Support Year:
Fiscal Year: 2011
Total Cost: $50,000
Indirect Cost:
Name: University of Virginia
Department:
Type:
DUNS #:
City: Charlottesville
State: VA
Country: United States
Zip Code: 22904