This project investigates signal processing and machine learning algorithms for identifying and localizing one or more targets in an acoustic scene. The algorithms will be mapped onto a parallel architecture suitable for integration with micro-power mixed-signal hardware. A biologically inspired gradient flow signal representation blindly separates and localizes targets using a miniature microphone array with sub-wavelength aperture. A support vector machine identifies the time-frequency signatures of the localized targets. The goals of the one-year project are to determine the achievable energy efficiency and integration density of the autonomous sensor, to assess the feasibility of its deployment in a large-scale network, and to evaluate the concept using hardware prototypes. Advanced power management using wake-up detection will be pursued to reduce standby power. The effort will also investigate efficient means of embedding acoustic sensors onto CMOS circuits toward a highly integrated, directional, and intelligent acoustic sensor. Miniature integration and micro-power operation are essential to providing an autonomous sensing and processing node for distributed intelligence in a sensor network. The outcomes of this project will advance the state of the art in acoustic sensing technology for surveillance and homeland security, and will aid soldier awareness on the digital battlefield. The results will also impact new developments in intelligent hearing aids, other assistive listening technologies, and human-computer interfaces.
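
As a rough illustration of the gradient-flow idea mentioned above (a minimal sketch, not the project's hardware or algorithms), the example below simulates a hypothetical four-microphone cross array whose aperture is much smaller than a wavelength and recovers a source bearing by regressing finite-difference spatial gradients of the acoustic field on its temporal derivative. Because the aperture is sub-wavelength, those spatial gradients are proportional to the temporal derivative scaled by the source's direction cosines, which is what makes localization with such a small array possible. The array geometry, sample rate, and test signal are illustrative assumptions.

```python
import numpy as np

# Hypothetical gradient-flow bearing estimation on a four-microphone cross
# array with sub-wavelength spacing. All parameters are illustrative.

C = 343.0     # speed of sound (m/s)
D = 0.005     # microphone offset from the array center (m), sub-wavelength
FS = 16000.0  # sample rate (Hz)

def simulate_array(azimuth_deg, duration=0.1):
    """Plane wave from the given azimuth at mics (+D,0), (-D,0), (0,+D), (0,-D)."""
    t = np.arange(int(duration * FS)) / FS
    src = lambda tau: np.sin(2 * np.pi * 500 * tau) + 0.5 * np.sin(2 * np.pi * 800 * tau)
    u, v = np.cos(np.radians(azimuth_deg)), np.sin(np.radians(azimuth_deg))
    mics = [(D, 0.0), (-D, 0.0), (0.0, D), (0.0, -D)]
    # A microphone at (x, y) hears the wave with time advance (x*u + y*v) / C.
    return [src(t + (x * u + y * v) / C) for x, y in mics]

def gradient_flow_bearing(x_p, x_m, y_p, y_m):
    """Estimate azimuth from spatial and temporal gradients of the field."""
    s_avg = 0.25 * (x_p + x_m + y_p + y_m)   # common (zeroth-order) component
    ds_dt = np.gradient(s_avg) * FS          # temporal derivative
    grad_x = (x_p - x_m) / (2 * D)           # finite-difference spatial gradients
    grad_y = (y_p - y_m) / (2 * D)
    # For a far-field source, grad_x ~ (u/C) ds/dt and grad_y ~ (v/C) ds/dt,
    # so regressing the spatial gradients on ds/dt recovers the direction cosines.
    denom = np.dot(ds_dt, ds_dt)
    u = C * np.dot(grad_x, ds_dt) / denom
    v = C * np.dot(grad_y, ds_dt) / denom
    return np.degrees(np.arctan2(v, u))

print("estimated azimuth:", gradient_flow_bearing(*simulate_array(40.0)))
```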

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 0434161
Program Officer: Sylvia J. Spengler
Budget Start: 2004-08-01
Budget End: 2006-07-31
Fiscal Year: 2004
Total Cost: $395,904
Institution: Johns Hopkins University
City: Baltimore
State: MD
Country: United States
Zip Code: 21218