This award supports a collaborative effort between the PI and investigators at the University of California at Los Angeles and the University of Arizona to design and develop instrumentation for wildlife biologists to inventory animals by detecting, recording, and analyzing their sounds. The system will also allow field biologists to ask questions about the temporal and spatial dynamics of acoustic communication. Many species produce sounds, and by identifying and localizing them, we can census biodiversity and study the natural dynamics of communication. Typically, these will be vocal sounds, such as bird song or mammal or frog calls, but in theory the equipment and algorithms can be used for other sorts of sounds (such as the stridulations of cricket wings). To do so, we must build robust hardware and easy-to-use software that field biologists can and will use. While "proof-of-concept" tools and algorithms have been developed for many of the components of a usable system, there is no reliable, robust, and easy-to-use system that permits field biologists to easily census acoustic animals. To this end, a platform called VoxNet will be developed. VoxNet is an integrated software and hardware package that will be a quantum leap beyond existing technology in four main areas: software, near-real-time event recognition, energy efficiency, and communication range.
VoxNet will be a new, highly integrated, deployable acoustic sensor node with a lower unit cost and lower energy cost than any existing system. VoxNet will include highly optimized system software and drivers that reduce energy costs through duty cycling. In addition, new 2D/3D algorithms to localize the source of a sound will be developed. These algorithms will run in near-real-time on a distributed network of VoxNet nodes, enabling users to detect and identify vocalizing animals while in the field. A powerful and easy-to-use software environment based on the WaveScope programming model and the XStream distributed stream processing system (technology partially developed under NSF support at MIT) will be developed. Finally, the system will be tested in the field both to census birds and to identify individual alarm-calling marmots.
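The abstract does not specify which localization method VoxNet will use; purely as an illustration of the 2D problem, the sketch below shows one common approach, time-difference-of-arrival (TDOA) multilateration solved by nonlinear least squares. All names, node positions, and parameter values here are hypothetical and are not taken from the VoxNet design.

    # Illustrative sketch only: 2D acoustic source localization from
    # time-difference-of-arrival (TDOA) measurements across sensor nodes.
    # The actual VoxNet algorithms are not reproduced here.
    import numpy as np
    from scipy.optimize import least_squares

    SPEED_OF_SOUND = 343.0  # m/s, approximate value at 20 C

    def tdoa_residuals(source_xy, mic_xy, tdoas):
        """Residuals between measured and predicted arrival-time differences.

        mic_xy : (N, 2) microphone positions; mic 0 is the reference.
        tdoas  : (N-1,) arrival times of mics 1..N-1 minus mic 0, in seconds.
        """
        dists = np.linalg.norm(mic_xy - source_xy, axis=1)
        predicted = (dists[1:] - dists[0]) / SPEED_OF_SOUND
        return predicted - tdoas

    def localize_2d(mic_xy, tdoas, initial_guess=(0.0, 0.0)):
        """Estimate a 2D source position by nonlinear least squares."""
        result = least_squares(tdoa_residuals, initial_guess, args=(mic_xy, tdoas))
        return result.x

    if __name__ == "__main__":
        # Four hypothetical nodes at the corners of a 100 m plot.
        mics = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
        true_source = np.array([30.0, 60.0])
        true_delays = np.linalg.norm(mics - true_source, axis=1) / SPEED_OF_SOUND
        tdoas = true_delays[1:] - true_delays[0]
        print(localize_2d(mics, tdoas, initial_guess=(50.0, 50.0)))  # approx. [30, 60]

With more than three nodes the problem is overdetermined, which helps absorb timing noise; a 3D variant simply adds a z coordinate to the source and microphone positions.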
The development of tools for field biologists will create new ways to census biodiversity and to ask heretofore impossible or difficult-to-ask questions about the spatial and temporal dynamics of vocal displays. When deployed, we expect these tools to generate novel and important discoveries. The dissemination of these tools will enhance research productivity in behavior, ecology, and evolutionary biology, and the tools will create novel ways to census, conserve, and manage biodiversity. The process of developing these tools will create integrative educational opportunities for undergraduates and graduate students.
The overall goal of the project was to build tools for wildlife biologists to inventory animals by detecting, recording, and analyzing their sounds, and for field biologists to ask questions about the temporal and spatial dynamics of acoustic communication. Many species produce sounds, and by identifying and localizing them, we can census biodiversity and study the natural dynamics of communication. Typically, these will be vocal sounds, such as bird song or mammal or frog calls, but in theory the equipment and algorithms can be used for other sorts of sounds (such as the stridulations of cricket wings).

The grant supported the construction of new, robust hardware and relatively easy-to-use software that field biologists are now using. Our platform, called VoxNet, is an integrated software and hardware package that is a quantum leap beyond existing technology in four main areas: software, near-real-time event recognition, energy efficiency, and communication range. VoxNet nodes are highly integrated, deployable acoustic sensor nodes with a lower unit cost and lower energy cost than previous systems. VoxNet includes highly optimized system software and drivers that reduce energy costs through duty cycling. In addition, we developed new algorithms to identify and localize vocalizing animals, including in three dimensions.

The development of these tools for field biologists will create new ways to census biodiversity and to ask heretofore impossible or difficult-to-ask questions about the spatial and temporal dynamics of vocal displays. We expect these tools to generate novel and important discoveries. We are freely sharing both software code and hardware schematics with the research community so as to enhance research productivity in behavior, ecology, and evolutionary biology, and we hope that these tools will create novel ways to census, conserve, and manage biodiversity. The process of developing these tools has created integrative educational opportunities for undergraduates, graduate students, and postdoctoral fellows. We have also begun to share the bird song recordings used to develop these algorithms with the broader research community.
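Both the abstract and the outcomes above name duty cycling as the main energy-saving mechanism. As a minimal sketch only, assuming a simple wake/record/detect/sleep loop (the actual VoxNet system software and drivers are not reproduced here, and all names and timing values are hypothetical):

    # Illustrative sketch of duty-cycled acoustic sampling, not the actual
    # VoxNet system software. Hardware calls are stand-ins with made-up names.
    import time

    WAKE_SECONDS = 10.0    # record for 10 s per cycle (assumed value)
    SLEEP_SECONDS = 50.0   # then idle for 50 s, i.e. roughly a 17% duty cycle

    def record_audio(duration_s):
        """Stand-in for reading a buffer from the node's microphone array."""
        time.sleep(duration_s)   # placeholder for actual sampling
        return b""               # placeholder audio buffer

    def detect_event(audio_buffer):
        """Stand-in for an on-node detector (e.g., an energy threshold)."""
        return False

    def run_node():
        """Wake, sample, detect, then sleep to save energy."""
        while True:
            buffer = record_audio(WAKE_SECONDS)
            if detect_event(buffer):
                print("event detected; would forward the clip to the sink node")
            time.sleep(SLEEP_SECONDS)   # placeholder for a low-power sleep state

In a real node, the sleep call would hand control to a low-power hardware state, and the on-node detector would decide which clips are worth forwarding over the network.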