The goal of this research is to make machines more intelligent by more closely mimicking biology, specifically by harnessing the information present in event-driven input together with top-down and lateral feedback. Although computers continue to make inroads into everyday life, they still cannot perform many tasks that humans and other animals perform readily, or they lag far behind their biological counterparts. For example, on publicly available datasets designed to measure performance on object detection tasks, the best computer software running on the fastest available hardware fails to locate 20% to 40% of the human-annotated targets. No animal would long survive, at least not in the wild, if its visual system made so many errors. This project will therefore investigate how information processing strategies used by biological neural systems can be exploited to design more powerful computer hardware and software. The project will also have impact on training in neurally inspired approaches to computer design and on technological leadership in neuromorphic computing for both defense and commercial uses.
The project will investigate processing mechanisms that are ubiquitous in biology but are typically absent from computer software and hardware.

The first of these mechanisms is event-driven input. The retina sends visual information to the brain in the form of discrete pulses that propagate down the optic nerve. Likewise, the cochlea encodes sound as action potentials, or spikes, that propagate along the auditory nerve. In both cases, the precise time at which a spike occurs in a given nerve fiber, relative to the times at which spikes occur in other nerve fibers, can encode information that is critical to subsequent processing by the brain. In the case of the retina, relative spike timing may help to separate foreground from background regions, or even help us read the words on this page more quickly. In the cochlea, relative timing can be used to distinguish one sound source from another, which in turn allows us to hear what a companion is saying even in a noisy room (a toy illustration of such timing-based encoding is sketched at the end of this section). In contrast, the types of artificial neural networks most commonly used today do not employ event-driven input. This research will test the hypothesis that event-driven input can improve the performance of artificial neural networks on image and audio segmentation tasks.

The second biological processing mechanism to be investigated is top-down feedback. Whereas most artificial neural networks studied today are strictly feedforward, the vast preponderance of synapses in the brain arise from top-down and lateral feedback connections. A mathematically tractable model of top-down and lateral feedback will be used to test the hypothesis that such connections can improve the performance of artificial neural networks on object localization tasks (a generic sketch of this style of connectivity appears below). The project will also explore whether such feedback can reduce the susceptibility of networks to adversarial examples.

Finally, the project will explore whether a new type of neuromorphic chip that self-organizes in response to environmental input can learn more powerful representations from event-driven input, and will develop strategies for combining such chips into hierarchical networks with lateral and top-down feedback.
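To make the idea of timing-based encoding concrete, the sketch below shows one simple, hypothetical form of event-driven input: time-to-first-spike (latency) coding, in which stronger inputs fire earlier, so the relative order of spikes across fibers carries the stimulus pattern. The proposal does not commit to this particular scheme; the function `latency_encode` and all parameters here are illustrative assumptions, not the project's method.

```python
# Illustrative sketch only: time-to-first-spike ("latency") coding, one
# simple form of event-driven encoding. Hypothetical example, not the
# encoding scheme specified by the project.
import numpy as np

def latency_encode(image, t_max=100.0):
    """Map pixel intensities in [0, 1] to spike times in [0, t_max].

    Stronger inputs fire earlier, so the *relative* ordering of spike
    times across "fibers" (pixels) encodes the intensity pattern.
    """
    image = np.clip(np.asarray(image, dtype=float), 0.0, 1.0)
    # Invert intensity: intensity 1.0 spikes at t = 0; intensity 0.0 at t_max.
    return (1.0 - image) * t_max

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((4, 4))        # toy 4x4 "retina"
    times = latency_encode(img)
    # Bright (foreground) pixels spike before dim (background) ones, so a
    # downstream network sensitive to spike order can separate the two
    # without ever seeing the raw intensities.
    order = np.column_stack(np.unravel_index(np.argsort(times, axis=None), times.shape))
    print("first five pixels to spike (row, col):", order[:5].tolist())
```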
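Similarly, the following minimal rate-based network gives a generic picture of how top-down and lateral feedback differ structurally from a strictly feedforward pass: layer 2 projects back to layer 1, units within layer 1 inhibit one another laterally, and activity is relaxed toward a fixed point. This is a hedged stand-in only; the specific mathematically tractable model referred to above is not described here, and all sizes, weight scales, and the damping constant are invented for illustration.

```python
# Illustrative sketch only: a hypothetical two-layer rate network with
# top-down and lateral feedback. Not the project's model; all sizes and
# weight scales are assumptions made for this example.
import numpy as np

rng = np.random.default_rng(1)
n_in, n1, n2 = 16, 8, 4

W_ff1 = rng.normal(scale=0.2, size=(n1, n_in))   # feedforward: input -> layer 1
W_ff2 = rng.normal(scale=0.2, size=(n2, n1))     # feedforward: layer 1 -> layer 2
W_fb  = rng.normal(scale=0.2, size=(n1, n2))     # top-down: layer 2 -> layer 1
W_lat = -0.1 * (np.ones((n1, n1)) - np.eye(n1))  # lateral inhibition within layer 1

relu = lambda v: np.maximum(v, 0.0)
alpha = 0.2                                      # damping keeps the relaxation stable

x = rng.random(n_in)                             # a toy input pattern
r1, r2 = np.zeros(n1), np.zeros(n2)
for _ in range(200):                             # iterate toward a fixed point
    r1 = (1 - alpha) * r1 + alpha * relu(W_ff1 @ x + W_fb @ r2 + W_lat @ r1)
    r2 = (1 - alpha) * r2 + alpha * relu(W_ff2 @ r1)

print("layer-1 activity after feedback settles:", np.round(r1, 3))
```

The damped update is just one standard way to relax such a recurrent system; the point of the sketch is only the connectivity pattern, in which feedforward, top-down, and lateral pathways jointly determine each layer's activity, rather than any particular choice of dynamics.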