Sensor network systems have attracted increasing research attention recently, with a wide range of scientific and commercial applications. Sensor processors need "agile" performance: they must respond quickly to high-throughput bursts while dropping to low-energy execution whenever possible to conserve energy. Sensor systems have limited power budgets, and traditionally most of that energy has gone to radio communications. But as radios improve and computational demands grow, a larger share of the energy budget is shifting to the processor.
This research examines processor architectures for agile, energy-efficient, stream-based processing in sensor networks and other embedded systems. A particular focus is on using parallelism (through tiled processing units) for energy management. Another key aspect is the interconnect between on-chip processing units, along with design techniques that let different processing elements run at different clock rates. Connecting processing elements by a system of hardware queues, for example, allows the chip to exploit a thread-based, producer-consumer model and adapt processor settings to the running threads; this enables efficient energy and speed control via dynamic voltage/frequency scaling. By exposing queue/memory status to nearby neighbors, processors can use control-theoretic approaches to coordinate performance/energy needs across local and broader regions.
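To make this concrete, the sketch below shows one way a processing element might translate queue occupancy into a voltage/frequency setting. It is a minimal, hypothetical illustration of the producer-consumer control idea, not an interface from any real design: the frequency table, queue capacity, gains, and function names are all assumptions introduced here for clarity.

```c
/*
 * Hypothetical sketch: queue-occupancy-driven DVFS for one processing
 * element (PE) in a tiled, producer-consumer design.  The frequency
 * table, queue capacity, and control gains are illustrative
 * assumptions, not part of any real hardware interface.
 */
#include <stdio.h>

#define QUEUE_CAPACITY   64      /* entries in the PE's input hardware queue */
#define NUM_FREQ_LEVELS  4       /* assumed discrete voltage/frequency pairs */

/* Assumed frequency levels in MHz, lowest to highest. */
static const int freq_mhz[NUM_FREQ_LEVELS] = {50, 100, 200, 400};

/*
 * Pick a frequency level from local queue occupancy plus a bias from
 * the downstream neighbor's occupancy: a nearly full input queue
 * argues for speeding up, while a backed-up consumer argues for
 * slowing down so work is not pushed into a stalled stage.
 */
static int select_freq_level(int local_occupancy, int neighbor_occupancy)
{
    /* Fraction of each queue currently in use, 0.0 .. 1.0. */
    double local_fill    = (double)local_occupancy    / QUEUE_CAPACITY;
    double neighbor_fill = (double)neighbor_occupancy / QUEUE_CAPACITY;

    /* Proportional control: chase the local backlog, back off if the
     * consumer is congested.  Gains are illustrative. */
    double demand = local_fill - 0.5 * neighbor_fill;
    if (demand < 0.0) demand = 0.0;
    if (demand > 1.0) demand = 1.0;

    return (int)(demand * (NUM_FREQ_LEVELS - 1) + 0.5);
}

int main(void)
{
    /* Example occupancy samples a PE might observe during a burst. */
    int samples[][2] = { {4, 2}, {20, 8}, {48, 10}, {60, 55}, {8, 60} };
    size_t n = sizeof samples / sizeof samples[0];

    for (size_t i = 0; i < n; i++) {
        int level = select_freq_level(samples[i][0], samples[i][1]);
        printf("local=%2d neighbor=%2d -> run at %d MHz\n",
               samples[i][0], samples[i][1], freq_mhz[level]);
    }
    return 0;
}
```

In a real tile, the occupancy reads would come from hardware queue status registers and the selected level would drive the PE's voltage/frequency domain; the proportional law here stands in for the richer local/global control policies the research questions below address.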
Major research questions include: (i) How much on-chip parallelism best balances application performance and energy consumption? (ii) What range of applications is well suited to this model? (iii) How can speed-control models based on local and global control approaches be designed for both performance and power efficiency? (iv) What are the roles of distributed and hierarchical control techniques? (v) Can control-theoretic approaches be devised that offer good responsiveness and stability in the face of sensing errors and sensing/actuation delays?