The objective of this research is to develop abstractions by which the controlled process and the computation state in a cyber-physical system can both be expressed in a form useful for decision-making across the real-time task scheduling and control actuation domains. The approach is to quantify control degradation in terms of response time, thereby tying computer responsiveness to controlled-process performance, and to use such cost functions to manage computational resources effectively. Conversely, control strategies can be adjusted so as to be responsive to the computational state. Unmanned aircraft will serve as the demonstration vehicles for our approach. The intellectual merit of this research is that it takes two disparate fields, control and computation, and builds formal abstractions in both the computation-to-control and control-to-computation directions. These abstractions are grounded in physical reality (e.g., time, fuel, energy) and encapsulate, in a form comprehensible and meaningful to each domain, the relevant attributes of the other. This research is important because cyber-physical systems play an increasing role in all walks of life; it will allow design approaches that are systematic and efficient rather than ad hoc. The work builds on a large body of our prior research that has begun to bridge the representational and algorithmic gap separating the control and computer science & engineering communities. Results will be disseminated through courses at our universities, instructional materials, research and tutorial publications, and industry collaboration (e.g., with General Motors R&D). We also plan to recruit minority and female students.
Cyber-physical systems are increasing in importance to society. Initially, such systems were confined to very expensive applications such as aerospace. Over time, these systems have migrated to much more cost-sensitive, but still life-critical, applications, such as automobiles. Due to their very nature, cyber-physical systems occupy two distinct domains of engineering. The controlled plant lies in the domain of automatic control theory while the controller lies in the domain of computer systems (both hardware and software). It is often difficult to convey information across domain boundaries in a form that is compact and intelligible to experts in both domains. The present project uses computer response time as a metric that is meaningful to both domains. In the control domain, the computer is regarded as an entity in the feedback loop of the controlled plant. The computer response time is seen by the controlled plant as feedback delay. The feedback delay (along with the accuracy and timeliness of the input sensor data and the quality of the control algorithms) strongly affects the quality of control provided. It is well known (and intuitively obvious) that as the feedback delay increases, the quality of control degrades. Beyond a certain point, the quality of control may degrade to such an extent that the controlled plant leaves its safe performance envelope and fails. The relationship between the control quality and feedback delay can be quantified using the state equations of the controlled plant, and expressed in the form of cost functions. These cost functions, generated by a control domain expert, can then be used by the embedded control computer to schedule control tasks appropriately. An important point is that cost functions can depend on the current state of the controlled plant and its environment. For example, the cost function of the various control tasks for aircraft control will vary depending on the prevailing turbulence. 
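The idea of a state-dependent cost function tying feedback delay to control quality can be illustrated with a minimal sketch. All names, the linear cost model, and the numeric thresholds below are illustrative assumptions, not the project's actual formulation; a real cost function would be derived from the plant's state equations.

```python
# Hypothetical sketch: a delay-dependent control-cost function, used by a
# scheduler to choose a control task's sampling period. The cost model
# (proportional to delay, scaled by a plant-state parameter such as
# turbulence) is a simplifying assumption for illustration only.

def control_cost(delay, turbulence=1.0):
    """Quality-of-control degradation as a function of feedback delay,
    scaled by the current state of the plant's environment."""
    return turbulence * delay

def choose_period(periods, compute_time, turbulence):
    """Pick the sampling period minimizing control cost plus a CPU
    utilization penalty (compute_time / period), treating the period
    as the dominant contributor to feedback delay."""
    return min(periods,
               key=lambda T: control_cost(T, turbulence) + compute_time / T)

# A calm plant tolerates a long period (low CPU load); a turbulent one
# demands a short period despite the higher computational cost.
calm = choose_period([0.01, 0.02, 0.05], compute_time=0.005, turbulence=1.0)
rough = choose_period([0.01, 0.02, 0.05], compute_time=0.005, turbulence=100.0)
```

As the turbulence parameter grows, the minimizer shifts toward shorter periods, mirroring the observation that the cost functions of aircraft control tasks vary with prevailing turbulence.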
In this project, we have shown how to obtain such cost functions through a detailed case study of automobile control. As mentioned above, the quality of control also depends on the quality of the output data from the computer. Many algorithms are iterative in nature and can be terminated prematurely at the cost of output quality. In some cases, when the need for output is particularly urgent and computational capacity is very limited, prematurely terminating such an iterative task and accepting a less-than-optimal output may be the best course of action. The problem then arises of deciding when it is optimal to terminate a task. This problem becomes quite difficult when iterative tasks are not independent but instead form a task graph in which some tasks produce output that is consumed as input by other tasks. We have developed a heuristic that takes the overall time budget and assigns it to the various tasks so that the output quality is approximately optimized. This is done in the context of dynamic voltage and frequency scaling, where the voltage and frequency can be adjusted so that faster processing can be "purchased" at the cost of higher energy consumption (and greater thermal stress on the circuitry). Another resource control approach is adaptation of the processor architecture. Complex cyber-physical systems are often controlled by highly sophisticated processor cores; such architectures contain multiple functional units and internal buffers that lend themselves to dynamic configuration. Each configuration offers an energy-versus-performance tradeoff particular to the current needs of the application. We have developed dynamic architecture "tuning" algorithms for cyber-physical systems. Recognizing that the needs of the controlled plant depend on its current state leads to a further resource control technique. Cyber-physical systems used in life-critical applications require high levels of fault-tolerance to satisfy their high reliability requirements.
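The flavor of time-budget allocation to iterative "anytime" tasks can be conveyed with a minimal sketch: repeatedly award the next time quantum to the task whose output quality would improve most. The concave quality curves and the greedy rule below are illustrative assumptions; the project's heuristic additionally handles precedence constraints in a task graph and the interaction with voltage/frequency settings.

```python
# Hypothetical sketch of greedy time-budget allocation across iterative
# tasks whose output quality improves with computation time but shows
# diminishing returns. Quality curves are illustrative assumptions.
import math

def allocate(budget, quality_funcs, quantum=1):
    """Greedily assign `budget` time units, one quantum at a time,
    to the task with the largest marginal quality gain."""
    alloc = [0] * len(quality_funcs)
    for _ in range(budget // quantum):
        # Marginal quality gain of one more quantum for each task.
        gains = [q(a + quantum) - q(a)
                 for q, a in zip(quality_funcs, alloc)]
        alloc[gains.index(max(gains))] += quantum
    return alloc

# Two anytime tasks with diminishing returns; task 0 saturates faster,
# so later quanta flow to task 1.
q0 = lambda t: 1 - math.exp(-0.5 * t)
q1 = lambda t: 1 - math.exp(-0.1 * t)
print(allocate(10, [q0, q1]))  # task 0 gets the early quanta
```

For concave quality curves this greedy rule is a standard marginal-gain argument; precedence constraints among tasks are what make the real problem hard.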
However, when such systems are deep within their safe operating space, the need for computational reliability drops: the controlled plant can tolerate computational failures without suffering catastrophic failure. We have devised strategies to partition the state space of the controlled plant into subspaces, each characterized by the fault-tolerance required within it. Most of the time, very little fault-tolerance is required; this translates into a greatly reduced computational workload, and hence lower processor stress and a decreased processor failure rate.

Broader Impact: This project has supported the training of seven MS and PhD students in cyber-physical systems. This is an interdisciplinary field, and it is especially important to produce a workforce functional across the twin domains of control and computing. Our graduated students have all taken up industrial positions. In addition, we have built a relationship with the local high school district; as a first step, a high school student worked with us over a summer on software development. We plan to continue this collaborative activity.
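Returning to the subspace-based fault-tolerance strategy described above, the core idea can be sketched as a mapping from the plant's position in its state space to a required redundancy level. The thresholds and replica counts below are illustrative assumptions, not the project's actual partitioning.

```python
# Hypothetical sketch: choose the fault-tolerance level (number of
# redundant task replicas) from the plant's distance to the edge of its
# safe operating envelope. Thresholds are illustrative assumptions.

def replicas_needed(distance_to_envelope):
    """Redundancy level for the control task, given how far the plant
    state lies from the boundary of its safe operating envelope."""
    if distance_to_envelope > 10.0:   # deep inside the safe subspace
        return 1                      # no redundancy needed
    elif distance_to_envelope > 2.0:  # approaching the boundary
        return 2                      # duplex execution with comparison
    else:                             # near the envelope edge
        return 3                      # triple modular redundancy
```

Because the plant spends most of its time deep inside the safe subspace, the single-replica case dominates, which is what yields the reduced workload and lower processor stress noted above.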