A complex system is often defined as a system of interconnected parts whose properties cannot be predicted from the properties of its individual components. The investigator studies systems composed of many interacting agents that are self-organized and operate without designated controlling agents. The agents are autonomous and have only local views of the system. They are endowed with the ability to learn from received signals, and they share their knowledge with their neighbors by communicating according to agreed-upon languages and rules. A typical feature of such systems is their tendency to display emergent behavior. An important instance of emergent behavior is the phenomenon of herding, a process in which agents in a group ignore their own signals about the state of nature and follow the actions of their neighbors. The research develops a methodology for understanding the interplay between learning and herding in complex systems.

The aim of this work is to study the emergence of herding in complex systems with distributed signal processing. The emergence of herding is defined in a mathematically precise way so that it can be detected reliably. The agents of the system are rational, i.e., they employ Bayesian learning. The main objective is to understand the emergence of herding in multi-agent systems that arises from the diffusion of system knowledge through received signals, perceived actions of neighboring agents, and learning. Various models of information sharing are studied, and scenarios in which herding readily arises are identified. Improved methods for efficient diffusion of knowledge in multi-agent systems are developed, and ways of preventing adverse herding are sought.

At the recommended level of support, the PI will make every attempt to meet the original scope and level of effort of the project.

Project Report

The project studied distributed signal processing in networks of agents. More specifically, the systems were composed of rational Bayesian agents, that is, agents that update their beliefs about an event of interest or about static or dynamic parameters by employing the Bayesian paradigm. The work also involved the study of the emergence of herding, an event arising when agents start to ignore their private signals and make decisions based practically only on the decisions of their neighbors. Various models of sharing information among agents were studied and different scenarios were investigated. The general classes of problems that were addressed included: (1) problems where the agents had to make decisions among a small number of hypotheses; (2) problems where the agents had to estimate unknown parameters that could be static, dynamic, or both; and (3) problems where the dynamic unknowns of interest evolved through highly nonlinear models and the agents locally applied the particle filtering methodology (a methodology suited to dynamic systems that are nonlinear and non-Gaussian). In studying the first problem, we started with naive learning, where the agents communicate their beliefs to their neighbors and iteratively update their own beliefs with weighted averages of their neighbors' beliefs and their own. This type of learning is far from optimal. Instead, we wanted to develop schemes by which the agents reach a belief consensus identical to that of a hypothetical observer that has access to all the initial beliefs of the agents and combines them using Bayes' theorem; in other words, schemes that yield the optimal Bayesian solution. We found such schemes for both binary and multiple-hypothesis settings. When the agents in the network follow our schemes, after a sufficient number of iterations they all arrive at the optimal belief.
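As a hedged illustration (not the project's actual algorithm), the contrast between naive weighted-average learning and running average consensus on log-beliefs can be sketched as follows. The weight matrix W, the two-hypothesis initial beliefs, and the function names are assumptions made for this example only:

```python
import numpy as np

def naive_learning(beliefs, W, iters=50):
    """Naive (DeGroot-style) learning: each agent repeatedly replaces its
    belief vector with a weighted average over its neighbors' beliefs."""
    b = beliefs.copy()
    for _ in range(iters):
        b = W @ b
    return b

def bayesian_consensus(beliefs, W, iters=50):
    """Average consensus run on log-beliefs. With a doubly stochastic W,
    each agent's log-belief converges to the average over all agents;
    multiplying by the number of agents and exponentiating recovers the
    product of the initial beliefs, i.e. the belief a fictitious fusion
    center would form from independent initial beliefs via Bayes' theorem."""
    logb = np.log(beliefs)
    for _ in range(iters):
        logb = W @ logb          # one average-consensus iteration
    b = np.exp(len(beliefs) * logb)  # N * average of logs = sum of logs
    return b / b.sum(axis=1, keepdims=True)
```

With, say, three agents holding initial beliefs (0.9, 0.1), (0.6, 0.4), and (0.3, 0.7) over two hypotheses and a doubly stochastic W, naive learning settles on about (0.6, 0.4) while the log-belief consensus settles on about (0.85, 0.15), illustrating how different the two consensus beliefs can be.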
With our schemes, the beliefs are not updated directly but indirectly, via the computation of well-defined functions of the beliefs and application of the average consensus algorithm. We compared naive learning and Bayesian learning, with the agents in both cases exchanging beliefs with their neighbors under practically the same conditions, and showed that the consensus beliefs in the two cases (when they exist) can be very different. In studying sequential decision making, we focused on the process of herding. We found that the mechanism of making decisions entails that only the first few agents reveal significant information; the subsequent agents bring practically no new information to the network. In other words, if several agents in a row make the same decision, this strongly shifts the decision threshold of the next agent: even if that agent receives a signal that clearly favors the opposite hypothesis, it will side with the decision of the previous agents. Another interesting phenomenon occurs when an agent makes a decision different from the one made by the series of agents preceding it: the opposite decision cancels almost all of the aggregated knowledge of the previous agents. For the second class of problems, we also found optimal solutions. They solve the problem of estimating unknown parameters in a network of agents that have private observations and can communicate with their neighbors. The criterion for optimality was the difference between the estimates of the agents and those of a fictitious fusion center that employs Bayesian theory; our solutions are optimal in the sense that they converge to those of the fictitious center with time. We obtained solutions for a wide range of problems, including cases where the observation noises of the agents are independent and cases where they are correlated.
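The herding mechanism described above can be sketched with the classic sequential binary-decision model. This is a minimal illustration under standard textbook assumptions (symmetric binary private signals that are correct with probability q, ties broken in favor of the private signal), not the project's exact model:

```python
import math

def sequential_decisions(signals, q=0.7):
    """Agents decide between H0 and H1 in sequence. Each agent combines
    the public log-likelihood ratio (LLR) implied by earlier decisions
    with its private signal's LLR and picks the more likely hypothesis."""
    llr = math.log(q / (1 - q))   # evidence carried by one private signal
    public = 0.0                  # public LLR implied by past decisions
    decisions = []
    for s in signals:             # s = 1 favors H1, s = 0 favors H0
        total = public + (llr if s == 1 else -llr)
        if total > 1e-9:
            d = 1
        elif total < -1e-9:
            d = 0
        else:
            d = s                 # tie: follow the private signal
        decisions.append(d)
        # Later agents invert the decision rule: a decision reveals the
        # private signal only when it was not predetermined by the public
        # LLR; once |public| exceeds one signal's LLR, a cascade starts
        # and further identical decisions carry no new information.
        if abs(public) <= llr + 1e-9:
            public += llr if d == 1 else -llr
    return decisions
```

In this sketch, after two agents decide for H1 the third herds: `sequential_decisions([1, 1, 0, 0, 0])` yields all-ones decisions even though the last three private signals favor H0. Conversely, an early contrarian decision resets the public LLR, mirroring the cancellation effect described above.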
In the study of the third class of problems, the emphasis was on sequential Bayesian learning with distributed particle filtering algorithms. These algorithms are executed by a set of interconnected agents where some or all of the agents perform local particle filtering and interact with other agents in order to achieve the common goal of calculating a state estimate. The algorithms provide an attractive solution to large-scale, nonlinear, and non-Gaussian distributed estimation problems that are often encountered in applications involving agent networks. We proposed a range of approaches for accurate sequential state estimation for several classes of problems and demonstrated their excellent performance in many important scenarios.
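A local filter of the kind each agent might run can be sketched as a minimal bootstrap particle filter. The scalar nonlinear growth model, the noise levels, and the function name are illustrative assumptions, not the project's algorithms; the distributed schemes would additionally fuse such local results across the network:

```python
import numpy as np

def bootstrap_pf(observations, n_particles=500, obs_std=1.0, proc_std=0.5, rng=None):
    """Minimal bootstrap particle filter for the scalar nonlinear model
        x_t = 0.5 x_{t-1} + 25 x_{t-1} / (1 + x_{t-1}^2) + v_t
        y_t = x_t^2 / 20 + w_t
    with Gaussian process and observation noise."""
    rng = rng or np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, n_particles)        # initial particle cloud
    estimates = []
    for y in observations:
        # propagate particles through the nonlinear state transition
        x = 0.5 * x + 25 * x / (1 + x**2) + rng.normal(0, proc_std, n_particles)
        # weight each particle by the likelihood of the new observation
        w = np.exp(-0.5 * ((y - x**2 / 20) / obs_std) ** 2)
        w /= w.sum()
        estimates.append(np.sum(w * x))          # posterior-mean estimate
        # multinomial resampling to counter weight degeneracy
        x = x[rng.choice(n_particles, n_particles, p=w)]
    return estimates
```

Because the observation depends on the square of the state, the posterior is bimodal and the posterior mean can be a poor point estimate; this is precisely the kind of nonlinear, non-Gaussian behavior that motivates particle methods over Kalman-type filters.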

Project Start:
Project End:
Budget Start: 2010-08-01
Budget End: 2013-07-31
Support Year:
Fiscal Year: 2010
Total Cost: $470,207
Indirect Cost:
Name: State University New York Stony Brook
Department:
Type:
DUNS #:
City: Stony Brook
State: NY
Country: United States
Zip Code: 11794