Localization refers to the estimation of the precise location of a source by a set of sensors, using information about the sensors' positions relative to the source. This information can be distance, bearing, received power level, or time difference of arrival. Localization is a fundamental component of a number of emerging applications: the detection of biological threats; pervasive computing, where locating printers and computers permits a computer to send a print job to the nearest printer; sensor networks, where individual sensors must know their own positions; and a multibillion-dollar wireless localization industry.
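As a concrete illustration of distance-based localization, the range equations can be linearized by subtracting one sensor's equation from the others and solving the resulting system by least squares. The following sketch is illustrative only and is not drawn from the proposal; the function name and the synthetic, noiseless measurements are assumptions.

```python
import numpy as np

def linear_range_localization(sensors, ranges):
    """Estimate a source position from sensor positions and range
    measurements (illustrative sketch, not the proposal's method).

    Subtracting the first range equation ||x - p_0||^2 = d_0^2 from
    the others yields the linear system
        2 (p_i - p_0)^T x = d_0^2 - d_i^2 + ||p_i||^2 - ||p_0||^2,
    which is solved by least squares.
    """
    p0, d0 = sensors[0], ranges[0]
    A = 2.0 * (sensors[1:] - p0)
    b = (d0**2 - ranges[1:]**2
         + np.sum(sensors[1:]**2, axis=1) - np.sum(p0**2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Synthetic example: source at (2, 3), four sensors at the corners.
sensors = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
source = np.array([2.0, 3.0])
ranges = np.linalg.norm(sensors - source, axis=1)
print(linear_range_localization(sensors, ranges))  # ≈ [2. 3.]
```

With noiseless ranges the linear system is exact; with measurement noise, squaring the ranges amplifies errors, which is one reason such linear formulations perform poorly in noise.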
It is important to ensure that localization occurs in an efficient and time-critical manner, which in turn depends on how the sensors are deployed. Thus, this study considers the issue of sensor deployment, together with the development of computationally efficient algorithms that exploit this deployment to achieve efficient, time-critical localization. Prior work considers deployment only from the perspective of the number of relative position measurements available for each source, largely ignoring the characteristics of the algorithms that actually perform localization. In general, linear localization algorithms have poor noise performance, so it is preferable to use nonlinear algorithms, which, however, may have false stationary points. Accordingly, the investigator will characterize nontrivial regions surrounding the sensors such that, if the source lies within these regions, the localization algorithms are globally convergent.
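The nonlinear alternative minimizes a cost such as f(x) = Σᵢ (‖x − pᵢ‖ − dᵢ)², whose iterative minimization can stall at false stationary points when started badly. The sketch below is a generic gradient-descent illustration of this class of algorithm, not the proposal's specific method; the step size, iteration count, and synthetic noiseless data are assumptions.

```python
import numpy as np

def range_cost_grad(x, sensors, ranges):
    """Gradient of the nonlinear range-based least-squares cost
    f(x) = sum_i (||x - p_i|| - d_i)^2 (illustrative sketch)."""
    diffs = x - sensors
    dists = np.linalg.norm(diffs, axis=1)
    # d/dx (||x - p_i|| - d_i)^2 = 2 (||x - p_i|| - d_i) (x - p_i) / ||x - p_i||
    return np.sum(2.0 * ((dists - ranges) / dists)[:, None] * diffs, axis=0)

def gradient_descent(x0, sensors, ranges, step=0.05, iters=2000):
    """Plain gradient descent; whether it reaches the true source
    depends on the starting point relative to the sensor geometry."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * range_cost_grad(x, sensors, ranges)
    return x

# Synthetic example: a start inside the sensors' convex hull converges
# to the true source (2, 3) for this geometry.
sensors = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
source = np.array([2.0, 3.0])
ranges = np.linalg.norm(sensors - source, axis=1)
est = gradient_descent([2.5, 2.5], sensors, ranges)
```

Characterizing regions in which every starting point avoids false stationary points, so that descent methods like this one are globally convergent, is exactly the deployment question the study addresses.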