The goal of this project is to construct new methods based upon variance reduction techniques, such as importance sampling, that can be used to efficiently simulate rare events caused by noise in lightwave systems. Importance sampling is a method of biasing (that is, altering) the probability distributions used to generate random Monte Carlo trials so that simulated errors occur more frequently than they otherwise would. The aim here is to exploit the mathematical structure of the equations governing the propagation of signals to allow importance sampling to be performed. For example, in optical fibers, the governing equation is the nonlinear Schroedinger (NLS) equation, which is a completely integrable Hamiltonian system. The inverse scattering solution of the NLS equation shows that each pulse has a set of modes associated with it; these modes correspond to changes in the pulse's amplitude, phase, position, and frequency. Since any value of these pulse parameters yields a valid solution of the NLS equation, the pulse offers no resistance when noise changes any of them. These four modes thus provide a natural basis upon which to construct methods in which the noise is intentionally biased to produce large signal fluctuations.
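The core idea of importance sampling described above can be illustrated with a minimal sketch in a simple setting. The example below is not the project's method; it is a toy one-dimensional analogue in which the "rare event" is a standard Gaussian variable exceeding a large threshold. The sampling density is biased (shifted toward the threshold) so that the event occurs frequently, and each trial is reweighted by the likelihood ratio of the true density to the biased one so that the estimate remains unbiased. The function name, the choice of a mean-shifted Gaussian bias, and all parameter values are illustrative assumptions.

```python
import math
import random


def importance_sample_tail(threshold=5.0, bias_mean=5.0, n_trials=100_000, seed=1):
    """Estimate P(X > threshold) for X ~ N(0, 1) by importance sampling.

    Trials are drawn from a biased density q = N(bias_mean, 1), under which
    the event is common, and each success is weighted by the likelihood
    ratio p(x)/q(x) so the weighted average still estimates the probability
    under the true density p = N(0, 1).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        x = rng.gauss(bias_mean, 1.0)   # draw from the biased density q
        if x > threshold:               # rare under p, frequent under q
            # likelihood ratio p(x)/q(x) = exp(-x^2/2) / exp(-(x - mu)^2/2)
            total += math.exp(-0.5 * x * x + 0.5 * (x - bias_mean) ** 2)
    return total / n_trials


# The true tail probability P(X > 5) for a standard Gaussian is about 2.87e-7;
# with 10^5 unbiased trials one would almost surely observe zero events, while
# the biased estimator recovers it to within a few percent.
estimate = importance_sample_tail()
```

In the project's setting the same reweighting principle applies, but the biasing is performed on the noise added to the propagating pulse, directed along the amplitude, phase, position, and frequency modes identified by the inverse scattering analysis.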
The development of high-bit-rate data transmission over optical fibers is one of the major technological achievements of the late 20th century. Optical fibers have fueled the growth of the global internet and are revolutionizing the ways in which information is communicated and processed. Because of the enhanced levels of performance demanded of modern lightwave systems, traditional analytical or computational methods are by themselves insufficient to accurately model the rare events that determine the overall performance of these highly complex systems. At the same time, the often large development costs of these systems make the accurate prediction of their behavior and performance essential. Recent work has demonstrated, however, that hybrid analytical/computational approaches can make accomplishing this task not only possible, but also practical. Proof-of-concept examples have shown that error probabilities as small as one part in a trillion in such systems should no longer be considered beyond the reach of estimation. The methods that will be developed as part of this project are expected to provide the basis for computational tools that can yield large reductions in the time required to determine the performance of such systems.