The goal of this research is to develop algorithms that simultaneously separate, localize, and track multiple moving acoustic sources. Most people can close their eyes in a room with half a dozen people talking, separate out the various strands of conversation, and physically localize the speakers. Computers cannot yet do this: it is easy to scatter half a dozen microphones about a table around which many people are speaking, but standard algorithms cannot use the signals from those microphones to separate, localize, and track the speakers. This project is exploring algorithms that perform simultaneous acoustic source separation, localization, and deconvolution using modular architectures. Different modules encapsulate different pieces of knowledge: what the acoustics of a room can do to a sound, what human speech sounds like, what it means for sources to be separated, and how sources tend to move about in space. This knowledge is combined in a probabilistic framework, yielding estimates of the separated source waveforms and the sources' physical locations. These algorithms have the potential to improve devices such as speaker phones, cellular telephones, stereos, computer speech recognition systems, burglar alarms, and hearing aids.
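To make the modular probabilistic combination concrete, the following is a minimal sketch in Python (using NumPy) of one slice of such a system: a room-acoustics module scores candidate source positions against observed inter-microphone cross-correlations, a source-dynamics module supplies a Gaussian motion prior, and the framework sums the log-scores to pick a maximum a posteriori location. All function names, the GCC-style acoustic score, the Gaussian random-walk prior, and the numeric constants are illustrative assumptions, not the project's actual modules.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s; typical room-temperature value (assumption)

def pairwise_xcorrs(signals):
    """Precompute the full cross-correlation for every microphone pair."""
    n = len(signals)
    return {(i, j): np.correlate(signals[i], signals[j], mode="full")
            for i in range(n) for j in range(i + 1, n)}

def acoustic_log_score(xcorrs, mic_positions, candidate, fs, sig_len):
    """Room-acoustics module (sketch): reward candidate positions whose
    predicted inter-microphone time delays coincide with peaks in the
    observed cross-correlations. A GCC/steered-response-style score is
    used here as a stand-in for a full acoustic likelihood."""
    score = 0.0
    center = sig_len - 1  # zero-lag index of a 'full'-mode correlation
    for (i, j), xc in xcorrs.items():
        d_i = np.linalg.norm(candidate - mic_positions[i])
        d_j = np.linalg.norm(candidate - mic_positions[j])
        lag = int(round((d_i - d_j) / SPEED_OF_SOUND * fs))
        score += xc[np.clip(center + lag, 0, len(xc) - 1)]
    return score

def motion_log_prior(candidate, previous, sigma=0.3):
    """Source-dynamics module (sketch): a Gaussian random-walk prior
    encoding that sources tend to move slowly between analysis frames
    (sigma in metres; the value is an assumption)."""
    return -0.5 * np.sum((candidate - previous) ** 2) / sigma ** 2

def localize(signals, mic_positions, previous, fs, grid):
    """Combine the modules' log-scores additively and return the
    maximum a posteriori candidate position from the search grid."""
    xcorrs = pairwise_xcorrs(signals)
    sig_len = len(signals[0])
    return max(grid, key=lambda g:
               acoustic_log_score(xcorrs, mic_positions, g, fs, sig_len)
               + motion_log_prior(g, previous))
```

Summing log-scores corresponds to multiplying the modules' probability contributions, which is what allows independently specified pieces of knowledge (room acoustics, source dynamics, and in the full system speech models and separation criteria) to be combined without any one module needing to know about the others.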