A basic assumption in classical communication systems is that the transmitter and receiver are synchronized: for each transmitted symbol, the receiver obtains exactly one channel output sample. This assumption has held in high signal-to-noise ratio regimes, where timing errors can generally be tracked nearly perfectly. With the coding techniques of the last decade, which operate at much lower signal-to-noise ratios, this assumption no longer holds, and synchronization has become an increasingly important issue in many applications. It is likely to become a bottleneck in real systems in the years to come, as practical near-capacity codes for basic channels become standard in low signal-to-noise regimes. Despite this, channels with synchronization errors remain remarkably under-studied in information theory, including even the simplest case, the random deletion channel, in which each transmitted symbol is deleted independently with some fixed probability and the receiver sees only the surviving symbols, with no indication of where deletions occurred. While there are families of practical near-capacity codes for basic channels with errors, such as the binary symmetric channel and the additive white Gaussian noise (AWGN) channel, no provable expression is known even for the capacity of the deletion channel, much less for harder variants of channels with synchronization errors. This is a fundamental gap in modern information theory. This research advances the state of the art in coding for channels with synchronization errors, both by expanding the information-theoretic foundations and by designing new codes and coding/decoding techniques.
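To make the channel model concrete, the following is a minimal sketch (not from the original text; function and variable names are my own) of an i.i.d. random deletion channel: each transmitted symbol is deleted independently with probability d, and the receiver observes only the concatenated survivors, with no markers indicating where deletions occurred.

```python
import random

def deletion_channel(bits, d, rng=None):
    """Pass a symbol sequence through an i.i.d. deletion channel.

    Each symbol is deleted independently with probability d; the
    survivors are concatenated, so the receiver loses all timing
    information about where the deletions happened.
    """
    rng = rng or random.Random()
    return [b for b in bits if rng.random() >= d]

# Example: a 20-bit word through a channel with deletion probability 0.3.
rng = random.Random(0)
x = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1] * 2
y = deletion_channel(x, 0.3, rng)
# y is a subsequence of x, typically shorter than x.
```

The output is always a subsequence of the input, which is exactly what makes the capacity analysis hard: the receiver cannot tell which input positions the surviving symbols came from.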