Joint source-channel coding and decoding techniques are rapidly emerging as important design tools for the complexity- and latency-constrained transmission systems that underlie the multimedia revolution. In such scenarios these techniques have the potential to offer the same performance with less complexity or delay than systems that completely separate the source and channel coding functions. The investigators develop theoretical insights into, and practical approaches to, the construction of joint source-channel codes by addressing certain classes of optimal variable-length error-correcting codes and their application to networks. The insights gained from these studies facilitate new advanced coding strategies as well as a fundamental theory that guides the design of future wireless and multimedia systems.

One approach to joint source-channel coding is to consider prefix condition codes that have an inherent source compression property and are additionally able to detect or correct errors resulting from noise on the communication channel. Such codes typically require less redundancy than classical channel codes to achieve the same degree of error resilience. However, the current understanding of these codes remains limited, which has led researchers to focus on heuristics. The investigators address these issues by tackling research problems in three interrelated directions. In particular, we (1) develop new minimum-redundancy variable-length codes with inherent error resilience properties and investigate issues related to their design and compression performance; (2) study the application of this family of codes to sensor and ad hoc networks and gain insight into their repercussions for higher communication layers; and (3) investigate the construction of reversible variable-length codes, including those with special error-correcting properties.
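The inherent error-detection property of prefix condition codes can be illustrated with a small sketch. The codebook below is hypothetical (not from the project): a greedy decoder parses a valid bitstream unambiguously, while a corrupted stream may leave a tail that matches no codeword, signaling an error without any explicit redundancy bits.

```python
# Hypothetical prefix condition code used only for illustration.
code = {"a": "0", "b": "10", "c": "110", "d": "111"}

def is_prefix_free(codewords):
    """True if no codeword is a prefix of another (sorted adjacency check)."""
    words = sorted(codewords)
    return all(not words[i + 1].startswith(words[i])
               for i in range(len(words) - 1))

def decode(bits, code):
    """Greedy prefix decoding; returns None if the stream fails to parse."""
    inverse = {w: s for s, w in code.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inverse:
            out.append(inverse[buf])
            buf = ""
    # Leftover unmatched bits indicate a detected error.
    return out if buf == "" else None

assert is_prefix_free(code.values())
assert decode("010110", code) == ["a", "b", "c"]
assert decode("011", code) is None  # unparsable tail: error detected
```

This is only a minimal sketch of the general principle; the project's codes combine this parsing behavior with minimum-redundancy design.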

Project Report

The major goal of the project was to design codes for constrained source coding problems that have important industrial applications. The two main thrusts of this research have been (1) prefix condition codes with additional constraints, such as those considered for and used in video standards, and (2) data compression for electron-beam lithography systems, a critical technology for the micro/nanofabrication of circuits.

For the first thrust, fix-free codes, also known as reversible variable-length codes, are prefix condition codes that can also be decoded in the reverse direction; the Huffman code is a famous example of a prefix condition code. Symmetric fix-free codes are fix-free codes in which each codeword is required to be a palindrome. They have been found to be more resilient to transmission errors than ordinary fix-free codes, and in search and indexing applications they can use the same code table for decoding in both the forward and backward directions, which simplifies implementation and reduces memory usage. We developed a new approach for obtaining all optimal binary symmetric fix-free codes. We also offered a conjecture which, if correct, can be used to construct all optimal binary symmetric fix-free codes much faster than earlier approaches. In terms of intellectual merit, there has been little mathematical understanding of fix-free codes, and this work contributes to that body of knowledge. In terms of broader impacts, fix-free codes were components of the video standards H.264 and MPEG-4, and efficient procedures for constructing optimal symmetric fix-free codes may influence future communication standards. For the second thrust, the goal has been to develop specially designed data compression algorithms for the huge electronically controlled mask images of electron-beam direct-write lithography systems, in order to speed up and improve such systems.
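The defining properties above can be sketched concretely. The codeword set below is a hypothetical example chosen only to illustrate the definitions: a fix-free code is both prefix-free and suffix-free, and a symmetric fix-free code additionally uses only palindromic codewords, so the same greedy decoder and code table work on the bitstream read in either direction.

```python
def is_fix_free(words):
    """No codeword is a proper prefix or suffix of another codeword."""
    return all(not (a != b and (b.startswith(a) or b.endswith(a)))
               for a in words for b in words)

def is_symmetric(words):
    """Every codeword is a palindrome."""
    return all(w == w[::-1] for w in words)

def decode(bits, words):
    """Greedy decoding against a codeword set; None on parse failure."""
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in words:
            out.append(buf)
            buf = ""
    return out if buf == "" else None

# Illustrative symmetric fix-free codeword set (all palindromes).
sym = {"0", "11", "101"}
assert is_fix_free(sym) and is_symmetric(sym)

stream = "0111010"  # encodes the codeword sequence 0, 11, 101, 0
assert decode(stream, sym) == ["0", "11", "101", "0"]
# Decoding the reversed stream with the SAME table yields the
# reversed codeword sequence -- the shared-table property.
assert decode(stream[::-1], sym) == ["0", "101", "11", "0"]
```

The construction and optimality of such codes (minimizing expected length under the palindrome and fix-free constraints) is the hard part that the project addresses; this sketch only verifies the structural properties.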
The problem of layout image compression differs from the majority of work on image compression, which targets "natural" images, because there are (1) severe constraints on the decoder, (2) complications in the extraction of patterns, and (3) special structures within layout images to be exploited. In earlier research we introduced the lossless binary layout image compression algorithm Corner2, which assumes a somewhat idealized writing strategy, namely row-by-row in raster order. One outcome of this project was the publication of the Corner2-MEB algorithm, which adapts Corner2 to the writing strategy of MAPPER, whose electron-beam writers are positioned in a lattice, each writing its designated block in a zig-zag order. In a different direction, Intel Corporation and a few other companies are using unidirectional layouts to simplify the manufacturing of circuits. Complementary lithography is an approach for fabricating unidirectional and gridded layouts in which optical lithography produces unidirectional lines at a fixed pitch and electron-beam direct-write lithography cuts those lines. The PI will be publishing a journal paper on theoretical and practical aspects of data compression for the "cut" images. The intellectual merit of this portion of the project has been its focus on the increasingly important information transmission problem within electron-beam lithography. In terms of broader impacts, microelectronic circuits directly affect the speed and efficiency of many information technology systems, including signal and media processing, aerospace, and defense. The success of the project could therefore impact technologies that are ubiquitous.
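The writing order matters because the decompressor must emit pixels in exactly the order the beam writes them. As a hedged sketch, one plausible reading of a per-block "zig-zag" (boustrophedon) order is shown below; the actual MAPPER scan pattern and block geometry may differ, so this is illustrative only.

```python
def zigzag_order(rows, cols):
    """Yield (row, col) coordinates in boustrophedon order:
    even rows left-to-right, odd rows right-to-left (assumed convention)."""
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cs:
            yield (r, c)

# A 2x3 block is traversed with no horizontal flyback between rows.
order = list(zigzag_order(2, 3))
assert order == [(0, 0), (0, 1), (0, 2), (1, 2), (1, 1), (1, 0)]
```

A decoder constrained to this order cannot rely on full raster-scan context, which is one reason Corner2 required modification for MAPPER-style systems.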

Budget Start: 2010-09-15
Budget End: 2014-08-31
Fiscal Year: 2010
Total Cost: $275,604
Name: Texas A&M Engineering Experiment Station
City: College Station
State: TX
Country: United States
Zip Code: 77845