This inter-disciplinary project has its roots in Natural Language (NL) processing. Languages such as English allow intricate, lovely and complex constructions; yet everyday, ``natural'' speech and writing are simple, prosaic, and repetitive, and thus amenable to statistical modeling. Once large NL corpora became available, computational muscle and algorithmic insight led to rapid advances in the statistical modeling of natural utterances, and revolutionized tasks such as translation, speech recognition, and text summarization. While programming languages, like NL, are flexible and powerful, in theory allowing a great variety of complex programs to be written, we find that ``natural'' programs that people actually write are regular, repetitive and predictable. This project will use statistical models to capture and exploit this regularity to create a new generation of software engineering tools to achieve transformative improvements in software quality and productivity.
The project will exploit language modeling techniques to capture the regularity in natural programs at the lexical, syntactic, and semantic levels. Statistical modeling will also be used to capture alignment regularities in ``bilingual'' corpora such as code with comments, or explanatory text (e.g., Stackoverflow), and in systems developed on two platforms such as Java and C#. These statistical models will help drive novel, data-driven approaches for applications such as code suggestion and completion, and assistive devices for programmers with movement or visual challenges. These models will also be exploited to correct simple errors in programs. Models of bilingual data will be used to build code summarization and code retrieval tools, as well as tools for porting across platforms. Finally, this project will create a large, curated corpus of software and code analysis products, as well as a corpus of alignments within bilingual software corpora, to help create and nurture a research community in this area.
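To give a concrete flavor of the lexical-level idea, the following is a minimal, illustrative sketch (not the project's actual tooling) of an n-gram language model trained over tokenized source code and used for next-token suggestion; the toy corpus, tokenization, and lack of smoothing are simplifying assumptions for exposition.

```python
from collections import Counter, defaultdict

def train_trigram_model(token_streams):
    """Count trigram continuations over tokenized source files."""
    counts = defaultdict(Counter)
    for tokens in token_streams:
        padded = ["<s>", "<s>"] + tokens + ["</s>"]
        for a, b, c in zip(padded, padded[1:], padded[2:]):
            counts[(a, b)][c] += 1
    return counts

def suggest(counts, context, k=3):
    """Return the k most frequent next tokens given the preceding tokens."""
    a, b = (["<s>", "<s>"] + context)[-2:]
    return [tok for tok, _ in counts[(a, b)].most_common(k)]

# Toy "corpus" of Java-like token streams; a real system would train on
# millions of tokens mined from open-source repositories.
corpus = [
    ["for", "(", "int", "i", "=", "0", ";", "i", "<", "n", ";", "i", "++", ")"],
    ["for", "(", "int", "j", "=", "0", ";", "j", "<", "m", ";", "j", "++", ")"],
]
model = train_trigram_model(corpus)
print(suggest(model, ["for", "("]))  # regularity makes 'int' the top suggestion
```

Because ``natural'' code is so repetitive, even such simple counting models assign high probability to the tokens programmers actually write next, which is what makes data-driven suggestion, completion, and error-correction tools feasible.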