The goal of this research is to develop a new generation of robust, highly effective unsupervised and semi-supervised models of meaning extraction. Our key methodological insights include the use of global inference methods that simultaneously consider many linguistic aspects of the task, the use of rich typed lexical dependencies, and the semi-supervised use of structured data from the web, such as dictionaries, thesauruses, and encyclopedias.

We are extracting meaning at three levels: word meaning, propositional meaning, and conceptual meaning. At the word level, we are learning lexical relations such as hyponymy (leptin is-a hormone), synonymy, and others, both from raw text and from structured sources such as on-line dictionaries. At the propositional level, we are learning predicate-argument structure using a global unsupervised clustering model, and we are developing semi-supervised methods for learning semantic frame extractors. At a larger structural level, we are inducing scripts and structured narrative relations between verbs.

These rich models of meaning will have a broader impact by providing a critical step toward the creation of systems with true language-understanding capabilities. The results could thus benefit natural-language applications in every field, from educational and tutorial applications, to information extraction tasks such as legal discovery, to conversational agents. All deliverables of this project will be available on the web: WordNet expansions, induced frames and scripts, our temporal event classifier, and semantic role, frame, and script inducers.
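To make the word-level task concrete: one standard way to learn hyponymy pairs such as (leptin, hormone) from raw text is to match lexico-syntactic ("Hearst-style") patterns like "Xs such as Y" or "Y and other Xs". The sketch below is purely illustrative of that pattern-matching idea; the two patterns and the function name are our assumptions, not the project's actual extractor.

```python
import re

# Two classic Hearst-style patterns (an illustrative subset; a real
# extractor would use many more patterns and parsed, typed dependencies).
SUCH_AS = re.compile(r"(\w+)s such as (\w+(?:, \w+)*(?:,? and \w+)?)")
AND_OTHER = re.compile(r"(\w+) and other (\w+)s")

def extract_isa_pairs(text):
    """Return a set of (hyponym, hypernym) pairs matched by the patterns."""
    text = text.lower()
    pairs = set()
    # "hormones such as leptin and insulin" -> leptin is-a hormone, ...
    for m in SUCH_AS.finditer(text):
        hypernym = m.group(1)
        for hyponym in re.split(r",? and |, ", m.group(2)):
            pairs.add((hyponym, hypernym))
    # "leptin and other hormones" -> leptin is-a hormone
    for m in AND_OTHER.finditer(text):
        pairs.add((m.group(1), m.group(2)))
    return pairs

print(sorted(extract_isa_pairs(
    "Hormones such as leptin and insulin regulate appetite.")))
# -> [('insulin', 'hormone'), ('leptin', 'hormone')]
```

Such surface patterns are precise but sparse, which is one motivation for the semi-supervised and globally inferred models proposed above.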