Data quality has become a problem of great importance with the emergence of large volumes of data. Many business and industrial applications critically rely on the quality of information stored in diverse databases and data warehouses. The goal of this research project is to develop a systematic methodology for data quality analysis and improvement that supports robust decision making in imperfect information environments. The project develops a unified framework for data quality assessment and evaluation, delivers practical solutions for improving data quality through information production and management, and disseminates research findings by maintaining a website to increase awareness of information quality among academic and industrial professionals. The approach consists of developing Bayesian network models to capture the inter-relationships between data quality metrics, applying statistical sampling schemes and data mining methods for data quality assessment, and generalizing statistical techniques for root cause identification and data quality improvement. The techniques are evaluated and validated using synthetic examples and real-life cases from the telecommunications and information technology (IT) industries. The outcomes of the project are expected to be generic and to provide a concrete basis for data quality management applicable to a wide range of data-intensive applications.

The project will have broad impacts by advancing the theory and methodology of information quality management, enhancing decision making, and helping to create a workforce of data quality assurance researchers and practitioners. Knowledge gained and results obtained from this research project will be disseminated broadly via the Internet (www.stevens.edu/engineering/seem/Research/projects/DataQuality.html), at conferences and workshops, and in courses at various levels.
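
To make the sampling-based assessment idea concrete, the following is a minimal, hypothetical sketch (not the project's actual method) of estimating a data quality metric by simple random sampling: the fraction of records violating a caller-supplied quality rule, with a normal-approximation confidence interval. The function and field names are illustrative assumptions only.

```python
import random
import math

def estimate_error_rate(records, is_valid, sample_size, z=1.96, seed=0):
    """Estimate the fraction of invalid records via simple random sampling.

    Returns a point estimate and a normal-approximation confidence interval.
    `is_valid` is any caller-supplied quality rule (e.g., a completeness check).
    """
    random.seed(seed)
    sample = random.sample(records, min(sample_size, len(records)))
    errors = sum(0 if is_valid(r) else 1 for r in sample)
    p = errors / len(sample)
    half_width = z * math.sqrt(p * (1 - p) / len(sample))
    return p, (max(0.0, p - half_width), min(1.0, p + half_width))

# Illustrative use: flag records with a missing 'phone' field as quality violations.
records = [{"id": i, "phone": None if i % 7 == 0 else f"555-{i:04d}"}
           for i in range(10_000)]
rate, ci = estimate_error_rate(records, lambda r: r["phone"] is not None,
                               sample_size=500)
print(f"Estimated missing-value rate: {rate:.3f}, 95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```

In practice, such a simple random sample would be only a baseline; the stratified or sequential sampling schemes and Bayesian network models studied in the project would refine which records to inspect and how metric estimates relate to one another.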