This proposal investigates methodological issues that have recently surfaced in HIV screening and in the analysis of AIDS data sets. New cost-efficient protocols for HIV screening are proposed, together with new statistical methodologies for analyzing the resulting screening data. New methodologies are also proposed for the analysis of several major AIDS surveillance data sets and of repeated measures data under informative censoring arising from clinical trials of therapies for AIDS.

Screening for HIV is an important measure for controlling the further spread of HIV and the AIDS epidemic. However, the poor quality and design flaws of currently used protocols have greatly limited its effectiveness. Various new cost-efficient approaches are proposed to estimate and model prevalence more accurately, to screen with high-precision tests that improve the accuracy of HIV diagnosis, and to reduce the false-negative rate in the screening of donated blood. In addition, new statistical methods are proposed to analyze such screening data for various practical purposes and to model the effects of borderline cases and pool sizes.

Data from AIDS surveillance systems provide an indispensable source of information for studying the natural history and dynamics of the AIDS epidemic and for assessing current health care needs and planning for the future. However, current statistical methods are inadequate to exploit the full potential of these databases. A general methodology for analyzing such data is proposed; it is easy to implement and flexible enough to accommodate various practical considerations.

Repeated measures data collected in longitudinal studies are often analyzed with a random-effects model. However, such a model may not be appropriate if the measurements are informatively censored, and modeling the censoring process directly leads to intractable computations. A new approach to this problem is proposed.
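The cost savings that motivate pooled screening can be illustrated with the classical two-stage Dorfman scheme. This is only an illustrative sketch, not the proposal's protocol: it assumes a perfect assay and independent specimens, with prevalence `p` and pool size `k`.

```python
def tests_per_specimen(p, k):
    # Two-stage pooling: one test per pool of k specimens, plus k
    # individual retests whenever the pool is positive, which happens
    # with probability 1 - (1 - p)^k under independence.
    return 1.0 / k + 1.0 - (1.0 - p) ** k

def best_pool_size(p, max_k=100):
    # Brute-force search for the pool size minimizing expected cost.
    return min(range(2, max_k + 1), key=lambda k: tests_per_specimen(p, k))
```

At low prevalence the savings are large (around 0.2 tests per specimen at 1% prevalence), while at high prevalence pooling can cost more than individual testing, which is why the proposal's modeling of pool-size effects matters.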
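Prevalence can also be estimated directly from pooled outcomes without testing individuals. As a hedged sketch of the standard estimator (again assuming a perfect assay and independent specimens, not the proposal's specific method): if `x` of `m` pools of size `k` test positive, then since a pool is positive with probability 1 - (1 - p)^k, the maximum likelihood estimate of the individual prevalence p is:

```python
def prevalence_from_pools(x_positive, m_pools, k):
    # MLE of individual prevalence from pooled test results:
    # observed pool-positive fraction x/m estimates 1 - (1 - p)^k,
    # so p_hat = 1 - (1 - x/m)^(1/k).
    return 1.0 - (1.0 - x_positive / m_pools) ** (1.0 / k)
```

This estimator inverts the pool-level positive rate exactly, so when the observed fraction equals its expected value, the true prevalence is recovered.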
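The bias from informative censoring that motivates the final part of the proposal can be demonstrated with a small simulation. This is a hypothetical illustration, not the proposed method: dropout probability is assumed here to depend on each subject's random effect, so the naive average of the observed responses overestimates the true population mean of zero.

```python
import math
import random

def naive_mean_under_informative_dropout(n=20000, seed=1):
    # Each subject has a random effect b ~ N(0, 1); the follow-up
    # response is y = b + noise, so the true population mean is 0.
    # Dropout is informative: subjects with smaller b are more likely
    # to be missing, biasing the naive observed-data mean upward.
    rng = random.Random(seed)
    observed = []
    for _ in range(n):
        b = rng.gauss(0.0, 1.0)
        y = b + rng.gauss(0.0, 0.5)
        p_observed = 1.0 / (1.0 + math.exp(-2.0 * b))  # logistic in b
        if rng.random() < p_observed:
            observed.append(y)
    return sum(observed) / len(observed)
```

Running this yields a naive mean well above zero, illustrating why a standard random-effects analysis that ignores the censoring process can be misleading.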