In recent years, we have focused on the OME analysis system and on developing robust, general image analysis methodology, culminating in our pattern recognition tool WND-CHARM. We have validated this pattern-recognition approach to biological image analysis using diverse imaging modalities ranging from fluorescence microscopy to X-rays of human knees, and across a range of applications from scoring image-based assays to diagnosing disease and predicting future disease risk. The specific applications of this approach are covered in reports AG000674-09 and AG000685-06. A major effort in the previous year has been to rewrite the WND-CHARM code base to make it more modular, better organized, easier to use, and accessible from the Python scripting language. A recent release, WND-CHARM 1.50, is available from our code repository (https://code.google.com/p/wnd-charm/).

Whole-image analysis has proven very useful, but it is not always possible to compare whole images to each other directly. Whole-image comparison works best for relatively homogeneous images, such as those of cultured cells or of tissues like muscle, liver, and certain types of tumors. Our work on human knee X-rays (see AG000685-06) was the first application in which a degree of pre-processing was necessary to align images of different subjects before comparing them. In that case, we simply found the center of the knee joint in each image and extracted a region of fixed radius around this center for all patients. A much more complicated alignment problem exists in images with complex anatomy; possibly the most extreme example is stained sections of brain tissue. A solution to the alignment problem would allow generalized pattern recognition to address morphological differences in an anatomical context. The general approach to the alignment problem involves an initial training step in which comparable anatomy is manually identified in several subjects by generating fiducial marks or regions of interest (ROIs).
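The knee X-ray pre-processing described above amounts to cutting a fixed-size window around a detected landmark so that the extracted regions are directly comparable across subjects. A minimal sketch in numpy follows; the function name and synthetic image are ours, not part of the WND-CHARM code base, and for simplicity we use a square window where the text refers to a fixed radius:

```python
import numpy as np

def extract_centered_roi(image, center_row, center_col, radius):
    """Extract a square region of fixed half-width around a landmark.

    Once the joint center is located in each subject's image, the same
    fixed-size window is cut from every image, so the resulting regions
    can be compared to one another directly. Coordinates are clipped at
    the image border.
    """
    r0 = max(center_row - radius, 0)
    r1 = min(center_row + radius, image.shape[0])
    c0 = max(center_col - radius, 0)
    c1 = min(center_col + radius, image.shape[1])
    return image[r0:r1, c0:c1]

# Example: a synthetic 512x512 "X-ray" with the joint center at (260, 250)
img = np.zeros((512, 512))
roi = extract_centered_roi(img, 260, 250, radius=128)
```

Because every subject contributes a window of identical size, the extracted regions can be fed to a classifier without any further geometric normalization.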
These ROIs are used to train a classifier to automatically identify the target tissue by systematically scanning images with overlapping ROIs and generating an anatomical map for each image. These maps would in turn be used to isolate target tissue and generate new ROIs that can subsequently be analyzed by experiment-specific classifiers as before. In many cases, the identification of ROIs for subsequent analysis can be accomplished with traditional image segmentation techniques. However, for a great many imaging problems, including those of interest to our group, contrast is limiting and traditional segmentation is error-prone. Analogous to our success with whole-image pattern recognition, the proposed work aims to identify target tissues automatically by manually training classifiers based on general algorithms, rather than by developing a specific segmentation algorithm for each imaging problem.

Spatially-resolved pattern analysis places an extreme burden on the performance of our software. Instead of an entire image being considered at once, or split into a small number of tiles on a grid, each image must be sampled thousands or millions of times to achieve spatial resolution. To make this type of application practical, the computational strategy used in the software must be reconsidered. Previously, all 3,000 low-level image features were calculated for each image sample, even when most of them were later found to be irrelevant to the classification problem because they lacked discrimination power. The major change in strategy to enable spatially-resolved pattern recognition is to eliminate these unnecessary calculations. This requires an on-demand computing strategy for image features, which is a major architectural goal for the wndchrm software. Our current release addresses the architectural issues necessary for this strategy, and the overlying code that makes use of this architecture is currently in development.
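The on-demand strategy can be sketched as a lazy, cached feature store driven by an overlapping-window scan: a feature is computed for a given window only the first time some classifier asks for it. Everything below is illustrative; the class names and the stand-in feature formulas are ours, not the feature bank actually implemented in wndchrm:

```python
import numpy as np

class OnDemandFeatures:
    """Compute low-level features lazily, cached per (window, feature).

    When a trained classifier uses only a small subset of the full
    feature set, only that subset is ever computed for each of the
    thousands of sampled windows, rather than all features up front.
    """
    def __init__(self, image):
        self.image = image
        self._cache = {}
        # Stand-in feature algorithms keyed by name (not WND-CHARM's own).
        self._algorithms = {
            "mean": np.mean,
            "std": np.std,
            "edge_energy": lambda w: float(np.abs(np.diff(w, axis=1)).sum()),
        }

    def get(self, window_origin, size, feature):
        key = (window_origin, size, feature)
        if key not in self._cache:
            r, c = window_origin
            window = self.image[r:r + size, c:c + size]
            self._cache[key] = self._algorithms[feature](window)
        return self._cache[key]

def scan(features, image_shape, size, step, wanted):
    """Sample overlapping windows on a grid, computing only 'wanted' features."""
    rows = range(0, image_shape[0] - size + 1, step)
    cols = range(0, image_shape[1] - size + 1, step)
    return {(r, c): {f: features.get((r, c), size, f) for f in wanted}
            for r in rows for c in cols}

# A step smaller than the window size yields the overlapping ROIs
# described above; only the requested feature is ever calculated.
img = np.arange(64 * 64, dtype=float).reshape(64, 64)
features = OnDemandFeatures(img)
fmap = scan(features, img.shape, size=32, step=16, wanted=["mean"])
```

The resulting per-window feature map plays the role of the anatomical map: each grid position carries the feature values a downstream classifier needs to decide whether that window contains the target tissue.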
We have also made the majority of the underlying C++ code accessible from the Python scripting language to make it easier to customize how WND-CHARM is used in new applications. It is now possible to compute features on demand through the Python interface and to perform further processing using the mathematical and scientific computing libraries available for Python (numpy, scipy). The Python-related software is publicly available under the "pychrm" branch of our public code repository (http://code.google.com/p/wnd-charm/).

In 2012, in collaboration with Jason Swedlow (University of Dundee, Scotland), a large international project sponsored by the Wellcome Trust was initiated to develop specific applications of the OME/OMERO system. Our group's contribution to this project is to provide interfaces between OMERO and WND-CHARM to enable image comparisons in large, diverse image repositories. The eventual goal is to use pattern recognition to annotate new images added to these collections automatically, based on previously annotated images and a large set of independent classifiers that operate autonomously in the background. The primary design goal of a system like OME/OMERO is to provide scientists with easy ways of organizing and annotating their image collections. The organizational structure of the images and their grouping by annotation can also serve as the primary inputs for training pattern-recognition classifiers. Because the classifiers require little or no additional input from the user, the natural convergence of these two technologies represents a powerful new mode for maximizing the utility of large-scale scientific and medical image databases. We currently have a functioning prototype that interacts with OMERO to read image data and annotations, use these to train a classifier, and return the annotations derived from classification back to OMERO. Substantial work remains to make this integrated system practical.
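At the core of WND-CHARM is a weighted neighbor distance classification rule, and with features exposed to Python it can be expressed in a few lines of numpy. The function below is our schematic re-implementation for illustration, not the project's actual API; the p=5 exponent follows the WND-5 rule reported for WND-CHARM, and the class labels and feature vectors in the example are invented:

```python
import numpy as np

def wnd_classify(sample, train_sets, weights, p=5.0):
    """Weighted Neighbor Distance classification, WND-CHARM style.

    For each class, the similarity of a test sample is the mean of
    d(x, t)**(-p) over that class's training feature vectors t, where d
    is a squared Euclidean distance weighted per feature (e.g. by Fisher
    discriminant scores). Returns the winning label and all class scores.
    """
    scores = {}
    for label, train in train_sets.items():
        d = ((weights ** 2) * (train - sample) ** 2).sum(axis=1)
        d = np.maximum(d, 1e-12)  # guard against division by zero
        scores[label] = float(np.mean(d ** (-p)))
    return max(scores, key=scores.get), scores

# Invented two-feature example: two tissue classes with distinct profiles.
train_sets = {
    "muscle": np.array([[0.0, 0.0], [0.1, 0.0]]),
    "liver":  np.array([[1.0, 1.0], [0.9, 1.0]]),
}
weights = np.array([1.0, 1.0])
label, scores = wnd_classify(np.array([0.05, 0.0]), train_sets, weights)
```

Because the feature weights down-weight non-discriminative features toward zero, the same mechanism identifies which features need never be computed at all, which is what makes the on-demand feature strategy effective.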
The Python interface needs to be restructured to parallel the reorganization of the underlying C++ implementation of WND-CHARM. The OMERO system needs to be made more flexible to accommodate the types of annotations possible with WND-CHARM, as well as to store the computationally expensive image features so that they can be reused by different classifiers. We are working closely with members of Dr. Swedlow's group to merge these two projects into a practical, general-use system.