The goal of this research is to develop an intelligent visual database system that learns from user search patterns to automatically extract and annotate semantic objects in a large collection of images (i.e., learning by example). The approach is based on a content-based hierarchical representation of images, in which low-level homogeneous color regions (elementary regions) form the lowest level of the hierarchy. Higher-level (composite) nodes are formed by selected combinations of these regions that match one or more user-provided example templates in color and/or shape; such composite nodes correspond to semantic objects and are generated automatically by our learning-by-example procedure. A hierarchical content-matching procedure is also developed, in which the hierarchical image representation is searched in a top-down fashion, enabling very efficient retrieval of images containing the "learned" objects in subsequent searches. This research will yield procedures for automatic segmentation of semantic objects according to user-provided example templates, for automatic indexing of large visual databases, and for very fast retrieval of images containing such objects from visual databases that employ the developed hierarchical content representation.

www.ece.rochester.edu/~tekalp
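The hierarchy and the top-down matching described above can be sketched as follows. This is a minimal illustrative sketch, not the project's implementation: the `Region` class, the color-only matching criterion, the tolerance value, and the toy scene are all assumptions introduced for illustration (the actual system also matches on shape and learns templates from user interaction).

```python
from dataclasses import dataclass, field

# Hypothetical sketch: elementary homogeneous color regions are the
# leaves; composite (semantic-object) nodes sit above them.
@dataclass
class Region:
    color: tuple                      # mean RGB of the region
    children: list = field(default_factory=list)
    label: str = ""                   # semantic label, set when a template matched

def color_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def matches(node, template_color, tol=30.0):
    # Simplified matching: color only; the real procedure also uses shape.
    return color_distance(node.color, template_color) <= tol

def top_down_search(root, template_color, tol=30.0):
    """Return the highest-level nodes matching the template color.

    Searching top-down prunes whole subtrees as soon as a composite
    node matches, which is what makes retrieval efficient.
    """
    if matches(root, template_color, tol):
        return [root]
    hits = []
    for child in root.children:
        hits.extend(top_down_search(child, template_color, tol))
    return hits

# Toy hierarchy: a composite "car" node built from two elementary red regions.
car = Region(color=(200, 30, 30), label="car",
             children=[Region((210, 25, 25)), Region((190, 40, 35))])
scene = Region(color=(120, 110, 100), children=[car, Region((30, 60, 200))])

print([n.label for n in top_down_search(scene, (205, 30, 30))])  # → ['car']
```

The search stops at the composite "car" node rather than descending to its elementary regions, mirroring the top-down content-matching procedure described in the abstract.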