Computer vision systems typically begin by finding boundaries in their input images, i.e., locations at which properties change sharply (often indicating the edges of objects). Boundary-finding algorithms require an estimate of how much values vary within each image region. Currently available algorithms assume that this scale of variation is constant across the image and, typically, that it is known a priori. This assumption holds only for the simplest images and fails for many common sorts of inputs, including images containing significant texture, functions describing image texture (e.g., texture orientation at each image location), and surface depths computed by stereo matching. This research will develop an algorithm for estimating scale within individual image regions and, using it, a boundary finder that can operate on a much wider range of inputs. The new scale estimator relies on two key ideas. To eliminate contamination from a few "bad" values, the standard deviation (the traditional scale estimator) is replaced by an estimator from robust statistics (e.g., the α-trimmed standard deviation). To avoid peaks in scale estimates near boundaries, the algorithm derives its scale estimate from the minimum-scale neighborhood of each image location.
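The two key ideas can be illustrated concretely. The following is a minimal, brute-force Python sketch, assuming a grayscale image stored as a 2-D NumPy array; the function names (trimmed_std, min_neighborhood_scale), the trim fraction alpha, the window radius, and the use of square windows are illustrative assumptions, not the proposal's actual design.

```python
import numpy as np

def trimmed_std(values, alpha=0.1):
    """Alpha-trimmed standard deviation: sort the samples and discard the
    alpha fraction from each tail before computing the spread, so a few
    contaminating "bad" values cannot inflate the scale estimate.
    (The choice alpha=0.1 is an illustrative default.)"""
    v = np.sort(np.asarray(values, dtype=float).ravel())
    k = int(alpha * v.size)                 # values trimmed from each tail
    core = v[k:v.size - k] if v.size > 2 * k else v
    return core.std()

def min_neighborhood_scale(image, radius=2, alpha=0.1):
    """For each pixel, evaluate the trimmed std over every
    (2*radius+1)x(2*radius+1) window that contains the pixel, and keep
    the MINIMUM.  A window straddling a boundary mixes two regions and
    yields a large spread, while at least one window lying wholly inside
    the pixel's own region yields a small one, so taking the minimum
    suppresses spurious scale peaks near boundaries.  Brute force:
    O(h*w*(2*radius+1)^2) window evaluations, fine for small images."""
    h, w = image.shape
    side = 2 * radius + 1
    # Pad by 2*radius so every candidate window is in bounds.
    padded = np.pad(image.astype(float), 2 * radius, mode="reflect")
    scale = np.full((h, w), np.inf)
    for y in range(h):
        for x in range(w):
            # Windows containing (y, x) have top-left corners, in padded
            # coordinates, ranging over y..y+2*radius and x..x+2*radius.
            for cy in range(y, y + 2 * radius + 1):
                for cx in range(x, x + 2 * radius + 1):
                    win = padded[cy:cy + side, cx:cx + side]
                    scale[y, x] = min(scale[y, x], trimmed_std(win, alpha))
    return scale

# Example: two flat regions with different noise levels.  The estimated
# scale should roughly track each region's noise sigma, without a spike
# at the vertical boundary between them.
rng = np.random.default_rng(0)
img = np.hstack([rng.normal(0, 1.0, (32, 16)),
                 rng.normal(10, 4.0, (32, 16))])
scale_map = min_neighborhood_scale(img)
```

A boundary finder could then compare local value changes against this spatially varying scale map rather than against a single global threshold, which is what lets it handle textured images, texture-descriptor functions, and stereo depth maps.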