Segmentation can be performed on a two- (or higher-) dimensional image to determine one or more curves that define different segments of the image. Each (possibly non-contiguous) segment can be associated with a unique label that identifies the pixels belonging to that segment. For some applications, it is useful to define a space of segmentations, where each segmentation corresponds to a different group of labeled segments of the original image. For example, a probability distribution called a “segmentation distribution” may assign a probability to each segmentation in the space. Aspects of a desired segmentation distribution may be unknown, such as its log partition function (the logarithm of its normalization factor). In some cases it is possible to evaluate the probability value of a particular sample from a distribution, but not to draw samples from that distribution directly. Various techniques can be used to generate samples from the distribution. As more samples are generated, their empirical distribution converges to the desired distribution, and the samples can be used to perform various types of calculations, including, for example, calculations for Bayesian inference tasks.
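As an illustration of why the log partition function may be unknown even when individual probability values are evaluable, the following Python sketch uses a toy Potts-style segmentation model constructed purely for this example (the model, function names, and parameters are illustrative assumptions, not a particular system described here). The unnormalized log-probability of any label map is cheap to compute pointwise, but the normalization factor would require summing over all 2^16 label maps even for this small 4x4 binary example.

```python
import numpy as np

def unnormalized_log_prob(labels, image, beta=1.0, sigma=1.0):
    """Unnormalized log-probability of a label map under a toy
    Potts-style segmentation model: a data term that keeps pixels
    close to their segment's mean intensity, plus a smoothness
    term that rewards neighboring pixels sharing a label."""
    log_p = 0.0
    # Data term: penalize deviation of each pixel from its segment's mean.
    for k in np.unique(labels):
        vals = image[labels == k]
        log_p -= np.sum((vals - vals.mean()) ** 2) / (2 * sigma ** 2)
    # Smoothness term: count label agreements between 4-neighbors.
    log_p += beta * np.sum(labels[:, 1:] == labels[:, :-1])
    log_p += beta * np.sum(labels[1:, :] == labels[:-1, :])
    return log_p

# A 4x4 image whose right half is bright, plus two candidate label maps.
image = np.concatenate([np.zeros((4, 2)), np.ones((4, 2))], axis=1)
good = np.concatenate([np.zeros((4, 2), int), np.ones((4, 2), int)], axis=1)
bad = np.tile([0, 1, 0, 1], (4, 1))  # striped map ignoring image structure

# Pointwise evaluation is easy; the label map matching the image
# structure scores higher than the striped one.
better = unnormalized_log_prob(good, image) > unnormalized_log_prob(bad, image)
```

Normalizing these values into a proper segmentation distribution would require the partition function, i.e., the sum of the exponentiated score over every possible label map, which grows exponentially with the number of pixels.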
For many Bayesian inference tasks, evaluating marginal event probabilities may be more robust than computing point estimates (e.g., the maximum a posteriori (MAP) estimate). Image segmentation, particularly when the signal-to-noise ratio (SNR) of the image is low, is one such task. However, because the space of segmentations is infinitely large, direct inference or sampling is often difficult, if not infeasible. In these cases, Markov chain Monte Carlo (MCMC) sampling approaches can be used to compute empirical estimates of marginal probabilities from generated samples of a segmentation distribution. For example, each new sample is generated by proposing a change to the segmentation corresponding to a previously generated sample, and then determining, with some probability, whether or not to accept the proposed change.
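To make the propose-accept loop and the empirical marginal estimates concrete, here is a minimal Python sketch of single-site Metropolis-Hastings over a toy Potts-style segmentation model (the model, `mh_marginals` name, and all parameters are illustrative assumptions, not the method of any particular system). The per-pixel label frequencies accumulated across the chain serve as empirical estimates of marginal label probabilities.

```python
import numpy as np

def mh_marginals(image, n_labels=2, n_iters=20000, beta=1.0, sigma=0.5, seed=0):
    """Single-site Metropolis-Hastings over label maps for a toy
    Potts-style model; returns empirical per-pixel marginal label
    probabilities accumulated over the chain."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    labels = rng.integers(0, n_labels, size=(h, w))
    counts = np.zeros((n_labels, h, w))

    def log_prob(lab):
        lp = -np.sum((image - lab) ** 2) / (2 * sigma ** 2)  # data term
        lp += beta * np.sum(lab[:, 1:] == lab[:, :-1])       # smoothness
        lp += beta * np.sum(lab[1:, :] == lab[:-1, :])
        return lp

    current_lp = log_prob(labels)
    for _ in range(n_iters):
        # Propose relabeling one randomly chosen pixel.  The proposal is
        # symmetric, so the Hastings ratio reduces to the target ratio.
        i, j = rng.integers(0, h), rng.integers(0, w)
        proposal = labels.copy()
        proposal[i, j] = rng.integers(0, n_labels)
        proposal_lp = log_prob(proposal)
        if np.log(rng.random()) < proposal_lp - current_lp:
            labels, current_lp = proposal, proposal_lp
        # Tally the current state's labels for the empirical marginals.
        for k in range(n_labels):
            counts[k] += labels == k
    return counts / n_iters

# Toy image: dark left half, bright right half.
image = np.concatenate([np.zeros((4, 2)), np.ones((4, 2))], axis=1)
marginals = mh_marginals(image)
# marginals[1] estimates the probability that each pixel has label 1;
# it should be high on the bright half and low on the dark half.
```

This sketch recomputes the full log-probability at every step for clarity; a practical sampler would evaluate only the local change induced by the single-pixel proposal.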
However, conventional MCMC approaches applied to segmentation problems can suffer from slow convergence to, and/or slow sampling of, the desired segmentation distribution, thereby limiting their usefulness in practical applications. This is generally the case, for instance, for the Metropolis-Hastings MCMC sampling procedure. One factor contributing to this slow behavior is that, in a Metropolis-Hastings iteration (i.e., for each proposed change of segmentation), the probability of accepting the change (given by the Hastings ratio, capped at one) can be relatively small, and may even decrease as the segmentation approaches an accurate estimate, causing most proposals to be discarded.
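The declining acceptance rate can be observed empirically. The following hedged sketch (again a toy Potts-style model built only for illustration; the function name and parameters are assumptions) runs single-site Metropolis-Hastings with label-flip proposals and compares the fraction of accepted proposals early in the chain against late in the chain: once the segmentation settles near an accurate estimate, most proposals lower the target probability and are rejected.

```python
import numpy as np

def mh_acceptance_history(image, n_iters=5000, beta=1.0, sigma=0.5, seed=0):
    """Run single-site Metropolis-Hastings on a toy binary Potts-style
    segmentation model and record whether each proposal is accepted."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    labels = rng.integers(0, 2, size=(h, w))

    def log_prob(lab):
        lp = -np.sum((image - lab) ** 2) / (2 * sigma ** 2)  # data term
        lp += beta * np.sum(lab[:, 1:] == lab[:, :-1])       # smoothness
        lp += beta * np.sum(lab[1:, :] == lab[:-1, :])
        return lp

    current_lp = log_prob(labels)
    accepted = []
    for _ in range(n_iters):
        # Propose flipping one pixel's label (symmetric proposal, so the
        # acceptance probability is min(1, target ratio)).
        i, j = rng.integers(0, h), rng.integers(0, w)
        proposal = labels.copy()
        proposal[i, j] = 1 - proposal[i, j]
        proposal_lp = log_prob(proposal)
        accept = np.log(rng.random()) < proposal_lp - current_lp
        if accept:
            labels, current_lp = proposal, proposal_lp
        accepted.append(accept)
    return np.array(accepted)

image = np.concatenate([np.zeros((4, 2)), np.ones((4, 2))], axis=1)
accepted = mh_acceptance_history(image)
early_rate = accepted[:200].mean()   # includes the burn-in phase
late_rate = accepted[-2000:].mean()  # chain near an accurate segmentation
```

In this toy setting the late acceptance rate falls well below the early one, consistent with the behavior described above: near the mode, nearly every flip is downhill and is rejected with high probability.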