In recent years, a number of techniques have been developed to increase the resolution of light microscopy beyond the diffraction limit. Some of these techniques involve stochastically activating the fluorescence of a subset of the molecules present in a field of view, capturing an image of the activated molecules with a digital camera, transferring the image to a computer, and then reversibly or irreversibly deactivating the fluorescence of those molecules. This process is repeated until nearly all of the molecules present have been adequately sampled, which may take 40,000 cycles or more. Each of these many images is then analyzed by fitting single or overlapping Gaussian distributions to the fluorophore image spots or centroids. The location and confidence of fit for each molecule are then determined by successively storing each image in the computer's memory, cross-correlating the Gaussian fits, and storing the resulting location and confidence of fit for each molecule. While acquisition of the images can be accomplished relatively quickly, handling and processing such a large number of images is relatively slow, because these techniques necessitate acquiring and processing a large amount of image data in order to obtain a relatively small number of pixel locations. For example, 40,000 cycles are used to capture 40,000 separate images, which results in processing roughly 20 gigabytes of data in order to generate a single 256 kilobyte final image. For these reasons, engineers, scientists, and microscope manufacturers continue to seek less computationally demanding systems and methods for handling and processing these images.
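The per-image localization step described above can be illustrated with a minimal sketch. The sketch below models an activated fluorophore as an ideal 2D Gaussian spot and recovers its sub-pixel position with an intensity-weighted centroid, a simpler stand-in for the full Gaussian fitting the text describes; the spot parameters, window size, and function names are all hypothetical, chosen only for illustration.

```python
import numpy as np

def gaussian_spot(shape, x0, y0, sigma, amplitude):
    """Render an idealized diffraction-limited spot as a 2D Gaussian.

    All parameters here are illustrative assumptions, not values from
    any particular microscope or camera.
    """
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return amplitude * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))

def localize_centroid(img, background=0.0):
    """Estimate the fluorophore position as the intensity-weighted centroid.

    Production localization software typically performs a full nonlinear
    Gaussian fit instead; the centroid is the simplest sub-pixel estimator.
    """
    w = np.clip(img - background, 0.0, None)
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = w.sum()
    return (w * xx).sum() / total, (w * yy).sum() / total

# Simulate one activated molecule at a sub-pixel position and recover it.
img = gaussian_spot((15, 15), x0=7.3, y0=6.8, sigma=1.6, amplitude=1000.0)
x_est, y_est = localize_centroid(img)
```

Because each camera frame contains only a sparse subset of activated molecules, this estimation must be repeated for every spot in every one of the tens of thousands of frames, which is where the processing burden described above arises.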
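The data-volume figures quoted above can be reproduced with back-of-envelope arithmetic. The frame geometry below (512 x 512 pixels at 16 bits per pixel) is an assumption made to match the stated 20 gigabyte total; the source text does not specify the camera format.

```python
# Assumed frame geometry: 512 x 512 pixels, 16-bit depth (2 bytes/pixel).
frames = 40_000
frame_bytes = 512 * 512 * 2           # 512 KiB per raw camera frame (assumption)
raw_bytes = frames * frame_bytes      # total raw acquisition, ~20 GB
final_bytes = 256 * 1024              # the single 256 KB final image
ratio = raw_bytes // final_bytes      # raw data processed per byte of output
```

Under these assumptions the raw acquisition is about 21 billion bytes (roughly 20 gigabytes), some 80,000 times larger than the final image, which makes concrete why the processing stage, rather than the acquisition stage, dominates the cost.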