Conventionally, image processing algorithms are optimized for single-capture images. However, applying those algorithms to combined images made up of multiple single-capture images can result in sub-optimal image quality. In such processes, image processing parameters are determined independently for each single-capture image in the combined image, neglecting the context of the single-capture image within the combined image. Thus, mismatched image processing algorithms may be applied to overlapping portions of the single-capture images, causing regions of different single-capture images that contain the same content to appear differently in each image.
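The mismatch described above can be reproduced with a minimal sketch. The scene data, window positions, target brightness, and the use of a simple global-gain auto-exposure as the per-image processing step are all illustrative assumptions, not details from the source:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene: a 1-D luminance strip; two cameras capture
# overlapping windows of it (columns 0-59 and 40-99).
scene = rng.uniform(50, 200, size=100)
left = scene[:60].copy()
right = scene[40:].copy()

# Conventional per-capture processing: each image's gain is chosen
# independently so that its own mean maps to a target brightness,
# ignoring that the two captures overlap.
TARGET = 128.0
gain_left = TARGET / left.mean()
gain_right = TARGET / right.mean()

proc_left = left * gain_left
proc_right = right * gain_right

# The overlap (scene columns 40-59) contains identical content, yet
# the two processed versions disagree because the gains differ.
overlap_left = proc_left[40:60]
overlap_right = proc_right[:20]
mismatch = np.abs(overlap_left - overlap_right).mean()
print(f"gain_left={gain_left:.3f} gain_right={gain_right:.3f}")
print(f"mean overlap mismatch: {mismatch:.2f} luminance levels")
```

Because each gain is fit to its own capture's statistics, any difference between the non-shared portions of the two fields of view propagates into the shared portion, producing a visible seam when the images are combined.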
Conventional methods for resolving this issue include processing- or feature-matching around image borders or stitch lines, and pre-matching and locking features. However, the former results in low image quality, while the latter prevents the cameras from adapting to changing conditions (such as changing viewpoints or capture orientations, changing lighting conditions, moving objects, and the like). Furthermore, these methods can result in inefficient stitching and compression when combining single-capture images.
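As one hedged illustration of matching around stitch lines, the independently processed captures can be cross-faded (feathered) across their overlap so the seam itself shows no discontinuity. The feathering weights, scene data, and per-image gain model here are hypothetical; this is a sketch of one common border-matching technique, not the specific method discussed in the text:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(50, 200, size=100)
left = scene[:60].copy()
right = scene[40:].copy()

# Independently processed captures (hypothetical per-image gain).
TARGET = 128.0
proc_left = left * (TARGET / left.mean())
proc_right = right * (TARGET / right.mean())

# Border matching via feathering: linearly cross-fade the two
# processed images across the 20-pixel overlap, so the combined
# image transitions smoothly from one capture to the other.
w = np.linspace(1.0, 0.0, 20)  # weight applied to the left image
blended = w * proc_left[40:60] + (1.0 - w) * proc_right[:20]
combined = np.concatenate([proc_left[:40], blended, proc_right[20:]])

# The seam is hidden, but the regions away from the overlap keep
# their mismatched gains, so the same scene brightness still renders
# differently depending on which camera captured it.
interior_bias = proc_left[:40].mean() - proc_right[20:].mean()
print(f"combined length: {combined.size}")
```

Note that the blend only papers over the overlap: at the overlap boundaries the combined image equals each source exactly, while the interiors of the two captures remain processed with different parameters, which is consistent with the reduced image quality attributed to this approach.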