Image editing applications perform image segmentation to determine boundaries of regions within an image. Image segmentation is used, for example, to determine related areas of an image, such as related areas that form a figure of a person. An existing computing system including an image editing application uses an architecture combining one or more neural networks, such as a convolutional neural network ("CNN"), with one or more stages of post-processing to perform image segmentation on a received image. In such available image segmentation systems, the one or more neural networks provide an image mask indicating an unrefined segmented area, and the one or more stages of post-processing refine the boundaries of the segmented area indicated by the image mask. These systems may use the post-processing stages to attempt to recreate details of the boundaries of the unrefined segmented area.
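The two-stage arrangement described above can be sketched in simplified form. The sketch below is illustrative only and makes loud assumptions: the "neural network" stage is replaced by a simple intensity threshold standing in for a CNN's per-pixel predictions, and the post-processing stage is a 3x3 majority filter standing in for refinement techniques such as dense conditional random fields or guided filtering; the function names `coarse_mask` and `refine_mask` are hypothetical.

```python
import numpy as np

def coarse_mask(image, threshold=0.5):
    # Stand-in for the neural network stage: a real system would run a
    # CNN to predict per-pixel foreground probabilities; here we simply
    # threshold pixel intensities to produce an unrefined binary mask.
    return (image > threshold).astype(np.uint8)

def refine_mask(mask, iterations=1):
    # Stand-in for the post-processing stage: a 3x3 majority filter
    # that smooths jagged boundaries and fills small holes in the
    # coarse mask. Each pass is an extra computation over every pixel,
    # illustrating the added cost of post-processing.
    refined = mask.copy()
    for _ in range(iterations):
        padded = np.pad(refined, 1, mode="edge")
        # Sum each pixel's 3x3 neighborhood, then keep pixels whose
        # neighborhood is majority-foreground (at least 5 of 9).
        neighborhood = sum(
            padded[i:i + refined.shape[0], j:j + refined.shape[1]]
            for i in range(3)
            for j in range(3)
        )
        refined = (neighborhood >= 5).astype(np.uint8)
    return refined

# A synthetic "image": a bright square object with one dark speckle.
image = np.zeros((8, 8))
image[2:6, 2:6] = 0.9
image[3, 3] = 0.1  # noise inside the object

mask = coarse_mask(image)    # unrefined mask has a hole at (3, 3)
refined = refine_mask(mask)  # majority filter fills the hole
```

Note that the majority filter, like other post-processing stages, can also introduce its own boundary artifacts (here, it erodes the object's corners), which reflects the drawback discussed next.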
However, the inclusion of stages of post-processing reduces the speed at which the image editing application performs image segmentation, while increasing the computing system resources required to perform the segmentation. In addition, the segmentation mask generated by the combination of the neural networks and the stages of post-processing may include boundary artifacts that incorrectly identify the boundaries of the segmented area. A neural network architecture that can perform image segmentation without stages of post-processing may therefore provide more accurate segmentation information while consuming fewer computing system resources.