A data set comprising one or more values for one or more entities may be considered dense or sparse, depending on the distribution of the values across the entities. For example, if most of the non-zero values are clustered together among neighboring or nearby entities, the data may be considered dense. Conversely, if non-zero values are widely separated or rare, with most of the values being zero or null, the data may be considered sparse. Sparse data sets may be difficult to analyze or compare, as average values across a region may be very low due to the large number of intervening null values. This difficulty may be compounded in very large and sparse data sets, where only a small fraction of entities have non-null values, and those values are very low. For example, performing individual comparisons between many thousands or millions of entity values may consume significant memory, processing time, and processor-memory bandwidth. The comparisons may also result in a high rate of false positives or false negatives, with significant signals being difficult to extract due to the sparsity of the data and the low values relative to average values.
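The dilution effect described above can be illustrated with a minimal sketch. The entity count, value range, and dictionary-of-keys representation below are illustrative assumptions, not part of the original description; the sketch merely shows how an average taken over a large, mostly empty entity space is far lower than the average over only the populated entities.

```python
import random

random.seed(0)

NUM_ENTITIES = 1_000_000  # large entity space, mostly empty (assumed toy size)

# Sparse representation: store only the non-null entity values.
# Roughly 500 of the 1,000,000 entities carry a value (~0.05%).
sparse = {random.randrange(NUM_ENTITIES): random.uniform(0.1, 1.0)
          for _ in range(500)}

# Averaging over the full entity space treats the missing entries as zero,
# diluting the result by the vast number of implicit nulls...
dense_mean = sum(sparse.values()) / NUM_ENTITIES

# ...while averaging over only the populated entities preserves the signal.
populated_mean = sum(sparse.values()) / len(sparse)

print(f"mean over all entities:       {dense_mean:.6f}")
print(f"mean over populated entities: {populated_mean:.6f}")
```

Because only about 0.05% of the entities hold a value, the mean over the full entity space is several orders of magnitude smaller than the mean over the populated entities, which is why significant signals can be hard to distinguish from noise in such data.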