Many use cases require data processing tasks over large data sets. For example, an email targeting campaign may require processing data sets spanning multiple tables of user data drawn from several data sources, as well as running database queries over those data sets to identify the users to whom targeted emails will be sent. In this scenario, data may be needed from different data stores and from multiple data tables within each of those data stores. Additionally, for a large-scale email targeting campaign, data store queries touching billions of data entries may be required in order to identify the targets of the campaign. These operations can tax the computing resources of the data stores from which the data is being retrieved, which can hamper the performance of the data stores with respect to the other operations they are tasked to perform.
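As a concrete, purely illustrative sketch of the scenario described above, the snippet below joins user records held in two in-memory "data stores" and filters them to select campaign targets. The store contents, field names (`opted_in`, `last_purchase_days_ago`), and the selection rule are hypothetical assumptions introduced for illustration, not part of the original description.

```python
# Hypothetical sketch: two separate "data stores", each holding a
# different table of user data. All names and fields are illustrative.
profiles_store = {          # data store A: user profile table
    1: {"email": "a@example.com", "opted_in": True},
    2: {"email": "b@example.com", "opted_in": False},
    3: {"email": "c@example.com", "opted_in": True},
}
activity_store = {          # data store B: user activity table
    1: {"last_purchase_days_ago": 10},
    2: {"last_purchase_days_ago": 5},
    3: {"last_purchase_days_ago": 120},
}

def campaign_targets(profiles, activity, max_days=30):
    """Join the two tables on user id and apply an example campaign
    rule: opted-in users with a purchase in the last `max_days` days."""
    targets = []
    for user_id, profile in profiles.items():
        act = activity.get(user_id)
        # Skip users missing from the activity store or not opted in.
        if act is None or not profile["opted_in"]:
            continue
        if act["last_purchase_days_ago"] <= max_days:
            targets.append(profile["email"])
    return targets
```

At real scale, this join-and-filter would be expressed as queries executed against the data stores themselves over billions of entries, which is precisely the load on store resources that the passage above describes.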