Typically, a large-scale data set is based on raw data that has been collected but has not undergone any form of processing. The large-scale raw data may be many times larger than a corresponding reduced data set; a reduction factor of about 600 is not uncommon in the area of seismic data interpretation. That is, if a rendering system visualizes 200 GB of seismic volume data, the respective raw data could be approximately 120 TB in size. Usually, this large-scale raw data has to be preprocessed, which may require hours or even days, in order to produce a reduced data set that can be further processed or manipulated for analysis in a user-interactive environment. In another aspect, preprocessing of the large-scale data set may introduce unwanted (or unknown) filtering effects into the reduced data set, which in turn may yield misleading results during further analysis.
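The size relationship above can be sketched as a small calculation; the function name and the decimal (1 TB = 1000 GB) convention are assumptions for illustration, not part of the described system:

```python
# Minimal sketch of the reduction-factor arithmetic described above:
# raw size = reduced size * reduction factor (names are illustrative).

def raw_size_tb(reduced_gb: float, reduction_factor: float) -> float:
    """Return the raw data size in TB for a reduced data set of the
    given size in GB, assuming decimal units (1 TB = 1000 GB)."""
    return reduced_gb * reduction_factor / 1000.0

# Example from the text: 200 GB of reduced seismic volume data at a
# reduction factor of about 600 corresponds to roughly 120 TB of raw data.
print(raw_size_tb(200, 600))  # -> 120.0
```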