Large data sets may exist in various sizes and organizational structures. As online and electronic transactions grow in popularity, the volume of data collected incident to those transactions continues to grow, and big data sets grow correspondingly large. For example, billions of records (also referred to as rows) and hundreds of thousands of columns worth of data may populate a single table. In some instances, the large volume of data may be collected in a raw, unstructured, and undescriptive format. Moreover, traditional relational databases may not be capable of sufficiently handling the size of the tables that big data creates.
As a result, the massive amounts of data in big data sets may be stored in numerous different data storage formats in various locations to service diverse application parameters and use case parameters. Data variables resulting from complex data transformations may be central to deriving valuable insight from data-driven operation pipelines. Additionally, insights may be gained from functional linkages between operational data. Many of the various data storage formats use transformations to convert input data into output variables. These transformations are typically hard coded into systems and may be system-specific and/or definitive of a system's environment. As a result, transferring data between differing data environments may be difficult and/or time consuming.
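The portability problem described above can be illustrated with a minimal sketch. All names here (the field names, units, and the function itself) are invented for illustration and do not come from any particular system: the point is that the transformation bakes one environment's schema and unit conventions directly into code, so moving the pipeline elsewhere means rewriting the transformation rather than reconfiguring it.

```python
def monthly_spend_dollars(record: dict) -> float:
    """Derive an output variable from raw input fields.

    The field name "amt_cents" and the cents-to-dollars conversion are
    hard-coded assumptions about one hypothetical system's environment.
    A different environment storing the same data under another field
    name or unit would require changing this function itself.
    """
    # Schema assumption: the amount is stored in a column named
    # "amt_cents", as an integer number of cents.
    return record["amt_cents"] / 100.0


# A raw, loosely structured input row as such a system might collect it.
raw_row = {"amt_cents": 1999, "txn_month": "2024-01"}
print(monthly_spend_dollars(raw_row))  # 19.99
```

Because the schema knowledge lives inside the function body rather than in a declarative, environment-independent description, each target environment tends to accumulate its own variant of the same transformation, which is the difficulty the passage above identifies.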