The evolution of computers, in terms of both expanded memory storage and increased processing capability, has enabled massive amounts of data to be accumulated and analyzed by complex and intelligent algorithms. For instance, given an accumulation of data, algorithms can analyze such data and locate patterns therein. These patterns can then be extrapolated from the data, persisted as content of data mining model(s), and applied within a desired context. With the evolution of computers from simple number-crunching machines to sophisticated devices, services can be provided that range from video/music presentation and customization to data trending and analysis.
Accordingly, tasks that at one time required skilled mathematicians to perform complex operations by hand can now be automated through utilization of computers. As a simple example, many individuals, rather than employing a skilled accountant to compute their tax liability, simply enter a series of numbers into a computer application and are provided customized tax forms by such application. Furthermore, in a web-related application, the tax forms can be delivered automatically to a government processing service. Thus, through utilization of appropriately designed algorithms, data can be manipulated to produce a desired result.
As the complexity of relationships within data increases, however, it becomes increasingly difficult to generate an output desired by a user. For instance, multiple relationships can exist among data, and there can be a significant number of manners in which to review and analyze such data. To obtain a desired output from the data, one must have substantial knowledge of the content and structure of such data and thereafter generate a complex query to retrieve the data in a desired manner. Furthermore, if the data must be manipulated to obtain a desirable output, the user must have the ability to generate the algorithms necessary to make the required manipulations or must outsource the task to a skilled professional. Thus, expert computer programmers and/or data analysts are typically needed to properly query a database and to apply algorithms to the results of such queries. Moreover, if the data and/or relationships therebetween are significantly altered, the expert programmers and/or data analysts may have to reconfigure the database query and the algorithms that manipulate data returned therefrom. Furthermore, if a user or entity desires a disparate output (e.g., desires to modify the data analyzed and/or the data output), the expert must be summoned yet again to make the necessary modifications. Due to the complexity and number of relationships between data, these tasks can require a substantial amount of time, even for one of utmost skill. Accordingly, the cost, both in monetary terms and in terms of time, can become significant to a user and/or entity, particularly in a business setting, where data must be analyzed and manipulated to create a desired output.
Conventional systems and methodologies for altering data within, and extracting data from, multi-dimensional structures often require use of relatively lengthy statements, wherein each type of object analyzed must be precisely specified. Accordingly, a user must have substantial knowledge of the organization of the multi-dimensional structure. Furthermore, the length of such statements provides opportunities for users to enter the statements incorrectly, resulting in user frustration. Another deficiency associated with conventional systems relates to aggregating data with respect to a plurality of sets. Due to possible difficulties with such aggregation, conventional systems/methodologies do not enable users to aggregate data over multiple sets.
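The aggregation difficulty noted above can be sketched in miniature. The cell data, member sets, and `aggregate` function below are purely illustrative assumptions, not drawn from any particular conventional system; the sketch shows why aggregating over a plurality of sets requires care: naively summing each set in turn double-counts members that appear in more than one set, whereas aggregating over the union counts each member exactly once.

```python
# Hypothetical cell data for a small two-dimensional structure:
# (region, quarter) -> sales measure.
cells = {
    ("East", "Q1"): 100,
    ("East", "Q2"): 150,
    ("West", "Q1"): 200,
    ("West", "Q2"): 250,
}

# Two overlapping member sets a user might wish to aggregate over.
set_a = {("East", "Q1"), ("East", "Q2")}  # all East quarters
set_b = {("East", "Q2"), ("West", "Q2")}  # all Q2 regions

def aggregate(member_sets, data):
    """Sum a measure over the union of several member sets,
    counting each shared member exactly once."""
    union = set().union(*member_sets)
    return sum(data[m] for m in union)

# Naive per-set summation double-counts ("East", "Q2"):
naive = sum(cells[m] for s in (set_a, set_b) for m in s)    # 650
# Union-based aggregation counts it once:
correct = aggregate([set_a, set_b], cells)                  # 500
```

The design choice here is simply to reduce multi-set aggregation to single-set aggregation over a computed union, which is one straightforward way to sidestep the double-counting problem the passage alludes to.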
Accordingly, there exists a need in the art for a system and/or methodology that provides additional functionality with respect to database systems.