Analysis, processing, and visualization of various types and forms of data have become an integral part of the daily business and even personal lives of many people with the proliferation of computing technologies. From business data, such as sales figures and marketing reviews, to student performance data in schools, people from a wide range of backgrounds and skill levels view, process, and try to make sense of ever increasing amounts of data.
A typical scenario for data analysis and viewing includes a user looking at a spreadsheet containing various dimensions of data and trying to analyze it through formulas or charts, commonly set up through manual configuration. Even to reach some level of automation, users typically have to have a general understanding of the data in front of them, select portions of interest (data sets within the overall data), and select suitable analysis tools (for example, trending formulas, chart parameters, etc.). For small amounts of data, this may not be a daunting task, but small amounts of data also provide a less accurate snapshot of the overall story. When more accurate results are desired or the available data is large, common tools based on manual configuration may be inadequate at best and unusable at worst.
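The manual workflow described above can be sketched in a scripting environment. In this sketch (the data, column names, and choice of a linear trend are all hypothetical illustrations, not part of the original description), the user must already know the schema, slice out the relevant portion by hand, and pick a trending formula themselves:

```python
import numpy as np
import pandas as pd

# Hypothetical sales data; in the manual workflow the user must
# already understand the data and know which columns are relevant.
df = pd.DataFrame({
    "month": [1, 2, 3, 4, 5, 6],
    "sales": [100.0, 110.0, 125.0, 130.0, 150.0, 155.0],
    "region": ["N", "N", "N", "S", "S", "S"],
})

# Step 1: manually select a portion of interest
# (a data set within the overall data).
subset = df[["month", "sales"]]

# Step 2: manually choose an analysis tool -- here, a linear trend
# fitted by least squares (a degree-1 polynomial).
slope, intercept = np.polyfit(subset["month"], subset["sales"], 1)

print(f"trend: sales ~= {slope:.1f} * month + {intercept:.1f}")
```

Every step here (which columns, which subset, which formula) is a manual decision, which is precisely what becomes impractical as the data grows.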
Furthermore, the size of a data set is often not something in the user's direct control. Unless a user crafted the data by hand, they may have obtained it from another source. Many sources of data, especially more structured sources (e.g., a database, or publicly available data sets from the government), may be large enough to make manual analysis very difficult without the right tools and the proper know-how.