Systems based on a client/server model often include one or more servers to manage the collection, transfer, distribution, and/or storage of data. Each server in the system runs various processes. In systems that deploy large numbers of servers, performance issues on one or more servers can degrade, or even bottleneck, the performance of the entire system.
One form of performance monitoring is designed to monitor the server processes that cause performance issues. This form of performance monitoring often involves parsing server logs and collecting performance-related information about running processes. The collected information is typically dumped into a log file. Depending on how frequently information is collected from the server logs, log files can grow very large very quickly. One problem with very large log files is that they put a strain on memory. Another problem is that large log files are difficult to parse, making it difficult to extract useful information. These problems are compounded in a system with a large number of deployed servers.
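As an illustration only, the log-parsing approach described above can be sketched in Python. The line format, field names (`proc`, `cpu`, `mem_mb`), and the sample log are hypothetical assumptions, not taken from any particular server; a real deployment would match its own log format.

```python
import re
from collections import defaultdict

# Hypothetical log format for illustration; real server logs will differ.
SAMPLE_LOG = """\
2024-01-01T00:00:00 pid=101 proc=httpd cpu=12.5 mem_mb=240
2024-01-01T00:00:00 pid=202 proc=db cpu=40.0 mem_mb=1024
2024-01-01T00:01:00 pid=101 proc=httpd cpu=15.0 mem_mb=250
"""

LINE_RE = re.compile(
    r"proc=(?P<proc>\S+)\s+cpu=(?P<cpu>[\d.]+)\s+mem_mb=(?P<mem>[\d.]+)"
)

def collect_metrics(log_text):
    """Parse log lines and return average CPU usage per process name."""
    totals = defaultdict(lambda: [0.0, 0])  # proc -> [cpu_sum, sample_count]
    for line in log_text.splitlines():
        m = LINE_RE.search(line)
        if m:
            entry = totals[m.group("proc")]
            entry[0] += float(m.group("cpu"))
            entry[1] += 1
    return {proc: cpu_sum / n for proc, (cpu_sum, n) in totals.items()}

print(collect_metrics(SAMPLE_LOG))  # → {'httpd': 13.75, 'db': 40.0}
```

Even this toy example hints at the scaling problem the text describes: every collection interval appends more lines, so the parse cost and the memory held by the log grow without bound unless the file is cleared.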
Given that memory imposes practical limitations on log file size, log files must be cleared, erased, and/or refreshed periodically, for example on a weekly or even daily basis. Clearing the log files on a periodic basis limits the accessible historical data to the interval between the most recent clearing and the present. For at least these reasons, performance monitoring is limited in its ability to provide accurate and meaningful statistics and other analytical conclusions about system performance.
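A minimal sketch of the periodic clearing described above, assuming a simple size-based policy (the threshold and helper name are hypothetical). It also makes the stated drawback concrete: truncation discards all history recorded before the clearing point.

```python
import os
import tempfile

MAX_LOG_BYTES = 64  # illustrative threshold; real systems use much larger limits

def clear_if_oversized(path, max_bytes=MAX_LOG_BYTES):
    """Truncate the log file once it exceeds max_bytes.

    Everything written before the truncation point is lost, which is
    why clearing limits the available historical data.
    """
    if os.path.getsize(path) > max_bytes:
        with open(path, "w"):
            pass  # opening in "w" mode truncates the file to zero length
        return True
    return False

# Demonstration with a throwaway file.
with tempfile.NamedTemporaryFile("w", delete=False, suffix=".log") as f:
    f.write("x" * 100)          # 100 bytes of "history"
    path = f.name
cleared = clear_if_oversized(path)
print(cleared, os.path.getsize(path))  # → True 0
os.remove(path)
```

Time-based policies (clear weekly or daily) have the same effect: whatever the trigger, data older than the last clearing is unavailable for later analysis.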