Generally, in a typical customer contact center or call center, various aspects related to the performance, efficiency and quality of the contact center are evaluated and monitored manually via a sample cross-section of agent-customer interactions. These interactions are typically a combination of real-time and non-real-time (e.g., tape-recorded) interactions, as well as voice and text-based interactions (to the extent that the contact center handles one or the other of voice and text customer communication, or both).
Due to the limited available pool of expert human assessors who can analyze and evaluate these interactions, the typical sample size used for these evaluations is a tiny proportion of the overall call volume. Moreover, the interactions chosen for evaluation and analysis are randomly sampled from the entire call traffic. A number of important decisions about contact center operations, such as agent training, process modification to improve efficiency/quality, call routing and client feedback, among others, are made based on the analysis of these highly limited samples of interactions. Thus, the sample of interactions chosen for the analysis plays an important role and must fairly represent both the variety of interactions conducted at the contact center and the particular aspect being evaluated.
For example, if the performance of two agents is to be compared based on the calls they handle, then it is only fair to expect that the calls selected for the two agents should match in complexity, call type and customer experience. This can be a non-trivial task, especially if the two agents work across different time shifts and/or on different days.
Similarly, performance evaluation of two units of a contact center can be misleading if the customer interactions from the two units are randomly selected without ensuring that they match on some critical aspects of the interactions, such as "agent vintage" (experience/longevity of the agent), "type of customer", "typical nature of the interactions" and so on. In short, a random sampling of interactions to evaluate a variety of aspects of a contact center, as is done conventionally, can be a highly unreliable and misleading exercise.
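The matched-sampling idea described above can be sketched in code. The following is a minimal illustration, not an implementation from the source: it assumes a hypothetical interaction log in which each record carries attributes such as "agent", "call_type" and "vintage", groups the records into strata on chosen attributes, and draws an equal number of interactions per stratum so that the samples compared for two agents or units are matched on those attributes rather than drawn purely at random.

```python
import random
from collections import defaultdict

def matched_sample(interactions, keys, per_stratum, seed=0):
    """Group interactions into strata defined by the attribute names in
    `keys`, then draw up to `per_stratum` interactions from each stratum.

    Sampling the same number of interactions from each stratum ensures
    that the evaluation sets for different agents or units are matched
    on the chosen attributes (e.g., call type, agent vintage)."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for rec in interactions:
        strata[tuple(rec[k] for k in keys)].append(rec)
    sample = []
    for stratum in sorted(strata):
        recs = list(strata[stratum])
        rng.shuffle(recs)  # random choice *within* each stratum only
        sample.extend(recs[:per_stratum])
    return sample

# Hypothetical interaction log for two agents (illustrative values only).
log = [
    {"agent": "A", "call_type": "billing", "vintage": "senior"},
    {"agent": "A", "call_type": "billing", "vintage": "senior"},
    {"agent": "A", "call_type": "tech",    "vintage": "senior"},
    {"agent": "B", "call_type": "billing", "vintage": "junior"},
    {"agent": "B", "call_type": "tech",    "vintage": "junior"},
    {"agent": "B", "call_type": "tech",    "vintage": "junior"},
]

# One call per (agent, call type) stratum: each agent contributes the
# same mix of call types, so an agent-to-agent comparison is matched.
sample = matched_sample(log, keys=("agent", "call_type"), per_stratum=1)
```

In contrast, a purely random draw from `log` could, for instance, pair agent A's billing calls against agent B's technical-support calls, which is exactly the unfair comparison the text cautions against.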