While customer support may be provided either in person or via phone by human agents, recent years have witnessed the appearance of a large number of automated systems that allow customers to obtain support without human contact. Examples include automated voice response systems (VRS) and Internet-based websites. However, measurement of those automated customer support systems has proved to be challenging. Common measures involve classifying customer contacts with the automated system into specific categories, such as success or failure. This can be particularly important in the case of technical support systems, where determining whether the customer solved his or her problem is an essential measure of success.
In agent-based customer support systems, the success or failure of the interaction can be determined by the agent, along with other measures such as how much time the user spent solving his or her problem. However, measuring the usefulness of automated customer support systems often relies on customer surveys or interviews, which demand the customer's goodwill or compensation. In such cases, the measure can become biased by unhappy customers who agree to take surveys in order to vent their frustration with the system, or by the particular demographics of the people who accept the compensation. In particular, when trying to measure how productive a user was in his or her interaction, survey-based measurements can affect the results in unpredictable ways.
Accordingly, there is a need for techniques that can examine the traces of what a user did in an automated system (for instance, which documents he or she read, which options were chosen from menus, and which information was provided), and, based thereon, classify the interaction according to pre-specified categories.
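As a minimal illustrative sketch of such trace-based classification (not part of the original disclosure: the function name, trace fields, menu-option labels, and rules below are all hypothetical assumptions), an interaction trace might be mapped to a category as follows:

```python
# Hypothetical sketch: classify a customer-support interaction trace
# into pre-specified categories (here, "success" vs. "failure") using
# simple hand-written rules. All field names and labels are assumed.

def classify_trace(trace):
    """Classify an interaction-trace dict into a category string.

    Assumed trace keys:
      'documents_read' - list of document IDs the user viewed
      'menu_options'   - list of menu options chosen, in order
      'info_provided'  - dict of fields the user filled in
    """
    # Explicit signals take precedence: choosing a "problem solved"
    # option counts as success; escalating to a human agent as failure.
    if "problem_solved" in trace.get("menu_options", []):
        return "success"
    if "contact_agent" in trace.get("menu_options", []):
        return "failure"
    # Weak heuristic: a user who read at least one support document
    # and did not escalate is tentatively counted as a success.
    if trace.get("documents_read"):
        return "success"
    return "failure"


example = {
    "documents_read": ["kb-101", "kb-204"],
    "menu_options": ["technical_support", "problem_solved"],
    "info_provided": {"product": "router"},
}
print(classify_trace(example))  # -> success
```

In practice the hand-written rules could be replaced by a statistical classifier trained on labeled traces; the point of the sketch is only that the trace itself, rather than a survey, supplies the classification signal.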