Users who interact with software programs sometimes provide feedback regarding their experience with (at least parts of) those programs. Program developers/authors use this feedback to simplify existing products as well as to create easier-to-use, more intuitive new software products.
Existing feedback collection mechanisms suffer from a number of problems that make collecting useful feedback difficult. One cumbersome and largely manual way to collect feedback is to conduct usability tests in which a usability engineer directs and observes a test subject performing scripted tasks, manually collecting information from the session. As can be readily appreciated, this manual collection method is fairly limited and does not cover real-world and/or unscripted usage scenarios.
In real-world scenarios, one problem is that many users do not know how to find the mechanism for returning feedback, and thus do not provide any. Another problem results from users providing feedback only after interacting with the program. Sometimes such after-the-fact feedback is very general (“took too long” or “was too complicated”), which is of almost no help in fixing any specific problem. Other times users may have difficulty with a particular part of the program, but then not accurately recall the specific issue or issues that caused the difficulty with that part. Still other times, a user may struggle through a difficult part of a program but, after further interaction, come to a different understanding of the program, which biases the feedback returned or leads the user not to bother sending any feedback at all.