1. Field of the Invention
The present invention generally relates to efficiently classifying chat as acceptable or unacceptable to protect the safety of guests in an online environment.
2. Description of the Related Art
Online chat may refer to any kind of communication over the Internet that offers real-time, direct transmission of text-based messages from sender to receiver. Online chat may be a point-to-point communication or a multicast communication from one sender to one or more receivers. Chat may be a feature of an online community, a forum, a Web conferencing service, or any of a variety of other online text-based communications.
Many online communities have rules or guidelines to avoid misunderstandings and to simplify the communication between users in a chat session. These guidelines vary from community to community. Ideally, such guidelines protect the communities and their guests from bullying, lewd behavior, harassment, violence, and a host of other unwanted or inappropriate activities. Most chat communities monitor online chat to ensure compliance with the guidelines. Moderators are tasked with monitoring chat to determine the context of particular chat messages and to ascertain whether a user has violated the community guidelines. Due to the volume of chat in most online communities, however, it is both impractical and expensive for moderators to directly monitor all chat communication.
Software is often used to filter chat which may be offensive and which may require further scrutiny by a moderator. The software filter most commonly deployed to monitor chat is a Bayesian filter. Bayesian filters work by matching chat against a human-created dictionary of tokens to calculate the probability of the chat being inappropriate, flagging it for further examination by the moderator. These tokens are typically words, phrases, or other classifiers. This approach has the advantage of being easy to predict and easy to manage directly, but at the cost of accuracy and efficiency. The filter is unable to distinguish between benign uses of certain terms or phrases and other, more inappropriate uses of those same terms or phrases. Since many phrases have multiple or hidden meanings, the filter conservatively passes those chat messages on to the moderator. In all probability, at some point during a chat session most users will say something that will trigger the chat filter to tag a chat message and notify the moderator of the activity. Coupled with the number of false positives produced by the chat filters, moderators are quickly overwhelmed with requests to review chat sessions.
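The dictionary-matching Bayesian approach described above can be sketched as follows. This is a minimal illustration only: the token dictionary, its probabilities, the prior, and the threshold are all hypothetical values chosen for demonstration, not parameters taken from any deployed filter.

```python
import math

# Hypothetical human-created dictionary of tokens, mapping each token to
# (P(token | inappropriate), P(token | benign)). Real filters would derive
# these estimates from labeled chat logs.
TOKEN_PROBS = {
    "stupid": (0.60, 0.05),
    "meet":   (0.30, 0.20),
    "hello":  (0.01, 0.30),
}
PRIOR_BAD = 0.1  # assumed prior probability that any message is inappropriate


def probability_inappropriate(message: str) -> float:
    """Combine matched-token likelihoods with Bayes' rule (in log space)."""
    log_bad = math.log(PRIOR_BAD)
    log_good = math.log(1.0 - PRIOR_BAD)
    for token in message.lower().split():
        if token in TOKEN_PROBS:  # straight dictionary matching, as described
            p_bad, p_good = TOKEN_PROBS[token]
            log_bad += math.log(p_bad)
            log_good += math.log(p_good)
    bad, good = math.exp(log_bad), math.exp(log_good)
    return bad / (bad + good)


def flag_for_moderator(message: str, threshold: float = 0.5) -> bool:
    """Escalate the message when its estimated probability exceeds the threshold."""
    return probability_inappropriate(message) >= threshold
```

Because the filter matches tokens in isolation, a benign sentence containing a dictionary token scores the same as a hostile one, which is the source of the false positives discussed above.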
Chat moderators are required to determine the intention behind many lines of chat associated with a particular guest's chat session. Typically, this involves reading a good portion of the history for the chat session. The moderator must determine the intentions of the guest authoring the chat in the session and whether the chat session is “bad and unacceptable” or “bad but acceptable”. The moderator must ultimately determine whether the chat session that triggered the review should be overlooked or whether any action should be taken on the guest account. The entire process relies heavily on a moderator's intervention, which places a large burden on a company's manpower and is costly. Current filtering technology needs to be improved so that only those chat sessions deemed contrary to the community guidelines and unacceptable are escalated to the moderator, as opposed to chat sessions in which a guest may have made a small mistake and would otherwise be forgiven.