Children increasingly use computers in their everyday activities and may access a variety of content through them. A parent or an organization may deem some content inappropriate for a child and may wish to prevent that child from accessing such content.
If a parent deems computer-accessible content inappropriate, the parent may use parental-control software to block a child from accessing the content's source (e.g., a computer application or a website). However, it may be difficult or time-intensive for a parent to ascertain whether a particular source should be blocked. The ever-growing number of available sources of content may multiply this burden, which may make the parent's task of managing parental-control software unduly difficult.
Parental-control-software vendors may make content gatekeeping quicker and easier for parents by allowing them to automatically block content sources that the vendors determine are inappropriate. A parental-control-software vendor may use various methods to flag inappropriate content. For example, a parental-control-software vendor may provide blacklists to which parents may subscribe. However, a blacklist may result in too many false negatives and false positives. For example, a blacklist may include a content source with valuable content that many parents would want open to their children. The same blacklist may fail to cover certain content sources that many parents would want to block.
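The blacklist approach described above can be sketched as a simple host lookup. This is a minimal, hypothetical illustration; the domains and blacklist contents are invented for the example and do not reflect any vendor's actual list or implementation.

```python
# Minimal sketch of blacklist-based blocking. The hosts below are
# hypothetical placeholders, not real blacklist data.
from urllib.parse import urlparse

# A subscribed blacklist is, at its simplest, a set of blocked hosts.
BLACKLIST = {
    "example-blocked.test",
    "another-blocked.test",
}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host appears on the blacklist."""
    host = urlparse(url).hostname or ""
    # Match the host itself or any subdomain of a blacklisted host.
    return any(host == b or host.endswith("." + b) for b in BLACKLIST)

print(is_blocked("https://example-blocked.test/page"))  # blocked: host is listed
print(is_blocked("https://unlisted.test/bad-content"))  # allowed: a false negative
```

The sketch makes the failure modes concrete: a valuable site added to the list is blocked for every subscriber (a false positive), while an inappropriate site absent from the list passes through unchecked (a false negative).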
In addition to, or instead of, blacklists, parental-control software may use heuristics (e.g., keyword detection) to guess whether content is inappropriate for a child. However, this method may suffer from the same fundamental defects as blacklists: too many false negatives and false positives. For example, a keyword that usually signals inappropriate content may be benign in some contexts, and some inappropriate content may contain no signaling keywords. What is needed, therefore, is a more efficient and effective mechanism for managing parental controls.
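The keyword-detection heuristic can likewise be sketched in a few lines. The flagged keywords and sample sentences below are hypothetical illustrations chosen to show the context problem, not an actual vendor's keyword set.

```python
# Minimal sketch of keyword-based heuristic filtering. Keywords and
# sample texts are hypothetical placeholders.
import re

FLAGGED_KEYWORDS = {"gamble", "casino"}

def looks_inappropriate(text: str) -> bool:
    """Flag text containing any flagged keyword, regardless of context."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return not FLAGGED_KEYWORDS.isdisjoint(words)

# A benign warning trips the filter: a false positive.
print(looks_inappropriate("Never gamble with your online safety."))
# Inappropriate content with no flagged keyword passes: a false negative.
print(looks_inappropriate("Some harmful page containing no signal words."))
```

Because the heuristic inspects words in isolation, it cannot distinguish a benign use of a keyword from an inappropriate one, nor flag inappropriate content that avoids the keywords entirely.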