Filtering content has become a vital operation within virtually every organization and household connected to the Internet. The reasons for content filtering vary. Some reasons include blocking content associated with viruses, adult material (e.g., violence, pornography, etc.), and nuisance information (e.g., advertisements, etc.). Other reasons include restricting access to confidential material.
By and large, organizations use a combination of automated services and manual techniques to filter content. Automated services may inspect links to certain services and apply pre-assigned ratings to the content found there, or may inspect the content itself and assign a rating. Manual techniques may include maintaining lists of words, phrases, and links that are used to determine whether access to certain content should be blocked or restricted.
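The list-based manual technique described above can be illustrated with a minimal sketch. All list entries and names here are hypothetical examples, not part of any particular filtering product; the sketch assumes the content arrives as plain text together with the links it contains.

```python
# Minimal sketch of list-based content filtering: maintained lists of
# words, phrases, and links determine whether access to content is
# blocked or restricted. All entries below are hypothetical examples.

BLOCKED_WORDS = {"casino", "malware"}            # hypothetical word list
BLOCKED_PHRASES = ["click here to win"]          # hypothetical phrase list
BLOCKED_LINKS = {"ads.example.com"}              # hypothetical link list


def should_block(text: str, links: list[str]) -> bool:
    """Return True if the content matches any maintained blocklist entry."""
    lowered = text.lower()
    # Word match: any individual blocked word appearing in the text.
    if any(word in lowered.split() for word in BLOCKED_WORDS):
        return True
    # Phrase match: any blocked phrase appearing as a substring.
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        return True
    # Link match: any link in the content found on the blocked-link list.
    if any(link in BLOCKED_LINKS for link in links):
        return True
    return False
```

As the section goes on to note, keeping such lists accurate across many resources with differing access privileges is what makes this manual approach difficult to scale.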
One problem with content filtering is that, within large organizations, maintaining automated services and manual techniques for a plurality of resources, each of which may have different access privileges, can quickly become a daunting exercise. Content ratings change continuously, and new content is continuously received. In addition, access privileges of resources are regularly modified, and new resources are added while others are deleted.
Conventional content filtering approaches do not provide a single, generic approach that is reusable and flexible enough to dynamically handle the changing environment associated with content filtering. This is so because the tools and techniques do not exist for an organization to generically define and manage its own content filtering needs in the manner uniquely required by that organization, and in a manner that can be automatically and dynamically enforced.
Therefore, there is a need for improved content filtering.