Wherever information is conveyed to an audience, there may be a desire to analyze the semantic content of that information.
Such analysis of semantic content might be carried out in a comparatively passive and/or after-the-fact fashion. For example, semantic content might be analyzed for monitoring and/or reporting purposes, as is the case with the “word clouds” featured in the sidebars of various web-based blogs, which display the relative frequency with which various words appear in some already-published material.
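As a concrete illustration of this kind of after-the-fact analysis, the frequency counting behind a word cloud can be sketched as follows. This is a minimal, hypothetical sketch, not the method of any particular word-cloud implementation; the tokenization pattern and stop-word list are illustrative assumptions.

```python
# Sketch of the word-frequency analysis underlying a "word cloud":
# count how often each word appears in already-published text, then
# report each word's relative frequency. The regex and stop-word set
# below are illustrative placeholders, not part of any real library.
from collections import Counter
import re

def word_frequencies(text, stop_words=frozenset({"the", "a", "of", "and"})):
    """Return each word's share of the total word count, ignoring stop words."""
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in stop_words]
    counts = Counter(words)
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

freqs = word_frequencies("semantic content of semantic information")
# "semantic" accounts for 2 of the 4 counted words, i.e. a relative frequency of 0.5
```

A word-cloud renderer would then scale each word's display size in proportion to its relative frequency.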
There is also a need for active and/or before-the-fact analysis of semantic content. Especially where it is practical to analyze the semantic content of information before, or even in real time as, that information is conveyed to its audience, the ability to filter, censor, or otherwise modify that information based on recognizable semantic patterns within it would be useful in a wide variety of circumstances.
An individual, or a public or private organization, that conveys information in various forms to various audiences, or that is responsible for others who may do so, will generally have concerns regarding the semantic content of that information, and may even have a duty to monitor or control it. Such concerns may arise from issues of legality, secrecy, confidentiality, privacy, or accuracy, or from any of various legally mandated and/or self-imposed standards, such as those of political correctness or other codes of proper conduct or appropriate behavior. Violation of such standards could in some situations cause considerable embarrassment, loss, or other harm to befall that individual or organization.
For example, universities might wish to prevent illegal or improper distribution of copyrighted or controversial material. Corporations might wish to prevent inadvertent disclosure of proprietary information. Governmental organizations might wish to ensure that politically incorrect language is avoided in any literature disseminated by that organization.
An individual or organization that does not possess the resources to manually edit or proofread all of the many forms of information emanating from it may therefore be exposed to considerable risk.
Even where an individual or organization does possess resources capable of performing such manual editing or proofreading when there is adequate time between the creation of the information and its conveyance to an audience, such manual editing or proofreading may be inadequate during live or near-live communication, where there may be little or no delay between the time the information is created and the time it is conveyed to its audience.
There is therefore a need for the ability to automatically analyze, filter, and/or censor information based on its semantic content. It would be desirable if such automatic analysis, filtering, and/or censoring could be carried out in more or less real time, so as not to introduce excessive delay between the time the information is created and the time it is conveyed to its audience.
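The kind of automatic, pattern-based filtering described above can be sketched in a few lines. This is a minimal illustration under stated assumptions: the blocked patterns and redaction policy are hypothetical placeholders for whatever semantic patterns an organization needs to recognize (confidential code names, identifying numbers, prohibited language, and so on).

```python
# Minimal sketch of automatic censoring applied to text before it is
# conveyed to its audience. Both patterns below are illustrative
# assumptions, not drawn from any real deployment.
import re

BLOCKED_PATTERNS = [
    re.compile(r"\bproject\s+titan\b", re.IGNORECASE),  # a hypothetical code name
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # an SSN-like number format
]

def censor(text, replacement="[REDACTED]"):
    """Replace every occurrence of a blocked pattern with a redaction marker."""
    for pattern in BLOCKED_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

safe = censor("Status of Project Titan: SSN 123-45-6789 on file.")
# → "Status of [REDACTED]: SSN [REDACTED] on file."
```

Because each substitution is a single linear pass over the text, a filter of this shape can run with negligible delay, which is what makes it plausible for the live or near-live communication scenarios discussed above.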
It would also facilitate implementation of such a semantic analysis capability if it could be integrated or combined with existing functionality for parsing, interpreting, and/or converting the content of such information, as might typically occur during preparation of a print job by a printer driver and/or during preparation of a raster image by a raster image processor (RIP), for example.