As the Internet becomes integrated into almost every aspect of people's lives, the amount of content available is growing at an exponential rate. It is common for web providers to operate databases with petabytes of data, while leading content providers are already looking toward technology capable of handling exabyte-scale implementations.
In addition, the tools used to access this vast resource are growing ever more sophisticated. Although users may believe that they are simply logging into a website, sophisticated server software may search through vast stores of data to gather information about the users, for example based on their browsing history, preferences, data access permissions, relationships, location, demographics, etc. Simultaneously, the server may build a custom interface for the users, e.g., using server-side languages. Building this interface may include selecting hundreds of content items, such as images, video clips, animation, applets, and scripts. In some cases, these content items may be selected from among a vast array of potential content items based on various policies, e.g., data access policies, privacy policies, optimization policies, etc. (collectively, “policies”). Some of these policies are implemented by software developers, whereas others are configured or provided by users.
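As a minimal sketch of the kind of policy-based selection described above, the following treats each policy as a predicate that decides whether a given content item may be shown to a given user. The specific policy names, content fields, and filtering logic here are illustrative assumptions, not an actual implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Set

@dataclass
class User:
    user_id: str
    friends: Set[str] = field(default_factory=set)
    location: str = ""

@dataclass
class ContentItem:
    item_id: str
    owner_id: str
    visibility: str = "public"   # "public", "friends", or "private"
    region: str = "any"          # "any", or a specific region code

# A policy is a predicate deciding whether an item may be shown to a user.
Policy = Callable[[User, ContentItem], bool]

def privacy_policy(user: User, item: ContentItem) -> bool:
    """Hypothetical privacy policy based on item visibility."""
    if item.visibility == "public":
        return True
    if item.visibility == "friends":
        return item.owner_id in user.friends or item.owner_id == user.user_id
    return item.owner_id == user.user_id  # "private": owner only

def region_policy(user: User, item: ContentItem) -> bool:
    """Hypothetical data access policy based on the user's location."""
    return item.region in ("any", user.location)

def select_content(user: User, items: List[ContentItem],
                   policies: List[Policy]) -> List[ContentItem]:
    """Keep only the items permitted by every applicable policy."""
    return [i for i in items if all(p(user, i) for p in policies)]
```

In practice a server would apply many such policies, some written by developers and some derived from user settings, when assembling the hundreds of content items that make up a page.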
These policies can be useful in social networking websites, e.g., to determine what personal, advertising, or other content to display to users, to determine what actions users can take, etc. Analyzing all of the relevant policies to determine what subset of data to retrieve and present can take considerable time. However, unless this process occurs without perceptible delay, users may lose patience and simply navigate to a different website. Therefore, it is desirable to efficiently determine what content will be gathered and how it will be presented, so as to reduce website delays caused by policy analysis.
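One simple way to reduce the cost of analyzing many policies per request, sketched below under stated assumptions, is to evaluate inexpensive policies first and short-circuit on the first denial, so that costly checks run only when needed. The cost estimates and policy functions here are purely illustrative.

```python
from typing import Callable, Dict, List, Tuple

# Counters let us observe which (hypothetical) policy checks actually ran.
calls: Dict[str, int] = {"cheap": 0, "expensive": 0}

def cheap_visibility_check(ctx: dict) -> bool:
    """A fast policy check, e.g., a flag lookup."""
    calls["cheap"] += 1
    return bool(ctx.get("visible", False))

def expensive_relationship_check(ctx: dict) -> bool:
    """A slow policy check, e.g., one requiring a graph traversal."""
    calls["expensive"] += 1
    return bool(ctx.get("related", False))

# Each policy is paired with an estimated evaluation cost.
CostedPolicy = Tuple[int, Callable[[dict], bool]]

def allowed(ctx: dict, policies: List[CostedPolicy]) -> bool:
    """Evaluate policies cheapest-first; stop at the first denial."""
    for _cost, policy in sorted(policies, key=lambda p: p[0]):
        if not policy(ctx):
            return False  # short-circuit: skip remaining, costlier policies
    return True

policies: List[CostedPolicy] = [
    (10, expensive_relationship_check),
    (1, cheap_visibility_check),
]
```

When the cheap check denies an item, the expensive check is never invoked, which directly reduces the per-request delay the paragraph above is concerned with.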