Current systems for providing content items to users vary in their approach to user information: for example, in their ability to obtain not only authenticated user access but also meaningful information about their users, and in the extent to which users control access to and use of their own information, updates, relevant content, licenses, and controlled/protected content.
Some systems simply decide what may be appropriate or desirable for users based on a single known data point about the user. For example, location-based services receive location information from a user's mobile device and identify nearby businesses, gas stations, or ATMs. Other location-relevant information, such as local weather reports, may be provided as well. However, the information is selected based only on the user's location; the system has no way of knowing whether any of the identified businesses or facts are more relevant to the particular user than any other. Similarly, some systems grant access based on a single credential, such as a password, magnetic strip card, SIM card, IP address, user ID, or DRM code.
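A location-based lookup of this kind can be sketched minimally. The place names, coordinates, and radius below are illustrative assumptions, not drawn from any particular system; the point is that selection depends on location alone.

```python
import math

# Illustrative catalog of nearby points of interest (hypothetical data).
PLACES = [
    {"name": "Gas Station A", "lat": 40.01, "lon": -105.27},
    {"name": "ATM B", "lat": 40.02, "lon": -105.28},
    {"name": "Bookstore C", "lat": 40.50, "lon": -105.00},
]

def nearby(lat, lon, radius_km=5.0):
    """Return names of places within radius_km of (lat, lon).

    Selection uses only the device's location; the system has no
    basis for ranking results by relevance to the particular user.
    """
    def dist_km(p):
        # Equirectangular approximation; adequate over short distances.
        dx = math.radians(p["lon"] - lon) * math.cos(math.radians(lat))
        dy = math.radians(p["lat"] - lat)
        return 6371.0 * math.hypot(dx, dy)

    return [p["name"] for p in PLACES if dist_km(p) <= radius_km]
```

A query from one coordinate returns the same list for every user at that coordinate, which is precisely the limitation described above.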
Some systems guess what may be appropriate or desirable for users based on a single action. For example, contextual advertising systems may provide an advertisement for a web page based in part on a target word in the web page. These systems have no way of knowing if the advertisement is actually relevant to the user viewing the web page; the advertisement is chosen simply because it matches a target word on the web page. Some systems decide what products may be desirable for a user based on ratings of other similar products provided by the user. For example, some recommendation services receive limited user ratings, or implicit ratings based on views or purchases, of a certain kind of product—books or movies, for example—and recommend other books or movies that the user may like based on similarity to favorably rated items, such as authors, themes, actors, directors, genres, and the like. This may yield inappropriate or incorrect recommendations. For example, after the user buys a single children's book, a recommendation service with such limited information may continue to recommend titles in children's genres despite there being no other connection between the user and children's content.
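The target-word matching described above can be illustrated with a deliberately simplified sketch. The ad inventory and target words are hypothetical; the sketch only shows that the ad is chosen by the page's text, with no model of the viewing user.

```python
# Hypothetical inventory mapping target words to advertisements.
AD_INVENTORY = {
    "mortgage": "Ad: Low-rate home loans",
    "camera": "Ad: Digital camera sale",
    "book": "Ad: Online bookstore",
}

def choose_ad(page_text):
    """Pick an ad solely because a target word appears on the page.

    Nothing here models the viewer: two different users see the same
    ad for the same page, whether or not it is relevant to either.
    """
    words = page_text.lower().split()
    for target, ad in AD_INVENTORY.items():
        if target in words:
            return ad
    return None
```

Note that `choose_ad` takes only the page text as input; any information about the user viewing the page simply has no way to enter the selection.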
Presently available digital rights management (DRM) techniques protect and provide access to digital content. DRM systems are now generally based on conventional static-key encryption, where the presence of a particular key, in the format selected by the DRM system, unlocks access to a single particular piece of digital content.
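The one-key-one-item coupling of such static-key schemes can be modeled in a few lines. This sketch uses a plain lookup table rather than real cryptography, and the content identifiers and key strings are assumptions for illustration only.

```python
# Simplified static-key DRM check: one fixed key unlocks exactly one
# piece of content. (A real system would use actual encryption; this
# sketch models only the static one-key-one-item coupling.)
CONTENT_KEYS = {
    "movie-001": "KEY-A1B2",
    "ebook-042": "KEY-C3D4",
}

def unlock(content_id, presented_key):
    """Grant access only if the presented key matches the static key
    registered for this single content item."""
    return CONTENT_KEYS.get(content_id) == presented_key
```

Because the key is static and bound to a single item, access decisions cannot take into account anything about the user, the context, or other content the user is entitled to.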
Location-based systems, contextual advertising, and recommendation systems are thus forced to decide what may be relevant to a user on the basis of the limited information known about that user. These systems may not achieve a high success rate in delivering information that is truly relevant to the user because their selections rest only on the limited information the user has shared with them, whether explicitly or implicitly; the system knows nothing else about the user, including information collected by or shared with other systems. Nor do these systems necessarily give the user control over, or access to, the personal information they do hold.
Other systems may make more intelligent recommendations or grant access for users based on more detailed information about the user, but these systems may suffer from user privacy or validation problems. For example, deep packet inspection technologies can analyze information sent to and from a user on a broadband network. By inspecting all information sent or received by a user over time, the broadband service provider can develop a clearer picture of the user and what may be relevant to them. However, this approach raises serious privacy concerns because the user may not know that their personal information is being collected, and does not control to whom the information is provided.
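The profiling effect of deep packet inspection can be sketched as simple aggregation over observed traffic. The payloads and topic list below are illustrative assumptions; the sketch shows only that a profile accumulates from everything the user sends, with no opt-in step anywhere.

```python
from collections import Counter

def profile_from_traffic(packets, topics):
    """Accumulate a user-interest profile by inspecting packet payloads.

    The user never opts in: the profile is built silently from all
    observed traffic, which is the privacy concern at issue. Payloads
    and topic names are illustrative.
    """
    counts = Counter()
    for payload in packets:
        text = payload.lower()
        for topic in topics:
            if topic in text:
                counts[topic] += 1
    return counts

# Hypothetical observed traffic for one broadband subscriber.
packets = [
    "GET /sports/scores HTTP/1.1",
    "GET /sports/schedule HTTP/1.1",
    "GET /finance/quotes HTTP/1.1",
]
profile = profile_from_traffic(packets, ["sports", "finance", "travel"])
```

Over time such counts sharpen into a detailed picture of the user, built without the user's knowledge and outside the user's control.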
These previous systems also suffer from being proprietary to the particular website, data system, or electronic service accessed. For example, web sites such as Facebook, Amazon, and ESPN maintain some profile information associated with their users. However, the profile information a user builds up at one site is generally inaccessible to others, depriving the user of its benefit as they travel to other websites. Allowing one site to share information with others again raises privacy concerns, and it is often prohibitively difficult for one system to obtain the appropriately informed user consent needed to share profile information with another system.
As the use of user profile information to gain access or to deliver content and services increases, so too does the risk of unauthorized access to that profile information, whether by an unauthorized content holder or service provider mining the profile information, or through unauthorized content or service access using a replicated or stolen ID. Further, the risk of corruption or destruction of a user's profile and access credential information increases as greater amounts of profile information are aggregated in a single location, which represents a possible single point of failure for loss of the profile information. The consequences of such a loss also grow as users come to rely more and more on systems that make use of the profile information. Finally, as sets of user profiles are encrypted or processed using repeatable methods, the risk that entire sets of profiles will be compromised, hacked, or stolen together also increases.