The present invention is in the area of Internet-implemented information collection and aggregation services, and pertains in particular to services provided by one enterprise for a plurality of other enterprises, while allowing those other enterprises to control the format and content of delivery, including the implementation of transactions.
Through a number of techniques and computer architectures, much of which is the subject matter of the above-referenced Provisional application 60/199,609, aggregation of end-user account data for normalization and presentation to end users through a growing syndicate of destination sites is taught, as described in greater detail below. As of this writing, a first enterprise provides a service on behalf of end users that maintains a large number of aggregated accounts from many second enterprises, such as service providers in the bank, brokerage, credit card, reward program, bills and email verticals. FIGS. 12 through 16 in this application illustrate a novel improvement to current account aggregation technology: a Syndicated Transactions Network. FIGS. 12-16, in conjunction with the detailed description herein, provide an enabling description of this invention.
Currently, account aggregation is often effected through a procedure known colloquially as “screen scraping.” Internet Protocol and XML, among other technologies, enable Yodlee.com to harvest information of relevance to the end user. The Yodlee.com service is designed in part to keep the information provider engaged with its customer (for example, the service takes customers back to the information provider's site to allow the customer to transact in his account). The present inventors, however, have invented a solution that improves upon the screen-scraping-only model of aggregation by affording an even greater degree of engagement between the information provider and its customer, and for the other reasons set forth below.
e-Finance Transaction Platform
Yodlee.com, according to embodiments of the present invention described in enabling detail below, has devised a novel improvement to existing account aggregation and transacting services that will (i) allow financial institutions to syndicate their Web strategies and transactional capabilities more effectively and (ii) place control over the customer relationship completely in the hands of the financial institutions or other information providers.
The technology paradigm of almost every financial institution today is a series of back-end host systems containing customer data communicating through a “business logic” or “rules” layer that allows certain kinds of transactions to occur. These transactions are then published on the Web through Web servers which may be provided by a number of hardware suppliers. A common denominator in virtually all of these implementations is the last link in the chain, the Web server connecting the institution to its customer on the Internet.
The present inventors have invented a provider-side syndication and filtering platform that will be delivered to providers as a software development kit (“SDK”) or other suitable implementation format made available to any provider desiring to control its customers' remote interactions online. This unique platform consists of a rules-based filter developed preferably as a servlet, although other embodiments may utilize other commonly known server extension technologies, allowing customers varying levels of access and transactional capability depending on a number of variables, such as who the user is and from where on the Web the user accesses the account. The platform in some embodiments will utilize powerful internal and external directory services developed by leading computer networking vendors. It is expected that the filtering layer would ship with each Web server distributed by such a supplier. Combined with Yodlee.com's unique positioning in the financial institution marketplace, this approach would allow the company to rapidly seed the market for this service. The solution is referred to herein as an “e-Finance Transaction Platform” (“ETP”).
The ETP has several important effects from the perspective of the financial services provider. First, as noted above, it allows the provider institution to give users different degrees of data and transacting capability depending upon the site from which the user accesses the institution. Through the rules embedded in the filtering platform, the institution is able, for example, to give the user (i) full access to the user's account (including full transactional capability) if the user accesses the account through the institution's site, (ii) somewhat less data and transactional capability if the user accesses the institution from, for example, a non-affiliated Web portal, (iii) less data and less transactional capability still if the user accesses the institution from a competitive money management site and (iv) very limited and/or no data and access if the user accesses the account from a competitive banking property. The ETP effectively allows the provider institution to gain significant control of the customer's remote Web experience.
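The tiered, origin-dependent permissions described above can be illustrated with a minimal sketch; all names, origin categories, and tier labels below are hypothetical and do not represent the ETP's actual rule syntax:

```python
# Hypothetical sketch of an origin-based access filter in the spirit of
# the ETP rules layer. All identifiers are illustrative assumptions.

# Permission tiers, from full access down to none.
FULL, PARTIAL, LIMITED, BLOCKED = "full", "partial", "limited", "blocked"

# Rules keyed by the category of site from which the user reaches the
# institution, mirroring cases (i) through (iv) in the text above.
ACCESS_RULES = {
    "institution_site": FULL,     # (i) the institution's own site
    "neutral_portal": PARTIAL,    # (ii) non-affiliated Web portal
    "money_mgmt_site": LIMITED,   # (iii) competitive money management site
    "competitor_bank": BLOCKED,   # (iv) competitive banking property
}

def access_level(origin_category: str) -> str:
    """Return the data/transaction tier granted for a given access origin."""
    # Unknown origins default to the most restrictive tier.
    return ACCESS_RULES.get(origin_category, BLOCKED)

print(access_level("institution_site"))  # full
print(access_level("unknown_site"))      # blocked
```

In such a scheme the institution, not the destination site, holds the rule table, which is the control the ETP is described as restoring to the provider.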
Second, the ETP promotes a level of financial standardization and customer convenience as yet unrealized among Web properties. The ETP, if embedded in Web servers, permits fully integrated “drag and drop” transactions to be conducted through a user-friendly graphical user interface on a client computer. For example, it is possible through an embodiment of the invention to purchase merchandise or pay a bill online by dragging currency from an online accessible checking account to an online retailer or bill presenter. The communication between the bank and either the retailer or bill presenter is transparent to the user.
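A minimal sketch of the kind of back-end transfer such a drag-and-drop gesture could trigger, with the bank-to-payee communication hidden from the user; the function, account, and payee names are hypothetical:

```python
# Hypothetical sketch: the user drags currency from a checking account
# onto a retailer or bill presenter, and the back end settles the
# transfer. All names and amounts are illustrative.

def drag_and_drop_payment(accounts, payee_ledger, account_id, payee, amount):
    """Debit the dragged-from account and credit the drop-target payee."""
    if accounts[account_id] < amount:
        raise ValueError("insufficient funds")
    accounts[account_id] -= amount                               # debit user
    payee_ledger[payee] = payee_ledger.get(payee, 0.0) + amount  # credit payee
    return accounts[account_id]

accounts = {"checking": 500.0}
ledger = {}
remaining = drag_and_drop_payment(accounts, ledger, "checking",
                                  "Acme Retailer", 120.0)
print(remaining)  # 380.0
```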
Third, the ETP gives the financial institutions a far more effective vehicle for syndicating their online strategies than they have had heretofore. Financial Institutions are increasingly aware that online banking or brokerage services alone are not sufficient to maintain a meaningful share of attention from a large percentage of their customers. Such institutions need an effective transaction syndication vehicle to touch a high percentage of their customers who are not presently engaging with the institution at its own Web site.
Fourth, unlike the situation now existing with account aggregation, the platform will permit the provider institution to maintain a direct marketing relationship with its customer, knowing, for example, what types of activities the customer is conducting at the institution when the institution is accessed from a destination site other than the institution itself. This arrangement will be both more reliable (in terms of data integrity) and more auditable than any known current forms of account aggregation.
Given financial institutions' concerns about screen scraping, the inventors believe that financial institutions will be very receptive to this proposition, and will desire to regain control of the customer experience through access to the ETP. From a technology partner's perspective, this mechanism will give the technology partner an opportunity to participate in the recurring revenue stream available from financial transactions, rather than simply publishing data from financial institutions on the Web through its server network.
The concept for the ETP is born of both real and perceived shortcomings of screen scraping. The ETP may eventually obviate the need for any sort of widespread screen scraping of information providers, because information will be provided directly to the ETP (subject to the permission levels described above) by the information providers. For information providers that choose not to subscribe to the ETP, Yodlee.com will continue to offer aggregated information collected through its proprietary screen scraping technology, also described below.
Information on Scraping Technology
Looking back, it is apparent that as the Internet has gained momentum, consumers have demanded applications or services that make their online experience simpler, easier to use, and more satisfying. The development of successful Internet sites has typically corresponded with a number of themes which have developed over the last few years. When carefully analyzed, this evolution is a logical development of the emerging digital economy.
Prior to 1994 the Internet was not a mass medium, in part because the existing technologies (such as FTP, Archie, Usenet, and Gopher) were not user friendly and required the end user to do all of the work (e.g., the end user had to learn of an existing data source, find the address, navigate to the destination, and download the information). As more consumers began accessing the Internet, Search Engines were created to solve this usability issue. With the advent of the commercial Search Engine, additional content could be easily added to the Internet and the end user had a means of finding and accessing this information. But consumers required better tools than Search Engines for organizing and accessing this wealth of generic content. Push technologies were explored, and eventually, the portal strategy was successfully adopted as an efficient way for consumers to easily access a variety of content sources in a single, easy to use format. As the volume of available online content continues to grow exponentially, portals are now confronted with the need to make different types of content available to different consumers based upon their particular preferences and tastes.
The phenomenal success of Internet portals and destination sites has demonstrated the importance of creatively and intelligently aggregating, organizing and presenting the mass of information available on the Web. Search engines, portals and destination sites have Internet strategies based on the frequency, duration and quality of end user visits to their sites. For this reason, destination sites and portals are constantly seeking content and/or technologies which drive quality traffic to their sites and keep it there. Recent trends indicate that Internet users are up to 25 times more likely to come back to a site when this information is organized according to personal preferences.
FIG. 1 displays the current process of acquiring online PI 100. The end user first selects an information provider site in step 110. The end user proceeds to step 120 by locating and entering the Internet address of the selected information provider. This step may be accomplished in several ways with varying levels of complexity: a simple means is the use of a bookmark or favorite, whereas locating an information provider for the first time might involve significant time and effort performing online searches. In step 130, the end user logs into the selected information provider's Web site utilizing the site's specific logon protocol. This protocol usually involves verifying the identity of the end user using a user name and password or other means of verification, acquiring the verification data from cookies residing on the end user's system, or a combination of requested data and cookie data. The end user continues in step 140 by navigating through Web pages on the information provider's Web site until the desired information is located. During this process, the end user is often required to visit Web pages of little or no use to an end user whose goal is simply to acquire the particular PI residing on the Web site. Ultimately, in step 150, the end user is presented with the desired PI. The entire process 100 is repeated for each individual piece of PI desired by the end user. Under this PI access model, the end user must visit each separate information provider, track potentially different identity verification data for each, utilize a different user interface at each site, and possibly wade through a significant number of filler Web pages.
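The per-provider steps 110 through 150 can be sketched as follows, with the network interaction simulated by an in-memory provider record; all provider names, credentials, and data below are hypothetical:

```python
# Illustrative sketch of process 100, one pass per piece of PI, with the
# Web interaction simulated in memory. All names are assumptions.

# Simulated information-provider site (selection and address lookup in
# steps 110/120 would normally involve bookmarks or online searches).
PROVIDERS = {
    "First Bank": {
        "url": "https://firstbank.example",
        "credentials": ("alice", "s3cret"),
        "filler_pages": ["home", "promotions", "offers"],  # little use to the user
        "pi": {"balance": "$1,234.56"},
    },
}

def acquire_pi(provider_name, username, password, desired_item):
    """One repetition of process 100 for a single piece of PI."""
    site = PROVIDERS[provider_name]                  # step 110: select provider
    address = site["url"]                            # step 120: locate the address
    if (username, password) != site["credentials"]:  # step 130: site-specific login
        raise PermissionError("login failed at " + address)
    for _page in site["filler_pages"]:               # step 140: wade through pages
        pass
    if desired_item not in site["pi"]:
        raise LookupError(desired_item + " not found at " + provider_name)
    return site["pi"][desired_item]                  # step 150: desired PI presented

print(acquire_pi("First Bank", "alice", "s3cret", "balance"))  # $1,234.56
```

Note that every line of the function is specific to one provider's address, login protocol, and page layout, which is the per-site burden the text describes.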
FIG. 4 pictorially illustrates the architecture of this current access process. The end user 210 utilizes the client computer 220 to access each PI Web site 250 across the Internet 230. This current model suffers from several significant deficiencies. The end user must log in to each site separately. Each separate site has its own graphical user interface. Each visited site wants the end user to stay and return, seeking to retain end-user focus for as long as possible. No true aggregation of PI exists; multiple accesses simply allow sequential access to particular pieces of PI.
One partial solution to these problems has recently evolved in the form of portal sites. Generic portal sites aggregate resources into categories and provide links to sites covering topics within those categories. Yahoo and Excite are examples of generic portal sites. These sites facilitate horizontal aggregation of generic content; horizontal aggregation refers to aggregation of PI access within a particular information provider category such as banks or utility companies. Some portal sites allow individual end users a limited capability to select and configure disparate generic PI. Generic PI refers to PI of interest to the particular end user that does not require specific identity verification to obtain. For example, an end user might be interested in the weather forecast for his local area. This information could be integrated into a portal page without requiring identity verification of the particular end user receiving this PI. The individualized portal page provides a significant benefit to users seeking to aggregate generic PI. However, current portal pages do not generally provide PI requiring identity verification such as an end user's stock portfolio or bank balance. Further, these pages do not facilitate transactions utilizing PI.
Under current technology, aggregating PI available over the Internet requires a significant burden in terms of time, effort and learning curve. An end user wishing to access his PI needs to individually visit a variety of information provider sites each with its own requirements, graphical user interface and login protocol.