The invention disclosed broadly relates to telecommunications network architectures and more particularly relates to servers for managing information streams in a telecommunications network.
The invention disclosed herein is related to the inventions described in U.S. Pat. No. 5,802,510, by Mark Alan Jones, entitled "Universal Directory Service"; U.S. Pat. No. 5,742,763, by Mark Alan Jones, entitled "Universal Message Delivery System for Handles Identifying Network Presences"; and U.S. Pat. No. 5,832,221, by Mark Alan Jones, entitled "Universal Message Storage System", all of which are assigned to AT&T Corp. and are incorporated herein by reference for their disclosure of the concepts of a network presence, a directory system, a message delivery system, and a message storage system, which are related to the invention disclosed herein.
Existing network information management technologies, such as the browser-centric technology of the Internet's World Wide Web, require an immediate response by network information sources, such as news services, to client requests for information. Internet browsers act as information gatherers by going out over the network to a specific information source and requesting information, such as an article. The information source can maintain a channel definition format file for that particular client. When the client requests an article from the source, the source must immediately respond with the requested article. The article is custom formatted and routed especially for the requesting client using the channel definition format. The requirement of an immediate and customized response in browser-centric technologies is a burden to the information source.
What is needed is to move away from the existing client-side model and, instead, maintain the source's information in a highly networked service environment. In this manner, the information source would not be constrained to deliver the information at the time and in the form that each client requests. It would not be necessary for an information source to immediately respond to multiple clients requesting the same article to be delivered in individually customized formats. What is needed is a network service that can break the linkage between the mode in which information is gathered and the mode in which it is distributed, that linkage being referred to as mode locking. Mode locking arises where the source of information is incompatible with the destination of the information, such as where there are differences in protocol (e.g., HTTP vs. the SMTP email protocol), differences in data format (e.g., HTML vs. the RFC-822 email standard), differences in sense (speech vs. text or image vs. text), or differences in content expression (e.g., French vs. English). What is needed is a way to break the mode locking inherent in browser-centric network technologies. This would enable a significant improvement in flexibility for obtaining and disseminating information in a network. Customized services for a client, such as translation from one language to another or the summarization of articles, could be performed independently of the task of gathering the article from the information source.
An information stream management network server is disclosed that enables distributing articles and messages to a destination in the network at times and in forms that are specified by a user, while also enabling accessing and receiving the articles and messages from sources in the network at times and in forms that are independent of the user. The network server handles both information pull articles and information push articles. The information push articles use declarative addressing to specified groups of users, thereby masking recipient endpoint identities and delivery preferences from sources and enabling broadcast communication to members of such a group.
Several embodiments of the invention are disclosed. In a first embodiment, the information stream management network server includes an information gathering server and an information distribution server whose respective gathering and distribution functions are kept separate and are respectively defined by a system supervisor and by the endpoint users.
The information gathering server has an input from a network for accessing information pull articles from information pull sources in the network and for receiving information push articles from information push sources in the network. The information gathering server has at least one pull event driver having a specified pull event schedule for accessing articles from a specified information pull source in the network. A supervisory input, independent of end users, provides the specified pull event schedule. The information gathering server includes an event driver queue processor including a scheduler to schedule pull source event drivers by their respective specified next pull event start times. The event driver queue processor selects the next scheduled pull event driver and runs it at the specified pull event start time to access articles from the specified pull source. Then, for every received article requested by at least one user, the event driver queue processor performs the customized transformations specified by the users on the article and stores the transformed article objects in a buffer memory.
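The scheduling behavior described above can be sketched as a priority queue of pull drivers keyed on their next pull event start times. This is a minimal illustration, not the patent's implementation; the class and attribute names (`PullEventDriver`, `next_start`, `interval`) are assumptions for the sketch.

```python
import heapq

class PullEventDriver:
    """Hypothetical pull driver: names a pull source and carries the
    supervisor-specified pull schedule (next start time and interval)."""
    def __init__(self, source, next_start, interval):
        self.source = source
        self.next_start = next_start   # next pull event start time
        self.interval = interval       # supervisor-specified schedule

    def run(self):
        # A real driver would access articles from self.source over the network.
        return [f"article from {self.source}"]

class EventDriverQueueProcessor:
    """Schedules pull drivers by their next pull event start times and
    runs the earliest one; a sketch of the behavior described above."""
    def __init__(self):
        self._queue = []               # min-heap keyed on start time

    def schedule(self, driver):
        # id(driver) breaks ties so drivers themselves are never compared
        heapq.heappush(self._queue, (driver.next_start, id(driver), driver))

    def run_next(self):
        start, _, driver = heapq.heappop(self._queue)
        articles = driver.run()        # access articles from the pull source
        driver.next_start = start + driver.interval
        self.schedule(driver)          # re-queue for the next pull event
        return articles
```

After each run the driver is re-queued at its next scheduled start time, so pulls recur on the supervisor's schedule without any end-user involvement.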
For the case of information push articles (such as email messages), the information gathering server includes at least one push event driver for receiving push articles from the information push sources in the network addressed to a declarative address specified by the endpoint user. The information gathering server includes an information push input buffer for buffering any push articles received from the information push sources. The event driver queue processor determines whether any push articles have been received from the information push sources addressed to the declarative address specified by the user. If so, it immediately selects and runs a push event driver in the information gathering server for such information push articles. Then, the push articles are treated in a manner similar to the articles from pull sources. For every received push article addressed to a user, the event driver queue processor performs the customized transformations specified by the user on the push article and stores the transformed article object in the buffer memory. The transformed push article may also be immediately forwarded to the distribution server for distribution to the end user, if the user has specified immediate delivery.
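The push input buffer described above can be sketched as a simple FIFO that holds arriving push articles until the queue processor drains it and runs a push driver for each. This is an illustrative sketch only; the names (`PushInputBuffer`, `receive`, `drain`) are assumptions, not the patent's terminology.

```python
from collections import deque

class PushInputBuffer:
    """Buffers push articles (e.g., email messages) together with their
    declarative addresses until the event driver queue processor can
    immediately run a push event driver for them; a sketch."""
    def __init__(self):
        self._pending = deque()

    def receive(self, declarative_address, article):
        # Called as push articles arrive from push sources in the network.
        self._pending.append((declarative_address, article))

    def drain(self):
        """Yield buffered (address, article) pairs in arrival order; the
        processor runs a push event driver for each one as it is yielded."""
        while self._pending:
            yield self._pending.popleft()
```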
The endpoint users define user task records that each specify an article type to be gathered, a customized transformation of that article type into a transformed object, a customized routing of the transformed object, the user's destination address, and the time of distribution of the transformed object to the destination address. After the information gathering server has executed an event driver for gathering a pull article or a push article from a source that has been specified by at least one user task definition, the event driver queue processor loops through all of the user task records to perform every type of transformation specified for the article. The customized transformations can include changing the sense (speech to text or image to text) or changing the content, such as to produce notifications, summaries, language translations, compendiums, format conversions, and the like. The transformed article object is then stored in a memory buffer.
For each type of transformed article object, the event driver queue processor creates a distribution event record for each user requesting it, specifying the distribution time requested by the user, the user's destination address for the object, and a memory pointer to the object. The distribution event records are then stored in a memory buffer.
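The two steps above can be sketched together: each requested transformation runs once per article, and one distribution event record is then created per requesting user. This is a minimal sketch assuming task records are plain dictionaries; the field names (`article_type`, `transform`, `deliver_at`, etc.) are assumptions for illustration.

```python
def process_article(article, article_type, task_records, article_buffer, event_buffer):
    """Apply each requested transformation to the article once, store the
    transformed objects, then create one distribution event record per
    requesting user; a sketch of the behavior described above."""
    transformed = {}
    for task in task_records:
        if task["article_type"] != article_type:
            continue
        kind = task["transform_name"]
        if kind not in transformed:
            # Each transformation type runs once, even with many requesters.
            transformed[kind] = task["transform"](article)
            article_buffer[kind] = transformed[kind]   # buffer the object
        event_buffer.append({
            "start_time": task["deliver_at"],          # user-requested time
            "destination": task["destination"],        # user's address
            "object_key": kind,                        # pointer to the object
        })
    return transformed
```

Because the transformed object is shared through the buffer, many users requesting the same summary or translation cost only one transformation pass.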
The information distribution server has an input from the buffer memory and an output to the network. The information distribution server can access the distribution event records, which carry the distribution start times specified by the users, for retrieving the transformed article objects from the buffer memory. The information distribution server can also access the user task records that specify the distribution routing, storage, and endpoint destination specifications provided by the user for distributing the articles in the network. A distribution event queue processor in the information distribution server includes a scheduler to schedule distribution event records by their respective distribution event start times. The distribution event queue processor selects the next scheduled distribution event record and runs it at the distribution event start time to retrieve the articles from the buffer memory. Then, the distribution event queue processor outputs the retrieved transformed article objects using the specified storage and routing paths to an endpoint destination specified in the distribution event record.
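The distribution side mirrors the gathering side: event records are popped in start-time order and each record's memory pointer is resolved against the buffer before delivery. A minimal sketch, assuming the event-record fields used in the gathering sketch and a caller-supplied `send` callable standing in for the routing path:

```python
import heapq

class DistributionEventQueueProcessor:
    """Runs distribution event records in start-time order, retrieving
    each transformed object from the buffer memory; a sketch."""
    def __init__(self, buffer_memory):
        self._buffer = buffer_memory   # maps object keys to transformed objects
        self._heap = []
        self._seq = 0                  # tie-breaker for equal start times

    def schedule(self, event):
        heapq.heappush(self._heap, (event["start_time"], self._seq, event))
        self._seq += 1

    def run_next(self, send):
        """Pop the next scheduled distribution event and deliver its
        object to the endpoint destination via the `send` routing path."""
        _, _, event = heapq.heappop(self._heap)
        obj = self._buffer[event["object_key"]]   # retrieve from buffer memory
        send(event["destination"], obj)           # route per the event record
        return event["destination"], obj
```

Because the records carry only a pointer into the buffer, one transformed object can be delivered to many destinations at each user's chosen time.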
Each push event driver extracts a declarative address from the envelope of the pushed article. The declarative address is a query that is evaluated to produce a set of address handles corresponding to specific end users. For each such user, that user's push task record will be invoked to transform the article for that user. The task record then provides the routing information for distribution of the transformed article.
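Evaluating a declarative address can be sketched as running a query over a user directory to produce address handles, so the sender never sees the recipients' endpoint identities. The directory layout and attribute names below are purely illustrative assumptions:

```python
# Hypothetical user directory; each entry maps attributes to an address handle.
users = [
    {"handle": "alice@svc", "group": "sales", "region": "east"},
    {"handle": "bob@svc",   "group": "sales", "region": "west"},
    {"handle": "carol@svc", "group": "eng",   "region": "east"},
]

def evaluate_declarative_address(query, directory):
    """Evaluate a declarative address (a set of attribute constraints)
    against the directory, producing the set of address handles for the
    matching end users; a sketch of the query evaluation described above."""
    return {u["handle"] for u in directory
            if all(u.get(k) == v for k, v in query.items())}
```

A push article addressed declaratively to, say, the sales group thus fans out to every current member's handle, and each handle's own task record then governs transformation and routing.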
The buffer memory stores in a database the transformed articles accessed by the pull event driver and the push event driver. The buffer memory can store the push articles by the declarative address of the intended destination. The distribution server can include the declarative address information for retrieving from the database the transformed articles from the information push sources addressed to the declarative address.
In another embodiment, the information gathering server and information distribution server are combined as a multiple event queue server. The multiple event queue server has an input coupled to a network for accessing articles from information pull sources in the network. At least one pull event driver in the server has a specified driver execution time for accessing articles from a specified information pull source in the network. A supervisory input coupled to the server provides the specified driver execution time. A storage coupled to the server stores the articles accessed by the pull event driver. The multiple event queue server is coupled to the storage and to the network, and has at least one user task record specified by the user, the record including a distribution execution time specified by the user for retrieving the articles from the storage and a distribution format specified by the user for distributing the articles to a destination in the network specified by the user. The multiple event queue server distributes the articles to the destination in the network at times and in forms that are specified by the user, while the server accesses the articles from the sources in the network at times and in forms that are independent of the user.
The multiple event queue server includes an event queue processor with a scheduler to schedule events by their respective execution times. The event queue processor selects a next scheduled event and runs it at the specified execution time to process the scheduled event. The multiple events include command events, driver execution events, information creation events, and information distribution events.
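The combined embodiment can be sketched as a single priority queue holding heterogeneous events, each tagged with one of the four kinds named above and run at its execution time. The class and method names are assumptions for the sketch:

```python
import heapq

class MultipleEventQueue:
    """One queue holding several kinds of events, each run at its
    specified execution time; a sketch of the combined embodiment."""
    KINDS = {"command", "driver_execution",
             "information_creation", "information_distribution"}

    def __init__(self):
        self._heap = []
        self._seq = 0   # tie-breaker for events with equal execution times

    def post(self, exec_time, kind, payload):
        if kind not in self.KINDS:
            raise ValueError(f"unknown event kind: {kind}")
        heapq.heappush(self._heap, (exec_time, self._seq, kind, payload))
        self._seq += 1

    def run_next(self):
        """Select the next scheduled event; the caller dispatches on its
        kind (run a driver, create an object, distribute an object, ...)."""
        exec_time, _, kind, payload = heapq.heappop(self._heap)
        return kind, payload
```

Folding gathering and distribution into one queue keeps the scheduling machinery shared while the event kind determines which handler runs.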
The multiple event queue server selectively modifies the retrieved articles as specified in the user task record, forming objects which are stored in the storage. The event queue processor selects a next scheduled driver execution event record and runs it at the execution time to retrieve the objects from the storage. The event queue processor outputs the objects to a destination specified in the user task record using a format specified in the user task record.
The multiple event queue server includes an information push input buffer for buffering any articles received from information push sources in the network. The event queue processor determines whether any articles have been received from the information push sources addressed to a declarative address, and immediately selects and runs a push event driver for such information push articles. The event queue processor outputs the articles received from the information push sources to the storage.
In this manner, the server distributes the articles and messages to destinations in the network at times and in forms that are specified by the user, while the server accesses and receives the articles and messages from the sources in the network at times and in forms that are independent of the user.