1. Field of the Invention
The invention relates generally to information processing technology and, more specifically, to a system and method that generally provides for convergence and divergence of information in information streams, a database graph, or a database web distributed across a set of nodes, among other aspects.
2. Related Art
Systems of today, before this disclosure, do not capture the linkages needed to make information meaningful, do not manage change, and do not effectively manage the limited collection of primarily disconnected data that is captured, among other issues. Data become meaningful information as connections, or links, are made to other data in a collection. Data that are already known comprise a baseline of understanding; new, or differential, data are required to truly inform. Systems today are not designed to store, manage, distribute, and process differential data and the connections between data. Systems today manage a snapshot of known information with a small set of selected linkages artificially applied as a result of key strategies or data analytics. A synopsis of the limitations of systems before this disclosure includes:

- Databases typically snapshot the current state of the data they contain.
- Updates to data in databases and other systems before the invention are typically completed by overwriting data, losing information history and evolution.
- Databases before the disclosure include a small set of selectively predefined links, typically implemented as primary and foreign keys that are artificially specified by the database designer. Because most of the connections between data are lost, many of the links that give the data meaning and context are also lost.
- Today's database structures are static, and changes to structure are costly to implement, especially given the cost to update other systems that use or supply data based on the static structure.
- Information management is ad hoc, resulting in a proliferation of disconnected and orphaned data that requires after-the-fact indexing or tagging to reconstruct a subset of the linkage. This indexing strategy is applied to static data structures after they are populated, requiring continual re-analysis and re-indexing as content evolves.
- Systems before this disclosure do not maintain change as a separate, inspectable, discoverable, queryable object.
- Today's systems before this disclosure do not make use of the characteristics of immutability.
- Today's systems do not typically allow retrieval of the state of information at any point in time.
- Today's systems before the disclosure cannot support the dichotomy of business: the need to share data and the need to keep data separate. Systems before the disclosure do not leverage the combination of immutability and relationships to allow information distributed across a system of nodes to cooperatively converge, creating cooperative advantage, and competitively diverge, creating competitive advantage, while at the same time maintaining the immutability and convergeability of the total data set.
- Knowledge management systems of today, before the disclosure, depend largely on user-driven entry of tags or after-the-fact results of data analytics and insertion of artificially created links based on the analytic results. Success in populating tags is limited.
- Today's systems before this disclosure do not support the implementation of systems and applications that can be distributed across a collection of virtual or physical nodes, maintaining a loose coupling of entities through characteristics of immutability and change.
- Today's systems before this disclosure do not allow businesses or organizations to seamlessly move applications between Cloud environments, or between Cloud and non-Cloud environments, such as behind a firewall or on organization-maintained servers.
- In today's systems before this disclosure, properties are just fields in a database rather than classes that can be reused, inherited, and updated in a way that does not require reworking existing structures and systems.
- Overwriting: Today's systems before this disclosure typically change data by simply replacing an existing value with a new value.
Historical values may be maintained, but an analyst must typically identify the data for which historical values are important; software developers and database administrators must then write software to manage the process of inserting a new data item, storing historical entries for the new data item, and associating the historical values with the new data item.
- Synchronization: Synchronizing data in today's distributed systems before this disclosure is nearly impossible. Data changes on one or more distribution nodes are nearly impossible to capture and push to all other nodes without other changes occurring before the updates are completed, frequently resulting in orphaned data, conflicting data, and synchronization problems. Reconciling data typically requires stoppage of the systems and/or development of additional software.
- Today's systems typically cannot interoperate without writing custom software using tools such as XML/web services, SOAP and REST, and custom information exchange protocols.
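The overwriting and history-management limitations described above can be contrasted with an append-only approach in which every update is a new immutable record. The following is a minimal illustrative sketch (the class and method names are hypothetical, chosen for illustration; this is not the disclosed system):

```python
from typing import Any, Optional

class AppendOnlyStore:
    """Illustrative append-only key/value store: updates never overwrite history."""

    def __init__(self) -> None:
        self._log = []  # immutable (timestamp, key, value) records, in insertion order

    def put(self, ts: int, key: str, value: Any) -> None:
        # Each change is appended as a new record; prior values remain queryable.
        self._log.append((ts, key, value))

    def get(self, key: str, as_of: Optional[int] = None) -> Any:
        # Return the latest value of `key` at or before `as_of`
        # (or the current value when `as_of` is None).
        result = None
        for ts, k, v in self._log:
            if k == key and (as_of is None or ts <= as_of):
                result = v
        return result

    def history(self, key: str):
        # The full evolution of a value is recoverable without extra software.
        return [(ts, v) for ts, k, v in self._log if k == key]
```

Because nothing is overwritten, the state of any item at any point in time can be retrieved directly, without roll-back procedures or purpose-built history tables.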
Today's systems link data through uncharacterized constructs, such as left joins, right joins, and outer joins. These linkages provide no information about the context of the link; the context must be inferred from what is being linked, the report that uses the link, the business rules in the query used to create the linkage, and the like.
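The contrast with uncharacterized joins can be illustrated by a structure in which each link is a first-class object carrying its own relationship type and context. This is a hypothetical sketch for illustration only (the names `LinkedData`, `link`, and `links_from` are invented here, not taken from the disclosure):

```python
class LinkedData:
    """Illustrative graph where each link is a characterized, first-class object."""

    def __init__(self) -> None:
        self.nodes = {}
        self.links = []  # each link records source, target, relation, and context

    def add_node(self, node_id: str, data: dict) -> None:
        self.nodes[node_id] = data

    def link(self, src: str, dst: str, relation: str, context: dict = None) -> None:
        # The relation name and context travel with the link itself, so the
        # meaning of the connection need not be inferred from a join query.
        self.links.append({"src": src, "dst": dst,
                           "relation": relation, "context": context or {}})

    def links_from(self, node_id: str, relation: str = None):
        # Links can be queried by their characterization, not just endpoints.
        return [l for l in self.links
                if l["src"] == node_id
                and (relation is None or l["relation"] == relation)]
```

A query such as `links_from("invoice-17", relation="fulfills")` returns not only the connected record but also why and in what context the connection exists, information that an SQL join discards.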
Today's technology is focused on self-contained, fully-defined software-based computer systems that operate on and through the use of a specific set of information representations. The information representations are often poor models of reality, as they are constrained by the availability of a limited set of static data types.
Today's “systems” collect data corresponding to the target information representation; the data may be stored in databases, and the systems manipulate, analyze, and report on the data. The possible execution paths of the systems are fully defined, and the systems' programming logic may execute from a starting point to one or more specified endpoints.
Today's computer systems are typically limited by one or more of the following, among other limitations:

- Static and constrained information representations: each system may operate on a constrained set of information that is force-fit into a static representation that, once defined, cannot be readily changed;
- Location and format/lack of interoperability: some of the data needed to properly complete an analysis or create a report may be collected by a different system, may be in a different format, or may be in a different location. Data cannot be easily shared between systems; a convoluted set of formatting, transfer, parsing, and reformatting procedures must be programmed to enable sharing;
- Limited ability to represent and adapt to changing information structures: system inputs, outputs, and intermediate information representations must be fully defined prior to system development using the limited set of static data types available today. Typically, the system can only operate using those static information representations and cannot be easily adjusted when real-world changes to the information representation become apparent;
- Stove-piped nature of data: data are not inherently relational; each data element must be explicitly linked to other data elements to create an information representation;
- Mutability: data in systems are constantly changed, typically with no maintenance of history; in cases where history is maintained, it requires specialized programs or software, and in many cases history can only be retrieved through a complicated set of roll-back procedures;
- Static execution paths and end states: an execution path or end state not defined before the system was programmed cannot be readily implemented.
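The synchronization difficulty noted earlier follows from mutability: if nodes can overwrite records in place, two nodes can hold conflicting versions of the same record. When every change is instead an immutable record, the logs of several nodes can be converged by simple union. The sketch below is an illustrative simplification under that assumption (the function name and record layout are hypothetical):

```python
def converge(*node_logs):
    """Merge immutable change records from several distributed nodes.

    Because records are immutable (never edited in place), merging reduces
    to a set union ordered by (timestamp, node_id): a record held by one
    node can never conflict with a differently-valued copy of itself on
    another node, so no reconciliation pass or system stoppage is needed.
    """
    merged = set()
    for log in node_logs:
        merged.update(log)
    return sorted(merged, key=lambda rec: (rec[0], rec[1]))

# Each record: (timestamp, node_id, key, value). Duplicates shared by
# both nodes collapse naturally in the union.
node_a = {(1, "A", "status", "open"), (3, "A", "owner", "sam")}
node_b = {(1, "A", "status", "open"), (2, "B", "status", "review")}
```

Here `converge(node_a, node_b)` yields three records: the shared record appears once, and the nodes' independent changes interleave by timestamp, giving each node the same converged total data set.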