The structure of organizations and the flow of information in distributed systems are becoming more complex. Corporations, government agencies, and other organizational entities typically employ large groups of people to accomplish a mission. Frequently, these employees are geographically separated yet must work together to achieve common goals. As a result, and through no fault of the employees, effort is wasted during processes. In addition, extraneous information may be injected into processes, distracting from accomplishment of the mission.
To reduce wasted effort and misinformation, managers and other decision-makers seek to modify the organizational structure or the steps within a process. Nodes may also be added to or deleted from a system to improve mission effectiveness. When such changes are made, it is desirable to measure the “before” and “after” effectiveness of the system.
The prior art teaches some methods for measuring the effectiveness and/or complexity of systems. For example, U.S. Pat. No. 6,128,773 to Snider discloses a metric tool that estimates the entropy of source code as a measure of the complexity of the software. The tool considers the dependencies among the symbols in the software and constructs a data graph representing the structure of the program. Each symbol appears as a node in the graph, with the dependencies shown as edges connecting the nodes. The tool uses information theory to estimate the amount of information, or entropy, needed to describe those dependencies. The entropy provides a measure of the complexity of the program: the greater the entropy, the more complex the dependencies, and the more complex the program.
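The patent does not specify the exact entropy formula, but one plausible reading of the approach can be sketched as follows. The dependency graph `deps` and its symbol names are hypothetical, and the choice to take the Shannon entropy of the distribution of dependency targets across all edges is an assumption made for illustration, not the disclosed method.

```python
from collections import Counter
from math import log2

# Hypothetical dependency graph: each symbol maps to the symbols it depends on.
deps = {
    "main":   ["parse", "render", "log"],
    "parse":  ["log"],
    "render": ["log", "parse"],
}

# Flatten the graph into its edge list, then count how often each
# dependency target occurs.
edges = [target for targets in deps.values() for target in targets]
counts = Counter(edges)
total = len(edges)

# Shannon entropy (in bits) of the dependency-target distribution:
# a more even spread of dependencies yields higher entropy, read here
# as a proxy for more complex program structure.
entropy = -sum((c / total) * log2(c / total) for c in counts.values())
```

In this toy graph, six edges are spread unevenly over three targets, so the entropy is below the 1.585-bit maximum for three equally likely targets.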
U.S. Patent Application Publication No. 2005/0080817 to Janow teaches that the consumption of decision information by an organization's own structures results in an upper limit on the average sustainable per capita decision rate. Individual decision-makers insert management decisions into the control network and expect to eventually receive decisions back from it. The organizational entropy and the related maximum decision rate measure the extra information used to support the partitioning of tasks among decision-making nodes. The invention teaches how to quantify organizational entropy using information theory, and applies the new principles to tools for managing and re-engineering organizations in order to improve productivity in performing knowledge-intensive tasks. The embodiments are quantitative methods of choosing efficient organizational structures and sizes matched to the decision complexity. Some preferred methods are operations research (OR) optimization techniques that incorporate organizational entropy into their cost functions; others are rules or heuristics for applying broad organizational re-engineering strategies that managers can use to improve performance.
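Janow's publication does not disclose a specific cost function, but the idea of folding an entropy term into an OR-style cost function can be sketched as below. Both functions, their parameters (`per_node_cost`, `entropy_weight`), and the reading of organizational entropy as the Shannon entropy of the task partition are illustrative assumptions, not the claimed method.

```python
from math import log2

def partition_entropy(tasks_per_node):
    # Shannon entropy (bits) of how decision tasks are partitioned
    # across organizational nodes; more fragmented partitions score higher.
    total = sum(tasks_per_node)
    probs = [t / total for t in tasks_per_node if t > 0]
    return -sum(p * log2(p) for p in probs)

def org_cost(tasks_per_node, per_node_cost=1.0, entropy_weight=2.0):
    # Hypothetical OR cost function: a per-node staffing cost plus a
    # penalty proportional to organizational entropy. Minimizing this
    # trades off node count against partition fragmentation.
    return (per_node_cost * len(tasks_per_node)
            + entropy_weight * partition_entropy(tasks_per_node))
```

Under this sketch, concentrating all tasks on one node has zero entropy, so the optimization balances the entropy penalty against the cost of additional nodes.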
Also, U.S. Pat. No. 6,862,559 to Hogg discloses a method for computing a diversity measure H(m) for combinatorial structures which involves identifying all M possible substructures having m elements from among the n elements of the combinatorial structure. The number of the substructures that are similar to each such substructure is determined, and the frequency of each distinct substructure is calculated using the number of similar substructures and the total number of substructures M. The method uses the frequency of each distinct substructure to compute an entropy corresponding to m. By the same process, an entropy corresponding to m+1 is computed. The entropy corresponding to m+1 is subtracted from the entropy corresponding to m to produce the diversity measure H(m). In the preferred embodiment, similar substructures are determined by being identical or isomorphic. In an alternative embodiment, a distance function is used to compute a distance between two substructures, and only if the distance is less than a predetermined threshold are the two substructures determined to be similar. In the preferred embodiment, the entropy is computed by summing the frequency of each distinct substructure multiplied by the logarithm of the frequency of that substructure. In an alternative embodiment, the entropy is computed by summing the frequency of each distinct substructure multiplied by the logarithm of the quotient of the frequency divided by an expected frequency of the distinct substructure.
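The steps described above can be sketched for the simplest case, where the combinatorial structure is a multiset of labeled elements and "similar" means identical. The function names and the choice of a base-2 logarithm are assumptions for illustration; the patent's preferred embodiments also cover isomorphism and distance-based similarity, which this sketch does not implement.

```python
from collections import Counter
from itertools import combinations
from math import log2

def substructure_entropy(elements, m):
    # Enumerate all M possible m-element substructures, canonicalized by
    # sorting so that identical substructures compare equal.
    subs = [tuple(sorted(c)) for c in combinations(elements, m)]
    M = len(subs)
    # Frequency of each distinct substructure = (count of similar
    # substructures) / M, then the entropy over those frequencies.
    counts = Counter(subs)
    return -sum((c / M) * log2(c / M) for c in counts.values())

def diversity_measure(elements, m):
    # H(m): the entropy corresponding to m+1 subtracted from the
    # entropy corresponding to m, per the described method.
    return substructure_entropy(elements, m) - substructure_entropy(elements, m + 1)
```

For example, over the multiset `['a', 'a', 'b', 'b']` the four size-1 substructures split evenly between two distinct labels, giving an entropy of exactly 1 bit at m = 1.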
In non-patent literature, N. P. Suh classifies systems in terms of their complexity using Axiomatic Design theory. Other examples of work involving Axiomatic Design are applications relating to the design of an ergonomic workplace by M. G. Helander and L. Lin. Other techniques, such as the TRIZ method, are combined with Axiomatic Design and are being studied by D. Mann. In the management science field, E. Nowicki and C. Smutnicki seek to develop better algorithms to reduce complexity for operations research. Finally, in biological and other systems that occur in nature, S. Strogatz teaches that complexity is sometimes viewed as interactions of an organization with its environment.
As described above, the prior art describes a number of efforts to measure characteristics of systems. However, there exists a need for a method and apparatus for determining and displaying quantitative measures of complexity, wasted effort, and controllability of a complex or distributed system.