This invention relates to methods and systems for allocating resources in an uncertain environment.
The following documents, submitted to the Sunnyvale Center for Innovation, Invention and Ideas (SCI3) under the US Patent and Trademark Office's Document Disclosure Program, are hereby incorporated by reference:
Almost all organizations and individuals are constantly allocating material, financial, and human resources. Clearly, how best to allocate such resources is of prime importance.
Innumerable methods have been developed to allocate resources, but they usually ignore uncertainty: uncertainty as to whether the resources will be available; uncertainty as to whether the resources will accomplish what is expected; and uncertainty as to whether the intended ends will prove worthwhile. Arguably, as the world market grows ever more competitive, as technological advancement continues, and as civilization becomes ever more complex, uncertainty increasingly becomes the most important consideration in all resource allocations.
Known objective methods for allocating resources in the face of uncertainty can be classified as Detailed-calculation, stochastic programming, scenario analysis, and Financial-calculus. (The terms "Detailed-calculation", "Financial-calculus", "Simple-scenario analysis", and "Convergent-scenario analysis" are coined here to help categorize prior art.) (These known objective methods for allocating resources are almost always implemented with the assistance of a computer.)
In Detailed-calculation, probabilistic results of different resource allocations are determined, and then an overall best allocation is selected. The first historic instance of Detailed-calculation, which led to the development of probability theory, was the determination of gambling-bet payoffs to identify the best bets. A modern example of Detailed-calculation is U.S. Pat. No. 5,262,956, issued to DeLeeuw and assigned to Inovec, Inc., where yields for different timber cuts are probabilistically calculated and the cut with the best probabilistic value is selected. The problem with DeLeeuw's method, and this is a frequent problem with all Detailed-calculation, is its requirement to enumerate and evaluate a list of possible resource allocations. Frequently, because of the enormous number of possibilities, such enumeration and evaluation is practically impossible.
Sometimes to allocate resources using Detailed-calculation, a computer simulation is used to evaluate:
Zdc = E(fdc(xdc))      (1.0)
where vector xdc is a resource allocation plan, the function fdc evaluates the allocation in the presence of random, probabilistic, and stochastic events or effects, and E is the mathematical expectation operator. With such simulation capabilities, alternative resource allocations can be evaluated and, of those evaluated, the best identified. Though there are methods to optimize the function, such methods often require significant amounts of computer time and hence are frequently impractical. (See Michael C. Fu's article "Optimization via Simulation: A Review," Annals of Operations Research, Vol. 53 (1994), p. 199-247, and Georg Ch. Pflug's book Optimization of Stochastic Models: The Interface between Simulation and Optimization, Kluwer Academic Publishers, Boston, 1996.) (Generally known approximate solution techniques for optimizing equation 1.0 include genetic algorithms and response surface methods.)
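By way of illustration only, the following sketch (with hypothetical names and numbers throughout, not drawn from any reference above) estimates equation 1.0 by Monte Carlo sampling for a handful of candidate allocations and keeps the best; it is exactly this enumerate-and-evaluate pattern that becomes impractical when candidates multiply:

```python
import random

def f_dc(x, demand):
    """Hypothetical fdc: profit of stocking x units when demand is random.
    Units cost 1 each and sell for 3; unsold units are worthless."""
    return 3 * min(x, demand) - 1 * x

def estimate_Z(x, n_samples=20000, seed=0):
    """Monte Carlo estimate of Zdc = E(fdc(xdc)), demand uniform on 0..100."""
    rng = random.Random(seed)
    return sum(f_dc(x, rng.randint(0, 100)) for _ in range(n_samples)) / n_samples

# Enumerate a handful of candidate allocations and keep the best estimate;
# with this data the winner is to stock 75 units.
candidates = [0, 25, 50, 75, 100]
best = max(candidates, key=estimate_Z)
```

A realistic problem would have far too many candidate allocations to enumerate this way, which is why the optimization-via-simulation literature cited above exists.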
A further problem with Detailed-calculation is the difficulty of handling multiple-stage allocations. In such situations, allocations are made in stages, and between stages random variables are realized (become manifest or assume definitive values). A standard solution approach to such multiple-stage Detailed-calculation resource allocations is dynamic programming: beginning with the last stage, Detailed-calculation is used to contingently optimize last-stage allocations; these contingent last-stage allocations are then used by Detailed-calculation to contingently optimize next-to-last-stage allocations, and so forth. Because dynamic programming builds upon Detailed-calculation, the problems of Detailed-calculation are exacerbated. Further, dynamic programming is frequently difficult to apply.
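As a rough, purely hypothetical illustration of multiple-stage Detailed-calculation via dynamic programming: a budget is split across two stages, a random multiplier is realized between them, and the last stage is contingently optimized first; all payoffs and figures below are invented for illustration:

```python
import math

BUDGET = 10
SCENARIOS = [(0.5, 0.8), (0.5, 2.0)]   # (probability, multiplier realized between stages)

def stage2_value(remaining, m):
    """Contingently optimal last-stage value once multiplier m is known.
    Payoff m*sqrt(a2) increases in a2, so the whole remainder is spent."""
    return max(m * math.sqrt(a2) for a2 in range(remaining + 1))

def stage1_plan(budget):
    """Backward induction: choose the stage-1 allocation maximizing
    sqrt(a1) plus the expected contingently-optimized stage-2 value."""
    best_a1, best_v = 0, float("-inf")
    for a1 in range(budget + 1):
        v = math.sqrt(a1) + sum(p * stage2_value(budget - a1, m)
                                for p, m in SCENARIOS)
        if v > best_v:
            best_a1, best_v = a1, v
    return best_a1, best_v

best_first_stage, value = stage1_plan(BUDGET)
```

Even in this toy case the last stage must be optimized once per reachable state and scenario; with many stages, states, and random variables, that contingent optimization is what makes dynamic programming burdensome.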
Stochastic programming is the specialty in operations research/management science (OR/MS) that focuses on extending deterministic optimization techniques (e.g., linear programming, non-linear programming) to consider uncertainty. The general solution approach is to construct and solve an optimization model that incorporates all the possibilities of what could happen. Unless the resulting optimization model is a linear programming model, the usual problem with such an approach is that the model is too big to be solved and, aside from size considerations, is frequently unsolvable by known solution methods. Creating a linear programming model, on the other hand, frequently requires accepting serious distortions and simplifications. Usually, using more than two stages in a stochastic programming problem is impractical, because the above-mentioned computational problems are seriously aggravated. Assumptions, simplifications, and multi-processor-computer techniques used in special stochastic programming situations fail to serve as a general stochastic-programming solution method.
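A minimal, hypothetical sketch of the "incorporate all the possibilities" (extensive-form, or deterministic-equivalent) approach, solvable below by brute force only because the instance is tiny:

```python
# Hypothetical two-stage problem in extensive form: produce x units now at
# unit cost 1; any scenario shortfall max(0, d - x) must be covered later
# at spot cost 3 per unit.
SCENARIOS = [(0.2, 10), (0.5, 40), (0.3, 70)]   # (probability, demand)

def total_cost(x):
    """First-stage cost plus probability-weighted recourse cost over every
    scenario, i.e., the model incorporates all the possibilities."""
    return 1 * x + sum(p * 3 * max(0, d - x) for p, d in SCENARIOS)

# Enumerating x works here; realistic extensive forms grow far too large.
best_x = min(range(101), key=total_cost)
```

With these invented numbers the optimum is to produce 40 units; with thousands of scenarios and several stages, the extensive-form model exhibits exactly the size explosion described above.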
In Simple-scenario analysis, future possible scenarios are created. The allocations for each are optimized, and then, based upon scenario probabilities, a weighted-average allocation is determined. Sometimes the scenarios and allocations are analyzed and, as a consequence, the weights adjusted. The fundamental problem with this method is that it does not consider how the resulting allocation performs against the scenarios, nor does it make any genuine attempt to develop an allocation that, overall, performs best against all individual scenarios. Related to this fundamental problem is the assumption that optimality occurs at a point central to individual scenario optimizations; in other words, that it is necessarily desirable to hedge allocations. Such hedging could, for example, lead to sub-optimality when, and if, the PRPA uses Simple-scenario analysis for allocating resources: because of economies of scale, it could be preferable to allocate large resource quantities to only a few uses, rather than allocate small quantities to many uses. Another practical example concerns allocating military warheads, where hedging can be counter-productive.
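The hedging defect can be shown with a deliberately small, hypothetical example exhibiting economies of scale, where the weighted-average allocation performs worse against every scenario than either individual scenario optimum:

```python
# Two projects, one unit of resource, payoff quadratic in the amount
# committed (economies of scale). Scenario 0 rewards project 1 only;
# scenario 1 rewards project 2 only.
P = [0.5, 0.5]                         # scenario probabilities
payoff = [lambda x: x ** 2,            # x = fraction given to project 1
          lambda x: (1 - x) ** 2]

grid = [i / 100 for i in range(101)]
per_scenario_opt = [max(grid, key=f) for f in payoff]       # 1.0 and 0.0
blended = sum(p * x for p, x in zip(P, per_scenario_opt))   # the hedged 0.5

def expected_payoff(x):
    return sum(p * f(x) for p, f in zip(P, payoff))
```

The hedged allocation scores 0.25 in expectation, while committing fully to either project scores 0.5: the weighted average is the single worst allocation on the grid, because Simple-scenario analysis never checks how its answer performs against the scenarios.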
Also related to the fundamental problem of scenario analysis is its inability to accommodate utility functions in general, and von Neumann-Morgenstern (VNM) utility functions in particular. Arguably, according to economic theory, utility functions should be used for all allocations when uncertainty is present. Loosely, a utility function maps outcomes to "happiness." The VNM utility function, in particular, maps wealth (measured in monetary units) to utility, has a positive first derivative, and, usually, has a negative second derivative. By maximizing mathematically-expected VNM utility, rather than monetary units, preferences concerning risk are explicitly considered.
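A small, hypothetical illustration of a VNM utility at work: a concave (risk-averse) utility rejects a fair gamble in favor of a sure thing of equal expected monetary value:

```python
import math

def vnm_utility(wealth):
    """A concave VNM utility: positive first derivative, negative second."""
    return math.sqrt(wealth)

def expected(lottery, f=lambda w: w):
    """Expectation of f over a lottery of (wealth, probability) pairs."""
    return sum(p * f(w) for w, p in lottery)

gamble = [(0.0, 0.5), (100.0, 0.5)]    # fair coin flip
certain = [(50.0, 1.0)]                # sure thing, same expected wealth

same_money = expected(gamble) == expected(certain)           # True
prefers_certain = (expected(certain, vnm_utility)
                   > expected(gamble, vnm_utility))          # True: risk aversion
```

Maximizing expected monetary units would rate the two lotteries identical; maximizing expected VNM utility makes the risk preference explicit.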
(A classic example of Simple-scenario analysis theory is Roger J.-B. Wets' thesis, "The Aggregation Principle in Scenario Analysis and Stochastic Optimization," in: Algorithms and Model Formulations in Mathematical Programming, S. W. Wallace (ed.), Springer-Verlag, Berlin, 1989, p. 91-113.)
Simple-scenario analysis has been extended to what might be called Convergent-scenario analysis, which starts where Simple-scenario analysis ends. Using a weighted-average allocation, individual scenarios are re-optimized with their objective functions including penalties (or costs) for deviating from the average allocation. Afterwards, a new weighted-average allocation is determined, the penalties are made more severe, and the process is repeated until the individual scenarios' optimizations converge to yield the same allocation. The deficiencies of Simple-scenario analysis as previously described remain, though they are somewhat mitigated by the mechanism that coordinates individual-scenario optimizations. The mechanism, however, is contingent upon arbitrary parameter values, and hence the mechanism itself arbitrarily forces convergence. Further, such forced convergence is done without regard to whether the current allocation actually improves. Further still, the convergent mechanism tends to overly weigh scenarios that are highly sensitive to small allocation changes, even though it could be desirable to ignore such scenarios. Incorporating penalties for deviating from the average allocation can be cumbersome, if not impossible, and can significantly complicate and protract the solution procedure.
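The convergent mechanism can be sketched, in a much-simplified scalar form resembling progressive hedging (quadratic scenario objectives with closed-form re-optimization; everything here is an invented illustration), as follows. The penalty severity RHO is exactly the kind of arbitrary parameter criticized above:

```python
# Scenario objectives f_s(x) = (x - c_s)^2; each pass re-optimizes every
# scenario with a linear multiplier term plus a quadratic penalty for
# deviating from the current weighted-average allocation.
P = [0.5, 0.5]          # scenario probabilities
C = [0.0, 10.0]         # per-scenario ideal allocations
RHO = 1.0               # arbitrary penalty severity

x = list(C)             # start from the individual scenario optima
w = [0.0, 0.0]          # multipliers accumulated across passes
for _ in range(100):
    x_bar = sum(p * xs for p, xs in zip(P, x))
    # Closed-form argmin of (x - c)^2 + w*x + (RHO/2)*(x - x_bar)^2:
    x = [(2 * c - ws + RHO * x_bar) / (2 + RHO) for c, ws in zip(C, w)]
    x_bar_new = sum(p * xs for p, xs in zip(P, x))
    w = [ws + RHO * (xs - x_bar_new) for ws, xs in zip(w, x)]
```

Here the scenario solutions are driven toward the common allocation x = 5.0, midway between the scenario optima, whether or not that hedged point is actually a good allocation.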
The Progressive Hedging Algorithm is the most famous of the Convergent-scenario analysis techniques and is described in R. T. Rockafellar and Roger J.-B. Wets, "Scenarios and Policy Aggregation in Optimization Under Uncertainty," Mathematics of Operations Research, Vol. 16 (1991), No. 1, p. 119-147. Other Convergent-scenario analysis techniques are described in John M. Mulvey and Andrzej Ruszczynski, "A New Scenario Decomposition Method for Large-Scale Stochastic Optimization," Operations Research, Vol. 43 (1995), No. 3, p. 477-490, and in some of the other prior-art references.
U.S. Pat. No. 5,148,365 issued to Dembo is another scenario-analysis method. Here, as with Simple-scenario analysis, future possible scenarios are created and the allocations for each are optimized. Afterwards, the scenario allocations and parameters, possibly together with other data and constraints, are combined into a single optimization problem, which is solved to obtain a final allocation. Though this method mitigates some of the problems with Simple-scenario analysis, the problems still remain. Most importantly, it does not fully consider how the resulting allocation performs against all individual scenarios. This, coupled with the disparity between objective functions used for optimization and actual objectives, results in allocations that are only fair, rather than nearly or truly optimal. Because this method sometimes uses a mechanism similar to the convergent mechanism of Convergent-scenario analysis, the previously discussed convergent mechanism problems can also occur here.
As a generalization, all types of stochastic programming (and scenario analysis is a form of stochastic programming) can have the following serious deficiencies when allocating resources. First, penalties can introduce distortions. Second, the process of forming tractable models can introduce other distortions. Third, existing techniques are frequently unable to handle discrete quantities. Fourth, constraints are not fully considered, with the result that some constraints are violated with unknown ramifications, while, conversely, other constraints are overly respected. Fifth, existing techniques usually presume a single local optimum, though multiple local optima can be especially likely. Sixth, existing techniques can require significant computer time to compute gradients and derivatives. Seventh, and perhaps most important, practitioners frequently do not use stochastic programming techniques, because shifting from deterministic techniques is too complex.
Theoretical finance, theoretical economics, financial engineering, and related disciplines share several methods for allocating and pricing resources in the presence of uncertainty. (Methods for valuing or pricing resources also allocate resources, since once a value or price is determined, it can be used for resource allocation internally within an organization and used to decide whether to buy or sell the resource on the open market.) These methods tend to use mathematical equations and calculus for optimization. A frequent problem, however, is that once complicating factors are introduced, the solution techniques no longer work, and either computer-simulation Detailed-calculation or stochastic-programming methods, with their associated problems, are required. A further problem is that such methods, in order to be mathematically tractable, frequently ignore VNM utility functions and work with unrealistic, infinitesimally small quantities and values.
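A standard textbook illustration of the calculus approach (the Kelly criterion, given here only as a generic example and not drawn from any reference above): maximizing the expected logarithm of wealth for a repeated bet yields a closed-form optimum, which fails as soon as complications such as discrete stake sizes are introduced:

```python
import math

def expected_log_growth(f, p, b):
    """E[ln wealth] per bet: with probability p the bet pays b per unit
    staked; otherwise the stake f (a fraction of wealth) is lost."""
    return p * math.log(1 + b * f) + (1 - p) * math.log(1 - f)

def kelly_fraction(p, b):
    """Closed form obtained by setting d/df E[ln wealth] = 0."""
    return p - (1 - p) / b

# With win probability 0.6 at even money (b = 1), calculus says to
# stake 20% of wealth each time.
f_star = kelly_fraction(0.6, 1.0)
```

Note that the closed form depends on being able to stake any real-valued, arbitrarily fine fraction of wealth; with discrete lot sizes or other complications, the calculus no longer yields an answer and the methods above, with their problems, must be used instead.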
In conclusion, though innumerable methods have been developed to determine how to allocate resources, they frequently are unable to cope with uncertainty. Attempts to include uncertainty frequently result in models that are too big to be solved, unsolvable using known techniques, or inaccurate. As a consequence, resource allocations of both organizations and individuals are not as good as they could be. It is therefore a fundamental object of the present invention to obviate or mitigate the above-mentioned deficiencies.
Accordingly, besides the objects and advantages of the present invention described elsewhere herein, several objects and advantages of the invention are to optimally, or near optimally, allocate resources in the presence of uncertainty. Specifically, by appropriately:
Additional objects and advantages will become apparent from a consideration of the ensuing description and drawings.
These objects and advantages, which will be rigorously defined hereinafter, are achieved by programming a computer as disclosed herein, inputting the required data, executing the computer program, and then implementing the resulting allocation. The programming steps are shown in the flowchart of FIG. 1.
Step 101 entails generating scenarios and optimizing scenario allocations. In Step 103, the optimized allocations are grouped into clusters. In Step 105, first-stage allocations are randomly assigned to scenario nodes and, by using an evaluation and exploration technique to be described, Guiding Beacon Scenarios (GBSs) are generated. Step 107 entails using the GBSs to identify the allocations within each cluster that perform best against the scenarios within the cluster. In Step 109, allocations that perform still better against the scenarios within each cluster are created, typically by considering two of the better allocations and then using line-search techniques. If there is more than one cluster, then in Step 113 the clusters are merged into larger clusters and processing returns to Step 109. Once only a single cluster remains and Step 109 is complete, the best allocation thus far obtained is taken as the final optimal allocation and is implemented in Step 115.
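The overall flow can be suggested by the following much-simplified, purely illustrative sketch for a one-dimensional allocation with quadratic scenario objectives. The Guiding Beacon Scenario machinery of Step 105 is omitted, and the clustering and line search are crude placeholders, so this is an illustration of the step sequence only, not the disclosed method itself:

```python
SCENARIOS = [2.0, 3.0, 9.0, 11.0]      # per-scenario ideal allocations (invented)

def score(x, scenarios):
    """Average performance of allocation x against a scenario group
    (lower is better)."""
    return sum((x - c) ** 2 for c in scenarios) / len(scenarios)

def line_search(a, b, scenarios, steps=50):
    """Step 109: probe allocations between two of the better ones."""
    pts = [a + (b - a) * i / steps for i in range(steps + 1)]
    return min(pts, key=lambda x: score(x, scenarios))

# Step 101: per-scenario optimal allocations (trivial here: x = c).
allocations = list(SCENARIOS)
# Step 103: group the optimized allocations into clusters (low/high half).
clusters = [SCENARIOS[:2], SCENARIOS[2:]]
# Steps 107 and 109 per cluster, then Step 113: merge clusters and repeat.
while True:
    improved = []
    for group in clusters:
        ranked = sorted(allocations, key=lambda x: score(x, group))
        improved.append(line_search(ranked[0], ranked[1], group))
    allocations = improved
    if len(clusters) == 1:
        break
    clusters = [sum(clusters, [])]      # Step 113: merge into one cluster
# Step 115: the best allocation obtained is taken as final.
final = min(allocations, key=lambda x: score(x, SCENARIOS))
```

In this toy instance the per-cluster line searches first yield the cluster centers 2.5 and 10.0, and the merged pass then settles on 6.25, the allocation performing best against all scenarios together.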