1. Field of the Invention
The present invention relates to automated forms of data processing, more particularly to data reduction in correspondence with multiple-term parametric relationships.
2. Brief Description of Related Art
Heretofore, efforts to fit nonlinear parametric relationships to experimental data have relied heavily upon approximations that incorporate least-squares data reduction techniques, formulating independent equations by minimizing sums of parametrically represented squared residuals or squared deviations with respect to included fitting parameters. Equations thus formulated will not consistently compensate for significant errors in variables which are included in more than a single term of a parametrically represented approximating relationship. Even considering forms of "Discriminate Reduction Data Processing" (ref. L. Chandler, U.S. Pat. Nos. 5,652,713 & 5,619,432), and least-squares techniques which provide for evaluating over-determined systems of equations, as may be represented by minimum norm solutions of "Total Least Squares" analysis (ref. L. Scharf, Statistical Signal Processing, Addison-Wesley, New York, pp. 495-496, 1991), or by originally conceived forms of "Conformal Analysis" (ref. L. Chandler, ibid.), the prior art of establishing independent equations for parameter evaluation appears to offer no generally valid technique for providing inherently appropriate, analytically represented data function similitude as statistically represented by multiple nonlinear, generally non-orthogonal terms of approximating relationships. The terminology "least-squares techniques" herein refers to those techniques which are incorporated in processes that establish form for entire sets of independent equations by minimizing representations of sums of squared residuals or represented squared deviations, which may or may not be individually weighted.
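The least-squares procedure described above, in which independent equations are established by setting the derivatives of a sum of squared residuals with respect to each fitting parameter to zero, may be illustrated by a minimal sketch. The straight-line model, the uniform weights, and all names below are illustrative assumptions, not drawn from the cited works:

```python
# Illustrative sketch: for y approximated by a + b*x, setting the
# derivatives of S = sum(w_i * (y_i - a - b*x_i)^2) with respect to a
# and b to zero yields two independent "normal" equations, solved here
# in closed form.

def weighted_line_fit(x, y, w):
    """Solve the two normal equations for intercept a and slope b."""
    Sw = sum(w)
    Sx = sum(wi * xi for wi, xi in zip(w, x))
    Sy = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = Sw * Sxx - Sx * Sx
    a = (Sxx * Sy - Sx * Sxy) / det
    b = (Sw * Sxy - Sx * Sy) / det
    return a, b

x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]      # exact line y = 1 + 2x
w = [1.0, 1.0, 1.0, 1.0]      # uniform positive weights
a, b = weighted_line_fit(x, y, w)   # recovers a = 1, b = 2
```

Note that the weights w_i enter every equation of the set identically, a point returned to below.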
The art of generating weight factors, or weighting coefficients, for providing weighted sums to be considered or utilized in data reduction applications has been limited, by the applied arithmetic functions (which represent absolute values and squares of values), to the generating of positive values, and constant proportions thereof, to provide relative or proportionate weighting.
The heretofore art of generating weight factors, or weighting coefficients, for the application of least-squares techniques has likewise been limited to the generating of positive values to provide relative or proportionate weighting of squared residuals or represented squared deviations. Corresponding forms of weighting coefficients (as provided for the weighting of squared residuals or represented squared deviations) have been limited to a few known forms, which are characterized by representations that include the following:
1. positive weight factors representing the squares of uncertainties in dependent variable measurements (ref. W. Press, B. Flannery, S. Teukolsky, W. Vetterling, Numerical Recipes, Cambridge University Press, New York, pp. 504-515, 1986);
2. positive weight factors representing the weights of measurements of a dependent variable (ref. M. Hull, Encyclopedia of Science & Technology, McGraw-Hill, Vol. 9, pp. 648-649, 1987);
3. positive factors which establish weighting of squared deviations, including those representing squares of perpendicular distances between represented data points and approximating lines, as proposed in 1878 by Adcock and referred to by York (ref. D. York, Can. J. Phys., Vol. 44, pp. 1079-1086, 1966);
4. positive weight factors representing the product of components of variable weighting divided by the sum of said components of variable weighting (ref. D. York, ibid.), and generally considered to represent the inverse of an effective variance (ref. B. Reed, Am. J. Phys., Vol. 57, No. 7, pp. 642-646, 1989);
5. positive weight factors representing the inverse of estimated variances or squared deviations, as proposed in 1967 by Clutton-Brock (ref. M. Clutton-Brock, Technometrics, Vol. 9, No. 2, pp. 261-269, 1967);
6. positive weight factors which include estimated uncertainty in providing representations for sums of one-dimensional squared deviations representing distances between experimental data points and approximating lines (ref. F. Neri, G. Saitta, and S. Chiofalo, J. Phys. E: Sci. Instrum., Vol. 22, pp. 215-217, 1989);
7. certain characterized forms of coordinate related coefficients, and constant proportions thereof, for weighting of squared residuals in correspondence with represented datum coordinates, including positive weight factors comprising precision weight factor coordinate normalizing proportions and positive weight factors comprising transformation weight factor coordinate normalizing proportions (ref. L. Chandler, ibid.).
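The inverse effective-variance weighting of item 4 may be sketched as follows. This is a hedged illustration assuming a straight-line fit y ≈ a + b·x with per-point coordinate uncertainties sx and sy, each squared residual weighted by 1/(sy² + b²·sx²); because these weights depend on the slope b, the fit is iterated. The names and the simple fixed-point loop are illustrative, not taken from the cited works:

```python
# Illustrative sketch: squared residuals weighted by the inverse
# "effective variance" 1/(sy_i^2 + b^2 * sx_i^2), iterated because the
# weights themselves depend on the fitted slope b.

def effective_variance_fit(x, y, sx, sy, iterations=10):
    a, b = 0.0, 0.0                       # crude starting values
    for _ in range(iterations):
        w = [1.0 / (syi ** 2 + b ** 2 * sxi ** 2)
             for sxi, syi in zip(sx, sy)]
        Sw = sum(w)
        Sx = sum(wi * xi for wi, xi in zip(w, x))
        Sy = sum(wi * yi for wi, yi in zip(w, y))
        Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        det = Sw * Sxx - Sx * Sx
        a = (Sxx * Sy - Sx * Sxy) / det
        b = (Sw * Sxy - Sx * Sy) / det
    return a, b

x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]                  # exact line y = 1 + 2x
sx = [0.1] * 4                            # equal coordinate uncertainties
sy = [0.1] * 4
a, b = effective_variance_fit(x, y, sx, sy)   # recovers a = 1, b = 2
```

With uniform uncertainties the weights are uniform at every iteration and the exact parameters are recovered; with nonuniform uncertainties the iteration is required.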
Six formidable features of the least-squares approach to data reduction are:
1. Nonlinear applications of least-squares techniques are often based upon a false assumption that a minimum value for the sum of squared residuals or represented squared deviations will correspond to an appropriate data representation.
2. Least-squares techniques do not comprise means to represent the direct weighting of residuals, nor to include direct residual-component weighting in the form of either represented variable-related proportions or represented term-related proportions.
3. Least-squares techniques do not provide means for including representation for the residua of represented sums of parametrically expressed and biased residuals in the formulating of independent equations.
4. Least-squares techniques do not provide for the inclusion of residual weighting coefficients which are comprised of component normalizing proportions which correspondingly weight individual components of represented multi-dimensional residuals.
5. Implementing least-squares techniques with included weighting of squared residuals or represented squared deviations inherently requires and includes identical said weighting in the formulating of each and every independent equation in a represented set of independent equations.
6. Least-squares techniques do not provide for an optional selecting or a generating of more than a single form of weighting coefficient to be included in the formulating of a single set of independent equations.
Approximating relationships, as selected for least-squares data reduction, are generally assumed to be appropriate on the premises that:
1. They are established as representative of simulated error-free data of a form similar to the data that is to be analyzed.
2. They are closely representative of simulated error-affected data of a form similar to the data that is to be analyzed.
3. The selection will provide a sufficient number of independent equations to determine solution sets of fitting parameters which are assumed to be descriptive of represented data.
Discriminate reduction data processing, as introduced by earlier patent applications of the present inventor (ref. L. Chandler, ibid.), provides an advanced form of data processing which is valid for a wide variety of two-parameter applications, and for multiple-parameter applications which can be represented without error in the included independent variables. Implied squared residual weighting is therein represented by including transformation weight factor coordinate normalizing proportions, which are generated by a final stage discriminate rectifier that provides positive numerical correspondence to derivatives by means of absolute value rectification. Discriminate reduction data processing, as implemented for multiple-parameter applications, provides for the evaluating of approximating parameters which are generated by searching for minimum values of a represented sum of weighted squared deviations. This method of evaluating forces the representation of evaluated approximating parameters to correspond to encountered minimum values for said represented sum, without explicit regard to a true convergent solution of the represented equations. Discriminate reduction data processing thus provides inaccurate solutions for certain applications in which the provided weight factors are generated as functions of the represented fitting parameters.
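The closing observation, that a minimum of a weighted sum whose weight factors are themselves functions of the fitting parameters need not coincide with a solution of the corresponding independent equations, can be demonstrated numerically. The weight form w(b) = 1/(1 + b²), the single-parameter model y ≈ b·x, and the small data set below are purely illustrative assumptions, not the patent's method:

```python
# Illustrative demonstration: when the weight factor w(b) = 1/(1 + b^2)
# depends on the fitting parameter b, the b that minimizes the weighted
# sum S(b) = w(b) * sum((y_i - b*x_i)^2) differs from the b that solves
# the independent equation sum(x_i * (y_i - b*x_i)) = 0, which is
# obtained by treating the weights as constants.

x = [0.0, 1.0, 2.0]
y = [0.0, 2.2, 3.8]                        # slightly scattered about y = 2x

def weighted_sum(b):
    return sum((yi - b * xi) ** 2 for xi, yi in zip(x, y)) / (1.0 + b * b)

# Solution of the independent equation (weights treated as constant):
b_equation = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

# Direct grid search for the minimum of the weighted sum:
b_search = min((1.5 + 0.0001 * k for k in range(10001)), key=weighted_sum)

# b_equation and b_search disagree: the encountered minimum of the
# weighted sum is not a convergent solution of the equation set.
```

Here b_equation is 1.96, while the grid search settles near 1.966; the discrepancy grows with the scatter of the data and with the sensitivity of the weights to the parameters.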
Also, least-squares techniques as established by prior art do not provide for the inclusion of multi-term, variable-related coefficients, nor for forms of inverse deviation variation weighting, which would establish indistinguishability between fitting parameters that relate multiple variables which hypothetically have equivalent measured values, and fitting parameters that relate multiple representations of single variables whose measured values are considered to be correspondingly equivalent to said hypothetical equivalent measured values, as would be included by a corresponding fitting function form.