1. Technical Field of the Invention
The present invention relates to a method and equipment for weather image prediction whose objective is to predict weather phenomena, such as the amount of precipitation or the amount of clouds, by predicting local, short-term weather radar images from weather radar images obtained by a weather radar apparatus.
This application is based on the Japanese Patent Applications No. Hei 8-333763, No. Hei 9-056200, and No. Hei 9-112718 filed in Japan, the contents of which are incorporated herein by reference.
2. Description of the Related Art
With regard to conventional prediction methods employing weather radar images, cross-correlation methods are widely used, as described, for example, in Publication [1] Yoshio Asuma, Katsuhiro Kikuchi, and Hisashi Kon: "A Method for Estimating the Advection Velocity of Radar Echoes Using a Simple Weather Radar System", Geophysical Bulletin of Hokkaido University, Vol. 44, October, 1984, pp. 23-34, and Publication [2] Yoshio Asuma, Katsuhiro Kikuchi, and Hisashi Kon: "Experiments for a Very-Short-Range Prediction of Snowfall Using a Simple Weather Radar System", Geophysical Bulletin of Hokkaido University, Vol. 44, October, 1984, pp. 35-51.
This method employs two weather radar images measured an arbitrary time interval Δt apart. While one of the images is shifted relative to the other, the correlation value of the image gray levels is calculated, and the shift yielding the greatest correlation value is taken as the amount of movement of the precipitation field between the two frames. Using this amount of movement, a parallel translation is performed on the precipitation field within the most recent weather radar image, and the resulting image is the forecast image.
Concretely speaking, the cross-correlation coefficient is obtained from the two weather radar images R1 and R2, measured a time interval Δt apart, as illustrated in FIG. 27, using the equations below. Here, the gray level at lattice point (i, j) is denoted R1(i, j) and R2(i, j) for the two measured images R1 and R2, respectively, and the fields to be correlated are A and B, respectively. The shift between the two radar images when calculating the correlation value is (k, l). (In FIG. 27, the oblique lines indicate the fields to be correlated, and the bold arrow in the center shows the direction of movement of the echo pattern.)

σ(k, l) = Σ (R1(i, j) − R̄1)·(R2(i + k, j + l) − R̄2) / √[ Σ (R1(i, j) − R̄1)² · Σ (R2(i + k, j + l) − R̄2)² ]   (1)

R̄1 = (1/N) Σ R1(i, j)   (2)

R̄2 = (1/N) Σ R2(i + k, j + l)   (3)

where each sum runs over the N lattice points (i, j) of the field to be correlated, and R̄1 and R̄2 are the mean gray levels of the corresponding fields.
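The exhaustive search over shifts (k, l) described above can be sketched in Python as follows. This is an illustrative implementation only: the window geometry, the search range, and the function name `best_shift` are our assumptions, not part of the patent.

```python
import numpy as np

def best_shift(R1, R2, max_shift=5):
    """Search all shifts (k, l) in [-max_shift, max_shift] and return the
    shift that maximizes the cross-correlation coefficient between a
    central window of R1 (field A) and the correspondingly shifted
    window of R2 (field B).

    Illustrative sketch: window placement and search range are assumed,
    not specified by the source text.
    """
    h, w = R1.shape
    m = max_shift
    A = R1[m:h - m, m:w - m]            # field to be correlated in R1
    best = (-2.0, (0, 0))               # coefficient lies in [-1, 1]
    for k in range(-m, m + 1):
        for l in range(-m, m + 1):
            # window of R2 shifted by (k, l) relative to A
            B = R2[m + k:h - m + k, m + l:w - m + l]
            a = A - A.mean()
            b = B - B.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            if denom == 0:
                continue
            sigma = (a * b).sum() / denom
            if sigma > best[0]:
                best = (sigma, (k, l))
    return best[1]
```

When R2 is an exact translate of R1, the coefficient reaches 1.0 at the true shift, so the search recovers the echo-pattern displacement in lattice units.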
The cross-correlation values obtained through the above equations may be distributed as illustrated in FIG. 28, for example. At this point, an interpolation based on a second-order function is performed using the cross-correlation value σ_{K,L} at the lattice point (K, L) where the greatest cross-correlation value exists, together with the four cross-correlation values in its vicinity, σ_{−x}, σ_{+x}, σ_{−y}, σ_{+y}. The shift (k', l') from the lattice point (K, L) to the point (not necessarily a lattice point) where the interpolated cross-correlation value is greatest is then obtained through the following equations (FIG. 29; only the x component is illustrated):

k' = (σ_{−x} − σ_{+x}) / [2(σ_{−x} − 2σ_{K,L} + σ_{+x})]   (4)

l' = (σ_{−y} − σ_{+y}) / [2(σ_{−y} − 2σ_{K,L} + σ_{+y})]   (5)
According to the above, the cross-correlation value is greatest when the two weather radar images R1 and R2 are shifted by (K + k', L + l'). From this fact, the movement vector of the echo pattern can be obtained from equations (6) and (7) below. This movement vector gives the direction and speed of the movement of the precipitation field. Here, V_x and V_y indicate the x component and the y component of the amount of movement per unit time:

V_x = (K + k') / Δt   (6)

V_y = (L + l') / Δt   (7)
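The sub-lattice refinement and the resulting movement vector can be sketched as follows. This is a minimal illustration; the function names and the per-component scalar interface are our assumptions.

```python
def refine_peak(s_minus, s_peak, s_plus):
    """Vertex of the parabola through (-1, s_minus), (0, s_peak), (1, s_plus):
    the sub-lattice offset k' (or l') of the interpolated correlation maximum.
    Returns 0.0 when the three values are collinear (flat parabola)."""
    denom = 2.0 * (s_minus - 2.0 * s_peak + s_plus)
    return (s_minus - s_plus) / denom if denom != 0 else 0.0

def movement_vector(K, L, k_sub, l_sub, dt):
    """Movement vector (V_x, V_y) of the echo pattern: the total shift
    (K + k', L + l') divided by the measurement interval dt."""
    return (K + k_sub) / dt, (L + l_sub) / dt
```

A symmetric triple (equal neighbors) yields a zero offset, while an asymmetric triple pulls the peak toward the larger neighbor by at most half a lattice spacing.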
Next, by extrapolating the echo pattern within a weather radar image measured at a certain time, employing the movement vector obtained through equations (6) and (7), a radar image for a time after the measurement time is predicted.
Using the weather radar image I(i, j) as the input image, a forecast image J(i, j) for a time ΔT after the measurement time of I(i, j) is obtained from the calculated movement vector (V_x, V_y). The forecast image J(i, j) is defined as the image resulting from the parallel translation of the input image I(i, j) by the amount of movement S_x in the horizontal direction and S_y in the vertical direction:

S_x = ΔT · V_x   (8)

S_y = ΔT · V_y   (9)
However, the amount of movement is not restricted to integer values. If the fractional shift from the lattice point of the moved image is expressed as

p = S_x − ⌊S_x⌋, q = S_y − ⌊S_y⌋   (10)

then the forecast image J(i, j) is defined, by bilinear weighting of the four neighboring lattice points, as

J(i, j) = (1 − p)(1 − q)·I(i − ⌊S_x⌋, j − ⌊S_y⌋) + p(1 − q)·I(i − ⌊S_x⌋ − 1, j − ⌊S_y⌋) + (1 − p)q·I(i − ⌊S_x⌋, j − ⌊S_y⌋ − 1) + pq·I(i − ⌊S_x⌋ − 1, j − ⌊S_y⌋ − 1)   (11)
Lattice points of the forecast image J that have no corresponding point in the input image, that is, the blank space of the forecast image resulting from the parallel translation, are set to the value zero.
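The fractional-shift translation with zero filling can be sketched as below, for the case where both movement components are positive. This is an illustrative sketch; the dense double loop and the function name `forecast_image` are our assumptions.

```python
import math
import numpy as np

def forecast_image(I, Vx, Vy, dT):
    """Translate input image I by (S_x, S_y) = (dT*Vx, dT*Vy), weighting the
    four neighboring lattice points bilinearly for the fractional part of the
    shift. Lattice points with no corresponding source point remain zero."""
    Sx, Sy = dT * Vx, dT * Vy
    mx, my = math.floor(Sx), math.floor(Sy)
    p, q = Sx - mx, Sy - my                # shift from the lattice point
    h, w = I.shape
    J = np.zeros_like(I, dtype=float)
    for i in range(h):
        for j in range(w):
            # source coordinate is (i - Sx, j - Sy); interpolate bilinearly
            # between its four surrounding lattice points
            acc = 0.0
            for di, wi in ((0, 1 - p), (1, p)):
                for dj, wj in ((0, 1 - q), (1, q)):
                    si, sj = i - mx - di, j - my - dj
                    if 0 <= si < h and 0 <= sj < w:
                        acc += wi * wj * I[si, sj]
            J[i, j] = acc
    return J
```

For an integer shift the weights collapse to a pure parallel translation; for a half-lattice shift a unit impulse is spread equally over the four neighboring forecast lattice points.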
Further, the forecast image can be obtained in the same manner for cases other than V_x > 0 and V_y > 0.
At this point, the problems posed by the above described method will be made clear by a comparison with the characteristics of the echoes within an actual weather radar image.
FIGS. 30A-30B illustrate an example of a typical echo. As can be seen in FIGS. 30A-30B, the radar echo within the weather radar image has large and small echo cells as its fundamental elements, and when these echo cells form a group, one precipitation field is formed. Hereinafter, these echo cells will simply be referred to as echoes. In a weather radar image, precipitation fields possessing a plurality of different dynamics may exist. Although echoes may be generalized as moving along with the flow of the atmosphere, they constantly repeat deformation, appearance, and dissipation. In addition, in the case of FIGS. 30A-30B in particular, the echoes appear at a certain position, move while forming a band shape, and dissipate at a certain position. However, the movement of the precipitation field formed by the group of these echoes is extremely slow compared to the movement velocity of the echoes themselves.
However, this cross-correlation method calculates one or a few global movement vectors from the two frames of radar images, based on the correlation values of the gray levels over a wide area. Thus, the moving velocity of the echoes and the moving velocity of the precipitation field cannot be distinguished. Consequently, in the case where the calculated movement vector corresponds to the moving velocity of the echoes, there is a problem in that the forecast precipitation field is moved to a location widely different from the actual location.
Moreover, since the image gray level changes from one frame to another due to unstable factors such as the deformation, appearance, and dissipation of the echoes, there is a problem in that the moving components of the echoes and of the precipitation field cannot be stably and accurately calculated.
Furthermore, since only one or a few movement vectors are calculated with respect to the entire radar image, and the image simply undergoes a parallel translation, there is also a problem in that, in the case where precipitation fields possessing a plurality of different movements exist, it is not possible to treat each of the precipitation fields separately, thereby reducing the accuracy of the forecast.
In conclusion, the above described method cannot distinguish the moving velocity of the echoes from that of the precipitation field; it makes the stable and accurate calculation of those velocities very difficult owing to the influence of unstable factors such as the appearance, dissipation, and deformation of the echoes; and it cannot handle precipitation fields possessing a plurality of different movements. Owing to these various problems, the above method could not forecast a weather radar image with high accuracy.