Bad-weather-affected sequences annoy the human viewer and degrade the perceptual image quality. Challenging weather conditions also degrade the performance of various computer vision methods that use feature information, such as object detection, tracking, segmentation and recognition. It is therefore very difficult to make these computer vision methods robust to weather changes. Based on the type of visual effect, bad weather conditions are classified into two categories: steady (viz. fog, mist and haze) and dynamic (viz. rain, snow and hail). In steady bad weather, the constituent droplets are very small (1-10 μm) and float steadily in the air; individual detection of these droplets by a camera is very difficult. In dynamic bad weather, the constituent droplets are about 1000 times larger than those of steady weather and, owing to this large size, are visible to the video-capturing camera.
Rain is the major component of dynamic bad weather. Raindrops are randomly distributed in 3D space. Due to the high velocity of the raindrops, their 3D-to-2D projection forms rain streaks.
It is known in the art that the rain effect not only degrades the perceptual video image quality but also degrades the performance of various computer vision algorithms that use feature information, such as object detection, tracking, segmentation and recognition. Thus there has been a need for the removal of rain to enhance the performance of these vision algorithms.
There is a substantial body of research on this subject prior to the present invention. An earlier technique removes rain effects by adjusting the camera parameters, in which the exposure time is increased or the depth of field is reduced. This earlier technique is not effective in scenes with heavy rain or with fast-moving objects that are close to the camera.
In the past few years many methods have been proposed for the removal of rain. These methods require a certain number of consecutive frames to estimate the rain-affected pixels. For removing rain during acquisition, Garg and Nayar [K. Garg and S. K. Nayar, When does a camera see rain?, IEEE International Conference on Computer Vision, 2:1067-1074, 2005] proposed a method of adjusting the camera parameters, in which the exposure time is increased or the depth of field is reduced. However, this method fails to handle heavy rain and fast-moving objects that are close to the camera.
Garg and Nayar [K. Garg and S. K. Nayar, Vision and Rain, International Journal of Computer Vision, 75(1):3-27, 2007 & Detection and removal of rain from videos, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1:528-535, 2004] assumed that a raindrop affects only a single frame and that very few raindrops affect two consecutive frames. Hence, if a raindrop covers a pixel, the intensity change due to rain equals the intensity difference between that pixel in the current frame and in the previous or subsequent frame. This produces a lot of false detections. To reject the false rain pixels it is assumed that raindrops follow linear photometric constraints. In heavy rain, however, raindrops can affect the same position in two or more consecutive frames. The photometric model assumes that raindrops have almost the same size and fall at the same velocity. It is also assumed that pixels lying on the same rain streak have the same irradiance, because the brightness of the drop is only weakly affected by the background. It is found that variation in the size and velocity of raindrops violates the assumptions of the photometric model. The method fails to discriminate between rain pixels and moving-object pixels when the rain becomes heavier or lighter within the video, or when the rain is distributed over a wide range of depths; not all rain streaks then follow the photometric constraints, which results in many missed detections. This method requires 30 consecutive frames for the removal of rain.
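The single-frame temporal test described above can be sketched in a few lines. The following Python fragment is a minimal illustration, not the patented or published implementation; the threshold `c` and the equality check between the two frame differences are stand-ins for the detection parameters of the original method.

```python
import numpy as np

def candidate_rain_pixels(prev, curr, nxt, c=3):
    """Sketch of the single-frame temporal test: a pixel is a rain
    candidate if its intensity in the current frame exceeds both the
    previous and next frames by at least c (a raindrop brightens a
    pixel for only one frame), with equal differences on both sides
    since the background is assumed unchanged."""
    diff_prev = curr.astype(np.int32) - prev.astype(np.int32)
    diff_next = curr.astype(np.int32) - nxt.astype(np.int32)
    return (diff_prev >= c) & (diff_next >= c) & (diff_prev == diff_next)

# toy example: one "rain-brightened" pixel in the middle frame
prev = np.zeros((3, 3), dtype=np.uint8)
nxt = np.zeros((3, 3), dtype=np.uint8)
curr = prev.copy()
curr[1, 1] = 40
mask = candidate_rain_pixels(prev, curr, nxt)
```

In a real sequence this test fires on many non-rain changes as well, which is why the photometric constraints discussed above are needed to prune the candidate set.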
Zhang et al [Xiaopeng Zhang, Hao Li, Yingyi Qi, Wee Kheng Leow, and Teck Khim Ng, Rain removal in video by combining temporal and chromatic properties, IEEE International Conference on Multimedia and Expo, 2006] proposed a method based on chromatic and temporal properties. The chromatic property states that the changes of intensity in the R, G and B color components due to raindrops are approximately the same; in practice, these variations across the color components are bounded by a small threshold. The temporal property states that a particular pixel position is not covered by raindrops in all frames. It is found that slow-moving objects also satisfy this chromatic property. The method uses k-means clustering to estimate the non-rain-affected pixel value with which to inpaint the rain-affected pixels. This clustering is effective only in removing rain from a static background with no moving objects. The method uses all the frames available in a sequence for the removal of rain.
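The chromatic test and the temporal inpainting step can be illustrated as follows. This is a simplified sketch: the thresholds `tau` and `c` are hypothetical, and a per-pixel temporal median is used as a stand-in for the k-means estimate of the non-rain value used in the original method.

```python
import numpy as np

def chromatic_rain_mask(prev, curr, tau=10, c=3):
    """Sketch of the chromatic property: a pixel is a rain candidate
    if R, G and B all brighten, and the increases in the three
    channels are approximately equal (within tau)."""
    d = curr.astype(np.int32) - prev.astype(np.int32)  # per-channel change
    dr, dg, db = d[..., 0], d[..., 1], d[..., 2]
    brightened = (dr > c) & (dg > c) & (db > c)
    similar = (np.abs(dr - dg) < tau) & (np.abs(dg - db) < tau)
    return brightened & similar

def inpaint_temporal(frames, mask, t):
    """Replace masked pixels in frame t with the per-pixel temporal
    median over the sequence (stand-in for the clustering-based
    estimate of the non-rain value)."""
    out = frames[t].copy()
    median = np.median(frames, axis=0).astype(frames.dtype)
    out[mask] = median[mask]
    return out
```

The temporal-median stand-in makes the limitation noted above concrete: it is valid only when the background at each pixel is static for most of the sequence, so moving objects corrupt the estimate.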
Barnum et al [Peter Barnum, Takeo Kanade, and Srinivasa G. Narasimhan, Spatio-temporal frequency analysis for removing rain and snow from videos, Workshop on Photometric Analysis For Computer Vision (PACV), in conjunction with ICCV, 2007 & P. Barnum, S. G. Narasimhan, and T. Kanade, Analysis of Rain and Snow in Frequency Space, International Journal of Computer Vision (IJCV), 2009] proposed a method for the detection and removal of rain streaks using the frequency information of each frame. A blurred Gaussian model is used to approximate the blurring produced by the raindrops. This model is suitable when the rain streaks are prominent, but it fails to detect a rain streak that is not sharp enough.
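The idea of modelling a streak as an elongated blurred Gaussian and analysing it in frequency space can be sketched as below. The kernel size and the sigma values are illustrative assumptions, not parameters from the cited work.

```python
import numpy as np

def streak_model(h=32, w=32, sigma_x=1.0, sigma_y=6.0):
    """Illustrative stand-in for the blurred-Gaussian streak model:
    an elongated Gaussian, narrow across the streak (sigma_x) and
    long along the fall direction (sigma_y), approximating the blur
    pattern left by one falling drop."""
    y, x = np.mgrid[-h // 2:h // 2, -w // 2:w // 2]
    return np.exp(-(x**2 / (2 * sigma_x**2) + y**2 / (2 * sigma_y**2)))

# The Fourier magnitude of the streak concentrates its energy
# perpendicular to the streak orientation, which is what lets the
# method localize rain energy in each frame's spectrum.
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(streak_model())))
```

When the streak is defocused or dim, the spatial profile flattens and this spectral signature weakens, which is one way to see why the model misses streaks that are not sharp enough.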
Liu et al [Peng Liu, Jing Xu, Jiafeng Liu, and Xianglong Tang, Pixel Based Temporal Analysis Using Chromatic Property for Removing Rain from Videos, Computer and Information Science, 2(1):53-50, 2009] proposed a method for the removal of rain using chromatic properties in rain-affected videos. It fails to detect all possible rain streaks; the likely reason is that the chromatic property is not satisfied in practice, as described in the previous discussion. This method requires at least three consecutive frames for the removal of rain.
U.S. Pat. No. 4,768,513 provides a method and device for measuring and processing light whereby laser light is irradiated onto positions of an organism which has been injected with a fluorescent substance having a strong affinity for tumors, the fluorescence and the reflected light produced by this irradiation are detected, and the detected intensity of the fluorescence is calculated and analyzed by means of the intensity of the reflected light.
The purpose of that invention is to provide a device and method for measuring and processing light which goes far toward eliminating the uncertain factors that interfere with quantification of the excited fluorescence and that are caused, for example, by power fluctuations of the exciting laser light or by fluctuations of the relative positions of the irradiating and detecting fibers and the organism's tissues.
In order to achieve the aforementioned purpose, the method and device according to said prior art comprise a method and device for measuring and processing light in which laser light for producing fluorescence is irradiated onto predetermined positions of an organism which has previously been injected with a fluorescent substance having a strong affinity for tumors, and the intensity of the fluorescence thus produced is detected. The device consists of a light-irradiating device which irradiates the organism with the aforementioned laser light, a light-detecting device which detects and outputs the fluorescence produced by the organism upon excitation by the aforementioned laser light as well as the aforementioned laser light reflected from the organism, and an analyzer unit into which the output signals of this light-detecting device are input and in which the intensity of the aforementioned fluorescence is calculated and analyzed in terms of the intensity of the reflected light. The method calculates and analyzes the intensity of the detected fluorescence based on the intensity of the detected reflected light.
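The normalization step that this analysis performs, expressing the fluorescence intensity in terms of the reflected-light intensity, can be illustrated with a one-line sketch. The simple ratio form below is an assumption for illustration only, not the patent's exact calculation.

```python
def normalized_fluorescence(fluorescence, reflected):
    """Illustrative normalization: dividing the detected fluorescence
    intensity by the detected reflected-laser intensity cancels
    factors common to both signals, such as source-power
    fluctuations and fiber-position changes."""
    return fluorescence / reflected
```

Because a power fluctuation scales both the fluorescence and the reflection by the same factor, the ratio is insensitive to it, which is the uncertainty the prior art seeks to eliminate.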
U.S. Pat. No. 4,773,097 provides an image analyzing apparatus in which television information signals are supplied concurrently to a display device for reproduction and to a converter network which converts the analogue television information signals into digital signals. The digital signals are then stored in the memory of a computer. To compare the stored signals with the developed television signals, means are provided for retrieving the computer-stored digital words, converting the signals into analogue signals and supplying the converted signals and the developed signals simultaneously to a display device. To correct or modify any portion of the reproduction of the converted signals in relation to the reproduction of the developed signals, a correction circuit is provided for altering the digital bits corresponding to the desired portion of the reproduction.
U.S. Pat. No. 3,758,211 provides an atmospheric visibility measuring apparatus comprising a light projection means for projecting a beam of light into the atmosphere along a prescribed beam path, an optical detection means arranged to respond to light scattered by particles in the atmosphere from within another beam path surrounding an optical axis of the detector, and control apparatus for turning the light beam and the optical axis of the detection means in unison about a horizontal axis which extends substantially from the projection means to the detection means. The light projection means and the optical detection means are relatively mounted so that the optical axis of the detection means always intersects the light beam at a constant angle and at a constant range from the detection means. The control apparatus may comprise a rotatable horizontal shaft supporting the light projection means and the optical detection means. Alternatively, a fixed light projector and detector may be arranged to co-operate with two mirrors provided on a rotatable horizontal shaft, the mirrors being arranged to direct the light beam into the prescribed beam path and to reflect the scattered light onto the detector. The projection means and the detection means, or just the mirrors which form a part thereof, may be mounted separately and maintained in relative alignment by a follow-up servo system.
According to the said prior art there is provided apparatus for measuring the visibility conditions of the atmosphere including projection means for projecting a beam of light along a first beam path, detection means responsive to light incident on it from within a second beam path, the projection means and the detection means being relatively mounted so that the first and second beam paths will intersect at a predetermined angle and so that the detection means will receive light scattered from the part of the beam where the two beam paths intersect and which is at a predetermined constant range from the detection means, and including control means for rotating the said two beam paths in unison.
The art suggests possible involvement of two mirrors, mounted on opposite ends of a horizontal rotatable shaft at an acute angle to the axis of the shaft, projection means for projecting a beam of light via one mirror, and detection means for detecting scattered light via the other mirror. The projection means may comprise a lamp also mounted on the shaft.
This apparatus is for measuring the visibility conditions of the atmosphere comprising projection means for projecting a beam of light along a first beam path.
U.S. Pat. No. 7,660,517 provides systems and methods for reducing rain effects in images. These are applicable to both still cameras and video cameras, and to both film and digital cameras. In general, they are applicable to any camera system in which camera settings can be adjusted before or while images are being acquired.
It provides an analytical model for the effects of dynamic weather on acquired images, based on the intensity fluctuations caused by such weather. It also provides a method of adjusting camera settings to reduce the visibility of rain with minimal degradation of the acquired image. This method uses one or more inputs from a user to retrieve settings for an image acquisition device from a data repository; these settings are then used to adjust the corresponding camera settings. The inputs from a user can be, at least, the heaviness of the rainfall, the motion of objects in the scene, the distance of an object to be acquired from the camera, or the near and far distances of the scene. Camera settings that can be adjusted include, at least, the exposure time, the F-number, the focal plane, and the zoom. Although post-processing is preferably not required to reduce the visibility of dynamic weather such as rain when this invention is implemented, post-processing may still be applied if the camera settings are ineffective, would cause too much image degradation, or to further improve the acquired image. Additionally, automatic detection of certain scene features, such as the heaviness of rainfall, can be performed to partially or totally replace the user inputs; with automatic detection of scene features, the entire process of adjusting camera settings can be automated.
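The repository-lookup step described above can be sketched as a simple table keyed on the user inputs. The table below is entirely hypothetical: the keys, exposure times and F-numbers are illustrative placeholders, not values from the patent.

```python
# Hypothetical settings repository: (rain heaviness, scene motion)
# mapped to (exposure time in seconds, F-number). All values are
# illustrative only.
SETTINGS = {
    ("light", "static"): (1 / 15, 8.0),   # longer exposure averages streaks out
    ("light", "moving"): (1 / 30, 5.6),
    ("heavy", "static"): (1 / 8, 11.0),
    ("heavy", "moving"): (1 / 30, 2.8),   # shallow depth of field defocuses drops
}

def retrieve_settings(heaviness, motion):
    """Mimic the repository lookup: map user-described scene
    conditions to camera settings to be applied before acquisition."""
    return SETTINGS[(heaviness, motion)]
```

The design point is that the trade-off is resolved per scene: a long exposure removes streaks but blurs motion, so scenes with moving objects fall back on a wide aperture instead.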
A rain gauge may also be provided in accordance with that invention. Camera settings may be adjusted to enhance the visibility of rain, and the acquired images are then analyzed to determine the number and size of raindrops, which can be used to compute the rain rate. This method of measuring rain rate is advantageous in that it provides finer measurements, is inexpensive, and is more portable than other types of rain-rate measurement devices.
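The final step, converting detected drop counts and sizes into a rain rate, can be sketched as follows. The spherical-drop assumption and the unit bookkeeping are standard, but the function is an illustration of the idea rather than the patented computation.

```python
import math

def rain_rate_mm_per_hr(diameters_mm, area_m2, duration_s):
    """Illustrative rain-rate estimate: sum the volumes of the
    detected drops (assumed spherical), divide by the observed
    ground area to get a water depth, and scale to mm per hour."""
    vol_mm3 = sum(math.pi / 6.0 * d**3 for d in diameters_mm)
    area_mm2 = area_m2 * 1e6          # m^2 -> mm^2
    depth_mm = vol_mm3 / area_mm2     # accumulated depth over the interval
    return depth_mm * 3600.0 / duration_s
```

Because the camera resolves individual drops, this estimate updates on a per-frame timescale, which is the "finer measurement" advantage claimed over conventional tipping-bucket gauges.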
In this approach the exposure time is increased or the depth of field is reduced; however, it fails to handle heavy rain and fast-moving objects that are close to the camera.
It would be clearly apparent from the above state of the art that the presently available systems suffer from inherent limitations, such as assuming the shape and size of the raindrops and working on all three color components, which add to the complexity and execution time. There are further known problems of large buffer size and delay and, more importantly, of real-time implementation of the algorithm.