Field of the Invention
The present invention relates to a local positioning apparatus used in a local positioning system, particularly one suited to detecting the condition of a subject, such as an automobile or other motor vehicle, on a road while either stationary or moving. Detection is based on such local positioning information as the relative location, velocity, and attitude of the subject within a localized region, with reference to an image obtained by imaging the area in front of the subject. More specifically, the present invention relates to a local positioning apparatus for correctly detecting the position of a subject within a localized region without being affected by changes in the condition of the road surface, weather, time of day, fixed lighting, moving lighting, or other changes in the imaging condition.
A local positioning apparatus according to the prior art used in an automobile is shown in FIG. 25 and described below. As shown in FIG. 25, this conventional local positioning apparatus LPP comprises an edge extractor 1P, threshold generator 3P, contour extractor 5P, matching operator 9P, lane (marker) contour extractor 11P, region limiter 13P, current position detector 15P, curvature detector 17P, and yaw angle detector 19P.
The edge extractor 1P is connected to a digital imaging apparatus 100 (FIG. 1). The digital imaging apparatus 100 is mounted to the subject, which in this explanation of the prior art and in the specification of the present invention below is, by way of example only, an automobile AM (FIG. 4). The digital imaging apparatus 100 captures a perspective image Vi of the view to the front of the automobile AM in the direction of travel, and generates a digital image signal Si of the perspective image Vi. Included in the perspective image Vi are an image of the road, the lane markers Lm1 and Lm2 defining the boundaries (sides) of the lane Lm in which the automobile AM is currently travelling, and a lane marker Lm3 defining the far boundary of an adjacent lane (FIG. 5). The edge extractor 1P extracts the edge pixels of the lane markers Lm1, Lm2, and Lm3 from the digital image signal Si, and generates an extracted edge pixel signal Sx'. The extracted edge pixel signal Sx' contains only the edge pixels extracted by the edge extractor 1P, and thus represents an extracted edge image Vx'.
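The edge-extraction step can be sketched, for illustration only, as a simple gradient operator applied to a grayscale image: lane markers (bright paint on dark asphalt) produce gradient peaks. The function and variable names below are illustrative assumptions, not elements of the patent.

```python
# Minimal sketch of an edge-extraction step like that of the edge
# extractor 1P: a horizontal central-difference gradient over a
# grayscale image. All names here are hypothetical.

def extract_edges(image):
    """Return a same-sized map of horizontal gradient magnitudes.

    `image` is a list of rows of grayscale intensities (0-255).
    """
    h = len(image)
    w = len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(1, w - 1):
            # Central difference approximates the intensity gradient;
            # a bright lane marker on dark road produces a sharp peak.
            edges[y][x] = abs(image[y][x + 1] - image[y][x - 1]) // 2
    return edges

# A dark road surface (intensity 40) crossed by a bright marker (220).
row = [40, 40, 40, 220, 220, 40, 40]
edge_row = extract_edges([row])[0]
```

In a real apparatus the operator would be two-dimensional and noise-filtered; the one-dimensional version is kept only to show where the edge pixels of signal Sx' come from.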
Using a known method, the threshold generator 3P scans the extracted edge pixel signal Sx' to extract a line for each of the lanes Lm delineated by the lane markers Lm1, Lm2, and Lm3, and determines a threshold value Eth' for extracting pixels representing the contours of the lane markings from the extracted edge pixel signal Sx'. Using the supplied threshold value Eth', the contour extractor 5P then extracts contour pixels and generates an extracted contour signal Sc' representing the contour lines of the lane markings.
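The thresholding stage can be illustrated with a minimal sketch. The threshold-selection rule used here (a fixed fraction of the peak edge strength) is purely an assumption for illustration; the patent only states that a known method determines Eth'.

```python
# Hedged sketch of deriving a threshold Eth' and extracting contour
# pixels, as the threshold generator 3P and contour extractor 5P do.
# The fraction-of-peak rule is an illustrative assumption.

def pick_threshold(edges, fraction=0.5):
    """Derive a contour threshold as a fraction of the strongest edge."""
    peak = max(max(row) for row in edges)
    return peak * fraction

def extract_contours(edges, threshold):
    """Binary contour map: 1 where the edge magnitude clears the threshold."""
    return [[1 if e >= threshold else 0 for e in row] for row in edges]

edges = [[0, 5, 90, 90, 5, 0],
         [0, 4, 88, 92, 6, 0]]
eth = pick_threshold(edges)
contours = extract_contours(edges, eth)
```

Note that any such rule is only as good as the brightness statistics of the scene, which is exactly the weakness the later paragraphs describe.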
The matching operator 9P then determines the line segment or arc matching the contour lines contained in the extracted contour signal Sc', and generates matching data Sm' containing all of the matching line and arc segments.
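Matching a line segment to contour pixels can be sketched as a least-squares fit. Only the straight-line case is shown; the matching operator 9P also fits arcs, which this illustration omits. All names are hypothetical.

```python
# Illustrative sketch of the line-matching step: fitting x = a*y + b
# to contour pixels by least squares. Fitting x as a function of y
# suits the near-vertical lane contours in a forward-looking image.

def fit_line(points):
    """Least-squares fit x = a*y + b to contour points (x, y).

    Returns the slope a and intercept b.
    """
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    syy = sum(p[1] * p[1] for p in points)
    a = (n * sxy - sx * sy) / (n * syy - sy * sy)
    b = (sx - a * sy) / n
    return a, b

# Contour pixels lying exactly on the line x = 2*y + 10.
pts = [(10, 0), (12, 1), (14, 2), (16, 3)]
slope, intercept = fit_line(pts)
```

A practical matcher would first group contour pixels into candidate segments (for example with a Hough-style vote) before fitting; that grouping is assumed done here.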
The lane contour extractor 11P then compares the matching data Sm' with typical lane dimension characteristics stored in memory to extract the matching line elements meeting these dimensional criteria as the contour lines of the lane, and outputs the result as lane extraction signal Smc'.
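The dimensional screening performed by the lane contour extractor 11P can be sketched as follows. The specific criterion used here, a plausible range of lane widths in metres, is an assumption for illustration; the patent says only that typical lane dimension characteristics are stored in memory and compared.

```python
# Hedged sketch of dimension-based screening: keep only the pair of
# matched boundary lines whose separation looks like a lane width.
# The width range is an illustrative assumption.

def select_lane_pair(segments, min_width=2.5, max_width=4.0):
    """Return the first pair of segment positions a lane-width apart.

    `segments` holds lateral positions (metres) of candidate boundary
    lines in the road plane.
    """
    for i, left in enumerate(segments):
        for right in segments[i + 1:]:
            if min_width <= abs(right - left) <= max_width:
                return left, right
    return None

# Candidates at -5.1 m (a guardrail shadow) and 0.0 m / 3.5 m (markers).
pair = select_lane_pair([-5.1, 0.0, 3.5])
```

The shadow candidate is rejected because no pairing with it falls inside the plausible width range, which is the effect the patent attributes to signal Smc'.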
Based on this lane extraction signal Smc', the region limiter 13P defines a certain region around the extracted lane, and generates a region signal Sr' delimiting this lane region. This region signal Sr' is fed back to the edge extractor 1P, which thereafter limits the area within the perspective image Vi used for edge extraction to the region limits defined by the region limiter 13P.
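The feedback loop around the region limiter 13P amounts to masking the image to a band around the known lane line before the next edge-extraction pass. The band half-width below is an illustrative assumption.

```python
# Sketch of region limiting: build a binary mask that is 1 only
# within `half_band` pixels of the lane line's expected column in
# each row. Names and the band width are hypothetical.

def lane_region_mask(width, height, lane_x, half_band=3):
    """Binary mask confining edge extraction to a band around a lane line.

    `lane_x` maps a row index to the lane line's expected column.
    """
    mask = []
    for y in range(height):
        cx = lane_x(y)
        mask.append([1 if abs(x - cx) <= half_band else 0
                     for x in range(width)])
    return mask

# A vertical lane line at column 5 in a 10-pixel-wide, 4-row image.
mask = lane_region_mask(10, 4, lambda y: 5)
```

Limiting the search region in this way both speeds up edge extraction and suppresses spurious edges elsewhere in the frame.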
Using the lane extraction signal Smc' from the lane contour extractor 11P, the current position detector 15P detects the position of the automobile AM on the road, or more specifically in relationship to the lane being followed.
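What the current position detector 15P computes can be illustrated as a lateral offset from the lane centre, assuming the two boundary positions are already known in road-plane coordinates with the vehicle at x = 0. That coordinate convention and the function name are assumptions of this sketch.

```python
# Hedged sketch of position detection relative to the lane being
# followed: the vehicle's lateral offset from the lane centre line.

def lateral_offset(left_x, right_x):
    """Offset of the vehicle from the lane centre line (metres).

    Boundary positions are in road-plane coordinates with the
    vehicle at x = 0; positive means the vehicle sits right of centre.
    """
    centre = (left_x + right_x) / 2.0
    return -centre  # vehicle is at x = 0, so offset is minus the centre

# Lane boundaries 1.9 m to the left and 1.5 m to the right.
offset = lateral_offset(-1.9, 1.5)
```

Here the lane centre lies 0.2 m left of the vehicle axis, so the vehicle is 0.2 m right of centre.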
The curvature detector 17P detects the curvature of the lane being followed while the automobile AM is moving. The yaw angle detector 19P detects the angle of the automobile AM relative to the lane, i.e., whether the automobile AM is travelling parallel to the sides of the lane or is following a course which would result in the automobile AM leaving the current lane being followed.
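The two quantities reported by the curvature detector 17P and yaw angle detector 19P can be illustrated under the assumption, made only for this sketch, that the lane centre line has been fitted as x = a*y² + b*y + c in road-plane coordinates (y ahead, x lateral), with the fit itself taken as given.

```python
# Illustrative curvature and yaw computation from an assumed
# quadratic lane-centre fit x = a*y**2 + b*y + c.
import math

def curvature_at_origin(a, b):
    """Curvature of x = a*y^2 + b*y + c at y = 0: x'' / (1 + x'^2)^(3/2)."""
    return 2 * a / (1 + b * b) ** 1.5

def yaw_angle(b):
    """Angle (radians) between the vehicle axis and the lane at y = 0."""
    return math.atan(b)

# A straight lane (a = 0) drifting 0.1 m laterally per metre ahead:
# zero curvature, but a non-zero yaw angle that would eventually
# carry the vehicle out of the lane.
kappa = curvature_at_origin(0.0, 0.1)
yaw = yaw_angle(0.1)
```

The example shows why the patent treats curvature and yaw separately: a vehicle can be on a perfectly straight lane and still be on a lane-departing course.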
It should be noted that all of the processes described above are based on the perspective image Vi of the area to the front of the vehicle obtained by the digital imaging apparatus 100. It is obvious, however, that correct information about the lane dimensions cannot be obtained from the perspective image Vi because the perspective image is a simple two-dimensional representation of three-dimensional space. Specifically, shapes in the perspective image Vi change with distance from the digital imaging apparatus 100: an object at a distance from the digital imaging apparatus 100 is displayed smaller than the same object in close proximity to the digital imaging apparatus 100. In addition, the edges of a road or lane are indistinct in a perspective image Vi, making edge detection difficult.
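The foreshortening described above follows from the pinhole camera model: an object's apparent size on the sensor is inversely proportional to its distance from the camera. A minimal numerical sketch, with an illustrative focal length:

```python
# Pinhole-camera foreshortening: apparent size = f * real_size / distance.
# The 8 mm focal length is an illustrative assumption.

def image_width(real_width, distance, focal_length=0.008):
    """Apparent width (metres on the sensor) of an object of
    `real_width` metres at `distance` metres, for a pinhole camera."""
    return focal_length * real_width / distance

# The same 3.5 m lane width seen at 10 m and at 40 m ahead.
near = image_width(3.5, 10.0)
far = image_width(3.5, 40.0)
```

Quadrupling the distance shrinks the imaged lane width to a quarter, which is why a fixed pixel-domain criterion cannot represent a constant physical lane width across the perspective image Vi.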
Road conditions are also not constant, and this further complicates road edge detection. For example, recognizing the contour of a road or lane by means of edge detection is, in fact, impossible when the side of a road or a lane marker is hidden by vegetation, dirt, or gravel. Edge detection is also not practically possible when the lane markers are not recognizable, in part or in full, because of soiling, damage, or another cause.
The brightness of the road surface is also extremely variable, and is affected by such factors as the weather, whether the road is inside a tunnel, and whether it is day or night. Stationary lights installed in tunnels and beside the road illuminate the road surface only partially and locally, and spots of extreme brightness or darkness can result even on the same road surface. These conditions are further complicated at night by irregular changes in illumination resulting both from the headlights of the subject automobile AM and from the headlights of other vehicles. Accurately determining the edge detection threshold value Eth' is effectively impossible in environments subject to changes in driving conditions, time of day, and weather; and even when the driving conditions, time, and weather remain constant, dynamic changes in brightness in the perspective image Vi due to fixed or moving lighting make an accurate threshold unattainable.
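A small numerical illustration of why a single fixed threshold Eth' fails under changing brightness: the same physical marker boundary produces very different gradient magnitudes in daylight and inside a dark tunnel. The intensity values and threshold below are illustrative assumptions.

```python
# Why a fixed Eth' fails: the same road/marker boundary yields a
# strong edge in daylight but a weak one in a tunnel. All values
# here are illustrative.

def edge_strength(road_intensity, marker_intensity):
    """Gradient magnitude across a road/marker boundary."""
    return abs(marker_intensity - road_intensity)

day = edge_strength(90, 230)      # bright scene: strong edge
tunnel = edge_strength(12, 35)    # dim scene: weak edge
fixed_eth = 60                    # threshold tuned for daylight

day_detected = day >= fixed_eth
tunnel_detected = tunnel >= fixed_eth
```

A threshold tuned for the daylight contrast misses the tunnel marker entirely, which is the failure mode the preceding paragraph describes.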
In other words, it is not possible to obtain accurate dimensional information about the road and lane from an extracted contour signal Sc' that is based on such an inaccurate threshold value Eth'.
It is therefore clearly extremely dangerous to detect the local positioning of a vehicle relative to a road or lane based on such unreliable, inaccurate, and distorted dimensional information, and to detect the road curvature and vehicular yaw based on such erroneously determined positioning information.