A biometric system based on the recognition of the venous pattern on the palm or the back of the hand is generally a combination of the following modules and their operations.
U1. Hand Placement and Detection Unit: the purpose of which is to detect the presence of a hand when the user places his hand on it in a prescribed manner. The detection unit informs the computer of the presence of a hand and prompts it for further processing.
U2. Illuminating and Imaging Unit: The purpose of this unit is to illuminate the region of interest with uniformly diffused near-infrared light. The embodiment of the system is to be constructed such that it is not affected by the presence of ambient light.
U3. Feature Extraction Unit: This unit extracts the essential information of the vascular pattern from the image captured by the imaging device. During registration this pattern is stored in a storage unit (hard disk, smart card, etc.). During recognition, this pattern is compared to a stored pattern.
U4. Recognition Unit: This unit compares two patterns, one obtained from the user (the live pattern) and one from the storage database (the recorded pattern), and decides whether they belong to the same user.
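The interaction of the four modules can be sketched as a simple processing pipeline. The following is only an illustrative sketch; all function and class names are hypothetical placeholders, not part of any cited system, and the threshold value is an assumed example:

```python
# Hypothetical sketch of the four-module pipeline (U1-U4).
# All names and the threshold value are illustrative assumptions.

def authenticate(detector, imager, extractor, matcher, database, threshold=0.35):
    """Run one recognition attempt and return the accept/reject decision."""
    if not detector.hand_present():          # U1: hand placement and detection
        return False
    image = imager.capture_nir_image()       # U2: illuminate and capture under NIR
    live_pattern = extractor.extract(image)  # U3: extract the vascular pattern
    # U4: compare the live pattern against each recorded pattern
    for recorded_pattern in database:
        if matcher.distance(live_pattern, recorded_pattern) < threshold:
            return True
    return False
```

The sketch makes explicit that U1 gates the rest of the pipeline, and that U4 consumes the output of U3 twice (once at registration, once at recognition).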
With these modules as reference, we describe the prior art and its limitations in detail hereunder.
U1: Hand Placement Unit
The hand placement unit U1 should be able to restrict movement of the hand without causing any discomfort to the user. It should be designed so as to ensure that the user intuitively places his hand consistently in the same position and at the same angle. Prior art R2 describes a simple U-shaped hand docking unit which the user holds, while a camera captures the image from above. R3 and R4 use a circular hand-holding bar. R4 additionally uses a wrist pad to support the wrist. These constructions ensure that the wrist faces the camera, but they do not ensure that the user consistently places his hand in the same manner, because there is enough leeway for movement and rotation of the hand, as shown in the figures.
FIGS. 1a and 1b show that there is enough space for the hand to move in the direction of the arrow. If the holding bar is made shorter to remove this freedom of movement, the system becomes constricting and uncomfortable for a person with a larger hand. FIGS. 1c, 1d and 1e demonstrate the rotational freedom of the hand. A camera viewing from above would see the flat portion of the hand at different angles in the three cases.
The problem is not only one of image transformation but also of illumination, as the light reflected from the hand also changes in these situations. The portion of the hand that is farther from the illuminating unit appears darker, and the portion that is closer appears brighter. These variations cause distortions in the resulting vein pattern. The hand placement design in R5, R7 and R8 is such that the user spreads his hand when it is placed against the imaging device. It is observed in slim hands that, when the hand is held in this manner, the metacarpal bones at the back of the hand project out, as shown in FIG. 1g.
This causes the resulting vein pattern to be distorted as the regions between the metacarpal bones appear darker than the neighboring region when observed under infrared radiation.
U2: Illuminating and Imaging
The design of the illuminating and imaging unit is based on the fact that highly diffused near-infrared light is absorbed by de-oxidized hemoglobin in the veins. This makes the vascular network appear darker than the neighboring tissue. The camera is selected and modified such that the spectral response of the imaging unit has its peak in this band. Prior art R2 specifies that, after considerable experimentation with a variety of light sources including high-intensity tungsten lights, it was found necessary to irradiate the back of the hand using an IR cold source (LEDs). All of the prior art agrees that the radiation has to be diffused and uniform over the surface of the hand. R8 mentions that room light can be used as an infrared source, but in an indoor environment at night there is not sufficient IR to be captured by a CCD camera. A patent cited in R8, US2006/0122515, describes a mechanism for obtaining diffused infrared light for imaging vein patterns. The system employs a setup of multiple reflectors, diffusers and polarizers, which makes the system bulky and hence non-portable. A low-cost diffuser reduces the intensity of light, making the image dull, while increasing the intensity requires an array of LEDs, which increases the cost of the device. Light-correcting diffusers give better results but are expensive.
FIG. 2a shows the image of the back of the hand under an array of infrared LEDs and a diffuser. The portion inside the circle appears fully white because the light from this portion exceeds the upper cut-off of the camera, due to specular reflection from the surface of the skin. In the language of signal processing, this phenomenon is called clipping.
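Clipped regions of this kind can be flagged by thresholding against the sensor's upper cut-off. A minimal NumPy sketch follows; the cut-off value of 250 for an 8-bit image is an illustrative assumption, not a value taken from the prior art:

```python
import numpy as np

def clipped_fraction(image, cutoff=250):
    """Return the fraction of pixels at or above the camera's upper cut-off.

    Pixels driven to the cut-off by specular reflection carry no vein
    information; a high fraction signals an unusable capture. The cut-off
    of 250 (for an 8-bit image) is an illustrative assumption.
    """
    image = np.asarray(image)
    return float(np.count_nonzero(image >= cutoff)) / image.size
```

Such a measure could, for example, be used to reject a capture automatically before feature extraction is attempted.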
R5 and R7 describe an iterative feedback mechanism based on readjusting the brightness of the LEDs according to the image observed by the computer. As specified in patent R7, this step has to be repeated several times until an acceptable level of illumination uniformity is obtained. Such an iterative approach is time-consuming, and in a real-life situation such delays lead to user annoyance.
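The iterative scheme criticized here can be caricatured as a feedback loop. The sketch below is an assumption-laden illustration, not the actual R5/R7 algorithm; `capture` and `set_led_brightness` are hypothetical hardware hooks, and the target band and step size are invented values:

```python
def adjust_illumination(capture, set_led_brightness, target_mean=128,
                        tolerance=8, max_iterations=10, step=0.1):
    """Nudge LED brightness until the mean image intensity is acceptable.

    Caricature of the feedback schemes of R5/R7. `capture` returns a 2-D
    list of intensities; `set_led_brightness` applies a relative adjustment.
    Both are hypothetical hooks. Returns the number of iterations used,
    which illustrates the delay criticized in the text.
    """
    for iteration in range(1, max_iterations + 1):
        image = capture()
        mean = sum(map(sum, image)) / (len(image) * len(image[0]))
        if abs(mean - target_mean) <= tolerance:
            return iteration
        # Raise brightness if the image is too dark, lower it if too bright.
        set_led_brightness(step if mean < target_mean else -step)
    return max_iterations
```

Even in this toy form, several capture-adjust cycles are needed whenever the starting brightness is far from the target, which is exactly the source of the delay noted above.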
U3: Feature Extraction
Almost all the prior work is based on converting the image of the vein pattern into a binary image and thinning it into a single-pixel-wide binary image. The problem with thinning is that vital information about the width of the veins is lost. A further drawback of a method based solely on the binarized image is that the directionality and continuity of the vein pattern are not fully exploited. A better approach would be to represent the vein pattern with geometric primitives such as points, line segments or splines. R4 specifies a method in which the branching characteristics of the vascular network are used for comparison, but it is highly probable for vascular patterns of completely different shape to have the same branching characteristics. FIGS. 3a and 3b illustrate two different patterns that have the same branching characteristics.
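The ambiguity can be made concrete by counting branch points on a thinned skeleton, one common way of summarizing branching characteristics (the exact features used by R4 are not specified here, so this is a hedged illustration, not R4's method):

```python
import numpy as np

def branch_point_count(skeleton):
    """Count branch points in a one-pixel-wide binary skeleton.

    A skeleton pixel with three or more 8-connected neighbours is taken
    as a branch point. Two vascular patterns of completely different
    shape can yield identical counts, which is the weakness noted for R4.
    """
    skeleton = np.asarray(skeleton, dtype=bool)
    padded = np.pad(skeleton, 1)  # zero border so edges need no special case
    count = 0
    for r in range(1, padded.shape[0] - 1):
        for c in range(1, padded.shape[1] - 1):
            if padded[r, c]:
                # Sum the 3x3 window and subtract the centre pixel itself.
                neighbours = padded[r - 1:r + 2, c - 1:c + 2].sum() - 1
                if neighbours >= 3:
                    count += 1
    return count
```

A "Y"-shaped skeleton and an "X"-shaped skeleton, for instance, each contain exactly one branch point, yet the patterns are clearly different, mirroring the situation in FIGS. 3a and 3b.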
R6 employs a method in which the thinned vein pattern is represented in the form of line segments. The problem with such a method is that there can be alternate representations of the same pattern in terms of line segments. For example, the pattern in FIG. 4a admits the multiple interpretations shown in FIGS. 4b and 4c.
U4: Recognition
In order to compare vein patterns, a distance function varying between 0 and 1 is defined. A distance function has the fundamental property that it is monotonic; in other words, F(y,z) < F(y,x) if y has more resemblance to z than to x.
In R5, this distance function is defined as the number of matching vein pixels between the live image and the recorded image. In R7, it is defined as (number of matching pixels)/(number of pixels in the original image). In the examples shown below, pattern y matches one arm of pattern x, and pattern z is a copy of pattern y. From observation we would expect the distance between y and z to be much smaller than the distance between y and x.
It can be observed that the number of common pixels between the pair (x, y) is the same as the number of common pixels between the pair (y, z); hence, according to the distance defined in R5, we would obtain F(y,z) = F(y,x), which is incorrect.
Using the method specified in R7:

F(y, x) = (no. of common pixels in (x, y)) / (no. of pixels in y)

F(y, z) = (no. of common pixels in (y, z)) / (no. of pixels in y)
This would also result in F(y,z) = F(y,x), which is again incorrect.
These examples clearly suggest that a better definition for the distance measurement is needed.
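The failure of both measures can be reproduced with pixel coordinate sets. The patterns below are a contrived stand-in for the figures (the actual pixel coordinates are invented for illustration): x has two arms, y is one arm of x, and z is an exact copy of y.

```python
def r5_score(a, b):
    """R5-style measure: raw count of matching vein pixels."""
    return len(a & b)

def r7_score(a, b):
    """R7-style measure: matching pixels normalised by pixels in the probe."""
    return len(a & b) / len(a)

# Contrived stand-in for the figures. Pixels are (row, col) coordinates.
arm = {(i, i) for i in range(10)}            # one arm, 10 pixels
other_arm = {(i, -i) for i in range(1, 10)}  # second arm of x
x = arm | other_arm   # two-armed pattern
y = set(arm)          # one arm of x
z = set(arm)          # exact copy of y

# Both measures rate (y, x) and (y, z) identically, although z is a
# perfect copy of y while x is not.
assert r5_score(y, x) == r5_score(y, z)
assert r7_score(y, x) == r7_score(y, z)
```

Because every pixel of y also lies in x, both measures saturate and cannot distinguish a partial overlap from a perfect copy.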
R6 employs a method in which the thinned vein pattern is represented in the form of line segments, and the line segment Hausdorff distance (LSHD) is used for comparison. The problem with simple LSHD is that the line-segment representation can be completely different for similar-looking vein patterns. For example, in FIG. 4, the LSHD between the two representations of the same pattern would give a high value, as there is no segment in FIG. 4c corresponding to segment 2 in FIG. 4b.
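LSHD generalizes the classical Hausdorff distance from points to line segments. To fix ideas, the point-set version can be sketched as follows; this is a simplification for illustration only, not R6's exact measure:

```python
from math import dist

def hausdorff(a, b):
    """Classical Hausdorff distance between two finite 2-D point sets.

    LSHD in R6 generalises this idea from points to line segments; this
    simplified point version only fixes ideas. If one set contains a point
    far from every point of the other, the distance is large even when the
    underlying patterns are similar - the sensitivity to the choice of
    segment representation criticized in the text.
    """
    def directed(p, q):
        # Largest distance from any point of p to its nearest point of q.
        return max(min(dist(u, v) for v in q) for u in p)
    return max(directed(a, b), directed(b, a))
```

The max-of-min structure is what makes the measure brittle: a single unmatched element, such as segment 2 of FIG. 4b having no counterpart in FIG. 4c, dominates the whole distance.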