Jensen (JC95) proposed differential checking as a method of selecting an optimal bandwidth. Given an initial estimate, the search radius is expanded incrementally by adding progressively more neighbouring photons. If the irradiance differential between successive estimates exceeds a given threshold, the search is halted. This approach maximises the search radius, and hence minimises noise, while at the same time avoiding penalties from bias.
Schregle (Sch03) proposed a bias compensation operator which uses a binary search to home in on the optimal bandwidth. An estimate of the expected value of the combined bias and variance for the current bandwidth is calculated, and the variance contribution is distinguished from the bias using the knowledge that it follows a Gaussian distribution. This bias estimate can govern the binary search, and thus the optimum bandwidth can be discovered. The method is effective, adapting the bandwidth across an image in order to preserve both peak irradiance and sharp edges in irradiance, while at the same time allowing higher bandwidths, and therefore reduced variance, in smoothly varying regions.
Hey et al. (HP02) approach the problems of boundary and topological bias by using the underlying model geometry. During radiance estimation, polygons in the nearest neighbourhood to the sample position are determined, their area is calculated and photon power is distributed according to their incoming direction. Although very effective, Hey's method is restrictive in that it extensively refers to the polygonal mesh which defines each surface.
Spencer et al (SJ08) use a hierarchical data structure to cluster photon flux data, thus allowing estimate areas of an arbitrary size at a near-constant query cost. Relative photon density per unit area is adjusted by controlling the depth to which the balanced Kd-tree is traversed. A useful side-effect of this approach is that of better photon stratification at shallower cut-off depths. The result is that noise is rapidly reduced as the kernel radius is increased.
The problem of optimal sampling within a given domain is of great significance in many areas of computer graphics. Monte Carlo integration, dithering, importance sampling and a host of other techniques all benefit from well-contrived sample distributions. Integrators which rely upon purely random functions to generate samples are known to converge more slowly than those which employ stratified or quasi-random methods. The quality of a sample set is sometimes measured by its discrepancy (Shi91), a measure of the equidistribution of its points. Another useful indicator of distribution quality is the radially averaged power spectrum: point sets with a blue noise signature and low angular anisotropy are most suitable for convolution operations such as anti-aliasing (see (DW85, Coo86)).
Optimal distributions are highly favourable in photon mapping since low photon discrepancy intrinsically means less noise in the radiance estimate. Jensen (JC95) proposed stratifying the photons as they are emitted. This can be accomplished by jittering, n-rooks, Poisson-disk and other sample schemes. Alternatively, a quasi-random sequence [KIAK95] produces well-ordered sets with the added property of good stratification across multiple dimensions.
The concept of diffusing a poorly-distributed point set into a more optimal configuration has been successfully adapted to solve many problems. McCool and Fiume (MF92) used Lloyd's method (Llo83) to generate correctly-spaced Poisson-disk samples. Ostromoukhov et al (ODJ04) also employed Lloyd's method to improve the spatial distribution of samples generated using a hierarchical method based on Penrose tiling. Kopf et al (KCODL06) used a similar scheme to enforce subdivision rules when generating blue noise.
Myszkowski (Mys97) introduced a bias-reduction strategy for the kernel-based density estimation framework established by Shirley et al (SWH*95). Myszkowski proposed an enhanced nearest-neighbour (ENN) method guided by a statistical bias-error metric. By calculating the error from an array of density estimates adjacent to the sample point, the ENN algorithm reduces bias by minimisation across the domain of sample radii.
Walter et al (WHSG97) addressed the issue of bias within the density estimation framework using polynomial regression. Given that boundary bias is also an issue with regression techniques, Walter used a locally-weighted, least squares variant augmented with a complex system to handle boundary regions. Introduced in the paper and further elaborated upon in his PhD thesis (Wal98), Walter also proposed a perceptually-driven bandwidth selector that chooses kernel sizes based upon the limits of visually objectionable noise.
A second class of bias-reduction algorithms focuses on adapting the filter support of the kernel to help reduce unwanted blurring. Isotropic kernels are used in both k-NN and kernel-based methods of density estimation, as well as with techniques based on splatting (Col94). Jensen (Jen96b) first integrated the concept into the photon mapping paradigm, primarily as a means of reducing proximity bias in caustics.
Later, Schjøth et al (SOS06) introduced a technique based around anisotropic diffusion filtering. Using a structure tensor derived from the photon distribution, they are able to define a shape-adapting kernel aligned to important visual details such as boundaries and discontinuities. The results are shown to be superior to isotropic kernels at the expense of a small precomputation and rendering overhead.
More recently, Schjøth et al (SFES07) proposed adapting ray differential tracing (Ige99) to guide the shape of the density estimation kernel. At render time, ray differential information is used to create a unique kernel based upon the shape of the footprints of the photons within the gather radius. This approach is especially effective at rendering high-frequency caustics. The authors also demonstrate the power of the technique in realising fine details using only a small number of photons.
Turk (Tur91) introduced a relaxation technique based on point repulsion to facilitate organic texture synthesis. This idea was later elaborated to define a method for re-tiling polygonal surfaces at arbitrary levels of detail (Tur92). More recently, Jensen (JB02) utilised the same concept to ensure equidistribution of irradiance samples around a polygon mesh.
It is an object of the present invention to provide a method and system for synthesising and rendering realistic caustics in which render time is improved relative to the prior art, topology, proximity and boundary bias are all minimised, and edges and discontinuities are effectively preserved.
In accordance with the present invention, there is provided a method of rendering caustics in a computer graphics scene, the method comprising:
        (a) Obtaining a photon map of said scene;
        (b) Redistributing photons from said photon map into an arrangement with a blue noise spectral signature by performing a relaxation step in respect of each of a number of photons;
        (c) Determining a constraining vector for each of a number of photons; and
        (d) Rendering said scene using the results of step (b) and/or step (c).
Thus, the present invention provides a novel approach to noise and bias reduction upon reconfiguration of the photon map, and a novel heuristic to rapidly identify and preserve edges and discontinuities. This approach relaxes the photons into a configuration with a blue noise spectral signature, which allows the use of compact kernels with only tens of nearest neighbours (k-NN). In addition, we apply a heuristic in order to preserve important features within the map. This dual strategy has several important advantages, including:
        A blue noise distribution yields high-quality radiance estimates with very few photons. This greatly improves render time (by an average factor of at least 5) and offsets the precomputation required to relax the distribution.
        Topology, proximity, and boundary bias are all minimised by the compact kernel size.
        Edges and discontinuities are effectively preserved using photon migration constraints.
In a preferred embodiment, in order to redistribute photons over a desired surface of intersecting geometry, a point repulsion method may be used in step (b), whereby for each of a number of photons at point x, a force of repulsion f is calculated as the weighted sum of the offsets to the k-nearest neighbours. In one exemplary embodiment, K=6, such that the points being relaxed converge towards an ideal distribution, namely a hexagonal lattice with 6 adjacent neighbours per point.
For each point, its local density is beneficially determined from the weighted average of the densities stored at the surrounding K photons, each local density being derived from the proximity of the k-nearest neighbours.
The relaxation step preferably includes a diffusion operation applied to the flux of each of the number of photons.
The present invention extends to a computer-implemented system configured to perform the method defined above.
These and other aspects of the invention will be apparent from, and elucidated with reference to the embodiment described herein.
In the introduction, some of the prior research carried out into bias and noise reduction in photon density estimation has been reviewed. It was also described how a good primal photon distribution cannot always guarantee a low-discrepancy sequence on intersecting geometry. In the following, a new approach based upon photon redistribution, according to an exemplary embodiment of the present invention, is described. By performing an additional pass after photon tracing, the aim is to redistribute photons into an arrangement with a blue noise spectral signature. This approach can be broken down into two distinct steps:
        Systematically search through the photon map for features and discontinuities, storing the inferred information in the photon data structure;
        Iteratively relax each photon according to a repulsion vector derived from the k-nearest neighbours. The data obtained during the previous step can be used to constrain point migration and preserve important details.

3.1. Relaxation

In order to redistribute photons over the surface of intersecting geometry we employ a point repulsion method similar to that described in [Tur91]. This approach is simple and intuitive and does not require a Voronoi tessellation of the sample distribution.
For a given photon p at point {right arrow over (x)}, we calculate the force of repulsion {right arrow over (f)} as being the weighted sum of the offset k-nearest neighbours:
$$\vec{f} = \frac{1}{K}\sum_{k=1}^{K}\left(\vec{x}-\vec{x}_k\right)\left(\frac{r\,\tau(i)}{d_k+\varepsilon}-1\right)\qquad(1)$$
Here, $d_k$ represents the distance from $\vec{x}_k$ to $\vec{x}$, and ε is an arbitrarily small constant. We choose a value of K=6 given that points relaxed using Lloyd's method naturally converge to a hexagonal lattice-like distribution with 6 adjacent neighbours per point. r represents the radius of the disc enclosing the K+1 neighbouring photons; this is necessary to prevent the Kth photon always lying on the periphery of the disc and thus having zero weight. τ(i) amplifies the value of r and hence the magnitude of $\vec{f}$. We call this the over-relaxation coefficient and define it as:
$$\tau(i) = \lambda_{\min} + \left(\lambda_{\max}-\lambda_{\min}\right)\exp\!\left(-\frac{6\,i^2}{I^2}\right)\qquad(2)$$
where $\lambda_{\max}$ and $\lambda_{\min}$ represent the upper and lower relaxation bounds, blended together by a Gaussian falloff. Here, i is the current relaxation iteration and I is the total number of iterations (see section 5.2 for a more detailed discussion of the effects of varying λ).
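The over-relaxation coefficient of equation 2 can be sketched as follows. This is purely illustrative; the function and parameter names are our own, and the defaults reflect the values λmax=2.0 and λmin=1.2 used in the examples later in the text.

```python
import math

def over_relaxation(i, total_iterations, lam_min=1.2, lam_max=2.0):
    """Over-relaxation coefficient tau(i) of equation 2: blends the
    upper relaxation bound (early iterations) into the lower bound
    (late iterations) with a Gaussian falloff."""
    return lam_min + (lam_max - lam_min) * math.exp(
        -6.0 * i * i / (total_iterations * total_iterations))
```

Early iterations thus relax aggressively (τ close to λmax) while later iterations fine-tune the distribution (τ close to λmin).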
One of the most notable advantages of photon mapping is its paradigm of decoupling irradiance from geometry. While there are established methods of tracking particle migration over surfaces [Tur92], it is simpler and more versatile to keep our algorithm geometry-independent. This can be achieved by projecting $\vec{f}$ into the plane tangent to the surface at x. We derive an orthonormal basis from the surface normal $\vec{n}$ and use it to define the new, projected vector $\vec{f}'$:

$$\vec{u} = \vec{n}\times(-n_x,\,n_y,\,-n_z),\qquad \vec{v} = \vec{n}\times\vec{u},\qquad \vec{f}' = \vec{u}(\vec{u}\cdot\vec{f}) + \vec{v}(\vec{v}\cdot\vec{f})\qquad(3)$$
We apply the relaxation step to each photon in succession, computing the repulsion vector {right arrow over (f)}′ then adding it to the position of the photon.
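The relaxation step may be sketched as follows, purely for illustration. Photon positions are passed in explicitly here, whereas an implementation would gather the k-nearest neighbours from the photon map's kd-tree; the auxiliary axis used to build the basis is read from equation 3 as (−nx, ny, −nz), with a fallback of our own for the degenerate case where that axis is parallel to the normal.

```python
import numpy as np

def repulsion(x, neighbours, r, tau, eps=1e-6):
    """Equation 1: weighted repulsion force on the photon at x from its
    K nearest neighbours; r is the radius enclosing K+1 neighbours."""
    f = np.zeros(3)
    for xk in neighbours:
        d = np.linalg.norm(x - xk)
        # (x - xk) points away from the neighbour; the weight is positive
        # whenever the neighbour is closer than r * tau.
        f += (x - xk) * (r * tau / (d + eps) - 1.0)
    return f / len(neighbours)

def project_to_tangent(f, n):
    """Equation 3: project the force into the plane tangent to the
    surface, keeping the relaxation geometry-independent."""
    u = np.cross(n, np.array([-n[0], n[1], -n[2]]))
    norm = np.linalg.norm(u)
    if norm < 1e-8:  # auxiliary axis parallel to n (our fallback)
        u = np.cross(n, np.array([1.0, 0.0, 0.0]))
        norm = np.linalg.norm(u)
    u /= norm
    v = np.cross(n, u)
    return u * np.dot(u, f) + v * np.dot(v, f)
```

One relaxation pass then simply evaluates `project_to_tangent(repulsion(...), n)` for each photon and adds the result to its position.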
We find that the number of relaxation steps required to remove all-frequency noise from a sample distribution depends on the method used to cast photons. Purely random point distributions exhibit noise across the entire spectrum of spatial frequencies. In this case, 30 or more relaxation steps may be required to satisfactorily remove all objectionable noise. Conversely, quasi-random distributions confine noise to higher frequencies. In these instances we find that 20 relaxation steps are usually sufficient (see section 4 for examples).
Point repulsion can be effectively expressed as a diffusion operation. The direction of migration is a function of the derived gradient of photon density meaning particles naturally flow from more dense to less dense regions. In exchange for removing objectionable noise from the sample distribution, photon relaxation introduces a systematic error that we call diffusion bias. This results in blurring and loss of high frequency detail and is especially apparent along edges and discontinuities.
To solve this problem it is necessary to acquire data on the structure of the photon map and use it to constrain movement relative to an axis perpendicular to the irradiance gradient. Schjøth et al [SOS06] use this approach to define a shape-adapting kernel based upon a derived structure tensor. While their solution could be adapted to constrain particle migration, relying purely on the distribution local to each photon does not give us enough precision. FIG. 2 demonstrates the ghosting artefacts that appear when we use a purely gradient-based approach.
To address these problems we introduce a novel method of controlling migration by assigning each photon a constraining vector $\vec{g}$ and a weighting coefficient w. The vector $\vec{g}$ defines an axis which lies in the plane of the photon. The weight w defines the extent to which the repulsive force $\vec{f}'$ is constrained by $\vec{g}$. Thus, a maximally constrained photon with weight 1 can only migrate along $\vec{g}$. An unconstrained photon with weight 0 can move freely in its plane.
In order to find these values we begin by computing the irradiance differential at a given origin xo from the distribution of the k-nearest neighbouring photons. The vector from xo to the average photon position defines the gradient vector {right arrow over (δ)}. We project this vector into the plane tangent to the surface (using the method described in equation 3) and then normalise it. If a discontinuity passes through the disc containing the k-NN then it is likely that it lies perpendicular to {right arrow over (δ)}.
We compute the signed distance $\psi_n$ from each photon n to the plane through the origin lying perpendicular to $\vec{\delta}$ (FIG. 3(a)). The photons are then sorted in ascending order of ψ. Using this information we apply a heuristic, χ, that analyses each neighbour and assesses the uniformity of the distribution lying on each side:
$$\chi = \frac{\sigma_L}{\bar{\psi}_L} + \frac{\sigma_R}{\bar{\psi}_R}\qquad(4)$$
The value of σ is derived from the standard deviation of the perpendicular distances between photons. We define it as:
$$\sigma = \frac{1}{N}\sum_{n=1}^{N}\left(\psi_n - \psi_{n-1} - \bar{\psi}\right)^{2}\left(1 - \frac{\psi_n^{2}}{r^{2}}\right)\qquad(5)$$
The weighting function on the right-hand side of the equation helps to alleviate false positives which may arise from signal noise. $\bar{\psi}$ is the mean distance, defined as:
$$\bar{\psi} = \frac{1}{N}\sum_{n=1}^{N}\left(\psi_n - \psi_{n-1}\right)\qquad(6)$$
Here, $\sigma_L$ and $\sigma_R$ are found by applying σ to the two subsets of the K photons that lie to the left and to the right of a given photon k. These ranges are defined as L=[0,k) and R=(k,K+1]. For the cases of $\psi_0$ and $\psi_{K+1}$, the extremes of the gather radius, −r and r, are used respectively.
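The heuristic of equations 4 to 6 can be sketched as below, assuming the signed distances ψ are already sorted in ascending order; the sentinels −r and r stand in for ψ0 and ψK+1 as described above. Function names are illustrative only.

```python
import numpy as np

def sigma_and_mean(psi, r):
    """Equations 5 and 6: spacing-variance measure (weighted to discount
    photons near the disc periphery) and mean inter-photon distance."""
    gaps = np.diff(psi)                      # psi_n - psi_{n-1}
    mean_gap = gaps.mean()
    weights = 1.0 - psi[1:] ** 2 / r ** 2
    return np.mean((gaps - mean_gap) ** 2 * weights), mean_gap

def chi(psi_sorted, k, r):
    """Equation 4: non-uniformity of the partitions L = [0, k) and
    R = (k, K+1] either side of candidate photon k (1-based).
    k must leave at least two points in each partition."""
    psi = np.concatenate(([-r], np.asarray(psi_sorted, float), [r]))
    sigma_l, mean_l = sigma_and_mean(psi[:k], r)
    sigma_r, mean_r = sigma_and_mean(psi[k + 1:], r)
    return sigma_l / mean_l + sigma_r / mean_r
```

For a distribution split by a clean gap, χ drops towards zero when the candidate photon sits at the gap, since both partitions are then uniformly spaced.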
Our goal is to find the photon k with the smallest corresponding value of χ, since any discontinuity is most likely to cross this point (FIG. 3(b)). In practice it is not necessary to test all the photons, merely those at discrete intervals along $\vec{\delta}$ equivalent to the mean distance between the photons in the estimate:
$$\bar{\psi}_K\,\frac{K}{\pi}\,.$$
Once a suitable candidate discontinuity has been found we compute a homogeneity metric, l, that represents the ratio between the means of the left and right partitions:

$$l = \frac{\min\!\left(\bar{\psi}_L,\,\bar{\psi}_R\right)}{\max\!\left(\bar{\psi}_L,\,\bar{\psi}_R\right)}\qquad(7)$$
The l function returns a value in the range [0,1], where 1 implies an entirely homogeneous distribution and 0 a maximal discontinuity. This value is used to determine the magnitude of the constraint to apply to each photon. We re-map a sub-range of l between the user-specified limits α and β to tune the sensitivity of the heuristic:

$$l' = 1 - \frac{l-\alpha}{\beta-\alpha}\qquad\text{where }\alpha<\beta\qquad(8)$$
l′ is then clamped to the range [0,1]. We found that for scenes containing high-frequency caustics (for example, FIG. 8), a value of α between 0.1 and 0.2 was optimal for highlighting discontinuities. Conversely, in scenes containing low-frequency caustics (for example, FIG. 1), an α value of 0.0 was more appropriate. FIG. 4 demonstrates the effects of different values of the two parameters on a sample distribution.
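The re-mapping and clamping of equation 8 reduce to a few lines; this sketch uses an illustrative function name.

```python
def constraint_weight(l, alpha, beta):
    """Equation 8: re-map homogeneity l between the sensitivity limits
    alpha and beta (alpha < beta), then clamp l' to [0, 1]."""
    assert alpha < beta
    l_prime = 1.0 - (l - alpha) / (beta - alpha)
    return max(0.0, min(1.0, l_prime))
```

Values of l at or below α saturate to a full constraint of 1, while l at or above β yields no constraint, giving direct control over how aggressively discontinuities are flagged.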
Whenever a photon p is found to lie within the feature gradient (FIG. 3(b)), we update its migration constraints as follows:

$$\vec{g}_p = \vec{g}_p + l'\left(\vec{\delta}\times\vec{n}_o\right),\qquad w_p = \max(w_p,\,l')\qquad(9)$$
where $\vec{n}_o$ is the surface normal at the origin $x_o$. We use the maximum of the two values $w_p$ and l′ because the weights of multiple gradients acting on each photon would most likely not average to 1, permitting unwanted migration across edges. Once the entire photon map has been evaluated we normalise all values of $\vec{g}$. For photons where $w_p$ is zero, we derive an arbitrary value of $\vec{g}$ from the surface normal using the first line of equation 3.
Given the normalised constraining vectors and associated weights, we can redefine equation 3 to modify the repulsive force $\vec{f}'_p$:

$$\vec{v}_p = \vec{g}_p\times\vec{n}_p,\qquad \vec{f}''_p = (1-w_p)\,\vec{v}_p\!\left(\vec{v}_p\cdot\vec{f}'_p\right) + \vec{g}_p\!\left(\vec{g}_p\cdot\vec{f}'_p\right)\qquad(10)$$
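Equation 10 can be sketched as below, assuming g is the photon's normalised constraining axis, n the surface normal at the photon, and w its weight in [0, 1]; the function name is illustrative.

```python
import numpy as np

def constrain_force(f_prime, g, n, w):
    """Equation 10: attenuate the component of the repulsion
    perpendicular to the constraining axis g by (1 - w);
    w = 1 restricts motion to g, w = 0 leaves it unchanged."""
    v = np.cross(g, n)
    return (1.0 - w) * v * np.dot(v, f_prime) + g * np.dot(g, f_prime)
```

A fully weighted photon therefore retains only the component of its repulsion that lies along the detected feature edge.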
While the structure preservation heuristic outlined in the previous section is effective at constraining photons which lie on important features, the cost of applying the heuristic at every photon is relatively high. One of the benefits of iteratively sweeping through the k-nearest neighbours is that the heuristic can locate the most likely feature-edge candidate regardless of where it lies in the local neighbourhood. Therefore, we need only ensure that every photon is captured and analysed at most once during feature detection. Choosing an optimum bandwidth for feature searching depends on the density and distribution of the photon map. We found that a value of K=150 worked well in all our examples.
The most effective way of preventing unnecessary gradient searches is to mark photons captured within each estimate radius as “touched”. Covering the map involves moving sequentially along a list of pointers to each of the photons. If a given photon is untouched, it is chosen as the centre of a region to be searched. Otherwise, it is skipped. This approach effectively excludes photons that have already been swept and greatly decreases precomputation time. In our implementation we mark all photons within a distance of 0.6 r from $x_o$ as being touched at each search.
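The touched-photon sweep can be sketched as follows; the range query is brute force here, where an implementation would use the photon map's kd-tree, and the function name is illustrative.

```python
import numpy as np

def choose_search_origins(positions, r, touch_radius=0.6):
    """Walk the photon list in order, choosing each untouched photon as
    a feature-search origin and marking every photon within
    touch_radius * r of it as touched (and therefore skipped)."""
    touched = np.zeros(len(positions), dtype=bool)
    origins = []
    for i, x in enumerate(positions):
        if touched[i]:
            continue
        origins.append(i)
        distances = np.linalg.norm(positions - x, axis=1)
        touched |= distances < touch_radius * r
    return origins
```

Each photon is thus analysed at most once, so the number of full feature searches scales with the area of the map rather than the photon count.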
Unfortunately, the penalty of minimising the number of feature searches is that holes or weak migration constraints may appear along prominent discontinuities. This occurs when a discontinuity lies at the very edge of the search radius and is disregarded as noise by the heuristic. In addition, faceting artefacts may appear as a result of a straight gradient vector being used to constrain a curved edge. To address these problems we perform additional searches on photons lying along identified feature gradients (FIG. 3(b)). This means that once a feature or discontinuity is detected, the local neighbourhood is explored laterally across the distribution gradient, further identifying and reinforcing it.
In section 3.2 we showed how the feature detection and migration constraints are effective at preserving sharp edges. However, blurring and loss of fidelity may still occur due to photon migration across steep gradients that do not necessarily qualify as discontinuities. FIG. 5 (centre) demonstrates the effect of excessive diffusion on fine details.
To compensate, we introduce another constraint which applies a Gaussian falloff to the magnitude of each repulsion vector $\vec{f}''_p$:

$$\vec{f}'''_p = \vec{f}''_p\,\exp\!\left(\frac{\ln(0.5)}{\rho^2}\left(\frac{|\vec{f}''_p|}{r_p}\right)^{2}\right)\qquad(11)$$
The net effect is that the motions of photons with high migration pressure are damped while those with a low pressure are free to move. This results in the effective diffusion of low-frequency noise while still preserving higher-frequency details. In equation 11, ρ is a scaling parameter that controls the extent of the constraint. In all our examples, we set this parameter to 0.3. We found that using the value of rp buffered from the previous relaxation iteration prevented unwanted artefacts being introduced into the distribution.
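A sketch of the damping of equation 11 follows. The migration pressure is assumed here to be the magnitude of f″ relative to the buffered radius r_p (the numerator is not fully legible in the source equation), and ρ defaults to the value 0.3 used in the examples; by construction, a photon whose pressure equals ρ has its motion halved.

```python
import math
import numpy as np

def damp_force(f_pp, r_p, rho=0.3):
    """Equation 11 (sketch): Gaussian falloff on the repulsion
    magnitude. Pressure equal to rho halves the motion; higher
    pressures are damped progressively more strongly."""
    pressure = np.linalg.norm(f_pp) / r_p
    return f_pp * math.exp(math.log(0.5) * (pressure / rho) ** 2)
```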
One of the drawbacks of using very compact kernel sizes is the potential for noise to appear as a result of variance in chromaticity and intensity between photons. This may occur when caustics from two different coloured emitters mix or when a spectrally-based photon model is used (see [Wal98] for an example). We can address this problem by applying a diffusion operation to the flux of the photons at each relaxation step.
Given a photon p, its new flux $\Phi'_p$ can be calculated as the weighted mean of its own flux and that of its K nearest neighbours,
$$\Phi'_p = \frac{\Phi_p + \sum_{k=1}^{K}\Phi_k\,W(k)}{1 + \sum_{k=1}^{K}W(k)}\qquad(12)$$
where W(k) is an Epanechnikov kernel weighting function. Note that the migration-constraining vectors of the photons k are used to prevent flux diffusion across discontinuities:
$$W(k) = \left(1 - \frac{d_k^2}{r^2}\right)\left(w_k\left|\vec{g}_k\cdot\!\left(\vec{x}_p-\vec{x}_k\right)\right| + \left(1-w_k\right)\right)\qquad(13)$$
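Equations 12 and 13 can be sketched together as below; each neighbour is passed as a tuple (phi_k, x_k, g_k, w_k), flux values are RGB arrays, and the function name is illustrative.

```python
import numpy as np

def diffuse_flux(phi_p, x_p, neighbours, r):
    """Equations 12-13 (sketch): weighted mean of a photon's flux and
    that of its K neighbours. For a constrained neighbour, the dot
    product with its constraining axis g_k suppresses diffusion
    across a detected discontinuity."""
    numerator = np.asarray(phi_p, float).copy()
    denominator = 1.0
    for phi_k, x_k, g_k, w_k in neighbours:
        offset = x_p - x_k
        weight = (1.0 - np.dot(offset, offset) / r ** 2) * (
            w_k * abs(np.dot(g_k, offset)) + (1.0 - w_k))
        numerator = numerator + phi_k * weight
        denominator += weight
    return numerator / denominator
```

A neighbour displaced along the feature gradient (perpendicular to its g_k) receives zero weight when fully constrained, so colour never bleeds across an edge.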
To test the efficacy of our algorithm we compare images of various scenes rendered using photon relaxation and standard fixed-bandwidth k-NN photon mapping. In all cases we use an isotropic kernel with Epanechnikov weighting. In addition, photons are cast from quad area light sources. Sampling over the domain (u, v, θ, φ) is achieved using a low-discrepancy Halton sequence over the first four primes, with a Faure permutation [Fau92] applied to improve the distribution. This method effectively reduces very low-frequency noise, obviating the need for large numbers of relaxation iterations. All images rendered with our technique use 20 photons in the radiance estimate unless otherwise stated. Below this threshold we find that kernel artefacts (which are typically negligible at larger bandwidths) begin to appear.
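A minimal sketch of the 4D low-discrepancy sampler described above, using the standard radical-inverse construction over the first four primes. The Faure permutation [Fau92] mentioned in the text is omitted here for brevity, and the function names are our own:

```python
def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in `base`."""
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def photon_sample(i):
    """4D Halton sample over (u, v, theta, phi) for photon i.

    Plain Halton over bases 2, 3, 5, 7; a Faure permutation would be
    applied to the digits to decorrelate the higher dimensions.
    """
    return tuple(halton(i, b) for b in (2, 3, 5, 7))
```

Each component lies in [0, 1) and can be mapped to the light source surface (u, v) and emission direction (θ, φ).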
All the images in our examples (including test patterns) are tone mapped using the operator outlined by Reinhard et al. [RSSF02]. We apply our relaxation technique to photon maps from scenes containing a mixture of high- and low-frequency caustics generated as a result of both reflection and refraction. In all examples, λmax = 2.0 and λmin = 1.2. Table 1 provides further detailed statistics for all of the example images.
FIG. 1 demonstrates the effectiveness of photon relaxation at removing noise from caustics exhibiting large areas of uniform illumination. Such scenes are typically difficult to render efficiently using existing methods, since there is little high-frequency detail to mask the irregularities in the caustic illumination, thus necessitating a large kernel. Notice how all low-frequency noise has been removed using our technique while the sharp discontinuities around the shadow penumbra are preserved.
FIG. 6 contains a focussed cardioid caustic from a tinted metal ring. This scene highlights the effectiveness of the feature detection algorithm at locating and preserving discontinuities in illumination. Note how the compact kernel size effectively eliminates proximity bias along the bright edges of the caustic.
In FIG. 7, caustics caused by light passing through the thin plastic of the chair are projected onto the ground plane. The fidelity of the diffuse caustics on the underside of the monkey demonstrates how our meshless relaxation approach successfully preserves photon map integrity even on complex geometry. Also note how the detail is preserved around sharp discontinuities while simultaneously removing low-frequency noise in more uniform areas.
FIG. 8 is an example of a scene containing intricate caustics. The edges of the coins create a serrated pattern which is blurred out when a high-bandwidth kernel is applied. Notice how our method successfully identifies edge details which are correctly preserved and enhanced during the relaxation step.
Table 1 provides statistics for photon map pre-processing and render time speed-up. One of the most prominent advantages demonstrated by these data is the decreased render time due to fewer photons being required during density estimation. We also analyse the effectiveness of our approach when compared to existing methods and demonstrate the blue noise properties of the photon distribution by means of Fourier decomposition.
Previous literature on the subject of bias and noise reduction makes use of photon test patterns. In particular, Schjøth et al. [SOS06] use a purely random distribution of 50,000 photons superimposed with an ellipse and two vertical bars at a density 10 times that of the background, with each photon carrying constant flux. The authors present this pattern in order to demonstrate the edge-preserving qualities of their algorithm. In FIG. 10 we reproduce their examples to demonstrate and compare the effectiveness of our relaxation-based approach.
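A test pattern of this kind can be generated as follows. This is a sketch only: the ellipse and bar positions and sizes are illustrative assumptions, not the exact geometry used in [SOS06]:

```python
import random

def make_test_pattern(n=50_000, seed=0):
    """Random photons in the unit square with an ellipse and two
    vertical bars at ~10x the background density (via rejection).
    Feature placement here is illustrative, not from [SOS06].
    Every photon carries the same constant flux."""
    rng = random.Random(seed)
    photons = []
    while len(photons) < n:
        x, y = rng.random(), rng.random()
        in_ellipse = ((x - 0.3) / 0.15) ** 2 + ((y - 0.5) / 0.25) ** 2 <= 1.0
        in_bar = (0.60 <= x <= 0.65) or (0.75 <= x <= 0.80)
        # Feature photons are always kept; background photons survive
        # with probability 1/10, giving the features ten times the
        # density of their surroundings.
        if in_ellipse or in_bar or rng.random() < 0.1:
            photons.append((x, y))
    return photons
```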
To demonstrate the blue noise properties of the relaxed photon sample distribution we apply a discrete Fourier transform to a 256² pixel sample region with an absolute point density of 5%. From these data we derive the radially-averaged power spectrum and angular anisotropy. In all tests we use the measured anisotropy of white noise (0.26) as a baseline of −10 dB (see [Uli88]).

TABLE 1: Render statistics. Note that render time μ represents the speed-up in calculation of illumination from the caustic component only. Std. and Rlx. represent values as applicable to the standard method and our relaxed method respectively.

                      Red Filter   Coins        Chair (point)  Chair (area)  Ring (point)  Ring (area)
Image resolution      480 × 640    840 × 525    600 × 600      600 × 600     600 × 478     600 × 478
Polygons              13,002       126,797      26,570         26,570        6,062         6,062
Photons               58,000       433,000      74,000         73,000        60,000        56,000
Render time μ         5.09         10.05        6.25           5.5           5.54          4.86
Feature search t      1.26 s       10.78 s      4.24 s         2.8 s         2.67 s        1.74 s
Relaxation t          6.67 s       47.01 s      12.99 s        12.33 s       5.83 s        5.42 s
Iterations            20           20           20             20            20            20
[α, β]                [0, 0.35]    [0.2, 0.35]  [0.2, 0.35]    [0, 0.35]     [0.2, 0.35]   [0, 0.35]
Radiance estimate
  (Std. / Rlx.)       200 / 20     350 / 20     200 / 20       200 / 20      100 / 20      100 / 20
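The radially-averaged power spectrum of a point set can be computed along the following lines. The binning choices and function names here are our own and this is not the exact analysis pipeline used for the figures:

```python
import numpy as np

def radial_power_spectrum(points, res=256):
    """Radially-averaged power spectrum of a 2D point set (sketch).

    Bins the points into a res x res grid, takes the 2D DFT of the
    resulting density image, and averages |F|^2 over annuli of
    constant radial frequency about the (shifted) DC term.
    """
    img, _, _ = np.histogram2d(
        [p[0] for p in points], [p[1] for p in points],
        bins=res, range=[[0, 1], [0, 1]])
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    cy, cx = res // 2, res // 2
    yy, xx = np.indices(spectrum.shape)
    radius = np.hypot(yy - cy, xx - cx).astype(int)
    # Mean power in each integer-radius annulus.
    counts = np.bincount(radius.ravel())
    power = np.bincount(radius.ravel(), weights=spectrum.ravel())
    return power / np.maximum(counts, 1)
```

A blue noise distribution shows suppressed power at low radial frequencies, rising to a peak near the mean inter-point spacing.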
FIG. 11 shows analytical data obtained from a random point distribution after 50 relaxation steps. In this example over-relaxation is disabled (by setting both λmin and λmax to 1.0). After a sufficient number of iterations this configuration will eventually yield an ideal, hexagonal lattice-like distribution. Unfortunately, the time required to reach this state is prohibitive. We found that by increasing λmin to a value of 1.2 and λmax to 2.0, the photons relax into a configuration with distinct cellular patterns similar to those in [Tur91]. Significantly, the resulting distribution exhibits a blue noise spectral signature with stronger oscillations and a correspondingly lower render variance, similar to that of the ideal state obtained when over-relaxation is disabled. Analytical data obtained using these parameters can be found in FIG. 12.
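To illustrate the role of the over-relaxation factor, one relaxation step might be sketched as follows. The repulsion force model here is a deliberately simple stand-in, not the method's actual migration computation; the point is only that λ > 1 overshoots each point's migration vector, accelerating convergence:

```python
import numpy as np

def overrelax_step(points, lam=1.2, k=6):
    """One over-relaxed repulsion step over an (n, 2) point array.

    Each point is pushed away from its k nearest neighbours along the
    mean unit repulsion direction, scaled by lam. lam = 1.0 is plain
    relaxation; lam in [1.2, 2.0] (lambda_min, lambda_max in the text)
    overshoots toward a blue-noise arrangement. Force model is an
    illustrative assumption.
    """
    n = len(points)
    new_pts = points.copy()
    for i in range(n):
        d = points - points[i]                 # vectors from i to all points
        dist = np.linalg.norm(d, axis=1)
        nbrs = np.argsort(dist)[1:k + 1]       # k nearest, skipping self
        # Unit vectors pointing from each neighbour back toward point i.
        push = -(d[nbrs].T / np.maximum(dist[nbrs], 1e-9)).T
        target = dist[nbrs].mean()             # local mean spacing
        force = push.mean(axis=0) * 0.5 * target
        new_pts[i] = points[i] + lam * force   # over-relaxed migration
    return new_pts
```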
It should be noted that the present invention is not restricted to the above-described embodiment and preferred embodiments may vary within the scope of the appended claims. The term “comprising”, when used in the specification including the claims, is intended to specify the presence of stated features, means, steps or components, but does not exclude the presence or addition of one or more other features, means, steps, components or groups thereof. Furthermore, the word “a” or “an” preceding an element in a claim does not exclude the presence of a plurality of such elements. Moreover, any reference sign does not limit the scope of the claims. The invention can be implemented by means of both hardware and software, and several “means” may be represented by the same item of hardware. Finally, the features of the invention, which features appear alone or in combination, can also be combined or separated so that a large number of variations and applications of the invention can be readily envisaged.