Such a modelizing method is helpful for calculating neutron flux and/or thermohydraulics parameters within the core.
The results of such a modelizing method can be used to prepare safety analysis reports before building and starting a reactor.
These results can also be useful for existing nuclear reactors, especially for managing the nuclear fuel loaded therein. In particular, these results can be used to assess how the core design should evolve in time and to decide on the positions of the fuel assemblies in the core, especially the positions of the fresh assemblies to be introduced into the core.
Such modelizing methods are implemented by computers. To this end, the core is partitioned into cubes, each cube constituting a node of a grid for implementing a digital computation.
In the state of the art methods, the cubes are numbered one after the other in a lexicographical order.
In such methods, most of the computational efforts are concentrated in the part dedicated to the iterative solving of large sparse systems, which end up being either linear systems or eigensystems.
When calculating thermohydraulics parameters, the system to be solved is a linear system and corresponds, in mathematical form, to a linear equation of the form:

Ax=b  (1)
A typical whole nuclear core computation amounts to a sparse linear system being defined on the basis of between 150 and 200 fuel assemblies and typically several tens of thousands of cubes, meaning that a nontrivial computational effort is required for solving the associated algebraic systems.
The actual structure of the matrix A is characterized by the systematic presence of diagonal elements plus a limited number of nonzero off-diagonal elements, each of which represents an interaction between a cube and one of its directly neighbouring cubes only. In other words, only interactions between cubes sharing common surfaces are considered.
With a lexicographical grid, the few nonzero values [AD+ALD]ij, with AD and ALD being respectively the diagonal and lower-diagonal part of A, represent the interaction of a cube i with itself and with directly neighbouring cubes j that have lower lexicographical indices, i.e. with j≤i.
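As an illustration, the lexicographic numbering of the cubes and the resulting sparsity pattern can be sketched as follows. This is a minimal sketch assuming a small 3×3×3 grid and a generic 7-point stencil; the coefficient values are placeholders, not actual neutronics or thermohydraulics data:

```python
import numpy as np

# Minimal sketch: a 3x3x3 grid of cubes numbered in lexicographic order.
# A has a diagonal entry per cube plus one off-diagonal entry for each
# face-sharing neighbour; all coefficient values are placeholders.
nx = ny = nz = 3
n = nx * ny * nz

def lex(i, j, k):
    """Lexicographic index of cube (i, j, k)."""
    return i + nx * (j + ny * k)

A = np.zeros((n, n))
for k in range(nz):
    for j in range(ny):
        for i in range(nx):
            p = lex(i, j, k)
            A[p, p] = 6.0  # interaction of cube p with itself
            # only cubes sharing a common surface are coupled
            for di, dj, dk in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
                ii, jj, kk = i + di, j + dj, k + dk
                if 0 <= ii < nx and 0 <= jj < ny and 0 <= kk < nz:
                    A[p, lex(ii, jj, kk)] = -1.0

# AD + ALD collects the diagonal and the couplings to lower-indexed
# neighbours; AUD collects the couplings to higher-indexed ones.
AD_plus_ALD = np.tril(A)
AUD = np.triu(A, 1)
```

With this numbering, each row of A holds at most seven nonzero entries: the diagonal plus up to six face-sharing neighbours.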
In order to solve the above-mentioned linear equation, a Gauss-Seidel (GS) procedure is usually implemented, meaning that the matrix A is split into its diagonal AD plus lower-diagonal part ALD on the one hand and its upper-diagonal part AUD on the other hand:

A=[AD+ALD]+AUD  (2)
With the diagonal plus lower-triangular part being easy to invert implicitly, the GS procedure amounts to the iteration:

x(n+1)=[AD+ALD]−1(b−AUDx(n))  (3)
which can be and has been programmed very compactly and efficiently in the form:

x(n+1)=[AD]−1(b−ALDx(n+1)−AUDx(n))=x(n)−r(n+1/2)  (4)

with r(n+1/2)=−[AD]−1(b−ALDx(n+1)−[AD+AUD]x(n)),  (5)
according to which, during a new GS iteration, each update for cube i “profits” from the already realized updates (during the same iteration) for its neighbour cubes that have lower lexicographical indices.
As for the coupling with the remaining neighbours, i.e. those with higher lexicographical indices, the values that emerged from the previous GS iteration must be used.
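The sweep of equations (4) and (5) can be sketched as follows. This is a minimal sketch in Python; the small diagonally dominant matrix is a hypothetical stand-in for the core matrix, not actual plant data:

```python
import numpy as np

def gauss_seidel_sweep(A, b, x):
    """One GS sweep in the compact form of equation (4).

    For each cube i, the lower-diagonal part uses x[j] already updated
    during this same sweep (j < i), while the upper-diagonal part still
    uses the values from the previous sweep (j > i).
    """
    n = len(b)
    for i in range(n):
        s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
        x[i] = (b[i] - s) / A[i, i]
    return x

# Hypothetical stand-in system (diagonally dominant, so GS converges)
A = np.array([[4.0, -1.0,  0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
x = np.zeros(3)
for _ in range(50):
    x = gauss_seidel_sweep(A, b, x)
```

The in-place update of x[i] is precisely what lets lower-indexed neighbours contribute their already refreshed values within the same iteration.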
The convergence speed of this GS procedure is usually accelerated by application of a systematic Successive Over-Relaxation (SOR) measure with relaxation factor ω, which amounts to the final implementation of:

x(n+1)=x(n)−ωr(n+1/2)  (6)
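The combined GS/SOR update of equations (4) to (6) can be sketched as follows. Again this is a minimal sketch on a hypothetical stand-in system, with omega standing for the user-supplied relaxation factor ω:

```python
import numpy as np

def gs_sor_sweep(A, b, x, omega):
    """One GS sweep with SOR applied, per equations (4)-(6)."""
    n = len(b)
    for i in range(n):
        s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
        gs = (b[i] - s) / A[i, i]   # plain GS update of equation (4)
        r = x[i] - gs               # residual component r(n+1/2)
        x[i] = x[i] - omega * r     # over-relaxed update of equation (6)
    return x

# Hypothetical stand-in system
A = np.array([[4.0, -1.0,  0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
x = np.zeros(3)
for _ in range(100):
    x = gs_sor_sweep(A, b, x, omega=1.1)
```

Setting omega to 1 recovers the plain GS sweep; values between 1 and 2 over-relax the update.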
Upon convergence, the residual r converges toward 0 and the iterant x converges toward the exact solution of the linear system. On a sequential basis, i.e. with the GS iteration being performed sequentially by a single processor, the performance of this GS/SOR procedure is certainly not bad.
However, an important issue of concern in the currently implemented GS/SOR procedure is the highly sensitive dependence of computational performance on the choice of a value for the relaxation factor ω to be applied in the SOR scheme.
Minor variations in the value for the relaxation factor ω have been found to lead to substantial differences in convergence speed, meaning that small departures from an optimum that is typically determined empirically will lead to heavy losses in computational efficiency.
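This sensitivity can be illustrated numerically: the asymptotic convergence speed of SOR is governed by the spectral radius of its iteration matrix, which varies sharply with ω around the optimum. The sketch below uses a small 1-D model problem as a stand-in, not an actual core matrix:

```python
import numpy as np

def sor_spectral_radius(A, omega):
    """Spectral radius of the SOR iteration matrix for the splitting
    M = D/omega + L, i.e. G = I - M^{-1} A = M^{-1} (M - A)."""
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    M = D / omega + L
    G = np.linalg.solve(M, M - A)
    return max(abs(np.linalg.eigvals(G)))

# Small 1-D model problem (tridiagonal stencil), a stand-in only
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

radii = {w: sor_spectral_radius(A, w) for w in (1.0, 1.5, 1.75, 1.9, 1.99)}
# The radius (and hence the convergence speed) varies strongly with omega;
# for this classical model problem the optimum lies near omega ~ 1.74.
```

A smaller spectral radius means faster asymptotic convergence; the dictionary shows a pronounced minimum near the optimum and rapid degradation on either side of it.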
The relaxation factor ω is currently a parameter to be set by the user of the computer implementing the modelizing method. This user cannot be expected to determine the optimum choice for the factor ω in each individual case of relevance. The optimum may indeed depend on several parameters, such as channel dimensions, material properties, temperature, etc., i.e. anything that determines the individual components of the matrix in the linear system to be solved. It is therefore not possible for a user to predetermine how this optimum shifts when the state of the system changes. Given the identified performance sensitivity, default values for the relaxation factor ω can be expected, on average, to lead to performance losses when applied as a fixed SOR parameter across different computational cases.
Further, with the current GS procedure, distributing the iterative workload over different processors would lead to a severe degradation of computational efficiency, even for low numbers of parallel processors, so that no major improvement in computation speed could be obtained through parallelization.