
## Stabilization of the SR technique

Whenever the number of variational parameters increases, it often happens that the stochastic matrix $s$ of Eq. (2.16) becomes singular, i.e. its condition number, defined as the ratio $\lambda_{\max}/\lambda_{\min}$ between its maximum and minimum eigenvalues, is too large. In that case the inversion of this matrix generates clear numerical instabilities, which are difficult to control, especially within a statistical method.
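As a concrete illustration (a minimal sketch with hypothetical data, not taken from the thesis), the onset of this singularity can be monitored by estimating the condition number of the sampled matrix: a nearly redundant parameter produces a near-zero eigenvalue and a huge ratio $\lambda_{\max}/\lambda_{\min}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Monte Carlo samples of the operators O_k: column j holds O_j.
# The last column is made almost a linear combination of the first two,
# mimicking a redundant variational parameter.
n_samples, n_params = 1000, 4
O = rng.normal(size=(n_samples, n_params))
O[:, -1] = 0.7 * O[:, 0] + 0.3 * O[:, 1] + 1e-4 * rng.normal(size=n_samples)

# Covariance estimate of s_{kk'} = <O_k O_k'> - <O_k><O_k'>
s = np.cov(O, rowvar=False)

eig = np.linalg.eigvalsh(s)          # eigenvalues in ascending order
cond = eig[-1] / eig[0]              # lambda_max / lambda_min
print(f"condition number ~ {cond:.1e}")
```

With the near-redundancy built in above, the condition number explodes by several orders of magnitude, which is exactly the regime in which a direct inversion of $s$ becomes unreliable.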

The first successful proposal to control this instability was to remove from the inversion problem (49), required for the minimization, those directions in the variational parameter space corresponding to exceedingly small eigenvalues. In this thesis we describe a method that is much better. As a first step, we show that the large condition number is due to the existence of "redundant" variational parameters, namely parameters that do not change the wave function within a prescribed tolerance $\epsilon$.

Indeed, in practical calculations we are interested in minimizing the wave function only within a reasonable accuracy. The tolerance $\epsilon$ may therefore be interpreted as the distance between the exact normalized variational wave function, which minimizes the energy expectation value, and the approximate acceptable one, at which we stop iterating the minimization scheme. A small but finite $\epsilon$ is by far acceptable for problems of chemical and physical interest.

A stable algorithm is then obtained by simply removing from the minimization the parameters that change the wave function by less than $\epsilon$. An efficient scheme to remove these "redundant" parameters is given below.

Let us consider the $n$ normalized states $|e_k\rangle$, orthogonal to the variational wave function $|\psi\rangle$ but not mutually orthogonal:

$$ |e_k\rangle = \frac{\left(O_k - \langle O_k\rangle\right)|\psi\rangle}{\sqrt{\langle\psi|\left(O_k - \langle O_k\rangle\right)^2|\psi\rangle}} \qquad\qquad (2.17) $$

These normalized vectors define $n$ directions in the $n$-dimensional variational parameter manifold, which are independent as long as the determinant $S$ of the corresponding overlap matrix

$$ \bar{s}_{k,k'} = \langle e_k | e_{k'} \rangle = \frac{s_{k,k'}}{\sqrt{s_{k,k}\, s_{k',k'}}} \qquad\qquad (2.18) $$

$$ S = \det \bar{s} \qquad\qquad (2.19) $$

is non-zero. The number $S$ is clearly positive and it assumes its maximum value $S = 1$ whenever all the directions are mutually orthogonal. On the other hand, let us suppose that there exists an eigenvalue $\bar{\lambda}$ of $\bar{s}$ smaller than the square of the desired tolerance, $\bar{\lambda} < \epsilon^2$; then the corresponding normalized eigenvector $v$ is such that:

$$ \Big\| \sum_k v_k\, |e_k\rangle \Big\|^2 = \sum_{k,k'} v_k\, v_{k'}\, \bar{s}_{k,k'} = \bar{\lambda} < \epsilon^2 \qquad\qquad (2.20) $$

where the latter equality holds due to the normalization condition $\sum_k v_k^2 = 1$. We therefore arrive at the conclusion that it is possible to define a vector with almost vanishing norm as a linear combination of the normalized states $|e_k\rangle$, with at least some non-zero coefficients. This implies that these directions are linearly dependent within the tolerance $\epsilon$, and one can safely remove at least one parameter from the calculation.
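This criterion can be checked numerically. The following minimal sketch (hypothetical operator samples and tolerance, not from the thesis) builds the normalized overlap matrix from the covariance estimate and flags a redundant direction via its smallest eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical samples of four operators O_k; the fourth is almost the
# sum of the first two, so the corresponding normalized directions are
# linearly dependent within a small tolerance.
O = rng.normal(size=(5000, 4))
O[:, 3] = O[:, 0] + O[:, 1] + 1e-5 * rng.normal(size=5000)

s = np.cov(O, rowvar=False)            # s_{kk'} = <O_k O_k'> - <O_k><O_k'>
d = np.sqrt(np.diag(s))
s_bar = s / np.outer(d, d)             # overlap of the normalized states

eps = 1e-3                             # hypothetical tolerance
lam, v = np.linalg.eigh(s_bar)         # ascending eigenvalues
redundant = lam[0] < eps**2            # a combination with norm below eps exists
print(redundant, lam[0])               # v[:, 0] holds the coefficients v_k
```

The eigenvector `v[:, 0]` associated with the smallest eigenvalue gives the coefficients $v_k$ of the almost-vanishing linear combination, i.e. it identifies which parameters are mutually redundant.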

In general, whenever there are $p$ eigenvectors below the tolerance, the optimal choice to stabilize the minimization procedure is to remove $p$ rows and $p$ columns from the matrix (2.18), in such a way that the determinant of the resulting overlap matrix is maximal.
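For a small number of parameters this optimal choice can be carried out exhaustively. The sketch below (hypothetical data and function names, feasible only for few parameters) searches over all sets of $p$ indices and keeps the removal that leaves the largest determinant:

```python
import itertools
import numpy as np

def best_removal(s_bar, p):
    """Exhaustively pick the p rows/columns of the overlap matrix s_bar
    whose removal leaves the submatrix with the largest determinant."""
    n = s_bar.shape[0]
    best_det, best_drop = -np.inf, None
    for drop in itertools.combinations(range(n), p):
        keep = [k for k in range(n) if k not in drop]
        det = np.linalg.det(s_bar[np.ix_(keep, keep)])
        if det > best_det:
            best_det, best_drop = det, drop
    return best_drop, best_det

# Hypothetical 3x3 overlap matrix in which directions 0 and 2 almost coincide.
s_bar = np.array([[1.0,   0.1, 0.999],
                  [0.1,   1.0, 0.1  ],
                  [0.999, 0.1, 1.0  ]])
drop, det = best_removal(s_bar, p=1)
print(drop, det)
```

Removing either of the two nearly parallel directions leaves a well-conditioned $2\times 2$ overlap, whereas removing the independent one would not; the exhaustive search finds this automatically, but its cost grows combinatorially with $n$, which motivates the iterative scheme described next.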

For practical purposes it is enough to consider an iterative scheme that finds a large minor, though not necessarily the maximum one. This method is based on the inverse of the overlap matrix $\bar{s}$ of Eq. (2.18). At each step we remove the row and column $k$ of $\bar{s}$ for which $\bar{s}^{-1}_{k,k}$ is maximum, and we stop removing rows and columns after $p$ inversions. This approach exploits the fact that, as a consequence of the Laplace theorem on determinants, $\bar{s}^{-1}_{k,k}$ is the ratio between the minor of $\bar{s}$ obtained by removing the $k$-th row and column and the determinant of the full matrix, so at each step the removal leaves the largest possible minor. Since within a stochastic method it is certainly not possible to work with a machine-precision tolerance, setting $\epsilon$ to a small but finite value guarantees a stable algorithm, without affecting the accuracy of the calculation. The advantage of this scheme, compared with the previous one [18], is that the less relevant parameters can be easily identified after a few iterations and do not change further in the process of minimization.
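The iterative scheme above can be sketched as follows (hypothetical names and data; assumes the sampled SR matrix $s$ is available as a NumPy array, and stops as soon as no eigenvalue of the normalized overlap lies below $\epsilon^2$):

```python
import numpy as np

def remove_redundant(s, eps=1e-3):
    """Sketch of the iterative stabilization: form the normalized overlap
    matrix s_bar from s, and repeatedly drop the row/column k with the
    largest diagonal element of s_bar^{-1} (i.e. the one whose removal
    leaves the largest minor), until no eigenvalue of s_bar is below
    eps**2.  Returns the indices of the parameters that are kept."""
    keep = list(range(s.shape[0]))
    while len(keep) > 1:
        sub = s[np.ix_(keep, keep)]
        d = np.sqrt(np.diag(sub))
        s_bar = sub / np.outer(d, d)        # s_bar_{kk'} = s_{kk'}/sqrt(s_kk s_k'k')
        if np.linalg.eigvalsh(s_bar)[0] >= eps**2:
            break                           # all directions independent within eps
        inv = np.linalg.inv(s_bar)
        k = int(np.argmax(np.diag(inv)))    # removal leaving the largest minor
        keep.pop(k)
    return keep

# Demo with hypothetical data: three parameters, the third nearly a copy
# of the first, so exactly one of the redundant pair should be dropped.
rng = np.random.default_rng(1)
O = rng.normal(size=(2000, 3))
O[:, 2] = O[:, 0] + 1e-4 * rng.normal(size=2000)
s = np.cov(O, rowvar=False)
keep = remove_redundant(s)
print(keep)
```

Note that each step requires one matrix inversion of the current submatrix, so identifying $p$ redundant parameters costs $p$ inversions, in contrast with the combinatorial cost of the exact maximal-minor search.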
Claudio Attaccalite 2005-11-07