Parallelization of the QMR Method for Solving Systems of Linear Equations.
The solution of discretized partial differential equations leads to large, sparse systems of linear equations. If the properties of such a system are known, methods that specifically exploit these properties can be used; for example, the conjugate gradient method is appropriate for symmetric positive definite systems. Unfortunately, these characteristics are often unknown, so more general techniques have to be applied. This report introduces two iterative methods that are applicable to arbitrary nonsingular, non-symmetric coefficient matrices. Although different, both methods share the same basic idea, called the quasi-minimal residual approach: the iterates are defined by minimizing the norm of a factor of the residual rather than the full residual norm. The method QMR (Quasi-Minimal Residual) combines the classical unsymmetric Lanczos algorithm with the quasi-minimal residual approach. Each iteration of the resulting process contains a matrix-vector product with the coefficient matrix as well as one with its transpose. TFQMR (Transpose-Free Quasi-Minimal Residual) is derived by applying the quasi-minimal residual approach to the iterative method CGS (Conjugate Gradient Squared). TFQMR and CGS both involve two matrix-vector products per iteration but, in contrast to QMR, no products with the transpose of the coefficient matrix. While the two matrix-vector products of QMR are independent, this is not the case for TFQMR and CGS. It is shown how the parallelization of QMR benefits from this independence of the matrix-vector products. The parallelization of the three algorithms is compared on a specific massively parallel computer, the Paragon XPS 10.
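The independence mentioned above can be illustrated with a minimal sketch (not the report's implementation): in each unsymmetric Lanczos step, QMR needs one product with the coefficient matrix A and one with its transpose, and since the two products share no data dependence they can be computed concurrently. The matrix, vectors, and helper functions below are hypothetical illustrations.

```python
from concurrent.futures import ThreadPoolExecutor

def matvec(A, x):
    """Dense matrix-vector product y = A x (plain Python lists)."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

def transpose(A):
    """Return A^T as a list of rows."""
    return [list(col) for col in zip(*A)]

A = [[4.0, 1.0, 0.0],
     [2.0, 5.0, 1.0],
     [0.0, 3.0, 6.0]]   # arbitrary nonsingular, non-symmetric matrix
v = [1.0, 0.0, 0.0]     # right Lanczos vector
w = [0.0, 1.0, 0.0]     # left Lanczos vector

# The two products of one QMR iteration, launched in parallel.
with ThreadPoolExecutor(max_workers=2) as pool:
    fut_av  = pool.submit(matvec, A, v)             # A v
    fut_atw = pool.submit(matvec, transpose(A), w)  # A^T v (transposed product)
    Av, ATw = fut_av.result(), fut_atw.result()

print(Av)   # [4.0, 2.0, 0.0]
print(ATw)  # [2.0, 5.0, 1.0]
```

In TFQMR and CGS, by contrast, the second matrix-vector product of an iteration uses the result of the first, so this kind of overlap is not possible.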