Using the Isoefficiency Concept for the Design of Krylov Subspace Methods.

Krylov subspace methods are currently considered to be among the most powerful iterative methods for the solution of systems of linear equations. These iterative techniques are typically brought into play when the coefficient matrices are large and sparse, that is, when direct methods such as Gaussian elimination are no longer feasible because of their excessive storage requirements. Numerical simulations of realistic problems frequently involve such sparse matrices, and problem sizes continue to grow. This is why parallel hardware architectures, with their potential for high-performance computing, enter the picture. Since Krylov subspace methods are hardly ever designed with parallelism in mind, there is a need to focus the algorithm designer's attention on parallel performance models. Here, the isoefficiency concept is used to analyze the behavior of Krylov subspace methods on parallel distributed-memory computers. In particular, it is employed to demonstrate the strong influence of inner-product-like operations on the scalability of preconditioned Krylov subspace methods. Numerical experiments with message-passing implementations of four different iterative methods on a mesh-based parallel computer support the theoretical results.
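To make the scalability argument concrete, the following is a minimal sketch (not taken from the paper) of the conjugate gradient method, a representative preconditioner-free Krylov subspace method. The comments mark the inner products that, on a distributed-memory machine, each require a global reduction across all processors, whereas the sparse matrix-vector product needs only nearest-neighbor communication. The example problem (a 1D Poisson matrix) and all names are illustrative assumptions.

```python
import numpy as np

def cg(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient for a symmetric positive definite matrix A.

    Each iteration contains one matrix-vector product and two inner
    products; in a message-passing implementation the inner products
    become global reductions (all-reduce), which is the synchronization
    cost that an isoefficiency analysis highlights.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r                  # inner product -> global reduction
    for _ in range(max_iter):
        Ap = A @ p                  # sparse matvec -> local/neighbor comm.
        alpha = rs_old / (p @ Ap)   # inner product -> global reduction
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r              # inner product -> global reduction
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Illustrative SPD system: 1D Poisson (tridiagonal) matrix.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = cg(A, b)
```

Because the reductions synchronize all processors every iteration, their cost grows with the machine size even when the local work per processor is fixed, which is precisely the effect the isoefficiency analysis quantifies.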