Volume 33, pp. 17-30, 2008-2009.
Analysis of some Krylov subspace methods for normal matrices via approximation theory and convex optimization
M. Bellalij, Y. Saad, and H. Sadok
Abstract
Krylov subspace methods are strongly related to polynomial spaces, and their convergence analysis can often be derived naturally from approximation theory. Analyses of this type lead to discrete min-max approximation problems over the spectrum of the matrix, from which upper bounds on the relative Euclidean residual norm are derived. A second approach to analyzing the convergence rate of the GMRES method or the Arnoldi iteration uses as a primary indicator the (1,1) entry of the inverse of $K_m^H K_m$, where $K_m$ is the Krylov matrix, i.e., the matrix whose column vectors are the first $m$ vectors of the Krylov sequence. This viewpoint allows us to provide, among other things, a convergence analysis for normal matrices using constrained convex optimization. The goal of this paper is to explore the relationships between these two approaches. Specifically, we show that for normal matrices, the Karush-Kuhn-Tucker (KKT) optimality conditions derived from the convex maximization problem are identical to the properties that characterize the polynomial of best approximation on a finite set of points. Therefore, these two approaches are mathematically equivalent. In developing tools to prove our main result, we will give an improved upper bound on the distance of a given eigenvector from Krylov subspaces.
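The following is a minimal numerical sketch (not from the paper) of the indicator mentioned in the abstract: for a normal matrix it compares the GMRES residual norm with the quantity obtained from the (1,1) entry of the inverse Gram matrix of a Krylov matrix. The indexing convention used here, with $m+1$ Krylov vectors matched to the residual after $m$ GMRES steps, as well as all variable names, are assumptions made for illustration.

```python
import numpy as np

# Sketch: relate the GMRES residual norm to the (1,1) entry of the inverse
# Gram matrix of the Krylov matrix, for a randomly generated normal matrix.
# Indexing convention (an assumption): K = [v, Av, ..., A^m v], residual after m steps.

rng = np.random.default_rng(0)
n, m = 12, 5

# Random normal matrix A = Q diag(lam) Q^H with unitary Q and complex eigenvalues.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
lam = rng.standard_normal(n) + 1j * rng.standard_normal(n)
A = Q @ np.diag(lam) @ Q.conj().T

# Unit starting vector (plays the role of the initial residual).
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v /= np.linalg.norm(v)

# Krylov matrix with m+1 columns: [v, Av, ..., A^m v].
K = np.empty((n, m + 1), dtype=complex)
K[:, 0] = v
for j in range(1, m + 1):
    K[:, j] = A @ K[:, j - 1]

# GMRES residual after m steps: minimize ||v - W y|| over the columns W = [Av, ..., A^m v].
W = K[:, 1:]
y, *_ = np.linalg.lstsq(W, v, rcond=None)
res_gmres = np.linalg.norm(v - W @ y)

# Indicator built from the (1,1) entry of the inverse of the Gram matrix K^H K.
g11 = np.linalg.inv(K.conj().T @ K)[0, 0].real
res_indicator = 1.0 / np.sqrt(g11)

print(res_gmres, res_indicator)  # the two values should agree up to rounding error
```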
Key words
Krylov subspaces, polynomials of best approximation, min-max problem, interpolation, convex optimization, KKT optimality conditions
AMS subject classifications
65F10, 65F15
ETNA articles which cite this article
Vol. 41 (2014), pp. 159-166, Jörg Liesen and Petr Tichý: Max-min and min-max approximation problems for normal matrices revisited