## Efficient parallel computation of the characteristic polynomial of a sparse, separable matrix

This paper is concerned with computing the characteristic polynomial of a matrix. In a large number of applications the matrices are symmetric and sparse, with O(n) non-zero entries. In this case the problem has an efficient sequential solution, requiring O(n^2) work via the sparse Lanczos method. A major remaining open question is to find a polylog-time parallel algorithm with matching work bounds; unfortunately, the sparse Lanczos method cannot be parallelized to run faster than Ω(n) time using n processors. Let M(n) be the processor bound for multiplying two n × n matrices in O(log n) parallel time. Giesbrecht [G2] gave the best previous polylog-time parallel algorithms for the characteristic polynomial of a dense matrix, using O(M(n)) processors; no improvement to this processor bound is known for the case where the matrix is sparse. Often, in addition to being symmetric and sparse, the matrix has a sparsity graph (with an edge between each pair of indices holding a non-zero entry) that has small separators. This paper gives a new algorithm for computing the characteristic polynomial of a sparse symmetric matrix, assuming that the sparsity graph is s(n)-separable: it has a separator of size s(n) = O(n^γ), for some 0 < γ < 1, whose deletion results in connected components of at most αn vertices, for some 0 < α < 1, each with the same property. We derive an interesting algebraic version of Nested Dissection, which constructs a sparse factorization of the matrix A − λIₙ, where A is the input matrix and Iₙ is the n × n identity matrix. While Nested Dissection is commonly used to minimize fill-in in the solution of sparse linear systems, our innovation is to use the separator structure also to bound the work for manipulating rational functions in the recursively factored matrices. The matrix elements are assumed to lie in an arbitrary field.
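The separator property assumed above can be made concrete with a small, hypothetical example (this is an illustration of the definition, not part of the paper's algorithm): a k × k grid graph is planar with n = k² vertices, and its middle column is a separator of size k = √n whose removal leaves connected components of at most n/2 vertices each, matching the s(n) = O(√n) bound for planar sparsity graphs. A minimal sketch, using only the standard library:

```python
# Illustration of an s(n)-separator on a k x k grid graph (n = k^2 vertices).
# Removing the middle column (k = sqrt(n) vertices) disconnects the grid
# into components of at most n/2 vertices each.
from collections import deque

def grid_components_after_separator(k):
    """Remove the middle column of a k x k grid; return component sizes."""
    mid = k // 2
    alive = {(r, c) for r in range(k) for c in range(k) if c != mid}
    sizes = []
    while alive:
        start = next(iter(alive))
        alive.remove(start)
        size, queue = 1, deque([start])
        while queue:  # BFS over the 4-neighbor grid adjacency
            r, c = queue.popleft()
            for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if nb in alive:
                    alive.remove(nb)
                    queue.append(nb)
                    size += 1
        sizes.append(size)
    return sorted(sizes)
```

For k = 6 (n = 36), the separator has 6 = √36 vertices and the two remaining components have 12 and 18 ≤ n/2 vertices; the halves are themselves grids, so the property recurses as the definition requires.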
We compute the characteristic polynomial of a sparse symmetric matrix in polylog time using P(n)(n + M(s(n))) ≤ P(n)(n + s(n)^2.376) processors, where P(n) is the processor bound for multiplying two degree-n polynomials in O(log n) parallel time on a PRAM (P(n) = O(n) if the field supports an FFT of size n, and O(n log log n) otherwise [CK]). Our method requires only that the matrix be symmetric and non-singular; it need not be positive definite, as is usual for Nested Dissection techniques. For the frequently occurring case where the matrix has small separator size, our polylog-time parallel algorithm has work bounds competitive with the best known sequential algorithms (i.e., the Ω(n^2) work of sparse Lanczos methods): for example, when the sparsity graph is planar, s(n) = O(√n), and we require polylog time with only P(n) n^1.188 processors.
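To make the object being computed concrete, here is a minimal numerical sketch (this is not the paper's parallel algorithm, and it works over floating-point numbers rather than an arbitrary field): `numpy.poly`, applied to a square matrix, returns the coefficients of its characteristic polynomial det(λI − A), highest degree first, computed from the eigenvalues.

```python
# Characteristic polynomial of a small sparse symmetric matrix, numerically.
# numpy.poly on a square array returns det(lambda*I - A) as a coefficient
# vector, highest-degree term first.
import numpy as np

# Tridiagonal (hence sparse) symmetric example.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

coeffs = np.poly(A)  # coefficients of lambda^3 - 6*lambda^2 + 10*lambda - 4
```

For this matrix the characteristic polynomial factors as (λ − 2)((λ − 2)² − 2), i.e., λ³ − 6λ² + 10λ − 4, so `coeffs` is approximately `[1, -6, 10, -4]`.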



## Related Subject Headings

- Computation Theory & Mathematics

### Citation

*Algorithmica (New York)* 29, no. 3 (January 1, 2001): 487–510. https://doi.org/10.1007/s004530010068
