
Fast and efficient parallel linear programming and linear least squares computations

Publication, Conference
Pan, V; Reif, J
Published in: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
January 1, 1986

We present a new parallel algorithm for computing a least squares solution to a sparse overdetermined system of linear equations Ax = b, where the m×n matrix A is sparse and the graph, G = (V, E), of the associated matrix has an s(m+n)-separator family; that is, either |V| < n0 for a fixed constant n0 or, by deleting a separator subset S of vertices of size ≤ s(m+n), G can be partitioned into two disconnected subgraphs with vertex sets V1, V2 of sizes ≤ (2/3)(m+n), and each of the two resulting subgraphs induced by the vertex sets S ∪ Vi, i = 1, 2, can be recursively s(|S ∪ Vi|)-separated in a similar way. Our algorithm uses O(log(m+n) log^2 s(m+n)) steps and ≤ s^3(m+n) processors; it relies on our recent parallel algorithm for solving sparse linear systems and has several immediate applications of interest, in particular to mathematical programming, to sparse nonsymmetric systems of linear equations, and to path algebra computations.

We examine most closely the impact on the linear programming problem (l.p.p.), which requires maximizing c^T y subject to A^T y ≤ b, y ≥ 0, where A is an m×n matrix; hereafter it is assumed that m ≥ n. The recent algorithm by N. Karmarkar gives the best known upper estimate (O(m^3.5 L) arithmetic operations, where L is the input size) for the worst-case cost of solving this problem. We prove an asymptotic improvement of that result in the case where the graph of the associated matrix H has an s(m+n)-separator family; our algorithm can then be implemented using O(m L log m log^2 s(m+n)) parallel arithmetic steps, s^3(m+n) processors, and a total of O(m L s^3(m+n) log m log^2 s(m+n)) arithmetic operations. In many cases of practical importance this is a considerable improvement on the known estimates: for example, s(m+n) = √(8(m+n)) if G is planar (as occurs in many operations research applications, for instance in the problem of computing the maximum multicommodity flow with a bounded number of commodities in a network having an s(m+n)-separator family), so that the processor bound is only 8√8 (m+n)^1.5 and the total number of arithmetic steps is O(m^2.5 L) in that case.

Similarly, Karmarkar's algorithm and the known algorithms for the solution of overdetermined linear systems are accelerated in the case of dense input matrices via our recent parallel algorithms for the inversion of dense k×k matrices using O(log^2 k) steps and k^3 processors. Combined with a modification of Karmarkar's algorithm, this implies a solution of the l.p.p. using O(L m log^2 m) steps and m^2.5 processors. The stated results promise some important practical applications. Theoretically, the above processor bounds can be reduced to O(k^2.5) for dense matrix inversion and, for the l.p.p., to O(m^2.165) in the dense case and O(s^2.5(m+n)) in the sparse case (preserving the same number of parallel steps); this also decreases the sequential time bound for the l.p.p. by a factor of m^0.335, that is, to O(L m^3.165).
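The recursive definition of an s(m+n)-separator family in the abstract can be made concrete with a short sketch. The Python fragment below is not from the paper; the names separator_tree, find_separator, and the cutoff n0 are illustrative assumptions, and find_separator stands in for whatever separator routine (e.g., a planar separator algorithm) supplies the ≤ s(m+n)-sized cut with both sides of size ≤ (2/3)(m+n). It only illustrates the recursive decomposition the definition describes; the paper's parallel least squares and l.p.p. algorithms are built on top of such a decomposition, not reproduced here.

    def separator_tree(vertices, adj, find_separator, n0=8):
        """Recursively decompose a graph with an s(n)-separator family (illustrative sketch).

        vertices: set of vertex ids; adj: dict mapping each vertex to a set of neighbours.
        find_separator: user-supplied routine returning a separator S whose removal splits
        the induced subgraph into pieces, each of size <= (2/3)*len(vertices), as assumed
        in the abstract. Returns a nested-dissection-style tree as nested tuples.
        """
        if len(vertices) <= n0:                       # base case: small blocks handled directly
            return ("leaf", frozenset(vertices))
        S = set(find_separator(vertices, adj))        # |S| <= s(len(vertices)) by assumption
        remaining = set(vertices) - S
        # Split the remaining vertices into connected components, then group the
        # components greedily into two sides V1, V2 (the separator property keeps both small).
        comps = _components(remaining, adj)
        V1, V2 = set(), set()
        for c in sorted(comps, key=len, reverse=True):
            (V1 if len(V1) <= len(V2) else V2).update(c)
        # Recurse on S ∪ V1 and S ∪ V2, mirroring the abstract's recursive definition.
        return ("node", frozenset(S),
                separator_tree(S | V1, adj, find_separator, n0),
                separator_tree(S | V2, adj, find_separator, n0))

    def _components(vertices, adj):
        """Connected components of the subgraph induced on `vertices` (iterative DFS)."""
        seen, comps = set(), []
        for v in vertices:
            if v in seen:
                continue
            stack, comp = [v], set()
            while stack:
                u = stack.pop()
                if u in comp:
                    continue
                comp.add(u)
                stack.extend(w for w in adj[u] if w in vertices and w not in comp)
            seen |= comp
            comps.append(comp)
        return comps

A call might look like separator_tree(set(adj), adj, my_planar_separator), where my_planar_separator is a hypothetical routine implementing, e.g., a planar separator theorem; that is the case yielding the s(m+n) = √(8(m+n)) bound cited in the abstract.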


Published In

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

DOI

10.1007/3-540-16766-8_26

EISSN

1611-3349

ISSN

0302-9743

ISBN

9783540167662

Publication Date

January 1, 1986

Volume

227 LNCS

Start / End Page

283 / 295

Related Subject Headings

  • Artificial Intelligence & Image Processing
  • 46 Information and computing sciences
 

Citation

APA: Pan, V., & Reif, J. (1986). Fast and efficient parallel linear programming and linear least squares computations. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 227 LNCS, pp. 283–295). https://doi.org/10.1007/3-540-16766-8_26
Chicago: Pan, V., and J. Reif. “Fast and efficient parallel linear programming and linear least squares computations.” In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 227 LNCS:283–95, 1986. https://doi.org/10.1007/3-540-16766-8_26.
ICMJE: Pan V, Reif J. Fast and efficient parallel linear programming and linear least squares computations. In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 1986. p. 283–95.
MLA: Pan, V., and J. Reif. “Fast and efficient parallel linear programming and linear least squares computations.” Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 227 LNCS, 1986, pp. 283–95. Scopus, doi:10.1007/3-540-16766-8_26.
NLM: Pan V, Reif J. Fast and efficient parallel linear programming and linear least squares computations. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 1986. p. 283–295.