Parallel processing can be harmful: The unusual behavior of interpolation search
Several articles have noted the usefulness of a retrieval algorithm called sequential interpolation search, and Yao and Yao proved a lower bound of log log N - O(1), showing that this algorithm is optimal up to an additive constant on unindexed files of size N generated by the uniform probability distribution. We generalize the latter result, showing that log log N - log log P - O(1) lower-bounds the complexity of any retrieval algorithm that uses P parallel processors to search an unindexed file of size N. This result is surprising because we also exhibit a matching upper bound (up to an additive constant) achieved by a procedure that uses no parallel processing until its final iteration, at which point it turns on all P processors at once. Our first theorem therefore states that parallel processing before the literally last iteration of a search over an unindexed ordered file is of almost no use. Two further surprising facts are that this result holds even when communication between the parallel processing units incurs no delay, and that the parallel algorithms become inherently slower than their sequential counterparts when each invocation of the SIMD machine entails a communication step with any nonzero delay. The presentation in the first two sections of this paper is quite informal, so that the reader can quickly grasp the underlying intuition.
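The abstract gives no code, so as a reference point the following is a minimal sketch of the classical sequential interpolation-search loop it refers to, assuming a sorted list of integer keys (as one would expect from a uniformly generated file); the function name and the integer-key assumption are illustrative, not taken from the paper. On uniformly distributed keys this probe rule takes an expected O(log log N) comparisons, which is the quantity the Yao-Yao lower bound addresses.

    def interpolation_search(keys, target):
        """Sequential interpolation search on a sorted list of integers.

        On keys drawn uniformly at random, the expected number of probes
        is O(log log N), matching the Yao-Yao lower bound up to an
        additive constant.
        """
        lo, hi = 0, len(keys) - 1
        while lo <= hi and keys[lo] <= target <= keys[hi]:
            if keys[hi] == keys[lo]:
                # Degenerate interval: all remaining keys are equal.
                return lo if keys[lo] == target else -1
            # Probe where the target "should" sit if the keys were
            # spread uniformly over the current interval.
            pos = lo + (hi - lo) * (target - keys[lo]) // (keys[hi] - keys[lo])
            if keys[pos] == target:
                return pos
            elif keys[pos] < target:
                lo = pos + 1
            else:
                hi = pos - 1
        return -1

    if __name__ == "__main__":
        import random
        # A uniformly generated "file" of 10,000 distinct keys.
        data = sorted(random.sample(range(1_000_000), 10_000))
        probe = data[1234]
        assert interpolation_search(data, probe) == 1234

The parallel variant discussed in the paper would, in effect, replace the single probe per iteration with P simultaneous probes; the paper's result is that doing so before the last iteration improves the bound only from log log N to log log N - log log P.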