Parallel processing can be harmful: The unusual behavior of interpolation search


Journal Article

Several articles have noted the usefulness of a retrieval algorithm called sequential interpolation search, and Yao and Yao have proven a lower bound of log log N - O(1), showing that this algorithm is optimal up to an additive constant on unindexed files of size N generated by the uniform probability distribution. We generalize the latter result to show that log log N - log log P - O(1) lower-bounds the complexity of any retrieval algorithm with P parallel processors searching an unindexed file of size N. This result is surprising because we also show how to obtain a matching upper bound, up to an additive constant, with a procedure that uses no parallel processing until its last iteration (at which point it turns on P processors in parallel). Our first theorem therefore states that parallel processing before the literally last iteration in the search of an unindexed ordered file has almost no usefulness. Two further surprising facts are that the preceding result holds even when communication between the parallel processing units involves no delay, and that the parallel algorithms are actually inherently slower than their sequential counterparts when each invocation of the SIMD machine involves a communication step with any nonzero delay. The presentation in the first two sections of this paper is quite informal, so that the reader can quickly grasp the underlying intuition. © 1989.
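For readers unfamiliar with the sequential algorithm the abstract refers to, the following is a minimal sketch of interpolation search on a sorted array; it is an illustrative implementation of the standard technique, not the paper's exact procedure. On keys drawn from the uniform distribution it makes O(log log N) probes in expectation, matching the Yao and Yao bound cited above.

```python
def interpolation_search(arr, target):
    """Search a sorted list for target; return its index or -1.

    Each probe position is estimated by linearly interpolating the
    target key between the keys at the current endpoints, rather than
    always bisecting as binary search does.
    """
    lo, hi = 0, len(arr) - 1
    while lo <= hi and arr[lo] <= target <= arr[hi]:
        if arr[hi] == arr[lo]:
            pos = lo  # all keys in range are equal; avoid divide-by-zero
        else:
            # Linear interpolation of target's expected position.
            pos = lo + (target - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
        if arr[pos] == target:
            return pos
        if arr[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1
```

For example, on `list(range(0, 100, 3))` the search for 27 probes index 9 directly on its first interpolation step; binary search would need several bisections to reach it.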

Cited Authors

  • Willard, DE; Reif, JH

Published Date

  • January 1, 1989

Published In

Volume / Issue

  • 81 / 3

Start / End Page

  • 364 - 379

Electronic International Standard Serial Number (EISSN)

  • 1090-2651

International Standard Serial Number (ISSN)

  • 0890-5401

Digital Object Identifier (DOI)

  • 10.1016/0890-5401(89)90038-2

Citation Source

  • Scopus