Tolerating Memory Latency through Push Prefetching for Pointer-Intensive Applications

Published

Journal Article

Prefetching is often used to overlap memory latency with computation for array-based applications. However, prefetching for pointer-intensive applications remains a challenge because of their irregular memory access patterns and the pointer-chasing problem. In this paper, we propose a cooperative hardware/software prefetching framework, the push architecture, designed specifically for linked data structures (LDS). The push architecture exploits program structure to generate future addresses instead of relying on past address history. It identifies the load instructions that traverse an LDS and uses a prefetch engine to execute them ahead of the CPU, allowing the prefetch engine to generate future addresses successfully. To overcome the serial nature of LDS address generation, the push architecture employs a novel data movement model: it attaches a prefetch engine to each level of the memory hierarchy and pushes, rather than pulls, data to the CPU. This push model decouples the pointer dereference from the transfer of the current node up to the processor, so a series of pointer dereferences becomes a pipelined rather than a serial process. Simulation results show that the push architecture can eliminate up to 100% of memory stall time on a suite of pointer-intensive applications, reducing overall execution time by an average of 15%. © 2004, ACM. All rights reserved.
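For context, the pointer-chasing problem the abstract refers to can be illustrated with a minimal C sketch (a hypothetical linked-list traversal, not code from the paper): each load of `n->next` depends on the value returned by the previous load, so future addresses cannot be computed in advance the way array indices can, which is what motivates executing the traversal loads on a prefetch engine and pushing nodes toward the CPU.

```c
#include <stddef.h>

/* Hypothetical LDS node; the traversal below is the kind of
 * pointer-chasing access the push architecture targets. */
struct node {
    int          key;
    struct node *next;
};

/* Serial pointer chasing: the address of each node is known only after
 * the previous node's 'next' field has been loaded, so a conventional
 * pull-based prefetcher cannot run ahead of the traversal. */
long sum_list(const struct node *head)
{
    long sum = 0;
    for (const struct node *n = head; n != NULL; n = n->next)
        sum += n->key;
    return sum;
}
```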

Published Date

  • January 1, 2004

Published In

  • ACM Transactions on Architecture and Code Optimization (TACO)

Volume / Issue

  • 1 / 4

Start / End Page

  • 445 - 475

Electronic International Standard Serial Number (EISSN)

  • 1544-3973

International Standard Serial Number (ISSN)

  • 1544-3566

Digital Object Identifier (DOI)

  • 10.1145/1044823.1044827

Citation Source

  • Scopus