Rethinking FTP: Aggressive block reordering for large file transfers

Published

Journal Article

Whole-file transfer is a basic primitive for Internet content dissemination. Content servers are increasingly limited by disk arm movement, given the rapid growth in disk density, disk transfer rates, server network bandwidth, and content size. Individual file transfers are sequential, but the block access sequence on a content server is effectively random when many slow clients access large files concurrently. Although larger blocks can help improve disk throughput, buffering requirements increase linearly with block size. This article explores a novel block reordering technique that can reduce server disk traffic significantly when large content files are shared. The idea is to transfer blocks to each client in any order that is convenient for the server. The server sends blocks to each client opportunistically in order to maximize the advantage from the disk reads it issues to serve other clients accessing the same file. We first illustrate the motivation and potential impact of aggressive block reordering using simple analytical models. Then we describe a file transfer system using a simple block reordering algorithm, called Circus. Experimental results with the Circus prototype show that it can improve server throughput by a factor of two or more in workloads with strong file access locality. © 2009 ACM.

Cited Authors

  • Anastasiadis, SV; Wickremesinghe, RG; Chase, JS

Published Date

  • January 1, 2009

Published In

  • ACM Transactions on Storage

Volume / Issue

  • 4 / 4

Electronic International Standard Serial Number (EISSN)

  • 1553-3093

International Standard Serial Number (ISSN)

  • 1553-3077

Digital Object Identifier (DOI)

  • 10.1145/1480439.1480442

Citation Source

  • Scopus