
ROBUS: Fair cache allocation for data-parallel workloads

Publication, Conference
Kunjir, M; Fain, B; Munagala, K; Babu, S
Published in: Proceedings of the ACM SIGMOD International Conference on Management of Data
May 9, 2017

Systems for processing big data, e.g., Hadoop, Spark, and massively parallel databases, need to run workloads on behalf of multiple tenants simultaneously. The abundant disk-based storage in these systems is usually complemented by a smaller, but much faster, cache. Cache is a precious resource: tenants who get to use the cache can see a two-orders-of-magnitude performance improvement. Cache is also a limited and hence shared resource: unlike a resource such as a CPU core, which can be used by only one tenant at a time, a cached data item can be accessed by multiple tenants at the same time. Cache, therefore, has to be shared by a multi-tenancy-aware policy across tenants, each having a unique set of priorities and workload characteristics. In this paper, we develop cache allocation strategies that speed up the overall workload while being fair to each tenant. We build a novel fairness model targeted at the shared-resource setting that incorporates not only the standard concepts of Pareto efficiency and sharing incentive, but also defines envy freeness via the notion of the core from cooperative game theory. Our cache management platform, ROBUS, uses randomization over small time batches, and we develop a proportionally fair allocation mechanism that satisfies the core property in expectation. We show that this algorithm and related fair algorithms can be approximated to arbitrary precision in polynomial time. We evaluate these algorithms on a ROBUS prototype implemented on Spark, with the RDD store used as cache. Our evaluation on an industry-standard workload shows that our algorithms score high on both performance and fairness metrics across a wide variety of practical multi-tenant setups.
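
The proportionally fair mechanism described in the abstract can be made concrete with a small sketch. The code below is an illustration, not the authors' ROBUS implementation: it treats a batch as a lottery over candidate cache configurations and maximizes the Nash welfare (the sum of the logarithms of tenants' expected utilities), the standard optimization underlying proportional fairness. The function name, the Frank-Wolfe solver, and the toy utility matrix are all assumptions made for this example.

```python
# Minimal sketch (NOT the paper's ROBUS implementation) of a proportionally
# fair lottery over candidate cache configurations. Tenant i's expected
# utility under lottery p is (U p)_i; we maximize sum_i log((U p)_i) with
# Frank-Wolfe so that every iterate stays a valid probability distribution.
import random

def proportional_fair_lottery(utilities, iters=2000):
    """utilities[i][c] = (positive) utility tenant i gets if config c is cached.
    Returns a distribution over configs maximizing the Nash welfare."""
    n_tenants = len(utilities)
    n_configs = len(utilities[0])
    p = [1.0 / n_configs] * n_configs  # start from the uniform lottery
    for t in range(1, iters + 1):
        # expected utility of each tenant under the current lottery
        exp_u = [sum(utilities[i][c] * p[c] for c in range(n_configs))
                 for i in range(n_tenants)]
        # gradient of sum_i log(exp_u[i]) with respect to each p_c
        grad = [sum(utilities[i][c] / exp_u[i] for i in range(n_tenants))
                for c in range(n_configs)]
        best = max(range(n_configs), key=lambda c: grad[c])  # Frank-Wolfe vertex
        gamma = 2.0 / (t + 2)  # standard diminishing step size
        p = [(1 - gamma) * q for q in p]
        p[best] += gamma
    return p

# Toy example: 3 tenants, 3 candidate cache configurations (made-up numbers).
U = [[9.0, 1.0, 1.0],   # tenant 0 strongly prefers configuration 0
     [1.0, 9.0, 1.0],   # tenant 1 prefers configuration 1
     [1.0, 1.0, 9.0]]   # tenant 2 prefers configuration 2
p = proportional_fair_lottery(U)
print([round(x, 3) for x in p])  # symmetric case -> roughly uniform lottery

# Randomization per batch, in the spirit of ROBUS: sample one configuration
# from the lottery each batch, so fairness guarantees hold in expectation.
config = random.choices(range(len(p)), weights=p)[0]
```

In the paper's model it is this randomization over small time batches that lets the core property hold in expectation; the sampling step above only mimics that idea at toy scale.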

Published In

Proceedings of the ACM SIGMOD International Conference on Management of Data

DOI

10.1145/3035918.3064018

ISSN

0730-8078

ISBN

9781450341974

Publication Date

May 9, 2017

Volume

Part F127746

Start / End Page

219 / 234

Citation

APA
Kunjir, M., Fain, B., Munagala, K., & Babu, S. (2017). ROBUS: Fair cache allocation for data-parallel workloads. In Proceedings of the ACM SIGMOD International Conference on Management of Data (Vol. Part F127746, pp. 219–234). https://doi.org/10.1145/3035918.3064018

Chicago
Kunjir, M., B. Fain, K. Munagala, and S. Babu. “ROBUS: Fair cache allocation for data-parallel workloads.” In Proceedings of the ACM SIGMOD International Conference on Management of Data, Part F127746:219–34, 2017. https://doi.org/10.1145/3035918.3064018.

ICMJE
Kunjir M, Fain B, Munagala K, Babu S. ROBUS: Fair cache allocation for data-parallel workloads. In: Proceedings of the ACM SIGMOD International Conference on Management of Data. 2017. p. 219–34.

MLA
Kunjir, M., et al. “ROBUS: Fair cache allocation for data-parallel workloads.” Proceedings of the ACM SIGMOD International Conference on Management of Data, vol. Part F127746, 2017, pp. 219–34. Scopus, doi:10.1145/3035918.3064018.

NLM
Kunjir M, Fain B, Munagala K, Babu S. ROBUS: Fair cache allocation for data-parallel workloads. Proceedings of the ACM SIGMOD International Conference on Management of Data. 2017. p. 219–234.
