TY - GEN
T1 - Competitive Algorithms for Block-Aware Caching
AU - Coester, Christian
AU - Levin, Roie
AU - Naor, Joseph
AU - Talmon, Ohad
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/7/11
Y1 - 2022/7/11
N2 - Motivated by the design of real system storage hierarchies, we study the block-aware caching problem, a generalization of classic caching in which fetching (or evicting) pages from the same block incurs the same cost as fetching (or evicting) just one page from the block. Given a cache of size k, and a sequence of requests from n pages partitioned into given blocks of size β ≤ k, the goal is to minimize the total cost of fetching pages into (or evicting them from) the cache. This problem captures generalized caching as a special case, which is already NP-hard offline. We show the following suite of results: • For the eviction cost model, we show an O(log k)-approximate offline algorithm, a k-competitive deterministic online algorithm, and an O(log² k)-competitive randomized online algorithm. • For the fetching cost model, we show an integrality gap of Ω(β) for the natural LP relaxation of the problem, and an Ω(β + log k) lower bound for randomized online algorithms. The strategy of ignoring the block structure and running a classical paging algorithm trivially achieves an O(β) approximation and an O(β log k) competitive ratio for the offline and randomized online settings, respectively. • For both the fetching and eviction models, we show improved bounds for the (h, k)-bicriteria version of the problem. In particular, when k ≥ 2h, we match the performance of classical caching algorithms up to constant factors. Our results establish a strong separation between the tractability of the fetching and eviction cost models, which is interesting since fetching and eviction costs are the same up to an additive term for the classic caching problem. Previous work of Beckmann et al. (SPAA 21) only studied online deterministic algorithms for the fetching cost model when k > h. Our insight is to relax the block-aware caching problem to a submodular covering linear program.
The main technical challenge is to maintain a competitive fractional solution to this LP, and to round it with bounded loss, as the constraints of this LP are revealed online. We hope that this framework is useful going forward for other problems that can be captured as submodular cover.
AB - Motivated by the design of real system storage hierarchies, we study the block-aware caching problem, a generalization of classic caching in which fetching (or evicting) pages from the same block incurs the same cost as fetching (or evicting) just one page from the block. Given a cache of size k, and a sequence of requests from n pages partitioned into given blocks of size β ≤ k, the goal is to minimize the total cost of fetching pages into (or evicting them from) the cache. This problem captures generalized caching as a special case, which is already NP-hard offline. We show the following suite of results: • For the eviction cost model, we show an O(log k)-approximate offline algorithm, a k-competitive deterministic online algorithm, and an O(log² k)-competitive randomized online algorithm. • For the fetching cost model, we show an integrality gap of Ω(β) for the natural LP relaxation of the problem, and an Ω(β + log k) lower bound for randomized online algorithms. The strategy of ignoring the block structure and running a classical paging algorithm trivially achieves an O(β) approximation and an O(β log k) competitive ratio for the offline and randomized online settings, respectively. • For both the fetching and eviction models, we show improved bounds for the (h, k)-bicriteria version of the problem. In particular, when k ≥ 2h, we match the performance of classical caching algorithms up to constant factors. Our results establish a strong separation between the tractability of the fetching and eviction cost models, which is interesting since fetching and eviction costs are the same up to an additive term for the classic caching problem. Previous work of Beckmann et al. (SPAA 21) only studied online deterministic algorithms for the fetching cost model when k > h. Our insight is to relax the block-aware caching problem to a submodular covering linear program.
The main technical challenge is to maintain a competitive fractional solution to this LP, and to round it with bounded loss, as the constraints of this LP are revealed online. We hope that this framework is useful going forward for other problems that can be captured as submodular cover.
KW - caching
KW - competitive analysis
KW - online algorithms
UR - http://www.scopus.com/inward/record.url?scp=85134422641&partnerID=8YFLogxK
U2 - 10.1145/3490148.3538567
DO - 10.1145/3490148.3538567
M3 - Conference contribution
AN - SCOPUS:85134422641
T3 - Annual ACM Symposium on Parallelism in Algorithms and Architectures
SP - 161
EP - 172
BT - SPAA 2022 - Proceedings of the 34th ACM Symposium on Parallelism in Algorithms and Architectures
PB - Association for Computing Machinery
T2 - 34th ACM Symposium on Parallelism in Algorithms and Architectures, SPAA 2022
Y2 - 11 July 2022 through 14 July 2022
ER -