Refreshment policies for web content caches

E. Cohen*, H. Kaplan

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review


Web content caches are often placed between end-users and origin servers as a means to reduce server load, network usage, and ultimately, user-perceived latency. Cached objects typically have associated expiration times, after which they are considered stale and must be validated with a remote server (origin or another cache) before they can be sent to a client. A considerable fraction of cache hits involve stale copies that turn out to be current. These validations of current objects have small message size, but nonetheless often induce latency comparable to full-fledged cache misses. Thus, the effectiveness of caches as a latency-reducing mechanism depends highly not only on content availability but also on its freshness. We propose policies for caches to proactively validate selected objects as they become stale, thus allowing more client requests to be processed locally. Our policies operate within the existing protocols and exploit natural properties of request patterns such as frequency and recency. We evaluated and compared different policies using trace-based simulations.
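The abstract describes selecting which stale objects to proactively revalidate using request frequency and recency. The paper does not specify an implementation, but the idea can be sketched roughly as follows; the class name, the scoring formula, and the `threshold` parameter are illustrative assumptions, not the authors' actual policy.

```python
import time
from collections import defaultdict

class RefreshingCache:
    """Toy sketch of a proactive refreshment policy (hypothetical).

    Objects whose score -- request frequency weighted by recency --
    exceeds `threshold` are revalidated as soon as they go stale,
    so later client hits can be served locally without a validation
    round-trip to the origin server.
    """

    def __init__(self, threshold=1.0, now=time.time):
        self.now = now                 # injectable clock, eases testing
        self.threshold = threshold
        self.freq = defaultdict(int)   # request count per object key
        self.last = {}                 # last request time per object key
        self.refreshed = []            # keys chosen for proactive refresh

    def record_request(self, key):
        """Update per-object statistics on every client request."""
        self.freq[key] += 1
        self.last[key] = self.now()

    def score(self, key):
        # Frequency damped by time since last access: objects that are
        # both popular and recently requested score highest.
        age = self.now() - self.last.get(key, 0.0)
        return self.freq[key] / (1.0 + age)

    def on_expire(self, key):
        """Called when `key`'s TTL elapses; refresh only promising objects."""
        if self.score(key) >= self.threshold:
            # Stand-in for a conditional GET (e.g. If-Modified-Since)
            # issued before the next client request arrives.
            self.refreshed.append(key)
            return True
        return False
```

A frequently requested object is refreshed on expiry while a cold one is left stale, which is the trade-off the paper's policies navigate: refresh bandwidth spent only where a future local hit is likely.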

Original language: English
Pages (from-to): 1398-1406
Number of pages: 9
Journal: Proceedings - IEEE INFOCOM
State: Published - 2001
Externally published: Yes
Event: 20th Annual Joint Conference of the IEEE Computer and Communications Societies - Anchorage, AK, United States
Duration: 24 Apr 2001 - 26 Apr 2001
