Abstract
Web content caches are often placed between end-users and origin servers as a means of reducing server load, network usage, and ultimately, user-perceived latency. Cached objects typically have associated expiration times, after which they are considered stale and must be validated with a remote server (the origin or another cache) before they can be sent to a client. A considerable fraction of cache hits involve stale copies that turn out to be current. These validations of current objects involve small messages, but nonetheless often induce latency comparable to that of full-fledged cache misses. Thus, the effectiveness of caches as a latency-reducing mechanism depends heavily not only on content availability but also on its freshness. We propose policies for caches to proactively validate selected objects as they become stale, thus allowing more client requests to be processed locally. Our policies operate within the existing protocols and exploit natural properties of request patterns such as frequency and recency. We evaluated and compared the different policies using trace-based simulations.
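The abstract does not spell out the individual policies; as a rough illustration only, the sketch below shows where a recency- and frequency-based refreshment rule could sit inside a cache. The class name `RefreshingCache`, the `validate_fn` callback (standing in for a conditional GET to the origin), and the `recency_window` / `min_hits` thresholds are all hypothetical and are not taken from the paper.

```python
# Illustrative sketch, not the authors' algorithm: a cache that proactively
# revalidates stale objects selected by simple recency/frequency signals.
import time
from dataclasses import dataclass


@dataclass
class Entry:
    value: bytes
    expires_at: float          # end of freshness lifetime; object is stale after this
    last_access: float = 0.0   # recency signal
    hits: int = 0              # frequency signal


class RefreshingCache:
    def __init__(self, validate_fn, recency_window=300.0, min_hits=3):
        # validate_fn(key) -> (still_current: bool, new_expires_at: float)
        # models a conditional request (If-Modified-Since / If-None-Match).
        # recency_window and min_hits are assumed tuning knobs for illustration.
        self._store = {}
        self._validate = validate_fn
        self._recency_window = recency_window
        self._min_hits = min_hits

    def put(self, key, value, expires_at):
        self._store[key] = Entry(value, expires_at)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        e = self._store.get(key)
        if e is None:
            return None  # miss: caller fetches from the origin and calls put()
        e.hits += 1
        e.last_access = now
        if now < e.expires_at:
            return e.value  # fresh hit, served locally
        # Stale hit: without proactive refreshment this forces a synchronous
        # validation, whose latency is often comparable to a full miss.
        current, new_exp = self._validate(key)
        if current:
            e.expires_at = new_exp
            return e.value
        del self._store[key]
        return None

    def refresh_stale(self, now=None):
        """Proactively validate selected stale entries so that later client
        requests can be answered locally.  The selection rule here (recently
        and frequently requested objects) is only a plausible example of
        exploiting request-pattern properties."""
        now = time.time() if now is None else now
        for key, e in self._store.items():
            stale = now >= e.expires_at
            recent = now - e.last_access <= self._recency_window
            frequent = e.hits >= self._min_hits
            if stale and recent and frequent:
                current, new_exp = self._validate(key)
                if current:
                    e.expires_at = new_exp
```

A background task would call `refresh_stale()` periodically; the trade-off is extra validation traffic against the latency saved when a later request finds the object already revalidated.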
Original language | English |
---|---|
Pages (from-to) | 1398-1406 |
Number of pages | 9 |
Journal | Proceedings - IEEE INFOCOM |
Volume | 3 |
State | Published - 2001 |
Externally published | Yes |
Event | 20th Annual Joint Conference of the IEEE Computer and Communications Societies - Anchorage, AK, United States, 24 Apr 2001 → 26 Apr 2001 |