I have given the idea I got in Manchester about peering in a cluster some additional thought, and even began an implementation, but I have now come to the conclusion that a similar goal (and even design) can be achieved without changing the peering model a single bit.

For a large cache there are mainly two resource limits that are a problem:

* File descriptors
* Disk I/O bandwidth

These two factors set an upper limit on how many clients a single cache can handle, especially the file descriptor limit.

Fault tolerance is also an issue. With one single large cache, you have a single point of failure for the service to all clients.

Another issue is the upgrade path. Ideally the cache should be upgradeable in incremental steps as demand increases.

To overcome this, one can deploy multiple caches.

*** One level structure ***

Multiple caches, possibly peering with each other in a sibling relationship. Clients are distributed over the caches using some kind of load balancing.

Drawbacks:

1) Hard to peer with, as your cache is distributed over a number of servers. High administrative overhead to maintain the peerings (both local and remote).

2) If the number of cache nodes grows large, the peering itself may become a performance problem.

Solutions:

1) Change the cache_peer directive to mean "every cache that is registered under this name", combined with periodic rechecks of the name to discover changes. It is easier to maintain a single DNS entry than multiple cache configurations in possibly different administrative domains.

2) Cache digests help to move the limit quite a bit.

*** Two level structure ***

Multiple caches: smaller "frontends" that take the requests from the clients, and larger "backends". The backends are configured as parents of the frontends. Each frontend has a small cache for hot objects.
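The two-level structure maps onto squid.conf roughly as follows for one of the frontends; the host names and sizes here are made up for illustration.

```
# Frontend: small cache for hot objects only.
cache_mem 64 MB
cache_dir ufs /var/spool/squid 512 16 256

# All backends are parents; misses are spread over them.
cache_peer backend1.example.com parent 3128 3130 round-robin
cache_peer backend2.example.com parent 3128 3130 round-robin

# Never fetch directly from origin servers; always go via a backend.
never_direct allow all
```

The backends are ordinary large caches and need no special configuration for this; they simply see the frontends as clients.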
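The cache_peer change proposed under the one-level structure above can be sketched outside Squid: periodically re-resolve the peer group name and diff the resulting address set against the currently configured peers. This is only an illustration of the proposed semantics, not Squid code; the function names are made up.

```python
import socket


def resolve_peers(name):
    """Resolve a peer group name to the full set of registered addresses.

    Relies on the DNS entry listing one address per cache node.
    """
    try:
        infos = socket.getaddrinfo(name, None, proto=socket.IPPROTO_TCP)
    except socket.gaierror:
        # Name temporarily unresolvable: report no peers rather than fail.
        return set()
    return {info[4][0] for info in infos}


def diff_peers(current, resolved):
    """Return (to_add, to_remove) so the peer list tracks the DNS entry."""
    return resolved - current, current - resolved
```

On each periodic recheck the cache would call diff_peers() with the old and new address sets, start peering with the added nodes and drop the removed ones, so a single DNS change updates every member of the cluster.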