Users of wide-area network applications are typically concerned with both response time and content validity. The common solution of client-side caching, which reuses cached content based on an arbitrary time-to-live, may be unsuitable in narrow-bandwidth environments, where a heavy load is imposed on sparse transmission capacity. In such cases, some users may wait a long time for fresh content fetched from the origin server even though they would settle for obsolescent content, while other users may receive a cached copy that is considered valid even though they would be willing to wait longer for fresher content. In this work, a new caching model is introduced in which clients specify preferences regarding the time they are willing to wait and the level of obsolescence they are willing to tolerate. The cache manager takes these user preferences into account and is capable of balancing the relative importance of each dimension. A cost model is used to determine which of three alternatives is most promising: delivery of a locally cached copy, delivery of a copy from a cooperating cache, or delivery of a fresh copy from the origin server. The proposed model is shown to be useful by experiments using both synthetic data and simulations of real Web traces. The experiments reveal that, using the proposed model, it becomes possible to meet client needs with reduced latency. We also show the benefit of cache cooperation in increasing hit ratios and reducing latency. A prototype of the proposed model was built and deployed in a real-world environment, demonstrating how users can set preferences for Web pages and how cache managers are affected.
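The abstract does not spell out the cost model, so the following is only a minimal illustrative sketch of the idea: a cache manager that scores each of the three delivery alternatives (local cache, cooperating cache, origin server) against a client's stated tolerances for waiting time and staleness, and picks the cheapest. All function names, the linear cost formula, and the numeric values are hypothetical assumptions, not the paper's actual method.

```python
# Sketch only: score delivery alternatives by normalized latency and
# staleness relative to a client's stated tolerances (all values assumed).

def delivery_cost(latency, staleness, max_wait, max_staleness):
    """Combine normalized latency and staleness into a single cost.

    max_wait and max_staleness are the client's stated tolerances;
    an option exceeding either tolerance is heavily penalized.
    """
    cost = latency / max_wait + staleness / max_staleness
    if latency > max_wait or staleness > max_staleness:
        cost += 1000.0  # effectively rule out intolerable options
    return cost

def choose_source(options, max_wait, max_staleness):
    """Pick the (name, latency, staleness) option with the lowest cost."""
    return min(
        options,
        key=lambda o: delivery_cost(o[1], o[2], max_wait, max_staleness),
    )

# Three hypothetical alternatives: (name, expected latency [s], staleness [s]).
options = [
    ("local cache", 0.05, 600.0),
    ("cooperating cache", 0.4, 120.0),
    ("origin server", 3.0, 0.0),
]

# A latency-sensitive client tolerates staleness but not long waits.
print(choose_source(options, max_wait=1.0, max_staleness=3600.0)[0])
# -> local cache
# A freshness-sensitive client would rather wait for up-to-date content.
print(choose_source(options, max_wait=10.0, max_staleness=100.0)[0])
# -> origin server
```

The point of the sketch is that the same set of alternatives yields different best choices once client preferences enter the cost: a latency-sensitive client is served locally, while a freshness-sensitive client is routed to the origin server.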