DynCoDe : An Architecture for Transparent Dynamic Content Delivery

Himabindu Pucha
School of Electrical and Computer Engineering, Purdue University
1285 Electrical Engineering Bldg., Box 400 
 West Lafayette, IN USA 47907
765-497-7628
hpucha@purdue.edu
Saumitra M. Das
School of Electrical and Computer Engineering, Purdue University
1285 Electrical Engineering Bldg., Box 479
West Lafayette, IN USA 47907
765-430-4335
smdas@purdue.edu

 

ABSTRACT

Delivery of web content increasingly relies on dynamic and personalized content. Caching has been studied extensively as a means of reducing client latency and bandwidth requirements for static content, and there has been recent interest in schemes that exploit locality in dynamic web content [1,2]. We propose a novel scheme that integrates the distribution and caching of personalized content, which relies heavily on dynamic content generation. In the proposed architecture, the resource-intensive processes involved in content generation are pushed to the network edges. A preliminary evaluation of the architecture under real-world network conditions shows significant improvements in bandwidth consumption, user response time, and server scalability, demonstrating the feasibility of such a scheme.

Keywords

Internet applications, Client/server.

1. INTRODUCTION

Static caching is a well-researched area [3], while dynamic caching has proved difficult to deploy. Personalization of content is fast becoming ubiquitous among content providers. Traditionally, requests for personalized content bypass caches, forcing the server to execute code and limiting its scalability. The rate at which a web server can deliver dynamic content is also considerably lower than the rate for static content. The larger sizes of personalized pages allow for large gains from caching schemes that can exploit the locality in dynamic content. However, dynamic requests have a high rate of expiration, which limits their cacheability. Our architecture combines distributed generation and caching of this dynamic content on proxies closer to the user. The advantages of this approach include (1) content generation close to the requestor, resulting in bandwidth savings and reduced latency, (2) enhanced server scalability from distributing resource-intensive processes, and (3) availability of content in the event of a network partition.

Dynamic content presently forms 40-50% of total web traffic, and this percentage is expected to increase. Apart from web servers that generate dynamic content, Content Distribution Networks (CDNs) replicate and serve static content for content providers; CDNs do not provide any mechanisms for serving or caching dynamic content. Our architecture could be deployed as a solution for a CDN to generate and serve dynamic content on behalf of content providers.

2. DESIGN AND IMPLEMENTATION

In our architecture, the proxy determines the appropriate content for a requestor based on that requestor's personalization profile. DynCoDe provides an active-cache programming platform in the form of cache APIs that the server can use to generate content on the proxy itself. The proxy also provides an execution environment: on a user request, it dynamically links and executes the appropriate content provider's code to build the personalized content. Proxies form a non-hierarchical distributed system to avoid the placement, redundancy, and queuing-delay problems inherent in hierarchical systems. A web server in our architecture actively pushes all the content and code necessary for dynamic content generation to proxy nodes close to users.

DynCoDe has strong consistency semantics to deal with the high rate of expiration of personalized content components such as news and stock tickers. The two approaches considered were client-side validation of content and server-side invalidations. We chose server-side invalidations, trading the cost of maintaining per-proxy state against the high latency of client-side validation. This choice is further justified by the fact that the number of proxies in our architecture is much smaller than the number of end clients, reducing the impact of state keeping on server performance. Our performance evaluation quantifies the resource utilization due to keeping this state on the server. On detecting a resource change, the server sends invalidation messages to all proxies that have recently accessed and cached the resource, causing them to invalidate that content. Current caches hide client accesses to content and hence prevent web servers from collecting access statistics, which has led content providers to mark content as uncacheable; in DynCoDe, the proxy periodically reports access statistics to the server.

The proxy maintains a separate directory structure for each content provider it serves. Each provider directory has two subdirectories, one to store the content and one to store the code executed to generate the user-specific web page from that content. The same directory structure is replicated on the server, so both the proxy and the server can easily index files in each other's file systems. When a proxy joins the network, it reads a configuration file containing the names of the negotiated content providers and their servers. The proxy then authenticates itself to each web server and establishes persistent TCP connections with it. The current implementation uses two persistent connections between a proxy and a web server: one for fetching user profiles and content from the server, and one for sending invalidations and code from the server to the proxy. After the initial code is downloaded, the proxy is ready to accept client requests for that content provider.

On receiving a client request, the proxy checks whether it has already linked the dynamic library for that content provider's page assembler module; if not, it links the library and invokes the buildPage function to generate the page. The buildPage function in the dynamically linked library takes the user's name and password as input, obtains the requestor's personalization profile and content from the cache or the server, and builds the page. Subsequently, the server can invalidate content, code, or personalization profiles, all of which result in fetches from the server. The linking step is illustrated in the sketch below.
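To make the linking step concrete, the sketch below shows one way a proxy could load a content provider's page assembler library on demand and invoke its buildPage entry point. The library path, the buildPage signature, and the error handling are illustrative assumptions rather than the exact DynCoDe interface.

    /* Illustrative sketch only: library path, buildPage signature, and
     * error handling are assumed; compile with -ldl. */
    #include <dlfcn.h>
    #include <stdio.h>

    /* Assumed signature: buildPage returns the assembled page as a
     * heap-allocated string, or NULL on failure. */
    typedef char *(*build_page_fn)(const char *user, const char *password);

    char *assemble_page(const char *provider, const char *user,
                        const char *password)
    {
        char path[256];

        /* Per-provider code subdirectory, mirroring the directory layout
         * described above. */
        snprintf(path, sizeof(path), "./%s/code/page_assembler.so", provider);

        /* Link the content provider's code on the first request; a real
         * proxy would cache this handle across requests. */
        void *handle = dlopen(path, RTLD_NOW);
        if (handle == NULL) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return NULL;
        }

        build_page_fn build_page = (build_page_fn)dlsym(handle, "buildPage");
        if (build_page == NULL) {
            fprintf(stderr, "dlsym failed: %s\n", dlerror());
            dlclose(handle);
            return NULL;
        }

        /* buildPage obtains the profile and content fragments from the
         * cache (or the server on a miss) and assembles the page. */
        return build_page(user, password);
    }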

2.1 Proxy Architecture

The core components of the DynCoDe proxy include (1) an invalidation module to receive and execute invalidations from servers, (2) a code storage module to receive updated content-generation code from the server, (3) a fetching module to fetch personalization profiles and content from the server, (4) a page assembler module, provided by the server and executed on the proxy, used to generate dynamic content for a requestor, and (5) a cache replacement module to evict objects from the cache. The page assembler module assembles a personalized web page for the requestor from content fragments. This is possible because we assume a dynamic page consists of a permutation of content fragments that are themselves static and cacheable. The proxy caches content fragments in memory for additional gains. The page assembler module takes as input the personalization profile of the requestor and produces as output the user-specific web page that is sent back to the requestor. This module uses the cache and fetch APIs to retrieve both content and user profiles, and is implemented as dynamic libraries in C that are linked and invoked on a client request. The cache replacement module computes a rank for each object presently residing in the cache using the following formula and evicts objects in order of lowest rank first.

Rank = F^f * R^r * S^s

where F is the frequency of access and f is a positive exponent, R is the recency of access and r is a negative exponent, and S is the object size, with s positive to favor larger objects or negative to favor smaller objects.
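A minimal sketch of this ranking computation, assuming each cached object tracks its access frequency, recency, and size; the struct layout and field names are illustrative rather than the actual DynCoDe implementation.

    #include <math.h>

    /* Illustrative per-object metadata; field names are assumptions. */
    struct cache_object {
        double frequency;   /* F: number of accesses                    */
        double recency;     /* R: time since last access (>= 1 second)  */
        double size;        /* S: object size in bytes                  */
    };

    /* Rank = F^f * R^r * S^s.  With f > 0, frequently accessed objects
     * rank higher; r < 0 penalizes stale objects; the sign of s decides
     * whether larger or smaller objects are favored.  The replacement
     * module evicts the object with the lowest rank. */
    double cache_rank(const struct cache_object *obj,
                      double f, double r, double s)
    {
        return pow(obj->frequency, f) * pow(obj->recency, r) * pow(obj->size, s);
    }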

2.2 Server Architecture

The DynCoDe server is a concurrent, pre-forked, multi-threaded server with additional code to interact with the proxy nodes. The web server is enhanced with several key components to support the DynCoDe architecture. The most important components support (1) gathering statistics from the proxies, (2) keeping state on a per-proxy basis so that the appropriate proxies can be sent invalidations when code or content is modified by the content providers on the server, (3) actively pushing content-generation code to the proxies that serve it, thereby maintaining consistency and uniformity in the way personalized web pages are generated for end clients, (4) caching personalization profiles and content, and (5) authentication and establishment of persistent TCP connections for the transfer of control information and data.
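One plausible shape for the per-proxy state used in component (2) is sketched below: the server records, for each registered proxy, its invalidation connection and the resources that proxy has recently fetched, and walks this list when a resource changes. The structure, field names, and message format are assumptions for illustration, not the actual DynCoDe data layout or wire protocol.

    #include <string.h>
    #include <unistd.h>

    #define MAX_RESOURCES 1024

    /* Illustrative per-proxy bookkeeping kept on the server. */
    struct proxy_state {
        char hostname[64];                /* proxy identity                    */
        int  invalidation_fd;             /* persistent TCP connection for     */
                                          /* invalidations and code pushes     */
        char cached[MAX_RESOURCES][128];  /* resources this proxy has fetched  */
        int  num_cached;
        struct proxy_state *next;         /* linked list of registered proxies */
    };

    /* On a resource change, send an invalidation to every proxy that has
     * recently fetched and cached that resource. */
    void invalidate_resource(struct proxy_state *proxies, const char *resource)
    {
        for (struct proxy_state *p = proxies; p != NULL; p = p->next) {
            for (int i = 0; i < p->num_cached; i++) {
                if (strcmp(p->cached[i], resource) == 0) {
                    /* The wire format here is illustrative. */
                    write(p->invalidation_fd, "INVALIDATE ", 11);
                    write(p->invalidation_fd, resource, strlen(resource));
                    write(p->invalidation_fd, "\n", 1);
                    break;
                }
            }
        }
    }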

3. PERFORMANCE EVALUATION

To measure the performance of our architecture under real network conditions, we set up a test bed. The test bed consists of the web server running on a node at the University of Illinois at Chicago, with the proxies and clients running on various parts of the campus network at Carnegie Mellon University in Pittsburgh. To simulate client requests we used Apache Bench, which produces a steady stream of requests. According to [4], the number of items in a typical customized Yahoo! page is less than 1000, each item is typically less than 1 KB, and each customized page is typically over 20 KB. In our own measurements, we observed that typical customized MSN and My Yahoo! pages average about 100 KB, mainly due to the presence of images. We therefore took measurements for file sizes ranging from 10 KB to 100 KB and for an expiration rate of one minute per object, i.e., a new version of each object appears every 60 seconds. This high refresh rate was used to evaluate the performance of the DynCoDe architecture in the presence of continuous invalidations.
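For reference, a representative Apache Bench invocation of the kind used to generate a steady request stream is shown below; the hostname, port, path, and request counts are illustrative and not the exact parameters of our runs.

    ab -n 10000 -c 10 http://dyncode-proxy.example.edu:8080/myportal/index.html

Here -n sets the total number of requests and -c the number of concurrent clients.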

3.1 Client Latency

The average response time seen by clients is cut by about 50% in the DynCoDe architecture, and this reduction grows with file size. For 10 KB files, the response time drops from 125 ms to 58 ms; for 50 KB files, it drops from 255 ms to 102 ms. This is because the proxies are placed in access networks closer to the clients than the web server. The large reduction in average response time is impressive in view of the high expiration rate of objects.

3.2 Bandwidth Savings

To calculate the bandwidth savings, we assumed that the DynCoDe node in the local network serves only a modest 10 customized dynamic pages per second. We plotted the bandwidth savings as a function of the invalidation rate. Intuitively, as the invalidation rate decreases, the bandwidth savings increase, since more of the cached content can be used to build personalized pages and less content needs to be fetched from the web server. Significantly, even a high invalidation rate of one minute per cached object leads to bandwidth savings of 83%. Under the more reasonable assumption that content changes only every 5 minutes, the savings increase to 97%.
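As a rough back-of-the-envelope illustration under the assumptions stated above (10 customized pages per second, roughly 100 KB per page):

    origin traffic without caching:  10 pages/s x 100 KB = 1 MB/s
    with 83% savings (1-minute invalidations): about 0.17 MB/s from the origin
    with 97% savings (5-minute invalidations): about 0.03 MB/s from the origin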

3.3 Server Scalability

From our measurements, maintaining per-proxy state consumes very little CPU time compared to actually processing the dynamic content requests of clients. As the number of proxies used by a server increases, the memory overhead on the server to maintain state for the proxies increases linearly. Significantly, the amount of memory needed to keep state for 100 DynCoDe proxies is only 60 MB, which is quite reasonable given that these 100 DynCoDe nodes could potentially increase the server's serving capacity 100-fold. This supports our assumption that server-side invalidations do not limit the scalability of the server while maintaining strong consistency semantics for content and code.
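Put differently, under the figures above this amounts to roughly 60 MB / 100 proxies, or about 600 KB of bookkeeping state per proxy.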

4. CONCLUSION

The aim of this project was to push the generation and caching of personalized web content to nodes in the access networks, which are much closer to the clients than the actual web servers residing across the WAN. The major benefits of such an architecture include improved bandwidth consumption, user response time, server availability, and scalability. With these benefits in mind, we developed the DynCoDe architecture, in which DynCoDe nodes located in the edge networks are responsible for caching and serving personalized content to clients, while the server is responsible for pushing content to the DynCoDe proxies at the edge and for maintaining the consistency of the pushed content. Our evaluation under real network conditions indicates that user response time is cut in half, bandwidth savings under worst-case conditions exceed 80%, and the server's request-serving capacity could be improved by orders of magnitude.

5. REFERENCES

  1. P. Cao, J. Zhang, and K. Beach. Active Cache: Caching dynamic contents on the Web. Proceedings of Middleware '98, 1998.
  2. A. Iyengar and J. Challenger. Improving web server performance by caching dynamic data. USENIX Symposium on Internet Technologies and Systems, 1997.
  3. J. Wang. A survey of web caching schemes for the Internet. ACM Computer Communication Review, 25(9):36-46, 1999.
  4. V. S. Pai, M. Aron, G. Banga, M. Svendsen, P. Druschel, W. Zwaenepoel, and E. M. Nahum. Locality-aware request distribution in cluster-based network servers. ASPLOS, 1998.