
Journal reference: Computer Networks and ISDN Systems, Volume 28, issues 7–11, p. 931.

WWW Access to Legacy Client/Server Applications

Stephen E. Dossick and Gail E. Kaiser

Columbia University
Tech Report #CUCS-003-96

Abstract
We describe a method for accessing Client/Server applications from standard World Wide Web browsers. An existing client for the system is modified to perform HTTP Proxy duties. Web browser users simply configure their browsers to use this HTTP Proxy, and can then access the system via specially encoded URLs that the HTTP Proxy intercepts and sends to the legacy server system. An example implementation using the Oz Process Centered Software Development Environment is presented.

Keywords
HTTP, client, server, legacy, proxy, Oz

Introduction

Thousands of Client/Server applications have been developed. We are concerned with those applications that consist of specialized server software to run on (possibly several) central computers, and specialized client software run by each user of the system.

The World Wide Web[1] has spawned many Web browsers, each of which is "programmable" via the common HTML standard. These Web browsers form a platform on which software developers can build clients for Client/Server systems. A "mediator" can be created that acts as a gateway between the Web browser clients and the existing server. In a Client/Server architecture where the client software does nothing more than display and format information retrieved from the server, a World Wide Web browser can be used as a replacement for the client software used by the human users of the system. In other systems, where the client software performs additional tasks on behalf of the server or the users, the mediator can perform this work on behalf of the Web browser-based clients.

Some have used CGI programs to accomplish the tasks of the mediator[2]. A normal HTTP server is set up somewhere on the same LAN as the existing server application, specialized CGI programs are written to handle user requests, and the user is presented with an HTML Forms-based interface. This is unsatisfactory for a number of reasons:

Another approach, modifying the existing server software's code to "imitate" a Web server, is effective only for very simple paradigms. It is easy to imagine moving a database engine to the web in this fashion, where URLs would represent queries on the database. Indeed, Oracle and other database vendors are doing just this with some of their products[3]. Unfortunately, more interactive applications cannot be moved to the Web easily in this fashion.

It may also be possible to use browser-specific tools like Mosaic's CCI[4] and Netscape's Plug-In APIs[5] to create Web browser based clients for Client/Server systems. However, using these APIs limits the use of the resulting Web-based client software to specific platforms and specific Web browsers. This is an unnecessary limitation which negates many of the benefits of creating a Web-based client.

Every Web browser available today supports the use of an HTTP proxy[6] for retrieving documents on behalf of the Web browser. These proxies are often used to give Web access to users behind an Internet firewall or other system that restricts access to corporate networks. When a Web browser user requests a document, rather than connecting directly to the site specified in the document's URL, the Web browser contacts the HTTP proxy. This proxy then fetches the document on behalf of the Web browser, feeding the information back to it.
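Concretely, the difference between a direct request and a proxied one shows up in the HTTP request line: a browser configured to use a proxy sends the complete URL rather than just a document path, and the proxy fetches the document on the browser's behalf. A minimal sketch in Python (the function names are ours, not part of any browser's API):

```python
def direct_request(path: str, host: str) -> str:
    # Talking directly to the origin server: only the path appears in
    # the request line; the server is identified by the TCP connection
    # (and, in later HTTP versions, a Host header).
    return f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n"

def proxied_request(full_url: str) -> str:
    # Talking to an HTTP proxy: the browser sends the complete URL and
    # the proxy retrieves the document on the browser's behalf.
    return f"GET {full_url} HTTP/1.0\r\n\r\n"

print(proxied_request("http://www.w3.org/hypertext/WWW/"))
```

A mediator exploits exactly this convention: because every proxied request carries a full URL, the mediator sees the intended host and can decide whether to fetch the document normally or treat the request specially.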

HTTP proxies and HTTP servers can be used to create mediators that allow Web browser clients to access the existing server in a Client/Server system. This approach allows for the easy migration of Client/Server applications and their data to the World Wide Web. This mediator-based approach does not require any changes to the existing application server code.

This paper proceeds as follows. First, we describe the motivation for our use of HTTP proxies as a method of connecting an existing Client/Server system to the Web. Then, we list the requirements our method must fulfill. Next, we describe the architecture we have developed for moving existing Client/Server applications to the Web. We then walk through one example in detail, in which we use this architecture to move an existing Client/Server system to the Web. Finally, we consider some future work and extensions to the research presented here.

Motivation

By using a Web browser as the platform for client access to an existing Client/Server system, a number of advantages are gained over the more traditional use of specialized client software:

Requirements

We identify several requirements for moving an existing Client/Server system to the Web:

Architecture

In order to connect an existing Client/Server system to the Web while fulfilling all the requirements stated above, we decided on an architecture that interposes an HTTP proxy between the Web-based clients and the existing application server (Figure 1). This HTTP proxy has been designed to intercept HTTP requests for data and transform them into requests for the legacy system using its native protocols.

Figure 1: Illustration of our architecture. The mediator sits between the Web browser and the server application, gatewaying requests between them. Normal WWW sites can be accessed as before, with the mediator acting as an HTTP Proxy.

The HTTP proxy (or "mediator") can be run on the same machine as the original server application or it can reside on a LAN attached to the server machine. It does not need to run on the same computer as any of the Web browser users, and a single mediator can be used to translate requests from multiple Web-based users of the system. This solves the resource problem of having one mediator per Web-based user.

There are a few different possible approaches to creating the mediator:

Our architecture supports all of these methods of creating the mediator. In addition, a variant of the above architecture is possible which does not use an HTTP Proxy server. Many of the newer HTTP servers (including Netscape's servers[7] and Apache[8]) support APIs which allow code to be linked directly into the running server. Using these APIs, it may be possible to turn an ordinary Web server into a mediator in our architecture.

Challenges to our architecture

Several factors complicate our architecture. First, the HTTP protocol is completely stateless, while many existing Client/Server systems use stateful protocols. With the current HTTP specification, Web browsers connect to a server, send a request, receive the response, and then close the connection. The specialized clients in most Client/Server systems open a connection to the server and then leave that connection running for the entire lifetime of the session. In order to emulate this behavior with Web based clients, the mediator can do several things:

Another problem complicating our architecture is the need to have certain HTML files served directly by the mediator. There may be URLs handled by the mediator which do not translate into commands for the existing server. These URLs may, for example, provide the HTML user interface for the system or provide help information to new users of the system. To solve this issue, the mediator maintains a small document tree out of which it serves HTML files. When the mediator receives a request from a Web client, it must decide if the request is to be gatewayed to the server or if it is to be answered with a document from the mediator's own document tree.
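This routing decision can be sketched as follows. The command table, document tree, and return labels below are purely illustrative assumptions of ours, not the tables any particular mediator uses:

```python
# Hypothetical routing tables: commands understood by the legacy
# server, and pages served from the mediator's own document tree.
KNOWN_COMMANDS = {"query", "update", "status"}
DOCUMENT_TREE = {"/index.html", "/help.html"}

def route(path: str) -> str:
    # First path segment names a legacy-server command?  Gateway it.
    segments = [s for s in path.split("/") if s]
    if segments and segments[0] in KNOWN_COMMANDS:
        return "gateway-to-legacy-server"
    # Otherwise, try the mediator's own small document tree.
    if path in DOCUMENT_TREE:
        return "serve-from-document-tree"
    return "not-found"

print(route("/query/customers/42/"))   # gateway-to-legacy-server
print(route("/help.html"))             # serve-from-document-tree
```

The key property is that the decision is made per request from the URL alone, so the mediator need not remember anything about the client between requests.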

In addition to these complications is the serious question of how to handle application-specific functions that the original clients handled on behalf of the server. These include invoking external tools and using platform specific libraries, devices, or operating system services. These application-specific features of the existing client often cannot be emulated by a Web browser client using only HTML. A number of techniques can be used to solve these issues:

Example

The Oz system[12] is a Client/Server rule-based workflow system currently targeted as a software development environment. The clients for the system support the invocation of external tools to be used during software development (i.e. the client is responsible for running the compiling and editing tools used by the developers in the system).

Over the course of this research, our group has developed several clients for use with the Oz system, including a TTY-based client, an Xview client, and a Motif client. In creating our mediator, known as "OzWeb", we decided to base our work on the TTY client. We were aiming to create a mediator that had no user interface -- it was to run as a daemon process on a machine connected to a LAN. While X toolkits like Xview and Motif allow for the creation of an application without any windows, it was easier to begin with a minimalist interface (like the one in our TTY client) and simply strip away any pieces we didn't need.

OzWeb is conceptually broken into two parts: a standard HTTP proxy, and code to communicate with our Oz server (see Figure 2). The HTTP portion simply receives requests for Web documents from clients and then handles the requests, funneling the data back to the Web browsers. The second portion, which communicates with our application server, receives HTTP requests and responds to them either by sending requests to our server software or by responding with HTML pages from OzWeb's document tree.

Figure 2: Ozweb acts as a mediator between multiple Web-based users and the application server. OzWeb can gateway requests to the server or can serve documents out of its document tree.

In order to determine what to do with a request from a Web browser, OzWeb looks at the URL in the request. If the URL refers to an Internet site, OzWeb retrieves the document on behalf of the Web browser. If the URL is of the form:

http://oz/command name/parameter 1/parameter 2/.../parameter n/

OzWeb contacts the Oz server to perform the command on behalf of the Web-based user, supplying the command and its parameters just as the TTY client would. As mentioned in the Architecture section, certain URLs are handled by the HTTP Proxy itself, without contacting the server application. For example, the main startup screen of our system is represented by the following URL:

http://oz/index.html
This URL presents the user with a message welcoming them to the system (Figure 3) and sets up the display and command choices that are to be shown to the user.
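The command form of these specially encoded URLs can be decomposed into a command name and parameter list roughly as in the following sketch (the helper name and the example command are ours; the paper does not give OzWeb's actual parsing code):

```python
from urllib.parse import urlsplit, unquote

def parse_command_url(url: str):
    # Split the path of a URL of the form
    #   http://oz/command/param1/.../paramN/
    # into the command name and its parameters, in the order the TTY
    # client would supply them.  Spaces in names would arrive
    # URL-encoded as %20, so each segment is decoded.
    parts = [unquote(p) for p in urlsplit(url).path.split("/") if p]
    if not parts:
        return None, []
    return parts[0], parts[1:]

print(parse_command_url("http://oz/edit/ProjA/main.c/"))
# → ('edit', ['ProjA', 'main.c'])
```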

Figure 3: Netscape 2.0 displaying the first screen shown to a user connecting to the Oz system via the OzWeb mediator.

When OzWeb receives a request for a document, it checks the site field, and in this case, since the site field says "oz", it realizes that this is a request for the Oz system (as opposed to an actual request for a document from the Web).

In this case, OzWeb next looks to see if there is a command called "index.html" which is designated as a command which the server should handle. Realizing that this isn't an internal command, OzWeb attempts to find a file called "index.html" in its document tree. In this respect, it is acting as a mini Web server, not as a traditional HTTP Proxy. Upon finding the file called "index.html", OzWeb sends the contents of the file down to the Web browser and closes the connection.

The existing Oz client software relies heavily on the use of multiple windows as well as multiple independent window panes. It is not a simple interface to emulate. As a proof of concept, our first application of the OzWeb Proxy was to simulate as closely as possible the existing GUI Oz Client interface in the user's Web browser, using Netscape 2.0 Frames as the mechanism for mimicking the existing menus, pop-up windows, etc.

The Oz server relies on its clients to invoke the tools that are used in the course of software development. For example, when a user asks to compile a file, the server tells the client to run the compiler on a given source file. In the case of our system, we knew that all Web-based users would have X display access to the system running the OzWeb mediator. Because of this, we are able to have the mediator start the tools on behalf of the Web clients. If a tool is interactive, we rely on X access to make the interface appear on the user's screen (by resetting the X DISPLAY variable).
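A minimal sketch of how the mediator might launch such a tool so that its interface appears on the Web user's screen, assuming (as the paper does) that the user has granted X access to the mediator's host; the function names and example command line are ours:

```python
import os
import subprocess

def tool_environment(user_display: str) -> dict:
    # Copy the mediator's environment, overriding DISPLAY so that an
    # interactive X tool opens its windows on the Web user's screen.
    env = dict(os.environ)
    env["DISPLAY"] = user_display
    return env

def launch_tool_for_user(tool_argv, user_display):
    # Start the tool on the mediator's host on behalf of a Web client;
    # assumes the user has granted this host X access (e.g. via xhost
    # or xauth).
    return subprocess.Popen(tool_argv, env=tool_environment(user_display))

# e.g. launch_tool_for_user(["xterm", "-e", "vi", "main.c"], "userhost:0.0")
```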

As discussed in the Architecture section, in the case of a system where X display access is not possible, or where certain external tools or system libraries must be executed on the user's machine, it is possible to create a MIME content handler and a corresponding new MIME type that allows small pieces of specialized code to be run on the user's system.

The OzWeb mediator has been specifically designed so that a single instance of OzWeb can allow multiple users to access the system via the Web. An unlimited number of users can configure their browsers to use the OzWeb proxy as their HTTP proxy server, and they can all then access the Oz server. With many hundreds of users, it may be more practical to have a number of OzWeb proxy server machines, achieving a load balancing effect.

Great care was taken in the design of the OzWeb proxy server to make sure that NO state information is maintained by OzWeb itself. Instead, the HTML sent down to browsers as a result of executing a command in the Oz system is encoded in such a way as to make it possible to tell the last commands executed by the particular client. Each URL link in the resulting HTML has a parameter added. When the user clicks on one of these links, this parameter is sent to the mediator along with the rest of the URL. This allows the mediator to determine the sequence of commands the user has executed in the past.
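One way to implement this kind of URL-encoded history is sketched below. The `hist` query-parameter name and the comma separator are illustrative choices of ours; the paper does not specify OzWeb's exact encoding:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qs, urlencode

def add_history(link: str, history: list) -> str:
    # Encode the user's past commands into a link as a query
    # parameter, so the stateless mediator can recover them when the
    # user follows the link.
    scheme, netloc, path, query, frag = urlsplit(link)
    params = parse_qs(query)
    params["hist"] = [",".join(history)]
    return urlunsplit((scheme, netloc, path,
                       urlencode(params, doseq=True), frag))

def read_history(url: str) -> list:
    # Recover the command history from an incoming request URL.
    params = parse_qs(urlsplit(url).query)
    return params["hist"][0].split(",") if "hist" in params else []

link = add_history("http://oz/edit/main.c/", ["checkout", "build"])
print(read_history(link))   # ['checkout', 'build']
```

Because every page the mediator generates carries its own history, no per-user session table is needed, at the cost of slightly longer URLs.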

Related Work

Brooks et al.[15] have developed HTTP Stream Transducers. These services, modeled as HTTP Proxy servers, allow the data stream from a Web site to be modified to allow for per-user markup. For example, when a user visits a Web site they have already seen, the Stream Transducer can add HTML to the page to tell the user the date on which they last viewed the document. Stream Transducers can be hooked together in sequence between the user's Web browser and a Web site in order to provide aggregate functionality.

Lotus' InterNotes[16] product uses CGI mechanisms to allow Web browser access to documents and forms managed by the Notes Server. Documents to be placed on the Web are pretranslated by a program that converts them to HTML. These documents and forms are accessed through a standard HTTP server as though they were normal HTML documents. In the case of Notes Forms, the Submit button sends the contents of the form to a Lotus-supplied CGI program that incorporates the data back into the Notes database. While this does allow for some Web-based use of their system, the interaction model is limited. Web-based users do not have access to the integrated email and applications which standard Notes clients use.

Barta et al.[17] describe a toolkit for the creation of "Interface-Parasite" gateways. These gateways allow a synchronous session-oriented tool, a telnet session for example, to be used via a Web browser. Their toolkit requires no changes whatsoever to the source code of the Server application, but their toolkit cannot handle an application in which the Client can perform actions independently of the server.

Ockerbloom[18] proposes an alternative to MIME types, called Typed Object Model (TOM), that could conceivably be employed instead of a MIME extension to allow the use of external tools in the client. Object types exported from anywhere on the Internet can be registered in "type oracles", specialized servers that may communicate among themselves to uncover the definitions of types registered elsewhere. Web clients who happen upon a type they do not understand can ask one of the type oracles how to convert it into a known supertype. In this way, the Web clients would not have to be set up to handle a new MIME type. They could simply query the type oracle, which could return information on how to run the external tools.

Conclusions and Future Work

The architecture presented here fulfills our requirements nicely. The OzWeb mediator allows Web-based users access to most features of the Oz system. No changes were made to the Oz Server or to its protocol for communicating with clients. Existing TTY, Motif and Xview clients continue to work unchanged with the Oz Server and may operate concurrently with Web-based clients.

We were able to achieve our goal of connecting to the mediator using the standard HTTP protocol and a standard Web browser (Netscape 2.0). Other browsers supporting the Netscape HTML extensions have been used as well.

We have already identified several areas for future work on our mediator-based approach to moving Client/Server systems to the Web:

References

  1. Tim Berners-Lee and Robert Cailliau, World Wide Web Proposal for a HyperText Project, CERN European Laboratory for Particle Physics, Geneva CH, November 1990, http://www.w3.org/hypertext/WWW/Proposal.html.
  2. Jean-Claude Mamou, ODBC and Mosaic, October 1995, http://www.w3.org/hypertext/WWW/Gateways/OQL.html.
  3. Oracle Corporation, Oracle WebSystem, http://www.oracle.com/.
  4. National Center for Supercomputing Applications, NCSA Mosaic Common Client Interface, March 1995, http://www.ncsa.uiuc.edu/SDG/Software/XMosaic/CCI/cci-spec.html.
  5. Netscape Communications Corporation, Netscape Navigator 2.0 Plug-In Software Development Kit, January 1996, http://home.netscape.com/comprod/development_partners/plugin_api/index.html.
  6. Ari Luotonen and Kevin Altis, World-Wide Web Proxies, First International World Wide Web Conference, Geneva CH, May 1994, http://www.w3.org/hypertext/WWW/Proxies/.
  7. Netscape Communications Corporation, Netscape Server API (NSAPI), http://home.netscape.com/newsref/std/server_api.html.
  8. The Apache Server Group, Apache Server API, http://www.apache.org/docs/API.html.
  9. The Internet Engineering Task Force, MIME (Multipurpose Internet Mail Extensions) RFC #1521, September 1993, ftp://ds.internic.net/rfc/rfc1521.txt.gz.
  10. Steven Lucco, Oliver Sharp, Robert Wahbe, Omniware: A Universal Substrate for Web Programming, 4th International World Wide Web Conference, Boston MA, December 1995, http://www.w3.org/pub/Conferences/WWW4/Papers/165.
  11. The GNU Project, GROW: The GNU Remote Operations Web, http://www.cygnus.com/tiemann/grow/.
  12. Israel Ben-Shaul and Gail E. Kaiser, A Paradigm for Decentralized Process Modeling, Kluwer Academic Publishers, Boston MA, 1995.
  13. Object Management Group, What is CORBA?, http://ruby.omg.org/corba.htm.
  14. Mike Beasley, Nigel Edwards, Mark Madsen, Ashley McClenaghan, Owen Rees, A Web of Distributed Objects, 4th International World Wide Web Conference, Boston MA, December 1995, http://www.w3.org/pub/Conferences/WWW4/Papers/85.
  15. Charles Brooks, Murray S. Mazer, Scott Meeks, and Jim Miller, Application-Specific Proxy Servers as HTTP Stream Transducers, 4th International World Wide Web Conference, Boston MA, December 1995, http://www.osf.org/www/waiba/papers/www4oreo.htm.
  16. Lotus Development Corp., Lotus InterNotes Web Publisher, September 1995, http://www.lotus.com/corpcomm/334a.htm.
  17. Robert A. Barta and Manfred Hauswirth, Interface Parasite Gateways, 4th International World Wide Web Conference, Boston MA, December 1995, http://www.w3.org/pub/Conferences/WWW4/Papers/273/.
  18. John Ockerbloom, Introducing Structured Data Types into Internet-scale Information Systems, PhD thesis proposal, Carnegie Mellon University School of Computer Science, May 1994.


    About the Authors

    Stephen E. Dossick
    No biographical information available.
    http://www.cs.columbia.edu/~sdossick/

    Gail E. Kaiser
    Gail E. Kaiser is a tenured Associate Professor of Computer Science and the Director of the Programming Systems Laboratory at Columbia University. She was an NSF Presidential Young Investigator in Software Engineering, and has published about 90 refereed papers in collaborative work, software development environments, software process, extended transaction models, object-oriented languages and databases, and parallel and distributed systems. Prof. Kaiser is an associate editor of the ACM Transactions on Software Engineering and Methodology and has served on about 25 program committees. She received her PhD and MS from CMU and her ScB from MIT.
    http://www.cs.columbia.edu/~kaiser/



    Research Credits

    The Programming Systems Laboratory is supported in part by the Advanced Research Projects Agency under ARPA Order B128 monitored by Air Force Rome Lab F30602-94-C-0197, in part by the National Science Foundation under CCR-9301092, and in part by the New York State Science and Technology Foundation Center for Advanced Technology in High Performance Computing and Communications in Healthcare under NYSSTF-CAT-95013.

    The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the US or NYS government, ARPA, Air Force, NSF, or NYSSTF.