
Journal reference: Computer Networks and ISDN Systems, Volume 28, issues 7–11, p. 1113.

Design considerations for the Apache Server API

Robert Thau

Abstract

The Apache web server[1] is a high-performance freeware server, which offers a high degree of drop-in compatibility with the popular NCSA server[2]. Internally, the server is built around an API (application programmer interface) which allows third-party programmers to add new server functionality. Indeed, most of the server's visible features (logging, authentication, access control, CGI, and so forth) are implemented as one or several modules, using the same extension API available to third parties.

The purpose of this paper is not so much to describe the API in detail, as to explain some of the design decisions which went into it, and why things are done the way they are. In order to do this, it explains what problems the API tries to solve, and how it is structured to solve those problems. It also compares and contrasts the approach taken in Apache to the approaches taken to similar problems in other web servers, and reflects a bit on some features, or misfeatures, which, in the view of the author (who was largely responsible for it), might in retrospect have been done better --- including features supported by other servers but not by Apache (yet).

Basic organization of the API

We begin by discussing why it is important for a server to have an API at all. The most obvious reason is performance. The most commonly deployed mechanism for customizing a server is the Common Gateway Interface, CGI. Use of CGI machinery has fairly substantial associated costs. The most obvious of these is that the server must fork off a separate process to handle each CGI request. What may be more significant, for complicated applications, is that a CGI script must consult its configuration files, if it has any, and initialize itself on each request; using a server API, the initialization can be done once, when the server reads its own config files, and need not be redone thereafter.

Another reason for having an API is to allow third-party developers to change aspects of server functionality which cannot easily be reached through CGI --- causing all requests to be logged in a different format, for example, or supporting a new form of access control or authentication.

In fact, a fairly early decision was made to support as much of the NCSA-compatible server functionality as possible through API-compliant modules, rather than via specifically committed code in the server core (both to keep the code clean, and to check that the API itself allowed third parties to provide similar functionality in different ways via modules of their own). As we shall see, this materially shaped the handling of configuration-file commands. Also, some of the more unusual API-level machinery in Apache (for instance, the sub-request machinery discussed below) exists so that server-side includes and directory indexing would not require more specific support in the server core.

However, a complication arises with support of, say, access control through an API. Suppose that a programmer wants to add a new form of access control. They presumably do not want to rewrite all of the other server machinery as well --- that is, they do not want to write their own code to handle directory indexing, server-side includes, and so forth for requests covered by their new form of authentication.

In order to handle the new authentication mode, the server has to pass off requests which use it to the programmer's own code. However (assuming the request does properly authenticate), the API code has to transfer control back, so that the normal server code which handles directory indexing, and so forth, can continue.

The way this is arranged in Apache (and in many other server APIs, including the Netscape server API[3], which predates Apache's) is to divide the handling of requests into a set of phases. For Apache, these are:

    translation of the requested URI to a filename;
    access control, based on the origin of the request;
    authentication and authorization of the requesting user;
    determination of the type of the entity requested;
    generation of the response itself;
    logging of the request.

User code can override any and all of these phases. Specifically, a user module which elects to handle any of these phases may, at its option, handle it, decline, or abort processing of the request and invoke the server's error processing. If the user module handles the phase, then depending on the phase in question, other user modules may or may not get a shot at it --- URI to filename translation, for instance, can only happen once, but all loggers are run for each request.

The way this works in Apache is that a server extension module can declare handlers for a particular phase --- these are C functions which return an integer status code, and which take as argument a pointer to a structure known as a request_rec, which contains and coordinates all of the information regarding a particular request: the URI requested, type of the request, relevant directory-specific configuration information, and the like. In fact, the interface between the server core and extension modules (including the ones which implement the server's native NCSA emulation functionality) is through a module structure which consists mostly of pointers to handlers for various phases, or NULL, if the module elects not to handle that phase (there is also other information concerned with configuration management, described below).

As a special case, a module can declare several handlers for the response-handling phase, distinguished by the types of entities (scripts, directories, ordinary files of particular kinds, or anything at all) that they wish to handle. The server core code does a dispatch based on the type of the entity requested to find the handler (or handlers, if the first one declines the request) which it eventually invokes.
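To make the shape of this interface concrete, here is a rough sketch of such a module declaration. The module and handler names (and the media type) are invented for the example, and the slot layout only approximates the Apache 1.x module structure --- the distributed headers are authoritative; the point is simply that unused phases are left NULL, and that response handlers are keyed by the type of the entity requested:

#include "httpd.h"
#include "http_config.h"

static int handle_foo(request_rec *r) { return DECLINED; }  /* response handler */
static int log_foo(request_rec *r)    { return DECLINED; }  /* logging handler */

static handler_rec foo_handlers[] = {
    { "application/x-foo", handle_foo },   /* dispatched on the entity's type */
    { NULL }
};

module foo_module = {
    STANDARD_MODULE_STUFF,
    NULL,                  /* initializer (run after config files are read) */
    NULL,                  /* create per-directory config */
    NULL,                  /* merge per-directory configs */
    NULL,                  /* create per-server config */
    NULL,                  /* merge per-server configs */
    NULL,                  /* command table */
    foo_handlers,          /* response handlers, keyed by type */
    NULL,                  /* URI-to-filename translation */
    NULL,                  /* check/validate user id */
    NULL,                  /* check authorization */
    NULL,                  /* check access by host address */
    NULL,                  /* determine media type */
    NULL,                  /* "fixups" */
    log_foo                /* logging */
};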

Configuration issues

One of the goals in writing the server was making it easy to configure user or third-party modules in the same manner as the server's own native mechanisms. This means, in particular, that it is desirable to be able to use all of the server's native configuration mechanisms (including commands in .htaccess files, and so forth) to configure user modules. In fact, given that (by design) many of the ``native'' mechanisms are in fact implemented using the API, it is actually necessary.

Command handlers and command tables

So, we need modules to be able to handle commands read from configuration files. Apache does this by allowing each module to have a command table which describes the commands it implements, and says which configuration files they may appear in. An example would be the command table for the server's own mod_alias module, which reads:

command_rec alias_cmds[] = {
{ "Alias", add_alias, NULL, RSRC_CONF, TAKE2, 
    "a fakename and a realname"},
{ "ScriptAlias", add_alias, CGI_MAGIC_TYPE, RSRC_CONF, TAKE2, 
    "a fakename and a realname"},
{ "Redirect", add_redirect, NULL, RSRC_CONF, TAKE2, 
    "a document to be redirected, then the destination URL" },
{ NULL }
};

The entries in each command table include the name of the command itself (which is matched in a case-insensitive fashion by server core code), a pointer to the command handler (a C function which is responsible for processing the command), an argument which is passed to the command handler (in case a given handler handles multiple commands --- note that Alias and ScriptAlias have the same handler), items which tell the server core code where the command may appear (RSRC_CONF) and what sort of arguments it takes (TAKE2 means two string arguments), and a description of the expected arguments, which is reported in case of syntax errors.

The command handler, in turn, is passed these arguments and has to implement the effect of the command. Typically, this is done by manipulating a module-specific data structure. In Apache, modules can have private data associated with each virtual server, with each directory for which separate configuration information exists, and for each request which is currently in progress. The exact nature of these data structures is up to the module itself --- server core code deals with them exclusively by means of void * pointers which it treats as opaque. (However, there are core routines which make it easier to manage certain common kinds of data structures, such as the push_array utility routine used in the add_alias command handler below).

Returning to mod_alias, for another example, the command handler for Alias and ScriptAlias is as follows:

typedef struct {
    char *real;
    char *fake;
    char *forced_type;
} alias_entry;

char *add_alias(cmd_parms *cmd, void *dummy, char *f, char *r)
{
    server_rec *s = cmd->server;
    alias_server_conf *conf =
      (alias_server_conf *)
        get_module_config(s->module_config,&alias_module);
    alias_entry *new = push_array (conf->aliases);

    new->fake = f; new->real = r;
    new->forced_type = cmd->info;
    return NULL;
}

The command handler begins by getting the module's private data associated with the virtual server being configured. It then creates a new alias_entry using the utility routine push_array, and initializes its contents based on the arguments given to the command in the configuration file, and the cmd->info data from the module's command table.

The dummy argument is a placeholder for the per-directory data associated with the directory being configured; this module doesn't have any. The two char * arguments are, of course, the arguments from the command line itself in the configuration file; there are two, corresponding to the TAKE2 directives for the Alias and ScriptAlias lines in the command table.
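The alias_server_conf structure retrieved by get_module_config above has to be created somewhere: a module's ``create per-server config'' function is called by the core for each (virtual) server as the configuration files are read. The following is a plausible sketch of such a function, using the core's pcalloc and make_array utilities; the real mod_alias code may differ in detail:

typedef struct {
    array_header *aliases;      /* of alias_entry; filled in by add_alias */
    array_header *redirects;    /* of alias_entry; filled in by add_redirect */
} alias_server_conf;

void *create_alias_config(pool *p, server_rec *s)
{
    /* allocated in the configuration pool, so it lives as long as the
       configuration itself */
    alias_server_conf *conf =
        (alias_server_conf *)pcalloc(p, sizeof(alias_server_conf));

    conf->aliases = make_array(p, 20, sizeof(alias_entry));
    conf->redirects = make_array(p, 20, sizeof(alias_entry));
    return conf;                /* later retrieved via get_module_config */
}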

Per-directory configuration

It is sometimes useful to configure a given module to act differently on files in different directories. For NCSA compatibility, and just because it's a good idea, Apache supports two ways of configuring modules on a per-directory basis. The server's central configuration files can have <Directory> sections containing commands which are restricted to a given directory; alternatively, the directory itself can have a .htaccess file which is scanned for commands by the server at run-time. Either way, the commands are processed by the exact same handlers, which are declared in command tables with a similar format to those listed above.

To support this, each module can have private per-directory data, which is (again) treated by the server core as opaque. This information can be accessed by handlers for the various phases of request handling (access control, logging and the like) through the data structure which coordinates all the details of handling a particular request. Handlers can get at the per-directory information relevant to a particular request by using get_module_config on the per_dir_config member of the request_rec structure associated with a given request.

(As an example of why this might be useful: suppose one has a server extension which does a text search on the names of files in each directory, perhaps emulating the WN server's search feature[5]. It might be useful to control which files in a given directory were subject to the search, or indeed, which directories were subject to any kind of search at all. Given the Apache API, it is easy --- in fact, the path of least resistance --- to make this configurable both from the server's central access.conf file and from .htaccess files. The same mechanisms are used by the server internally, of course, to support the NCSA-compatible set of .htaccess directives which control directory indexing, file typing, and so forth).
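As a concrete sketch of the mechanism just described, a hypothetical search module's response handler might consult its per-directory data as follows; search_module, search_dir_conf and handle_search are invented for the example, but get_module_config and the per_dir_config member of the request_rec are used exactly as described above:

#include "httpd.h"
#include "http_config.h"

extern module search_module;    /* the hypothetical module's own structure */

typedef struct {
    int enabled;                /* set by a per-directory command handler */
    char *exclude_pattern;      /* filenames to leave out of the search */
} search_dir_conf;

static int handle_search(request_rec *r)
{
    search_dir_conf *cfg = (search_dir_conf *)
        get_module_config(r->per_dir_config, &search_module);

    if (cfg == NULL || !cfg->enabled)
        return FORBIDDEN;       /* searching disallowed for this directory */

    /* ... run the search and send the results back to the client ... */
    return OK;
}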

Tradeoffs in supporting per-directory configuration files

As mentioned above, Apache is capable of handling per-directory configuration files, which are read and parsed on each request. This allows a user to specify that a particular server extension should behave differently in that user's own directories than it would by default, without having to alter a central configuration file (subject to the applicable AllowOverride restrictions, which are specified in the module's command table). In fact, the user doesn't even need write permission on the server's central configuration files --- which may be just as well if the user in question is not trusted to configure the server's treatment of other peoples' data (a not infrequent occurrence at, say, university sites, or ISPs with a wide variety of mutually distrustful clients).

The cost of supporting .htaccess files is not trivial. Even though most directories at almost any site don't have .htaccess files of their own, the server can't tell a priori which do, and so it currently checks them all anyway. These checks do take a significant amount of time, particularly on file systems where attempting to open a file which does not exist is an expensive operation. Indeed, many servers do not support per-directory configuration files at all, for precisely that reason.

This cost could be reduced, of course, if information about the presence of .htaccess files were cached (along with their parsed contents). It would be possible to accomplish this without any change to the Apache API, and a future version may well support such a facility. In the meantime, it is worth noting that Apache sites which don't need the flexibility of per-directory configuration files, and which don't want to bear the cost, can turn it off by putting the following in their access.conf:


    <Directory />
    AllowOverride None
    ... other directives as appropriate
    </Directory>

and making sure that no other AllowOverride directives are present. (If AllowOverride None is in effect for a particular directory, Apache doesn't bother attempting to read the .htaccess file, since it could not legally have any effect).

Other approaches to per-directory configuration

The discussion above presumed a dichotomy --- either one supports per-directory configuration files, and accepts either the overhead of searching for them or the awkwardness of caching their contents, or one deals with a central registry and the associated problems with controlling access to it.

The Microsoft IIS server takes a sort of middle way --- it integrates handling of web server access permissions with the file system's own ACLs (access-control lists)[10]. This is a very neat solution to the problem of allowing individual authors or groups to manage their own external access policies, so long as the underlying file system supports a useful notion of ACLs. However, to take the same approach to per-directory control of other aspects of server behavior --- in particular, to allow for per-directory control of user extensions implemented through ISAPI --- would require, at the very least, an unusually rich metastructure within the filesystem.

Servers with integrated support for some sort of scripting language could offer yet more control, by allowing users to write server extensions of their own. As with allowing users to write central configuration files, there is a potential problem with allowing untrusted users to cause the server to take arbitrary actions. However, depending on the language in question, it may be possible to set a security policy for user extensions which keeps them safe --- for instance, by tightly restricting the files which a user extension could access. (In a server which could be extended in Java, for example, this would amount to allowing the webmaster to write a security manager for ``server applets''). The author is not aware of any server which implements such an approach, as yet --- he is indebted to Matthew Gray for the idea.

In particular, such an approach could neatly solve the ``control the search engine'' problem given above, as follows: rather than writing a search-engine server extension, which could be controlled via configuration-file commands, one would extend the server with a search-engine library routine, which individual users could cause to be invoked in extensions of their own.

Unusual features of Apache configuration management

Configuration management is probably the most varied aspect of server APIs. Apache's system is uncommonly flexible (in large measure because it had to allow several discrete modules to emulate the large and idiosyncratic command set of the thoroughly monolithic NCSA 1.3 server). In fact, it is arguably a bit too flexible --- Apache command handlers can say they want their arguments raw, and impose their own grammar on what they find; overuse of this facility could lead to unnecessary confusion for the webmasters who have to deal with it. Some other servers have particular configuration-file idioms which may be used to configure the behavior of user extensions (e.g., the AuthTrans, NameTrans, PathCheck, etc., directives in Netscape server obj.conf files), with a more restricted (and therefore, inevitably more consistent) syntax.

Likewise, Apache is unusually flexible in allowing a module to use completely arbitrary data structures to hold per-virtual server and per-directory information, as opposed to say, a fixed set of parameter-value associations read in from the configuration file. So, for instance, logging modules typically make the file descriptors of the log files they open part of their per-virtual-server data. This makes it more convenient for them to log transactions of different virtual servers into different files; otherwise, the module would have to maintain its own mapping between virtual servers and open files, which would be something of a nuisance.

Resource management

A web server has to manage many sorts of resources, which need to be allocated, and then freed, in the course of serving a request. It needs to open files, allocate memory for scratch areas, read through directories, spawn (and clean up after) subprocesses, and so forth. It is particularly important to keep track of all of this in a persistent-process web server, in which each individual server process may handle several hundred requests before dying; such designs have been adopted by many servers, including Apache, to eliminate the overhead of a fork per incoming request. If the process runs out of file descriptors, or allocates and fails to free large amounts of memory, the consequences can be unpleasant.

One approach, followed in the NCSA 1.4 web server code, is to keep track of which files were opened, and where memory might be allocated, to process a particular request, and to track it all down and deallocate it later. The problem with this approach is that writing the code to explicitly track down and free every last byte of allocated memory can be awkward and error-prone. (A particular source of trouble is memory which may have been allocated by handlers for commands in .htaccess files).

Apache takes a different approach, based on the notion of resource pools. There is a resource pool associated with each ongoing request; there is also a resource pool associated with the configuration state information. Allocations of finite resources (memory, open files, etc.) may be tied to such pools --- this is done by means of functions such as palloc and pfopen, which perform like their usual counterparts ( malloc and fopen), but take a pointer to the relevant resource pool as an extra argument.
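From a module's point of view, this looks roughly like the following sketch. The handler itself is invented; send_http_header, rputs, HUGE_STRING_LEN and NOT_FOUND are assumed from elsewhere in the Apache API. The point is that neither the open file nor the scratch buffer needs to be released explicitly --- both are tied to the request's pool:

#include <stdio.h>
#include "httpd.h"
#include "http_protocol.h"

static int send_copy_handler(request_rec *r)
{
    /* Both of these are tied to r->pool: the file is closed and the
       memory reclaimed when the request is finished, even if we bail
       out early with an error. */
    FILE *f = pfopen(r->pool, r->filename, "r");
    char *line = palloc(r->pool, HUGE_STRING_LEN);

    if (f == NULL)
        return NOT_FOUND;

    r->content_type = "text/plain";
    send_http_header(r);
    while (fgets(line, HUGE_STRING_LEN, f) != NULL)
        rputs(line, r);

    return OK;
}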

The resource pool itself is a data structure which keeps track of all such allocations; thus when the server is done with the resource pool (e.g., done handling a particular request), all resources allocated in the pool are freed en masse; therefore, one doesn't have to write large amounts of seek-and-destroy code to make sure it all does finally get freed up. (Similar approaches to storage management have been used by many compilers, including at least gcc and lcc, from which the idea was originally poached).

This offers programmers most of the advantages of full-scale garbage collection (in particular, being able to allocate small amounts of scratch space without large amounts of bookkeeping). However, it does not incur the same degree of implementation complexity.

One problem which sometimes comes up with this approach is that resources are not freed until the pool is cleared. This is a particular problem with regard to memory. To deal with this, modules can establish resource pools of their own (via make_sub_pool) which may be cleared or destroyed at will. For instance, this becomes very important in directory indexing, for reasons discussed below.
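A minimal sketch of the sub-pool idiom follows; process_item is a hypothetical helper assumed to allocate scratch space in whatever pool it is handed:

#include "httpd.h"

extern void process_item(char *name, pool *scratch);   /* hypothetical helper */

static void process_all(request_rec *r, char **names, int n)
{
    pool *sub = make_sub_pool(r->pool);     /* child of the request pool */
    int i;

    for (i = 0; i < n; ++i) {
        process_item(names[i], sub);        /* scratch allocations land in sub */
        clear_pool(sub);                    /* released before the next item */
    }
    destroy_pool(sub);
}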

Performance implications

The resource pool system was originally designed solely as a programmer's convenience, without much concern for efficiency one way or the other. (Experience with early server-pool versions of the NCSA server showed that memory leaks --- and file-descriptor leaks, which are just as devastating --- can be extremely hard to track down. Use of the resource-pool allocation primitives eliminates any possibility that the resource so allocated can leak). However, on some systems, there actually seems to be a performance benefit as well.

The source of this benefit is that memory allocation via palloc is unusually cheap, as C memory allocation primitives go --- the resource pool code internally mallocs large chunks of memory, and doles them out piecemeal, so in the common case, all palloc has to do is round up the requested size, bump a pointer, and do a range check. Library mallocs often do much more work. Likewise, memory allocated as scratch space for handling a particular request has to be released at some point. With the resource pool system, the cost of doing this is very small --- the memory blocks associated with the resource pool are just added to a free list, typically dealing with several hundred palloc calls at a time, since the blocks are quite large. However, the cost of an explicit call to the C library free is sometimes substantial.
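Purely for illustration (this is not the actual Apache source, and the structure and field names are invented), the common-case logic is roughly:

#include <stddef.h>

/* Sketch of the palloc fast path: the pool hands out space from a large
   malloc'd block; first_avail and endp delimit the unused part. */
typedef struct {
    char *first_avail;      /* next free byte in the current block */
    char *endp;             /* one past the end of the current block */
} pool_block;

static void *palloc_fast(pool_block *blk, size_t nbytes)
{
    size_t rounded = (nbytes + 7) & ~(size_t)7;   /* round up for alignment */
    char *result = blk->first_avail;

    if ((size_t)(blk->endp - result) < rounded)
        return NULL;        /* real code would chain on a new block here */
    blk->first_avail = result + rounded;
    return result;
}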

The down side to this, as discussed above, is that memory allocated for a particular request is not freed until the request is over and done with (barring the use of sub-pools); this means that some space is not returned to the scratch pool as soon as it could be. It would be interesting to compare the amount of memory lost to this sort of surplusage to that taken up by malloc control blocks and fragmentation under similar conditions, but the present author is not aware of anyone doing an actual benchmark of this sort inside a web server. Experience has shown, however, that total memory consumption is reasonable. (Apache processes in the author's system --- admittedly one with a relatively simple configuration, and no virtual hosts --- tend not to grow past 200K bytes, not counting shared text; this compares favorably with the NCSA server, which uses malloc directly, in similar configurations).

Resource management in other servers

Other APIs have some sort of pool-based system for memory allocation; ISAPI filters have access to a callback (AllocMem) which allocates memory which is automatically freed once the request is done[7]; the 2.0 version of the Netscape server API[4] has a pool-based allocator as well. The sub-pool mechanism, and the provision for pool-based allocation of open filehandles, seem to be a bit more idiosyncratic. Both have been found to be useful in practice. Pool protection for file handles is particularly useful, as the consequences when a server process runs out of them can be ugly, and experience has shown that such problems are difficult to track. (Such problems can arise, for instance, when a file is opened and then closed in the normal course of events, but the programmer forgets to close it in case of early returns due to some sort of error condition). The overhead of providing this protection, in comparison with the other costs associated with opening a file, is trivial.

Of course, nothing prevents a programmer using one of the other server APIs from simply providing their own pool structure, for memory or anything else; indeed, they are free to use the Apache code so long as they give proper acknowledgment.

Allowing native functionality to be superseded

One of the design goals for Apache was to allow people who wanted to supersede the native functionality of the server to do so. For instance, one might want to change the way the server decides what media type is associated with a particular request (HTML, GIF, AIFF, etc.) --- e.g., to make the server consult a metadata database instead of making inferences from the filename (as in John Franks' WN server[5]).

Following through on this raises some complications. For instance, suppose someone actually did write a module which consulted a metadata database to determine media types. Now, the server's standard directory indexing functionality (which is written as a module) selects an icon to put by each filename based on the media type. Naturally, if the metadata module were present, we would want the directory indexer to consult it, instead of the server's standard machinery, when selecting icons. And even if we were still using the standard mechanism to determine media types, we would still want the directory indexer to take into account whatever AddType directives might be present in the directory's .htaccess file (augmenting or overriding the server's usual rules for inferring file types from file names).

The sub-request mechanism

In short, we would want the directory indexer to be able to determine the media types using the same mechanisms that the server itself would use had the files been requested directly --- whatever they happen to be. Apache accommodates this by means of a mechanism called sub-requests.

Specifically, what the directory indexer actually does to determine the media type for a given filename is to call the function sub_req_lookup_file, which resolves the filename in question relative to the directory being indexed, and then runs all of the server's usual processing (access control and typing), stopping just short of actually running the request and producing output. The result of this is a request_rec structure, of the same type used to coordinate handling of ordinary requests, which the directory indexing code can inspect to see what happened --- in particular, what media type was assigned, and whether access was denied. (Server-side includes actually work by running sub-requests to completion, skipping only the logging).

Resource-management implications

One problem with the sub-request approach is resource handling. It turns out that the server allocates about 1K of memory in the course of handling a directory indexing sub-request. This isn't a whole lot for any one file, but if the server is indexing a directory of several hundred files, it adds up.

The server deals with this by giving sub-requests their own resource pools, which can be cleared out when the invoking code (directory indexing, in our example) has extracted the information it needs. This keeps actual memory usage to quite reasonable figures.
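Putting the pieces together, a directory indexer's per-entry work might look roughly like the following sketch; index_one_entry and choose_icon_for are invented for the example, while sub_req_lookup_file and destroy_sub_req are the sub-request primitives, the latter releasing the sub-request's private pool as just described:

#include "httpd.h"
#include "http_request.h"
#include "http_protocol.h"

extern char *choose_icon_for(char *content_type);   /* hypothetical helper */

static void index_one_entry(request_rec *r, char *filename)
{
    request_rec *sub = sub_req_lookup_file(filename, r);
    char *icon;

    /* sub->status can likewise be inspected to see whether access was denied */
    if (sub->content_type == NULL)
        icon = "/icons/unknown.gif";
    else
        icon = choose_icon_for(sub->content_type);

    rprintf(r, "<IMG SRC=\"%s\"> %s\n", icon, filename);

    destroy_sub_req(sub);     /* frees the sub-request's pool right away */
}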

Other desiderata

Some of the other desiderata for the server had a nontrivial impact on the API, and should be mentioned here at least briefly.

Back-compatibility

The current Apache server maintains a high degree of back-compatibility with the NCSA 1.3 and 1.4 web servers; for most purposes, it is a drop-in replacement. Unfortunately, that server supports commands whose effects cross module boundaries (for example, the Options command affects handling of CGI scripts, server-side includes, directory indexing, and numerous other features; likewise, per-directory require directives convey information which is of interest to all authentication types).

The way these commands are handled in Apache is to put them in a command table associated with the server core, and to provide functions such as allow_options and requires through which modules can get at the relevant settings. This mechanism is a little ad hoc, and not terribly elegant, but it does the job adequately well.
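For instance, a directory-indexing handler which needs to honor the core-managed Options setting might contain something like the following sketch; indexing_permitted is invented, and OPT_INDEXES is assumed to be one of the bits in the mask that allow_options returns:

#include "httpd.h"
#include "http_core.h"

/* Sketch: ask the core whether directory indexing is among the Options
   enabled for the directory this request falls in. */
static int indexing_permitted(request_rec *r)
{
    int opts = allow_options(r);        /* bitmask assembled by the core */
    return (opts & OPT_INDEXES) != 0;
}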

(As a side note, the NCSA development team has continued to add more such commands --- in particular, the Satisfy directive in NCSA 1.5 affects how access control and authentication relate to each other, if both are enabled for a given directory. In the Apache framework, this means that the command would have to be implemented by the server core code which invokes the various phases as needed; obviously, an individual module can't do anything about this).

Protocol independence

The initial Apache server handled only status-quo HTTP. However, there are variant protocols under discussion, some of which are already widely deployed (examples include Netscape's SSL[6], and the SHTTP proposal; there is also the HTTP-NG work). It is, of course, desirable to have as little of the code as possible depend on protocol details, so that as much of it as possible can be transferred to servers implementing new protocols.

Apache accommodates this by insulating modules written to the API from protocol details wherever it seemed reasonable to do so. For instance, API modules send data back to the clients by means of functions rputc and rprintf which might transfer the data directly, encrypt it, packetize it, or whatever --- module code does not know or care.

(However, that protocol independence does not currently carry through into the server core, which actually implements functions such as rprintf and rputc --- currently the details of those functions are quite tied up with the actual protocol code. For the same reason, it is not possible to write a plug-in module which, say, does HTTP-on-SSL --- the extant code for this, written quite ably by Ben Laurie, involved actually rewriting those functions. In the future, it might be useful to have a separate formal internal interface which separated protocol code from the server core, in order to make it a little easier to experiment with new protocols).

Problems with the API

I'd like to close with a brief discussion of a few deficiencies of the current API, and ways in which the server might be improved.

No protocol API

As discussed above, there is currently no way to simply plug a new protocol into Apache and have it work --- you need to go at some of the existing code with hammer and tongs. This could obviously be improved upon.

Hard to customize some existing modules

Another way in which the current server isn't as configurable as one might like is that while plug-in modules can customize a lot of server functionality, it is still sometimes only by replacing more of the server code than one would ideally like.

For instance, server-side includes are implemented as a module, so that if you want to support a different style of server-side inclusion directives, you can do that. However, if you just want to add a few inclusion directives to the existing set, that is awkward --- you have to rewrite, or at the very least copy, code to support the entire existing set of inclusion directives, and then add your own.

It would be far preferable to be able to write only code for the inclusion directives --- i.e., for the inclusion module to support plug-ins of its own. In fact, the Spinner server supports just such a facility[8].

Similar comments might be made with regard to a number of the other modules; for instance, it is impossible to change the format of server-generated directory listings without entirely copying the code which reads the directory and gathers information on the entries. Likewise, although there is an experimental module which allows the format of access-log entries to be specified, it doesn't allow for, say, selective logging of transactions based on client address (unless, again, the code for the formatting engine is duplicated in its entirety).

Order dependencies

As indicated above, there are phases of request handling in which the first module which elects to handle a particular request supersedes all the others. This can mean that the result of processing a given request depends on the order in which modules are consulted. For instance, the mod_userdir module, if active, does a particular sort of translation for URIs of the form /~foo. The reason that Alias commands can be used to override this translation for a particular /~foo URI is that mod_alias is consulted by the server core before mod_userdir. So, if modules are consulted in the wrong order, there may be unpleasant surprises.

One approach to at least managing this difficulty would be allowing modules to specify a ``priority'' with which they ought to be consulted, as is done for ISAPI filters[7]. Another approach might be to allow modules to be linked in either before or after a particular other module. These approaches would at least give webmasters the tools to deal with this sort of problem, although the tools in question might prove to be somewhat arcane in practice.

Fortunately, this problem hasn't proved (at least, not yet) to be significant trouble in practice.

Startup and tear-down hooks

The Apache API allows modules to declare a setup handler, which is run after all configuration files have been read (to open log files, and so forth). Other server APIs offer a richer set of facilities along these lines; for instance, they may allow user code to declare handlers for bare startup (before config files are read), server shutdown, and events related to maintenance of the process pool (a new server subprocess starting up; an old one going away). Some of these effects can be achieved in Apache regardless, by means which can only be described as hacks (e.g., done_init_yet flags); others are hard to simulate even with these means. A more complete set would be a useful addition.
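For reference, the one hook Apache does provide looks roughly like the following sketch; the module, structure and function names are invented, but the init function's arguments --- the main server_rec (from which the virtual servers can be reached) and the configuration pool --- are as supplied by the core:

#include <stdio.h>
#include "httpd.h"
#include "http_config.h"

extern module xfer_log_module;          /* the hypothetical module itself */

typedef struct {
    char *fname;                        /* filled in by a command handler */
    FILE *log;
} xfer_server_conf;

/* Run once, after all configuration files have been read. */
static void open_xfer_logs(server_rec *s, pool *p)
{
    for (; s != NULL; s = s->next) {    /* main server, then each virtual server */
        xfer_server_conf *conf = (xfer_server_conf *)
            get_module_config(s->module_config, &xfer_log_module);

        if (conf->fname != NULL)
            conf->log = pfopen(p, conf->fname, "a");  /* lives as long as the config pool */
    }
}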

Other approaches

As mentioned above, providing extensibility and flexibility is at least as important as raw performance in evaluating the utility of a server API. There are, of course, other ways to accomplish this goal besides dynamically linking in raw C code, as in Apache (and NSAPI[3], ISAPI[7], etc.). In particular, two different approaches have been tried, with some success, and it's worth a brief look at the trade-offs.

Extension languages

Some servers, most notably Spinner[8], make it easy to extend the server in a language other than C; Spinner itself is written in a language called uLPC, and allows extensions to be written in this language as well. The OpenMarket server's TCL-based configuration files also bear mention in this connection[9], as they do provide a facility for the user to write programs to control the server; likewise, the 2.0 Netscape server has a Java API (which bears a very strong family resemblance to the Netscape server's C API[4]). What all of these languages have in common is that they shield the programmer from some of the notorious pitfalls of C (e.g., unchecked subscripts and pointer accesses). This makes simple server extensions much easier to write.

However, there are disadvantages as well. Some of these languages are interpreted; in the case of extensions such as, say, a search engine, the additional overhead may be significant. Another difficulty is that third-party libraries (e.g., for database access) are generally easiest to get at by means of a C-based interface; using an extension language to access these facilities requires inter-language calls, which are sometimes awkward. Lastly, if the extension language has portability problems (as Java does, at least for the moment), those rebound upon the server as well.

(It would, of course, be possible to use an API for dynamically linked code to link in an interpreter for an extension language).

External processes

To this point, we have considered code run in the context of the server itself (i.e., in the same address space as the raw server code). This allows for very low overhead, but it also incurs a certain level of risk; if the extension code is faulty, nothing limits the amount of damage it can do to the rest of the server.

There is an alternative, which is to encapsulate server extensions as separate servers, which the main web server contacts via socket-based IPC. For example, Digicool's ILU requester[11] allows server extensions to be packaged up as ILU objects (ILU being a freely redistributable ORB, in the CORBA sense --- a distributed system for managing method calls between objects in different processes and on different systems). Interestingly, support for this mechanism has been added to both Apache and the Netscape server by use of their respective APIs.

This approach has a number of advantages over directly linking with the server code. To begin with, if the extension code runs in a separate address space, the amount of damage it can cause to the rest of the server is much more limited. Likewise, it limits the exposure of extension code to server internals. It also provides a somewhat friendlier environment for debugging, and can allow somewhat more freedom in the choice of implementation language (to the extent that the underlying protocols have been implemented in the languages in question; the ILU protocols have bindings to a large number of languages, ranging from C to Common Lisp).

There is a cost to all this, of course, counted up in the overhead of the required inter-process communication. However, since the extension server is typically itself a persistent process, it does not incur those portions of CGI overhead which have to do with CGI process startup (the fork, exec, and linking in of dynamic libraries, each time a CGI script is invoked, parsing the Perl code or config files, etc.), which may account for the lion's share.

In the present author's view, a more serious drawback of this sort of approach, at least in the case of the ILU requester itself, is that it makes it awkward to configure extension facilities in the same manner as one configures the native facilities of the server itself (i.e., to specify parameters to an ILU-based authentication scheme in the same manner as one configures the server's native authentication and access control).

This may be more a problem with current instances of the RPC-based approach, than with the approach itself. It may well be possible to come up with an interprocess API to convey changes of configuration state across process boundaries. There are interesting challenges here.

Conclusion

While the Apache API is far from perfect, it has proved to be useful for many different purposes, and makes some unusual choices (e.g., in the approach it takes to resource management) which seem to have worked out well. Hopefully, the most useful of these features will appear in any future standards for server APIs.

References

[1] Apache server documentation, http://www.apache.org, 1995

[2] NCSA server documentation, http://hoohoo.ncsa.uiuc.edu, 1995

[3] Netscape server API documentation, http://home.netscape.com/newsref/std/server_api.html, 1995

[4] Netscape server API documentation, version 2.0, distributed with beta versions of server, ftp://ftp.netscape.com/server/fasttrack/unix/, 1996

[5] WN server documentation, http://hopf.math.nwu.edu, 1995

[6] Netscape Secure Sockets Layer (SSL) documentation, http://home.netscape.com/newsref/std/SSL.html, 1995

[7] ISAPI documentation, Microsoft Corporation, in ActiveX Alpha SDK, http://www.msn.com/download/sdk/msactivedk.zip, 1996

[8] Spinner server technical overview, http://spinner.infovav.se/overview.html, 1995

[9] OpenMarket server technical overview, http://www.openmarket.com/library/WhitePapers/Server/index.html, 1996

[10] Microsoft IIS technical note Q142868, http://www.microsoft.com/kb/bussys/iis/q142868.htm, 1996

[11] Digicool ILU requester documentation, http://www.digicool.com/releases/ilurequester/, 1995



About the author

Robert Thau has been interested in computers from a very young age; while still in high school, he secured a summer job at Bell Laboratories writing a Lisp compiler. Amid more programming, he pursued a biology major at Harvard; after that, he took a research post at Thinking Machines. He is now a graduate student in the MIT department of Brain and Cognitive Sciences, working at the MIT Artificial Intelligence Lab. His interest in the Web began when he first saw X Mosaic. He soon set up a web server for the AI lab, and began to tinker with the software; this ultimately led to major contributions to design of the Apache web server.

