The Message Is The Medium

Dorée Duncan Seligmann, Cati Laporte, Stephan Vladimir Bugaj
Bell Laboratories
Lucent Technologies
Room 4F-605
101 Crawfords Corner Road
Holmdel, New Jersey 07733
tel: 908-949-4290
fax: 908-949-0399


We are exploring the use of visual imagery which simultaneously provides the content and control of Web-based interactive services. We describe four unconventional Web-based services we have implemented for: messaging, a bulletin board, broadcast messages, and browsing through a set of hyperlinked objects. We implement each service using a real-world metaphor which serves as the basis for the visual presentation as well as the service itself; thus form and function are tightly coupled. The use of universal imagery eliminates the need for wordy explanations and hence increases accessibility to an international audience. During the course of development, we devised techniques to enhance the shared experiences of the visitors to our sites, including automatically generated 2D animations. These approaches can be applied to a variety of Web sites and are also described.

Keywords: virtual environments, virtual worlds, animation, user interface, metaphors, accessibility, multimedia, telepresence


1. Introduction

As the Web evolves from a hypertext system into a hypermedia system, it is important to explore textless methods of information navigation and new paradigms of multimedia user interaction. The services described in this paper are part of the ongoing Metaphorium project, which explores such models of multimedia service interactivity. By eliminating the reliance on text and instead using a more universal visual language based on real-world metaphors, we can increase the international accessibility of the services. In our model of navigation beyond text, users interact with iconic representations of objects. Goals associated with the service metaphor and feedback to the user are also delivered within the metaphoric context.

These real-world metaphors provide a framework which enables shared visual experiences among visitors to the sites. Within each of the Metaphorium's dynamically generated space and time narratives, users have context-based metaphorical clues not only to the operations of the services but also to the presence of others within our shared virtual environments. During the course of this experiment we have devised several techniques to enhance the shared experience of both simultaneous and successive visitors to the same site. Instantaneous feedback can be provided to facilitate direct or indirect interaction between simultaneous users, and durable feedback can be provided to allow users whose visits to the same virtual space are temporally separated the opportunity to interact with each other.

Our emphasis is on aesthetics and interactivity: the technology and procedures should serve as a means for creating engaging experiences and not as the ends themselves. To reach this goal we have incorporated the content directly into the interface. Customized interfaces which adapt standard user interface technologies such as point-and-click and map them onto new systems of representation will make the technology more transparent. This approach also helps users to have more efficient and enjoyable interaction within virtual spaces by streamlining the process of observation of, interpretation of, and action upon control elements in the environment. The familiar metaphors of industrial control systems (buttons, knobs, sliders, etc.) are effective in building generic interfaces but their uses must be clarified by accompanying text since their intuitive functions are merely to trigger actions of any kind.

By creating interactivity controls based on real-world metaphors we are closing the gap between form and function in multimedia and WWW experience design. Thus our use of icons (visuals that correspond to real-world objects because they resemble them [Pe31]) is designed to enable users to map their intuitive notions of function in the real world into our virtual environments. In this pursuit we have created icons which can be placed into scenes in which their metaphorical context becomes apparent. This combination of visual elements within contextual frameworks creates a universal imagery [Co83], constituting an accessible code for visual communication in which the graphic elements afford functionality as well as appearance [Ga95].

We describe four Web-based services which implement general messaging, bulletin board messaging, broadcast messaging, and navigation of hyperlinked objects using real-world metaphors as the basis for the multimedia environment. Message in a Bottle is a general messaging service based on the real-world metaphor of putting messages into bottles and throwing them into the sea. Sand Typewriter is a bulletin board system in which the metaphor of ocean tides washing away writings in the sand is used to implement the expiration of messages. Skywriter is a broadcast messaging system that allows users to broadcast a message by writing in a virtual sky for all to see. SubwaySurface is a system for navigating hyperlinked objects in which the browsing technology is incorporated into the presentation: the subway travel metaphor is used to create navigation controls which are both functional and aesthetic elements. These four projects create environments for exploring different elements of human communication in virtual spaces by removing some of the technical interface barriers which typically distract from the communication.

2. Motivation

Our goals are to study how people interact on the Web, create adaptive systems which can automatically adjust to facilitate this interactivity as transparently as possible, and integrate content and control in the visual presentation of multimedia interfaces to afford more natural interaction with the virtual environment through contextual metaphors. We are also experimenting with models of incidental communication on the Web. The servers for each of these projects compile metrics about site access for both technical and human-interface tuning. Users can also submit feedback directly to the Webmasters. This data is being monitored and compiled to help us continue to refine these experiments and create new ones.

3. Shared Experiences in an Automatically Generated Space and Time Narrative: Message in a Bottle, Sand Typewriter, and Skywriter

The virtual environment described in the first three examples is a visual presentation of the real-world metaphors that serve as the basis of three unusual communication modes. The environment is represented exclusively with animated graphics accompanied by sound. For the most part, only the users' messages are presented using text. The visual presentation illustrates the communication methods and provides the interface for their use. The combination of animated graphics and audio creates the atmosphere of the virtual environment and reinforces the real-world metaphors.

3.1 Message in a Bottle

We have implemented a messaging system with several unusual characteristics. Our system consists of a message pool into which users may at any time add messages. Messages are not addressed to anyone in particular; a server determines to which users different messages are accessible. After connecting to the service a user may or may not have a message to read. Once retrieved, a message can be discarded, edited, or appended to, and then put back into the pool. Thus, messages are written with no assurance that they will ever be read, and if read their authors may never see the responses to their own messages.
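The pool semantics just described can be sketched minimally in Java; the class and method names below are our own illustration, not the names used in the actual server:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Minimal sketch of the message-pool semantics: messages are addressed
// to no one, and the server (not the reader) decides whether a message
// happens to be accessible on a given visit.
public class BottlePool {
    private final List<String> pool = new ArrayList<>();
    private final Random rng;

    public BottlePool(long seed) { this.rng = new Random(seed); }

    // Anyone may drop a message in at any time.
    public void drop(String message) { pool.add(message); }

    // The server decides whether a bottle drifts past this visitor;
    // the visitor may well leave empty-handed.
    public String retrieve() {
        if (pool.isEmpty() || rng.nextDouble() > 0.5) return null;
        return pool.remove(rng.nextInt(pool.size()));
    }

    public int size() { return pool.size(); }
}
```

Because `retrieve` may return nothing even when the pool is non-empty, a writer has no assurance the message will ever be read, matching the property described above.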

We have implemented this form of communication within a virtual environment consisting of a large uncharted sea dotted with islands. The sea is sometimes rough, the currents are strong, and if one could fly above, one would see that the currents are transporting glass bottles containing messages.

Visitors to the site land on an island, whereabouts unknown, out of their control. Their only form of communication is to place a message in a bottle and then throw it into the sea. Where it goes, and whether or not it can be retrieved, is determined by the conditions of the sea. At the same time, if, by chance, a bottle passes by the island, the visitor can retrieve it and then read the message, destroy it, or add to it and throw it back into the sea. Some bottles may spend days, perhaps months, in the sea without being found, while others may be found right away. Again, a message writer has no guarantee that his or her message will ever be read. Even if it is read and responded to, the sender may never know.

Figure 1 shows the browser when we visited the site in the morning. There is a bottle in the sea. Below the seascape, there are two items: a pencil that can be used to write a new message, and a bottle, a convenient way to select the bottle in the sea for examination (it is difficult to click on as it moves through the water). Figure 2 shows the browser after we click on the bottle to read the message. Again, the pencil allows us to write. In Figure 3 we are writing on the message. Figures 4a-4c show the sequence as we send the message back into the sea.

Figure 1.
Animated Seascape: morning, clouds, bottle, and swimmers.

Figure 2.
Reading the message.

Figure 3.
Adding to the message.

Figure 4a.
Sending the message: The message disappears.

Figure 4b.
The message appears in the bottle.

Figure 4c.
The stopper appears and the bottle eventually disappears.

The association of a bottle with a location simply provides a real-world metaphor for the mechanism of messaging which enables the server to procedurally determine whether or not a message is accessible to a particular user. This general mechanism is simple and easy both to understand and represent visually. The details of how a bottle travels through the waters are unimportant because a visitor's location is arbitrary. This need not be visualized; thus, no maps are presented. However, our server maintains a representation of a sea with water currents, weather conditions, other objects, and of course the bottles. These conditions are updated every hour.

It is unlikely that a visitor to the site will visit the same island twice, or under the same conditions. We visually represent several natural phenomena which are combined to generate the animated background scene. The time of day at the user's real-world location determines the time of day depicted on the island (controlling the properties of the visual components: the sky, sea, and land). In this way, the virtual environment is bound to the real world and, in this example, provides a kind of visual clock. Figure 1 shows the sea in the morning; Figure 5, in the evening; and Figure 6, at night. In [Se95] we argued that the visualization of virtual places can benefit from integration with parts of the real world, but there we represented only real-world devices and places. Weather conditions are also shown, but they are randomly chosen by the server. We have created animated sequences for a variety of types of rain storms, lightning, sea conditions, and skies. Lighting and coloring are changed based on these conditions in addition to the time of day.

Figure 5.
Sunset with clouds and Canada geese.

Figure 6.
Night with moon and Titanic sinking.
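The binding of the scene to the visitor's local clock can be sketched minimally as follows; the class name, phase names, and hour boundaries are our own assumptions, not taken from the implementation:

```java
// Sketch of binding the virtual scene to the visitor's real-world
// clock: the local hour selects the background variant. The bucket
// boundaries below are illustrative guesses.
public class SceneClock {
    public enum Phase { MORNING, DAY, EVENING, NIGHT }

    public static Phase phaseFor(int hour) {
        if (hour < 0 || hour > 23) throw new IllegalArgumentException("hour");
        if (hour < 6)  return Phase.NIGHT;
        if (hour < 12) return Phase.MORNING;
        if (hour < 18) return Phase.DAY;
        if (hour < 21) return Phase.EVENING;
        return Phase.NIGHT;
    }
}
```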

Other elements are added to create variety in the visual presentation each time the site is visited. These include swimmers (as in Figure 1), flocks of birds (as in Figure 5), airplanes, boats, and specific events, such as the Titanic sinking in Figure 6. To achieve this variety of effects, the animated graphics are composited by the client applet based on the scene condition parameters sent by the server. The main background animations are created by selecting from different layers of animation sequences which can be superimposed seamlessly to create different effects and greater variety. Other animated items are overlaid on these scenes and follow generated animation paths over the scene at varying speeds, as described in 3.4 below. Scenes created with the same animation elements will rarely be composited identically, so although the animated loops are short sequences, the complete animation is not as repetitive. Similarly, the paper on which a message is written (as shown in Figure 2) is unlikely to be the same. A set of procedures randomly selects its color and, from a library, its edges, interior wrinkles, and tears.
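The per-visit layer selection described above might be sketched as follows; the layer names and class structure are illustrative only, not the actual applet code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of per-visit scene assembly: each visual component (sky,
// sea, weather, ...) has several interchangeable animation layers,
// and one is drawn at random, so successive visits rarely composite
// identically even though the individual loops are short.
public class SceneCompositor {
    private final Random rng;

    public SceneCompositor(long seed) { this.rng = new Random(seed); }

    // Each row of layerChoices lists the alternatives for one layer;
    // the result is a bottom-to-top stack of layers to superimpose.
    public List<String> compose(String[][] layerChoices) {
        List<String> scene = new ArrayList<>();
        for (String[] choices : layerChoices) {
            scene.add(choices[rng.nextInt(choices.length)]);
        }
        return scene;
    }
}
```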

3.2 Sand Typewriter

We have also implemented an electronic bulletin board on which postings have a limited life span and are bound to a particular location on the bulletin board's surface. This surface is divided into different areas. Users can place a posting on a vacant area on the bulletin board and can search the bulletin board's surface to read other postings. Vacant areas are available in a first-come-first-served order and the server maintains a queue of the users currently associated with each area. Once a message is posted, a timer is started; when the time period expires, the message will be removed. While each message has a relatively equal probability of being read, messages are posted with no assurance that anyone will see them before they are removed.
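One area of such a bulletin board could be modelled as below; the field and method names are ours, not from the original server:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of one bulletin-board area: a posting occupies the area
// until its timer expires, and would-be writers wait in
// first-come-first-served order.
public class BoardArea {
    private String posting;            // null while the area is vacant
    private long expiresAt;            // removal time for the posting
    private final Deque<String> waitingWriters = new ArrayDeque<>();

    // Returns true if this writer got the area; otherwise queues them.
    public boolean post(String user, String message, long now, long lifespan) {
        if (posting == null) {
            posting = message;
            expiresAt = now + lifespan;
            return true;               // whoever starts typing goes first
        }
        waitingWriters.addLast(user);  // queued in arrival order
        return false;
    }

    // Called periodically by the server tick to expire postings.
    public void tick(long now) {
        if (posting != null && now >= expiresAt) posting = null;
    }

    public String read() { return posting; }
    public String nextWriter() { return waitingWriters.pollFirst(); }
}
```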

We have implemented this form of communication within a virtual environment consisting of a narrow circular sandy coastline completely surrounding the virtual sea. The beach is the bulletin board's surface. Its shape limits the way it is browsed: visitors walk left or right. By walking continuously in one direction, a visitor can examine the entire coastline.

Visitors to the site are placed at random locations along the coastline. A visitor can use the sand typewriter to leave a message in any blank area. As visitors walk along the coast they can read the messages etched in the sand, but these messages are ephemeral: eventually waves will sweep over them and wash them away. A visitor wishing to leave a new message can wait until another message is erased and then use the freed space. There may be other visitors in the same space, waiting to leave a message. Whoever starts typing on the sand typewriter goes first; subsequent writers are queued in the order that they attempt to type while the area is unavailable.

The bulletin board is represented by a shared space which is divided into discrete areas to give visitors some sense of place, enhancing telepresence. Although the visitors cannot see each other they may become aware of each other's telepresence when new messages appear as they are being written. Visitors can use this mechanism to communicate directly by typing in messages one after the other.

Figure 7 shows a sequence of frames as the short message "SAND" is erased. Sound files of crashing waves accompany the animated seascape. The appearance of the waves and the accompanying sounds provide cues signalling that a message is about to be removed, just before a wave illustrates the message's removal.

Figure 7. Sand message washed by the sea.

3.3 Skywriter

We have also implemented a type of broadcast message in which a message is displayed to all current users at the same time. The message is assigned a short lifespan and is erased after a few minutes. Messages are broadcast with no assurance that a given user will see them during their short display.

We have implemented this form of communication so that it is available from within all parts of the virtual environment. Virtual planes equipped with skywriting abilities write the messages in the sky with smoke. As soon as the message is posted, the letters of the message start to blur and eventually fade away. Figure 8 shows a sequence of frames as the message "SKY" starts to fade. An audio track of a plane begins before the plane is in view and signals the message's arrival.

Figure 8. Skywriting blurs.

3.4 Automatically Generated 2D Animation

In order to create non-repetitive animation without the overhead of downloading many frames, we have created an animation specification language. Objects are instantiated from among a selection of AnimationFX classes and are combined to create the animation objects that make up the scene. Each animation object is controlled by an Animator object that moves the object and a Flip object that selects the next frame to show. Our library of Animator and Flip objects enables us to succinctly specify a wide variety of random animations. Figure 9 shows the animation specification for the swimmer animation. There are six frames in the swimming animation sequence and three different swimmers to choose from. A range specifies how many animation instances are allowed; in this case there can be two to twelve swimmers. For each instance (each swimmer), one of the three sequences is randomly chosen and a random start time determines when that swimmer first appears in the scene. A Flip object created for each swimmer randomly chooses a starting frame and varies the rate at which the frames are changed; thus swimmers will randomly speed up or slow down. A separate Animator object created for each swimmer varies the distance the swimmer travels each frame tick; thus, some swimmers will appear stronger or weaker as the animation continues. The use of ranges of random numbers for every parameter creates an ever-changing scene.

baseName=swimmer /* name of the animation file*/
numberOfFrames=6 /* number of frames in the animation */
numberOfSequences=3 /* number of different sequences */
numInstances=2-12 /* acceptable number of instances in the scene */
startAnimation=1-150 /* range for when an instance starts */
animationType=HORIZ_SEQ /* type of Animator and FLIP to create */
startX=-WIDTH /* starting x location */
startY=130-160 /* starting y location */
leftDirection=FALSE /* direction of animation */
variance=3-5 /* range for change in distance covered */
minimumIncrement=1-5 /* range of minimum distance moved */
maximumIncrement=5-15 /* range of maximum distance moved */
minimumSpeedDuration=6-10 /* range for minimum amount of time speed is constant */
maximumSpeedDuration=40-50 /* range for maximum amount of time speed is constant */
repeatPause=1-30 /* range for how long the animation is off-screen */
Figure 9. Animation Specification For Swimmers
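Values such as numInstances=2-12 denote ranges from which the client draws random integers. A minimal reader for one such ranged parameter might look like this; the class and method names are illustrative, not from the actual applet:

```java
import java.util.Random;

// Sketch of reading one ranged parameter from the animation
// specification (e.g. numInstances=2-12): parse the bounds, then
// draw a uniform value per animation instance.
public class SpecRange {
    public final int min, max;

    public SpecRange(int min, int max) { this.min = min; this.max = max; }

    // Accepts either a single value ("6") or a range ("2-12").
    public static SpecRange parse(String value) {
        int dash = value.indexOf('-', 1);  // skip a possible leading sign
        if (dash < 0) {
            int v = Integer.parseInt(value);
            return new SpecRange(v, v);
        }
        return new SpecRange(Integer.parseInt(value.substring(0, dash)),
                             Integer.parseInt(value.substring(dash + 1)));
    }

    public int draw(Random rng) {
        return min + rng.nextInt(max - min + 1);
    }
}
```

Drawing every parameter anew for each instance is what makes each composited scene differ from the last.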

4. A Shared Experience in a Living Map: SubwaySurface

4.1 SubwaySurface

We have implemented a technique for browsing through a collection of items based on a real-world metaphor for travel, in which the linked objects and the browsing technique are closely bound. Objects are associated with real world locations and users navigate through these locations in order to access these items. In effect, this site is a virtual gallery, but one which is not modelled after a real-world art gallery space with paintings on the walls or sculptures on stands. Instead, the browsing technique is the theme of the exhibition and the basis for the virtual environment. The art itself was created in conjunction with the browsing mechanism, and thus the experience was developed with a contextually integrated control interface from the start.

SubwaySurface is an exhibition of photographs taken by Alvaro Munoz. Visitors to the site travel (virtually) on the A-Line of the New York subway system. They can select which station to go to and upon arrival they are presented with one or more photographs taken of the street scenes outside that particular station.

The display is composed of several components: a main viewer area which represents what the user can see, an information area on which notices are posted, a set of iconic controls, and a live map on which the subway car travels. Visitors browse through the photographs by selecting stations to visit or by taking the guided tour. Each of these components changes as the visitor's context changes: the main viewer shows either the interior of the subway car or the scene outside the station, the information sign indicates the status of the ride (current destination, location, or status of the service), and the map shows the user's current location on the A-line. Figure 10 shows the site when the subway is at the 125th Street Station. Figure 11 shows the site as we travel downtown. Sound is used to enhance the experience: a version of "Take The A-Train" is played for ambiance while sound effects are added during subway travel. Travel is initiated by clicking on a subway station, which causes the current photograph to be replaced by the user's view from within the subway car. As the next image is downloaded from the server, the subway car travels to the next station on the map, the Information Sign is changed to indicate the to and from stations, and the noisy sound of subway travel is played over the music. Figure 11 shows the interior of the subway car with other anonymous passengers.

Figure 10. Arriving at the 125th Street Station in Harlem.

Figure 11. Traveling downtown. We are not alone.

Maps are an intuitive method for navigating through objects with spatial associations. In both Art+Com's T_Vision [Gr95] and [Gob96] users navigate a 3D terrain, thus enabling them to access location-based information and hyperlinks. In [Po97] a 2D map is generated and then augmented with layers of various natural phenomena as well as icons (sometimes animated) which provide access to hyperlinked objects. In these examples the map is the primary content element. In SubwaySurface the map area, which only takes up a part of the display, is not the primary content but serves two interaction purposes: it enables travel (i.e. it is the browsing interface) and it indicates the user's current location.

4.2 Animation and Synchronization

We use animation and sound to distract and entertain the user while images download. The animation paths for subway travel are automatically generated by the client applet. Information about the subway line is transmitted to the client along with the map. It includes a list of all stations on the different routes, the textual names to be used for labels, and their locations on the map. This information is used to generate the list of points used in the animation. The subway car varies in speed as it travels over the map. The path is formed by selecting points on the train line that are equidistant in time, not in physical distance. A PathPlanner object is passed all the known points between the two stations. Speed accumulates over time, and a stopping distance regulates the deceleration. Thus, the animation of the subway car is slightly different each time, based on the chosen start and stop stations. Because this use of animation is designed to distract the user while the photographs are downloaded, the frame rate of the animation is adjusted using the tracked times for transferring each image. The refresh rate of the animation is increased or decreased based on the measured transfer times.
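The pacing just described, with speed accumulating over the trip and a stopping distance regulating deceleration, can be sketched on a one-dimensional track standing in for the map path; the constants and class name below are illustrative, not taken from the original PathPlanner:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of subway animation pacing: the car accelerates up to a
// cruising speed, then a stopping distance forces deceleration into
// the destination. Positions are emitted one per animation tick.
public class PathPacer {
    public static List<Double> positions(double trackLength) {
        double accel = 0.2, maxSpeed = 4.0, stoppingDistance = 20.0;
        List<Double> ticks = new ArrayList<>();
        double pos = 0.0, speed = 0.0;
        while (pos < trackLength) {
            double remaining = trackLength - pos;
            if (remaining < stoppingDistance) {
                // decelerate in proportion to the distance left
                speed = Math.max(0.5, maxSpeed * remaining / stoppingDistance);
            } else {
                speed = Math.min(maxSpeed, speed + accel);
            }
            pos = Math.min(trackLength, pos + speed);
            ticks.add(pos);
        }
        return ticks;
    }
}
```

Because the tick count depends on the track length, different start and stop stations naturally yield slightly different animations, as in the system described above.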

5. Effects to Enhance Telepresence

We outline here several effects designed to enhance each user's sense of telepresence.

6. Implementation

These applications were all implemented as Java [Gos96] applets with accompanying Java servers running on Silicon Graphics hardware with Netscape FastTrack to provide HTTP services. The graphics were created on a Mac with Adobe Photoshop, while testing of animation sequences was done in Macromind Director.

7. Conclusions and Future Work

We have created WWW-based virtual environments in which there is a unification of graphical elements for content and control. The content, though in some cases itself text-based, is integrated into the visual environment to create a coherent metaphorical context for user interaction with the data. Integration of feedback-loop elements directly into the content space will allow experience designers to create more natural-feeling interactivity within their virtual worlds. For each user, a customized view of the digital metaphor is generated automatically [Se96].

We have also shown two mechanisms for creating contextual, adaptive interface elements designed specifically to address problems associated with delivering multimedia content on the Web. For the seascape projects, non-repetitive scenes were created by dynamically combining layers of short animated graphics. Many Web animations are repetitive in an attempt to keep animation file sizes small; by combining small animation loops in an adaptive manner we give users more variety of experience without having to create large, long pre-rendered animations. By using music and intermediary scenes in the SubwaySurface project we engage the user with client-side activity while new content is transferred from the server. The interim scenes of people inside the subway car also inform the user of how many other users are currently connected, providing feedback in a contextual manner.

For the most part there has been an emphasis on rendering electronic communication efficient (e-mail, subject-based newsgroups and bulletin boards, chat rooms), the idea being that users should be able to find what they are seeking and converse succinctly with known agents. The direct and narrowcast models of communication are concerned with delivering information to a clearly defined, narrow set of recipients. Broadcast and incidental models of communication are concerned with delivering information to anyone who happens upon it. Facilities for incidental communication, unexpected meetings and conversations with people we did not seek out and on subjects we did not necessarily select, can be used to address different communicative needs than the many direct communication facilities currently available. The services described enable this kind of interaction.

As we continue to explore these issues we will be adding more direct interactivity between users in the Metaphorium environments. Soon you will be able to chat with others riding the A train, and we are also developing new metaphorical environments to model different kinds of contextual, adaptive interactivity. We will also be refining our data-gathering methods within the servers and improving the ability of our experiences to respond to user interactions. New projects will continue to explore inefficient or hybrid communications, adaptive interactivity, contextual metaphors, and the general issues of interaction in a Web setting.

8. Acknowledgments

Abigail Joseph wrote the prototype version of SubwaySurface. Shiwon Choe wrote the prototype version of Message in a Bottle. The subway car animation was written by John Edmark.

9. References

[Co83] Cossette, Claude. Les images démaquillées: approche scientifique de la communication par l'image. Éditions Riguil, Québec, 1983.

[Ga95] Gaver, W. W. "Oh What a Tangled Web We Weave: Metaphor and Mapping in Graphical Interfaces." In Proceedings of ACM SIGCHI '95 Human Factors in Computing Systems Conference Companion. Denver, Colorado, May 7-11, 1995.

[Ga91] Gaver, W. W., Smith, R. B., and O'Shea, T. "Effective Sounds in Complex Systems: The ARKola Simulation." In Proceedings of ACM SIGCHI '91 Human Factors in Computing Systems. New Orleans, Louisiana, April 27-May 2, 1991.

[Gr95] Grueneis, G. et al. "The T_Vision Project." In Proceedings of ACM SIGGRAPH '95 Interactive Community. Los Angeles, California, August 6-11, 1995.

[Gob96] Gobbetti, E., Leone, A.O. "Virtual Sardinia: A Large-Scale Hypermedia Regional Information System." In Proceedings of the Fifth International World Wide Web Conference, Paris, France, May 6-10, 1996. pp. 1539-1546.

[Gos96] Gosling, J., Joy, B., Steele, G. "The Java Language Specification." Addison-Wesley, Reading, Massachusetts, 1996.

[Ku96] Kurlander, D., Skelley, T., Salesin, D. "Comic Chat." In Proceedings of ACM SIGGRAPH '96, New Orleans, Louisiana, August 4-9, 1996. pp. 225-226.

[Pe31] Peirce, C. S. "Collected Papers." Harvard University Press, Cambridge, MA, 1931.

[Po97] Potmesil, M., "Maps Alive: Viewing Geographical Information on the World Wide Web." In Proceedings of the Sixth International World Wide Web Conference. 1997.

[Se96] Seligmann, D.D. "Position Paper for Workshop on Virtual Environments." Fifth International World Wide Web Conference, Paris, France, May 6-10, 1996.

[Se95] Seligmann, D.D., Mercuri, R.T., Edmark, J.T. "Providing Assurances in a Multimedia Interactive Environment." In Proceedings of ACM SIGCHI '95, Denver, Colorado, May 7-11, 1995. pp. 250-256.

9.1 URL References

The Metaphorium
