Dorée Duncan Seligmann, Cati Laporte, Stephan Vladimir Bugaj
101 Crawfords Corner Road
Holmdel, New Jersey 07733
We are exploring the use of visual imagery which simultaneously provides the content and control of Web-based interactive services. We describe four unconventional Web-based services we have implemented for messaging, a bulletin board, broadcast messages, and browsing through a set of hyperlinked objects. We implement each service using a real-world metaphor which serves as the basis for the visual presentation as well as the service itself; thus form and function are tightly coupled. The use of universal imagery eliminates the need for wordy explanations and hence increases accessibility to an international audience. During the course of development, we devised techniques to enhance the shared experiences of the visitors to our sites, including automatically generated 2D animations. These approaches can be applied to a variety of Web sites and are also described.
Keywords: virtual environments, virtual worlds, animation, user interface, metaphors, accessibility, multimedia, telepresence
As the Web evolves from a hypertext system into a hypermedia system it is important to explore textless methods of information navigation and new paradigms of multimedia user interaction. The services described in this paper are part of the ongoing Metaphorium project which explores such models of multimedia service interactivity. By eliminating the reliance on text and instead using a more universal visual language based on real-world metaphors we can increase the international accessibility of the services. In our model of navigation beyond text the users interact with iconic representations of the objects. Goals associated with the service metaphor and feedback to the user are also delivered within the metaphoric context.
These real-world metaphors provide a framework which enable shared visual experiences among visitors to the sites. Within each of the Metaphorium's dynamically generated space and time narratives the users have context-based metaphorical clues not only to the operations of the services but also to the presence of others within our shared virtual environments. During the course of this experiment we have devised several techniques to enhance the shared experience of both simultaneous and successive visitors to the same site. Instantaneous feedback can be provided to facilitate direct or indirect interaction between simultaneous users, and durable feedback can be provided to allow users (whose visits to the same virtual space are temporally separated) the opportunity to interact with each other.
Our emphasis is on aesthetics and interactivity: the technology and procedures should serve as a means for creating engaging experiences and not as the ends themselves. To reach this goal we have incorporated the content directly into the interface. Customized interfaces which adapt standard user interface technologies such as point-and-click and map them onto new systems of representation will make the technology more transparent. This approach also helps users to have more efficient and enjoyable interaction within virtual spaces by streamlining the process of observation of, interpretation of, and action upon control elements in the environment. The familiar metaphors of industrial control systems (buttons, knobs, sliders, etc.) are effective in building generic interfaces but their uses must be clarified by accompanying text since their intuitive functions are merely to trigger actions of any kind.
By creating interactivity controls based on real-world metaphors we are closing the gap between form and function in multimedia and WWW experience design. Thus our use of icons (visuals that correspond to real-world objects because they resemble them [Pe31]) is designed to enable the user to map their intuitive notions of function in the real world into our virtual environments. In this pursuit we have created icons which can be placed into scenes in which their metaphorical context can become apparent. This combination of visual elements within contextual frameworks creates a universal imagery [Co83] constituting an accessible code for visual communication in which the graphic elements afford functionality as well as appearance [Ga95].
We describe four web-based services which implement general messaging, bulletin board messaging, broadcast messaging, and navigation of hyperlinked objects using real-world metaphors as the basis for the multimedia environment. Message in a Bottle is a general messaging system which implements a randomized messaging system based on the real-world metaphor of putting messages into bottles and throwing them into the sea. Sand Typewriter is a bulletin board system in which the metaphor of ocean tides washing away writings in the sand is used to implement the expiration of messages in the BBS system. Skywriter is a broadcast messaging system that allows the users to broadcast a message by writing in a virtual sky for all to see. SubwaySurface is a system for navigating hyperlinked objects in which the browsing technology is incorporated into the presentation. The subway travel metaphor is used in creating functional navigation controls which are also aesthetic elements. These four projects create environments for exploring different elements of human communication in virtual spaces by removing some of the technical interface barriers which typically distract from the communication.
We have implemented this form of communication within a virtual environment consisting of a large uncharted sea dotted with islands. The sea is sometimes rough, the currents are strong, and if one could fly above, one would see that the currents are transporting glass bottles containing messages.
Visitors to the site land on an island, whereabouts unknown, out of their control. Their only form of communication is to place a message in a bottle and then throw it into the sea. Where it goes, and whether or not it can be retrieved, is determined by the conditions of the sea. At the same time, if, by chance, a bottle passes by the island the visitor can retrieve it and then: read the message, destroy it, add to it, and throw it back into the sea. Some bottles may spend days, perhaps months, in the sea without being found, while others may be found right away. Again, a message writer has no guarantee that his or her message will ever be read. Even if it is read and responded to the sender may never know.
Figure 1 shows the browser when we visited the site in the morning. There is a bottle in the sea. Below the seascape, there are two items: a pencil that can be used to write a new message, and a bottle icon, a convenient way to select the bottle in the sea for examination (the floating bottle itself is difficult to click on as it moves through the water). Figure 2 shows the browser after we click on the bottle to read the message. Again, the pencil allows us to write. In Figure 3 we are writing on the message. Figures 4a-4c show the sequence as we send the message back into the sea.
Animated Seascape: morning, clouds, bottle, and swimmers.
The association of a bottle with a location simply provides a real-world metaphor for the mechanism of messaging which enables the server to procedurally determine whether or not a message is accessible to a particular user. This general mechanism is simple and easy both to understand and represent visually. The details of how a bottle travels through the waters are unimportant because a visitor's location is arbitrary. This need not be visualized; thus, no maps are presented. However, our server maintains a representation of a sea with water currents, weather conditions, other objects, and of course the bottles. These conditions are updated every hour.
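The original server implementation was not published, so the following Python sketch uses names and parameters of our own invention. It illustrates the mechanism described above: bottles drift with the current each hour, and a bottle becomes accessible to a visitor only when it happens to pass near that visitor's island.

```python
import random

class Sea:
    """Toy model of the server-side sea: bottles drift with a current,
    and a bottle is retrievable only while it is near the visitor's island.
    All names and parameters here are illustrative, not from the paper."""

    def __init__(self, width=100, height=100, seed=0):
        self.width, self.height = width, height
        self.rng = random.Random(seed)   # could drive weather, current changes
        self.bottles = {}                # bottle id -> (x, y) position
        self.current = (1, 0)            # hourly drift vector

    def drop(self, bottle_id, x, y):
        """A visitor throws a bottle into the sea at (x, y)."""
        self.bottles[bottle_id] = (x, y)

    def tick(self):
        """Advance the simulation one hour: every bottle drifts with the
        current, wrapping around the edges of the sea."""
        dx, dy = self.current
        for bid, (x, y) in self.bottles.items():
            self.bottles[bid] = ((x + dx) % self.width, (y + dy) % self.height)

    def retrievable(self, island_xy, radius=5):
        """Bottles within `radius` of the island can be examined."""
        ix, iy = island_xy
        return [bid for bid, (x, y) in self.bottles.items()
                if abs(x - ix) <= radius and abs(y - iy) <= radius]
```

Because the drift is server-side and positions are never shown to users, the model can stay this coarse: only the retrievability test is ever observable.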
It is unlikely that a visitor to the site will visit the same island twice, or under the same conditions. We visually represent several natural phenomena which are combined to generate the animated background scene. The time of day at the user's real-world location determines the time of day depicted on the island (controlling the properties of the visual components: the sky, sea and land). In this way, the virtual environment is bound to the real world, and in this example provides a kind of visual clock. Figure 1 shows the sea in the morning; Figure 5, in the evening; and Figure 6, at night. In [Se95] we argued that the visualization of virtual places can benefit if integrated with parts of the real world, though that work represented only real-world devices and places. Weather conditions are also shown, but they are randomly chosen by the server. We have created animated sequences for a variety of types of rain storms, lightning, sea conditions, and skies. Lighting and coloring are changed based on these conditions in addition to the time of day.
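A minimal sketch of this scene selection, with hour boundaries and weather labels of our own choosing (the actual values used by the site are not documented): the time of day follows the visitor's real-world clock, while the weather is drawn at random by the server.

```python
import random
from datetime import datetime

def scene_for_hour(hour):
    """Map the visitor's local hour to a background scene.
    The boundaries here are illustrative, not the site's actual values."""
    if 5 <= hour < 12:
        return "morning"
    if 12 <= hour < 17:
        return "afternoon"
    if 17 <= hour < 21:
        return "evening"
    return "night"

def pick_conditions(rng=random):
    """Time of day is bound to the real world; weather is the server's
    random choice, independent of the clock."""
    return {"scene": scene_for_hour(datetime.now().hour),
            "weather": rng.choice(["clear", "clouds", "rain", "storm"])}
```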
Sunset with clouds and Canada geese.
Night with moon and Titanic sinking.
Other elements are added to create variety in the visual presentation each time the site is visited. These include swimmers (as in Figure 1), flocks of birds (as in Figure 5), airplanes, boats, and specific events, such as the Titanic sinking in Figure 6. In order to achieve the variety of effects, the animated graphics are composited by the client applet based on the scene condition parameters sent by the server. The main background animations are created by selecting from different layers of animation sequences which can be superimposed seamlessly to create different effects and greater variety. Other animated items are overlaid on these scenes and follow generated animation paths over the scene at varying speeds, described in 3.4 below. Scenes created with the same animation elements will rarely be composited identically, so although the animated loops are short sequences, the complete animation is not as repetitive. Similarly, the paper on which a message is written (as shown in Figure 2) is unlikely to be the same. A set of procedures randomly selects its color and draws from a library of edges, interior wrinkles, and tears.
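The layered compositing can be sketched as follows; the layer names and loop libraries are hypothetical stand-ins for the actual animation assets. One loop is chosen per layer, so even a small library yields many distinct composites.

```python
import random

# Hypothetical libraries of short animation loops, one list per layer.
# Superimposing one choice from each layer composes a complete scene.
LAYERS = {
    "sky":   ["sky_clear", "sky_clouds", "sky_storm"],
    "sea":   ["sea_calm", "sea_choppy", "sea_rough"],
    "extra": [None, "swimmers", "geese", "plane", "boat"],
}

def compose_scene(rng):
    """Pick one loop per layer. With 3 * 3 * 5 = 45 combinations here,
    repeated visits rarely see an identical composite even though each
    individual loop is a short repeating sequence."""
    return {layer: rng.choice(loops) for layer, loops in LAYERS.items()}
```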
We have also implemented an electronic bulletin board on which postings have a limited life span and are bound to a particular location on the bulletin board's surface. This surface is divided into different areas. Users can place a posting on a vacant area on the bulletin board and can search the bulletin board's surface to read other postings. Vacant areas are available in a first-come-first-served order and the server maintains a queue of the users currently associated with each area. Once a message is posted, a timer is started; when the time period expires, the message will be removed. While each message has a relatively equal probability of being read, messages are posted with no assurance that anyone will see them before they are removed.
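The area allocation, first-come-first-served queue, and message expiration described above can be sketched as follows (class and method names are ours; the original server was a different implementation):

```python
from collections import deque

class BulletinBoard:
    """Toy model of the board: fixed surface areas, a FIFO queue of
    waiting writers per area, and postings that expire after `lifespan`
    clock ticks. Names and structure are illustrative only."""

    def __init__(self, n_areas, lifespan):
        self.lifespan = lifespan
        self.postings = {}                       # area -> (message, expires_at)
        self.queues = {a: deque() for a in range(n_areas)}
        self.clock = 0

    def post(self, area, user, message):
        """Post immediately if the area is vacant; otherwise join the
        queue of writers waiting for it. Returns True if posted now."""
        if area not in self.postings:
            self.postings[area] = (message, self.clock + self.lifespan)
            return True
        self.queues[area].append((user, message))
        return False

    def tick(self):
        """Advance time; expired messages are washed away and the next
        queued writer, if any, takes over the freed area."""
        self.clock += 1
        for area in list(self.postings):
            _, expires_at = self.postings[area]
            if self.clock >= expires_at:
                del self.postings[area]
                if self.queues[area]:
                    _, message = self.queues[area].popleft()
                    self.postings[area] = (message, self.clock + self.lifespan)
```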
We have implemented this form of communication within a virtual environment consisting of a narrow circular sandy coastline completely surrounding the virtual sea. The beach is the bulletin board's surface. Its shape limits the way it is browsed: visitors walk left or right. By walking continuously in one direction, a visitor can examine the entire coastline.
Visitors to the site are placed at random locations along the coastline. A visitor can use the sand typewriter to leave a message in any blank area. As visitors walk along the coast they can read the messages etched in the sand, but these messages are temporal, and eventually waves will sweep over them and wash them away. A visitor wishing to leave a new message can wait until another message is erased and then use the freed space. There may be other visitors in the same space, waiting to leave a message. Whoever starts typing on the sand typewriter goes first; subsequent writers are queued in the order that they attempt to type when the area is not available.
The bulletin board is represented by a shared space which is divided into discrete areas to give visitors some sense of place, enhancing telepresence. Although the visitors cannot see each other they may become aware of each other's telepresence when new messages appear as they are being written. Visitors can use this mechanism to communicate directly by typing in messages one after the other.
Figure 7 shows a sequence of frames as the short message "SAND" is erased. Sound files of crashing waves accompany the animated seascape. The appearance of the waves and the accompanying sounds provide cues signalling that the message is about to be removed, just before a wave sweeps over and erases it.
We have also implemented a type of broadcast message in which a message is displayed to all current users at the same time. This message is assigned a short lifespan and is erased after a few minutes. Messages are broadcast with no assurance that every user will see them during their short display.
We have implemented this form of communication so that it is available from within all parts of the virtual environment. Virtual planes equipped with skywriting abilities write the messages in the sky with smoke. As soon as the message is posted, the letters of the message start to blur and eventually fade away. Figure 8 shows a sequence of frames as the message "SKY" starts to fade. An audio track of a plane begins before the plane is in view and signals the message's arrival.
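The broadcast lifespan can be sketched as a simple time-to-live check shared by all clients (names here are illustrative; the site's actual fade is an animated effect, not a hard cutoff):

```python
class SkyMessage:
    """Toy broadcast message: visible to every connected client from the
    moment it is posted until its lifespan elapses. In the actual site
    the letters blur and fade gradually rather than vanishing at once."""

    def __init__(self, text, lifespan, now):
        self.text = text
        self.posted_at = now
        self.expires_at = now + lifespan

    def visible(self, now):
        """True while the skywriting has not yet faded away."""
        return self.posted_at <= now < self.expires_at

    def opacity(self, now):
        """Linear fade from 1.0 (just written) to 0.0 (gone)."""
        if not self.visible(now):
            return 0.0
        return (self.expires_at - now) / (self.expires_at - self.posted_at)
```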
Each overlaid animation is described by an entry in a parameter file whose fields specify:

/* name of the animation file */
/* number of frames in the animation */
/* number of different sequences */
/* acceptable number of instances in the scene */
/* range for when an instance starts */
/* type of Animator and FLIP to create */
/* starting x location */
/* starting y location */
/* direction of animation */
/* range over which the distance covered is varied */
/* range of minimum distance moved */
/* range of maximum distance moved */
/* range for minimum amount of time speed is constant */
/* range for maximum amount of time speed is constant */
/* range for how long the animation is off-screen */
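Only the per-field comments of this parameter file survive here; the field names themselves are not given. A sketch of how such a file might be parsed into per-overlay parameters, with field names and value syntax entirely of our own invention:

```python
def parse_overlay_params(lines):
    """Parse hypothetical 'name = value  /* comment */' lines into a dict.
    Plain integers become ints; 'lo..hi' values become (lo, hi) range
    tuples, matching the many 'range for ...' fields in the listing."""
    params = {}
    for line in lines:
        line = line.split("/*")[0].strip()    # drop the trailing comment
        if not line:
            continue
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip()
        if ".." in value:
            lo, hi = value.split("..")
            params[key] = (int(lo), int(hi))
        elif value.isdigit():
            params[key] = int(value)
        else:
            params[key] = value
    return params
```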
SubwaySurface is an exhibition of photographs taken by Alvaro Munoz. Visitors to the site travel (virtually) on the A-Line of the New York subway system. They can select which station to go to and upon arrival they are presented with one or more photographs taken of the street scenes outside that particular station.
The display comprises several components: a main viewer area which represents what the user can see, an information area on which notices are posted, a set of iconic controls, and a live map on which the subway car travels. Visitors browse through the photographs by selecting stations to visit or by taking the guided tour. Each of these components changes as the visitor's context changes: the main viewer shows either the interior of the subway car or the scene outside the station, the information sign indicates the status of the ride (current destination, location, or status of the service), and the map shows the user's current location on the A-line. Figure 10 shows the site when the subway is at the 125th Street Station. Sound is used to enhance the experience: a version of "Take The A-Train" is played for ambiance, and sound effects are added during subway travel. Travel is initiated by clicking on a subway station, which causes the current photograph to be replaced by the user's view from within the subway car. As the next image is downloaded from the server, the subway car travels to the next station on the map, the information sign changes to indicate the origin and destination stations, and the noisy sound of subway travel is played over the music. Figure 11 shows the site as we travel downtown: the interior of the subway car with other anonymous passengers.
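The browsing flow can be sketched as a small state machine (class and field names are our own; the original was a client applet): selecting a station switches the viewer to the interim subway-car scene and updates the sign, masking the download of the next photograph, which appears on arrival.

```python
class SubwayRide:
    """Toy state machine for the SubwaySurface flow. The interim
    'subway-car' view keeps the user engaged on the client side while
    the next photograph transfers from the server."""

    def __init__(self, stations):
        self.stations = stations
        self.location = stations[0]
        self.destination = None
        self.sign = ""
        self.view = "photo:" + stations[0]    # scene outside the station

    def select(self, station):
        """User clicks a station: show the car interior while loading."""
        self.destination = station
        self.view = "subway-car"
        self.sign = f"{self.location} -> {station}"

    def arrive(self):
        """Download complete: show the photograph at the new station."""
        self.location = self.destination
        self.view = "photo:" + self.location
        self.sign = f"Now at {self.location}"
```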
Figure 10. Arriving at the 125th Street Station in Harlem.
Figure 11. Traveling downtown. We are not alone.
We have also shown two mechanisms for creating contextual, adaptive interface elements designed specifically to address problems associated with delivering multimedia content on the Web. Non-repetitive scenes were created by dynamically combining layers of short animated graphics to create a more dynamic visual environment for the seascape projects. Many Web animations are repetitive in an attempt to keep animation file sizes small, but by combining small animation loops in an adaptive manner we give users more variety of experience without having to create large, long pre-rendered animations. By using music and intermediary scenes in the SubwaySurface project we engage the user with client-side activity while new content is transferred from the server. The interim scenes of people inside the subway car also inform the user of how many other users are currently connected, providing user feedback in a contextual manner.
For the most part, the emphasis has been on rendering electronic communication efficient (e-mail, subject-based newsgroups and bulletin boards, chat rooms), the idea being that users should be able to find what they are seeking and converse succinctly with known agents. The direct and narrowcast models of communication are concerned with delivering information to a clearly defined, narrow set of recipients. Broadcast and incidental models of communication are concerned with delivering information to anyone who happens upon it. Facilities for incidental communication, unexpected meetings and conversations with people we did not seek out and on subjects we did not necessarily select, can be used to address different communicative needs than the many direct communication facilities currently available. The services described enable this kind of interaction.
As we continue to explore these issues we will be adding more direct interactivity between users in the Metaphorium environments. Soon you will be able to chat with others riding the A train, and we are also developing new metaphorical environments to model different kinds of contextual, adaptive interactivity. We will also be refining our data-gathering methods within the servers and improving the ability of our experiences to respond to user interactions. New projects will continue to explore inefficient or hybrid communications, adaptive interactivity, contextual metaphors, and the general issues of interaction in a Web setting.
[Ga95] Gaver, W. W. "Oh What a Tangled Web We Weave: Metaphor and Mapping in Graphical Interfaces." In Proceedings of ACM SIGCHI '95 Human Factors in Computing Systems Conference Companion. Denver, Colorado, May 7-11, 1995. On-line version: http://www.acm.org/sigchi/chi95/Electronic/documnts/shortppr/wwg2bdy.htm
[Ga91] Gaver, W. W., Smith, R. B., and O'Shea, T. "Effective Sounds in Complex Systems: The ARKola Simulation." In Proceedings of ACM SIGCHI '91 Human Factors in Computing Systems. New Orleans, Louisiana, April 27-May 2, 1991.
[Gr95] Grueneis, G. et al. "The T_Vision Project." In Proceedings of ACM SIGGRAPH '95 Interactive Community. Los Angeles, California, August 6-11, 1995. Art+Com T_Vision site: http://www.artcom.de/projects/t_vision
[Gob96] Gobbetti, E., Leone, A. O. "Virtual Sardinia: A Large-Scale Hypermedia Regional Information System." In Proceedings of the Fifth International World Wide Web Conference, Paris, France, May 6-10, 1996. pp. 1539-1546. Virtual Sardinia site: http://www.crs4.it/PRJ/VIRTSARD
[Gos96] Gosling, J., Joy, B., Steele, G. "The Java Language Specification." Addison-Wesley, Reading, Massachusetts. 1996.
[Ku96] Kurlander, D., Skelley, T., Salesin, D. "Comic Chat." In Proceedings of ACM SIGGRAPH '96, New Orleans, Louisiana, August 4-9, 1996. pp. 225-226. Comic Chat site: http://www.microsoft.com/ie/comichat/
[Pe31] Peirce, C. S. "Collected Papers." Harvard University Press, Cambridge, Massachusetts. 1931.
[Po97] Potmesil, M., "Maps Alive: Viewing Geographical Information on the World Wide Web." In Proceedings of the Sixth International World Wide Web Conference. 1997.
[Se96] Seligmann, D.D. "Position Paper for Workshop on Virtual Environments." Fifth International World Wide Web Conference, Paris, France, May 6-10, 1996.
[Se95] Seligmann, D. D., Mercuri, R. T., Edmark, J. T. "Providing Assurances in a Multimedia Interactive Environment." In Proceedings of ACM SIGCHI '95, Denver, Colorado, May 7-11, 1995. pp. 250-256. On-line version: http://www.acm.org/sigchi/chi95/proceedings/papers/dds_bdy.htm