An e-Market Framework for Informed Trading
John Debenham
Faculty of IT
University of Technology, Sydney
PO Box 123 Broadway, NSW 2007, Australia
Simeon Simoff
Faculty of IT
University of Technology, Sydney
PO Box 123 Broadway, NSW 2007, Australia
Date: 30 September 2005
simeon@it.uts.edu.au
Abstract:
Fully automated trading, such as e-procurement, using the Internet is virtually unheard of today. Three core technologies are needed to fully automate the trading process: data mining, intelligent trading agents and virtual institutions in which informed trading agents can trade securely both with each other and with human agents in a natural way.
This paper describes a demonstrable prototype e-trading system that integrates these three technologies and is available on the World Wide Web. This is part of a larger project that aims to make informed automated trading a reality.
H.5.1 [Information Systems]: Multimedia Information Systems
H.4.m [Information Systems]: Miscellaneous
I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence
I.2.m [Artificial Intelligence]: Miscellaneous
Market reliability
The potential size of the electronic business market and the comparatively small amount of automated negotiation presently deployed provide a major incentive for research in automated trading. Fully automated trading, such as e-procurement, using the Internet is virtually unheard of today. Trading involves the maintenance of effective business relationships, and is the complete process of: need identification, product brokering, supplier brokering, offer-exchange, contract negotiation, and contract execution.
Three core technologies are needed to fully automate the trading process:
- data mining -- real-time data mining technology to tap information flows from the marketplace and the World Wide Web, and to deliver timely information at the right granularity.
- trading agents -- intelligent agents that are designed to operate in tandem with the real-time information flows received from the data mining systems.
- virtual institutions -- virtual places on the World Wide Web in which informed trading agents can trade securely both with each other and with human agents in a natural way -- not to be confused with the term ``virtual organisations'' as used in Grid computing.
This paper describes an e-trading system that integrates these three technologies. The e-Market Framework is available on the World Wide Web. This project aims to make informed automated trading a reality, and develops further the ``Curious Negotiator'' framework [12]. This work does not address all of the issues in automated trading. For example, it relies on developments in: XML and the semantic web, secure data exchange, value chain management and financial services.
The data mining systems that have been developed for mining information both from the virtual institution and from general sources from the World Wide Web are described in Sec. 2.
Intelligent agents that are built on an architecture designed specifically to handle real-time information flows are described in Sec. 3.
Sec. 4 describes the work on virtual institutions -- this work has been carried out in collaboration with the ``Institut d'Investigacio en Intel.ligencia Artificial'' (IIIA), Spanish Scientific Research Council, UAB, Barcelona, Spain.
Sec. 5 concludes.
2 Data Mining
We have designed information discovery and delivery agents that utilise text and network data mining to support real-time negotiation. This work has addressed the central issues of extracting relevant information from different on-line repositories with different formats, and with possibly duplicated and erroneous data. That is, we have addressed the central issues in extracting information from the World Wide Web. Our mining agents understand the influence that extracted information has on the subject of negotiation and take that into account.
Figure 1: The information that impacts trading negotiation
Real-time embedded data mining is an essential component of the proposed framework. In this framework the trading agents make their informed decisions by utilising two types of information (as illustrated in Figure 1):
- information extracted from the negotiation process (i.e. from the exchange of offers); and
- information from external sources, extracted and provided in condensed form.
The embedded data mining system provides the information extracted from the external sources. The system complements and services the information-based architecture developed in [5] and [11]. The information request and the information delivery format are defined by the interaction ontology. As these agents operate with negotiation parameters with a discrete set of feasible values, the information request is formulated in terms of these values. As agents proceed with negotiation they have a topic of negotiation and a shared ontology that describes that topic. For example, if the topic of negotiation is buying a number of digital cameras for a University, the shared ontology will include the product model of the camera, and some characteristics, like ``product reputation'' (which may itself be a list of parameters), that are usually derived from additional sources (for example, from different opinions in a professional community of photographers or digital artists). As the information-based architecture assumes that negotiation parameters are discrete, the information request can be formulated as a subset of the range of values for a negotiation parameter. For example, if the negotiator is interested in cameras with 8 megapixel resolution, and the brand is a negotiation parameter, the information request can be formulated as a set of camera models, e.g. {``Canon Power Shot Pro 1'', ``Sony f828'', ``Nikon Coolpix 8400'', ``Olympus C-8080''} and a preference estimate based on the information in the different articles available. The collection of parameter sets of the negotiation topic constitutes the input to the data mining system. Continuous numerical values are replaced by a finite number of ranges of interest.
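As an illustration, an information request over such discrete negotiation parameters could be represented as follows. This is a minimal sketch; all names (camera models, parameter names, ranges) are illustrative only and not part of the deployed system:

```python
# The topic ontology fixes the negotiation parameters and their discrete
# feasible values; continuous values are binned into ranges of interest.
information_request = {
    "topic": "digital camera purchase",
    "parameter": "brand_model",
    "values": [
        "Canon Power Shot Pro 1",
        "Sony f828",
        "Nikon Coolpix 8400",
        "Olympus C-8080",
    ],
}

def bin_continuous(value, ranges):
    """Replace a continuous numerical value by the range of interest it falls in."""
    for low, high in ranges:
        if low <= value < high:
            return (low, high)
    return None

# e.g. resolution in megapixels mapped onto ranges of interest
print(bin_continuous(8.0, [(0, 5), (5, 8), (8, 12)]))  # -> (8, 12)
```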
The data mining system initially constructs data sets that are ``focused'' on requested information, as illustrated in Figure 2. From the vast amount of information available in electronic form, we need to filter the information that is relevant to the information request. In our example, this will be the news, opinions, comments, white papers related to the five models of digital cameras. Technically, the automatic retrieval of the information pieces utilises the universal news bot architecture presented in [13]. Developed originally for news sites only, the approach is currently being extended to discussion boards and company white papers.
Figure 2: The pipeline of constructing ``focused'' data sets
The ``focused'' data set is dynamically constructed in an iterative process. The data mining agent constructs the news data set according to the concepts in the query. Each concept is represented as a cluster of key terms (a term can include one or more words), defined by the proximity positions of the frequent key terms. On each iteration the most frequent terms from the retrieved data set are extracted and considered to be related to the same concept. The extracted keywords are resubmitted to the search engine. The process of query submission, data retrieval and keyword extraction is repeated until the search results start to drift away from the given topic.
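The iterative construction described above can be sketched as follows. Here `search` and `on_topic` are hypothetical stand-ins for the news bot retrieval component and the topic-drift test, and the tokenisation is deliberately naive:

```python
from collections import Counter

def extract_top_terms(documents, n=5):
    """Most frequent key terms in the retrieved documents (toy tokenisation)."""
    counts = Counter(w.lower() for d in documents for w in d.split())
    return [t for t, _ in counts.most_common(n)]

def build_focused_set(query_terms, search, on_topic, max_rounds=10):
    """Repeat query -> retrieval -> keyword extraction until results drift off topic."""
    data_set, seen = [], set()
    terms = list(query_terms)
    for _ in range(max_rounds):
        docs = [d for d in search(terms) if d not in seen]
        if not docs or not on_topic(docs):
            break                                # results start to drift off topic
        data_set.extend(docs)
        seen.update(docs)
        terms = extract_top_terms(data_set)      # resubmit extracted keywords
    return data_set
```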
The set of topics in the original request is used as a set of class labels. In our example we are interested in the evidence in support of each particular camera model. A simple solution is to introduce two labels for each model -- positive opinion and negative opinion -- ending with ten labels. In the constructed ``focused'' data set, each news article is labelled with one of the values from this set of labels. An automated labelling approach reported in [13] extends the tree-based approach proposed in [10].
The data sets require further automatic preprocessing to deal with possible redundancies in the encoded information that can bias the analysis algorithms -- for example, identifying a set of opinions about a camera that most likely come from the same author, even though they have been retrieved from different ``opinion boards'' on the Internet.
Once the set is constructed, building the ``advising model'' is reduced to a classification data mining problem. As the model is communicated back to the information-based agent architecture, the classifier output should include all the possible class labels with an attached probability estimate for each class. Hence, we use probabilistic classifiers (e.g. Naïve Bayes, Bayesian network classifiers) [9] without the min-max selection of the class output: in a classifier based on the Naïve Bayes algorithm, we calculate the posterior probability of each class with respect to combinations of key terms and then return the ⟨class, probability⟩ tuples for all classes, not just the one with maximum posterior probability.
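A minimal sketch of such a probabilistic classifier, returning the full set of ⟨class, probability⟩ tuples rather than only the arg-max class; the training data and smoothing constant are illustrative, not part of the deployed system:

```python
import math
from collections import Counter, defaultdict

def train(labelled_docs):
    """labelled_docs: list of (tokens, label). Returns class priors and term counts."""
    priors = Counter(lbl for _, lbl in labelled_docs)
    term_counts = defaultdict(Counter)
    for tokens, lbl in labelled_docs:
        term_counts[lbl].update(tokens)
    return priors, term_counts

def posteriors(tokens, priors, term_counts, alpha=1.0):
    """Return [(class, P(class | tokens))] for EVERY class, not just the max."""
    vocab = {t for c in term_counts.values() for t in c}
    total = sum(priors.values())
    scores = {}
    for lbl, prior in priors.items():
        logp = math.log(prior / total)
        denom = sum(term_counts[lbl].values()) + alpha * len(vocab)
        for t in tokens:                      # Laplace-smoothed term likelihoods
            logp += math.log((term_counts[lbl][t] + alpha) / denom)
        scores[lbl] = logp
    z = max(scores.values())                  # normalise in log space
    exp = {l: math.exp(s - z) for l, s in scores.items()}
    norm = sum(exp.values())
    return sorted((l, exp[l] / norm) for l in exp)
```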
In the case where we deal with range variables, the data mining system returns the range within which the estimated value lies. For example, the response to a request for an estimate of the rate of change between two currencies over a specified period of time is produced in three steps: (i) the relevant focused news data set is updated for the specified period; (ii) the model that takes this news into account is updated; and (iii) the output of the model is compared with the requested ranges and the matching one is returned.
The details of this part of the data mining system are presented in [14]. The model currently used is a modified linear model with an additional term that incorporates a news index I_news, which reflects the effect of news on the exchange rate. The current architecture of the data mining system in the e-market environment is shown in Figure 3. The outputs of the system are passed to the information-based agent architecture. In addition, the data mining system provides parameters that define the ``quality of the information'', including:
- the time span of the ``focused'' data set (defined by the oldest and the latest information units);
- estimates of the characteristics of the information sources, including reliability, trust and cost, that are then used by the information-based agent architecture.
Overall, the parameters estimated by the mining algorithms and provided to the negotiating agents are expected to allow information-based agents to devise more effective and better informed situated strategies. In addition to the data coming from external sources, the data mining component of the project will develop techniques for analysing agent behavioural data with respect to the electronic institution setup.
Figure 3: The architecture of the agent-based data mining system
3 Trading Agents
We have designed a new agent architecture founded on information theory. These ``information-based'' agents operate in real-time in response to market information flows. We have addressed the central issues of trust in the execution of contracts, and the reliability of information [11]. Our agents understand the value of building business relationships as a foundation for reliable trade.
An inherent difficulty in automated trading -- including e-procurement -- is that it is generally multi-issue. Even a simple trade, such as a quantity of steel, may involve: delivery date, settlement terms, as well as price and the quality of the steel. The ``information-based'' agent's reasoning is based on a first-order logic world model that manages multi-issue negotiation as easily as single-issue.
Most of the work on multi-issue negotiation has focussed on one-to-one bargaining -- for example [6]. There has been rather less interest in one-to-many, multi-issue auctions -- [4] analyzes some possibilities -- despite the size of the e-procurement market, which typically attempts to extend single-issue reverse auctions to the multi-issue case by post-auction haggling. There has been even less interest in many-to-many, multi-issue exchanges.
The generic architecture of our ``information-based'' agents is presented in Sec. 3.1. The agent's reasoning employs entropy-based inference and is described in Sec. 3.2. The integrity of the agent's information is in a permanent state of decay; Sec. 3.3 describes the agent's machinery for managing this decay, leading to a characterization of the ``value'' of information. Sec. 3.4 describes metrics that bring order and structure to the agent's information with the aim of supporting its management.
3.1 Information-Based Agent Architecture
This section describes the essence of ``information-based agency''.
An agent observes events in its environment including what other agents actually do. It chooses to represent some of those observations in its world model as beliefs. As time passes, an agent may not be prepared to accept such beliefs as being ``true'', and qualifies those representations with epistemic probabilities. Those qualified representations of prior observations are the agent's information. This information is primitive -- it is the agent's representation of its beliefs about prior events in the environment and about the other agents' prior actions. It is independent of what the agent is trying to achieve, or what the agent believes the other agents are trying to achieve. Given this information, an agent may then choose to adopt goals and strategies. Those strategies may be based on game theory, for example. To enable the agent's strategies to make good use of its information, tools from information theory are applied to summarize and process that information. Such an agent is called information-based.
Figure 4: Basic architecture of agent
An agent called Π is the subject of this discussion. Π engages in multi-issue negotiation with a set of other agents: {Ω_1, ..., Ω_o}. The foundation for Π's operation is the information that is generated both by and because of its negotiation exchanges. Any message from one agent to another reveals information about the sender. Π also acquires information from the environment -- including general information sources -- to support its actions. Π uses ideas from information theory to process and summarize its information. Π's aim may not be ``utility optimization'' -- it may not be aware of a utility function. If Π does know its utility function and if it aims to optimize its utility then Π may apply the principles of game theory to achieve its aim. The information-based approach does not reject utility optimization -- in general, the selection of a goal and strategy is secondary to the processing and summarizing of the information.
In addition to the information derived from its opponents, Π has access to a set of information sources {Θ_1, ..., Θ_t} that may include the marketplace in which trading takes place, and general information sources such as news-feeds accessed via the Internet. Together, Π, {Ω_1, ..., Ω_o} and {Θ_1, ..., Θ_t} make up a multiagent system.
The integrity of Π's information, including information extracted from the Internet, will decay in time. The way in which this decay occurs will depend on the type of information, and on the source from which it was drawn. Little appears to be known about how the integrity of real information, such as news-feeds, decays, although its validity can often be checked -- ``Is company X taking over company Y?'' -- by proactive action given a cooperative information source. So Π has to consider how and when to refresh its decaying information.
Π has two languages: C and L. C is an illocutionary-based language for communication. L is a first-order language for internal representation -- precisely, it is a first-order language with sentence probabilities optionally attached to each sentence, representing Π's epistemic belief in the truth of that sentence. Fig. 4 shows a high-level view of how Π operates. Messages expressed in C from the opponents {Ω_i} and the information sources {Θ_j} are received, time-stamped, source-stamped and placed in an in-box. The messages in the in-box are then translated using an import function into sentences expressed in L that have integrity decay functions (usually of time) attached to each sentence, and they are stored in a repository. And that is all that happens until Π triggers a goal.
Π triggers a goal in two ways: first in response to a message received from an opponent -- ``I offer you 1 in exchange for an apple'' -- and second in response to some need -- ``goodness, we've run out of coffee''. In either case, Π is motivated by a need: either a need to strike a deal with a particular feature (such as acquiring coffee) or a general need to trade. Π's goals could be short-term such as obtaining some information (``what is the time?''), medium-term such as striking a deal with one of its opponents, or rather longer-term such as building a (business) relationship with one of its opponents. So Π has a trigger mechanism that maps incoming messages and needs to goals.
For each goal that Π commits to, it has a mechanism for selecting a strategy to achieve it from a strategy library. A strategy maps an information base into an action: given a goal and the current state of the social model, a strategy is selected from the library. Each strategy consists of a plan and a world model (construction and revision) function that constructs, and maintains the currency of, the strategy's world model, which consists of a set of probability distributions. A plan derives the agent's next action on the basis of the agent's world model for that strategy and the current state of the social model. Π employs two forms of entropy-based inference:
- Maximum entropy inference first constructs an information base as a set of sentences expressed in L derived from the message repository, and then from that information base constructs the world model as a set of complete probability distributions [using Eqn. 2 in Sec. 3.2 below].
- Given a prior world model, minimum relative entropy inference first constructs the incremental information base of sentences derived from messages received since the model was last constructed, and then from the prior model and the incremental information base constructs a new world model [using Eqn. 3 in Sec. 3.2 below].
3.2 Π's Reasoning
Once Π has selected a plan it uses maximum entropy inference to derive the probability distributions in its world model [see Fig. 4] and minimum relative entropy inference to update those distributions as new data become available.
Entropy, H, is a measure of uncertainty [8] in a probability distribution for a discrete random variable X:

H(X) = -Σ_i p(x_i) log p(x_i)

where p(x_i) = P(X = x_i).
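Sketched directly in code, with the entropy measured in bits:

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a discrete probability distribution p."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

print(entropy([0.5, 0.5]))    # maximal uncertainty over two outcomes: 1 bit
print(entropy([1.0, 0.0]))    # a certain outcome carries no uncertainty
```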
Maximum entropy inference is used to derive sentence probabilities for that which is not known by constructing the ``maximally noncommittal'' [7] probability distribution, and is chosen for its ability to generate complete distributions from sparse data.
Let G be the set of all positive ground literals that can be constructed using Π's language L. A possible world, v, is a valuation function v: G → {⊤, ⊥}. V denotes the set of all possible worlds that are consistent with Π's knowledge base K, which contains statements that Π believes are true. A random world for K, W = {p_n}, is a probability distribution over V = {v_n}, where p_n expresses Π's degree of belief that the possible world v_n is the actual world. The derived sentence probability of any sentence σ in L, with respect to a random world W, is:

P_W(σ) = Σ_n { p_n : σ is ⊤ in v_n }    (1)
The agent's belief set B = {β_j} contains statements to which Π attaches a given sentence probability B(β_j). A random world W is consistent with B if P_W(β_j) = B(β_j) for each β_j in B. Let {q_n} be the ``maximum entropy probability distribution over V that is consistent with B''. Given an agent with knowledge base K and belief set B, maximum entropy inference states that the derived sentence probability for any sentence, σ, is:

P(σ) = Σ_n { q_n : σ is ⊤ in v_n }    (2)
From Eqn. 2, each belief imposes a linear constraint on the {p_n}. The maximum entropy distribution -- arg max_p H(p), subject to the linear constraints Σ_n c_{jn} p_n = B(β_j) for each belief β_j, where c_{jn} = 1 if β_j is ⊤ in v_n and 0 otherwise, and Σ_n p_n = 1 -- is found by introducing Lagrange multipliers, and then obtaining a numerical solution using the multivariate Newton-Raphson method. In the subsequent subsections we'll see how an agent updates the sentence probabilities depending on the type of information used in the update.
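As a toy illustration of maximum entropy inference with a single belief constraint Σ_n c_n p_n = b, the following sketch solves for the Lagrange multiplier by bisection rather than the multivariate Newton-Raphson method mentioned above; the constraint values are illustrative:

```python
import math

def maxent(c, b, lo=-50.0, hi=50.0, iters=200):
    """Max-entropy distribution over len(c) worlds subject to sum(c_n * p_n) = b.

    The solution has the exponential form p_n ∝ exp(-lam * c_n); we find the
    multiplier `lam` by bisection on the constraint residual.
    """
    def p(lam):
        w = [math.exp(-lam * cn) for cn in c]
        z = sum(w)
        return [wn / z for wn in w]

    def g(lam):                          # residual: sum(c_n * p_n) - b
        return sum(cn * pn for cn, pn in zip(c, p(lam))) - b

    for _ in range(iters):               # g is monotone decreasing in lam
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return p((lo + hi) / 2)

# Belief: the proposition is true in worlds 1 and 2 with probability 0.8.
dist = maxent([1, 1, 0, 0], 0.8)         # -> approximately [0.4, 0.4, 0.1, 0.1]
```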
Given a prior probability distribution q = (q_n) and a set of constraints C, the principle of minimum relative entropy chooses the posterior probability distribution p = (p_n) that has the least relative entropy with respect to q:

p = arg min_r Σ_n r_n log ( r_n / q_n )

and that satisfies the constraints. This may be found by introducing Lagrange multipliers as above. Given a prior distribution q over V -- the set of all possible worlds -- and a set of constraints C (that could have been derived as above from a set of new beliefs), minimum relative entropy inference states that the derived sentence probability for any sentence, σ, is:

P(σ) = Σ_n { p_n : σ is ⊤ in v_n }    (3)

where p = (p_n) is the posterior minimum relative entropy distribution.
The principle of minimum relative entropy is a generalization of the principle of maximum entropy: if the prior distribution q is uniform, then the relative entropy of p with respect to q differs from -H(p) only by a constant. So the principle of maximum entropy is equivalent to the principle of minimum relative entropy with a uniform prior distribution.
3.3 The agent manages information
The illocutions in the communication language C include information. The information received from general information sources will be expressed in terms defined by Π's ontology. We assume that Π makes at least part of that ontology public so that the other agents may communicate information that Π can understand. An agent's reliability is an estimate of the extent to which the information it sends is correct. For example, an agent may send the information that ``the price of fish will go up by a certain percentage next week'', and the price may actually go up by some different percentage. The only restriction on incoming information is that it is expressed in terms of the ontology -- this is very general. However, the way in which information is used is completely specific -- it will be represented as a set of linear constraints on one or more probability distributions. A chunk of information may not be directly related to one of Π's chosen distributions, or may not be expressed naturally as constraints, and so some inference machinery is required to derive these constraints -- this inference is performed by model building functions that have been activated by a plan chosen by Π. For a distribution X_i, the set of constraints derived from a chunk of information I is denoted J(X_i).
3.3.1 Updating the world model with new information
The procedure for updating the world model as information is received is as follows. If at time t Π receives a message containing information I, it is time-stamped and source-stamped, and placed in a repository. If Π has an active plan with model building function J, then J is applied to I to derive constraints on some, or none, of Π's distributions. The extent to which those constraints are permitted to affect the distributions is determined by a value for the reliability of the sending agent in the ontological context of I.
An agent may have models of integrity decay for some particular distributions, but general models of integrity decay for, say, a chunk of information taken at random from the World Wide Web are generally unknown. However, the values to which decaying integrity should tend in time are often known. For example, a prior value for the probability that a ``22 year-old male will default on credit card repayment'' is well known to banks. If Π attaches such prior values to a distribution X_i they are called the decay limit distribution for X_i, denoted D(X_i). No matter how the integrity of X_i decays, in the absence of any other relevant information it should decay to the decay limit distribution. If a distribution has no decay limit distribution then its integrity decays to the maximum entropy (uniform) distribution. In other words, the maximum entropy distribution is the default decay limit distribution.
In the absence of new information the integrity of distributions decays. We use a geometric model of decay:

P^{t+1}(X_i) = λ × D(X_i) + (1 - λ) × P^t(X_i),  for t > 0    (4)

where λ ∈ (0, 1) is the decay rate. This raises the question of how to determine λ. Just as an agent may know the decay limit distribution, it may also know something about λ. In the case of an information-overfed agent there is no harm in conservatively setting λ ``a bit on the low side'', as the continually arriving information will sustain the estimate.
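The geometric decay model of Eqn. 4 can be sketched as follows; the distributions and decay rate are illustrative:

```python
def decay_step(p, d, lam=0.1):
    """One time step of Eqn. 4: p^{t+1} = lam * d + (1 - lam) * p^t."""
    return [lam * dn + (1 - lam) * pn for pn, dn in zip(p, d)]

p = [0.7, 0.2, 0.1]          # current distribution P^t(X_i)
d = [1 / 3, 1 / 3, 1 / 3]    # decay limit (here: the maximum entropy default)
for _ in range(100):         # with no new information, p drifts towards d
    p = decay_step(p, d)
```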
We now describe how new information is imported to the distributions. A single chunk of information may affect a number of distributions. Suppose that a chunk of information I is received from agent Ω and that Π attaches the epistemic belief probability R to it, where R is Π's reliability estimate for Ω in the ontological context of I. Each distribution models a facet of the world: given a distribution P^t(X_i), each of its values is the probability that the corresponding possible world for X_i is the true world for X_i. The effect that a chunk I has on distribution X_i is to enforce the set of linear constraints J(X_i) derived from I. If the constraints were taken by Π as valid then Π could update P^t(X_i) to the posterior distribution P^t_{(I)}(X_i), the distribution with least relative entropy with respect to P^t(X_i) that satisfies the constraints:

P^t_{(I)}(X_i) = arg min_r { Σ_j r_j log ( r_j / P^t(x_{ij}) ) : r satisfies J(X_i) }    (5)

But Π should only treat the constraints as fully valid if R = 1. In general, Π determines the extent to which the effect of I on X_i is closer to P^t_{(I)}(X_i) or to the prior distribution P^t(X_i) by:

P^{t+1}(X_i) = R × P^t_{(I)}(X_i) + (1 - R) × P^t(X_i)    (6)
But we should only permit a new chunk of information to influence a distribution if doing so gives us new information. For example, if 5 minutes ago a trusted agent advised Π that the interest rate will go up by 1%, and 1 minute ago a very unreliable agent advised that the interest rate may go up by 0.5%, then the second unreliable chunk should not be permitted to `overwrite' the first. We capture this by only permitting a new chunk of information to be imported if the resulting distribution has more information relative to the decay limit distribution than the existing distribution has. Precisely, this is measured using the Kullback-Leibler distance measure, and I is only used if:

K( P^{t+1}(X_i) ‖ D(X_i) ) > K( P^t(X_i) ‖ D(X_i) )    (7)

In addition, we have described in Eqn. 4 how the integrity of each distribution will decay in time. Combining these two into one result, distribution X_i is revised to:

P^{t+1}(X_i) = λ × D(X_i) + (1 - λ) × ( R × P^t_{(I)}(X_i) + (1 - R) × P^t(X_i) )

if the acceptability test in Eqn. 7 is satisfied, and to the decayed prior λ × D(X_i) + (1 - λ) × P^t(X_i) otherwise, where R is the reliability estimate and λ the decay rate as before.
We have yet to estimate the reliability R -- that is described in Sec. 3.3.2 following.
3.3.2 Information reliability
We estimate reliability by measuring the error in information. Π's plans will have constructed a set of distributions, and we measure the `error' in information as the error in the effect that the information has on each of Π's distributions. Suppose that a chunk of information I is received from agent Ω at time s and is verified at some later time t. For example, a chunk of information could be ``the interest rate will rise by 0.5% next week'', and suppose that the interest rate actually rises by some other amount -- call that correct information I'. What does all this tell agent Π about agent Ω's reliability? Consider one of Π's distributions X that is P^s(X) at time s. Let P^s_{(I)}(X) be the minimum relative entropy distribution given that I has been received, as calculated in Eqn. 5, and let P^s_{(I')}(X) be that distribution if I' had been received instead. Suppose that the reliability estimate for distribution X was R_X. This section is concerned with what R_X should have been in the light of knowing now, at time t, that I should have been I', and how that knowledge affects our current reliability estimate for Ω.

The idea of Eqn. 6 is that the current value of R_X should be such that, on average, the updated distribution will be seen to be ``close to'' P^s_{(I')}(X) when we eventually discover I' -- no matter whether or not I was used to update X, as determined by the acceptability test in Eqn. 7 at time s. That is, given I, I' and the prior P^s(X), calculate P^s_{(I)}(X) and P^s_{(I')}(X) using Eqn. 5. Then the observed reliability for distribution X, on the basis of the verification of I with I', is the value of r that minimises the Kullback-Leibler distance between r × P^s_{(I)}(X) + (1 - r) × P^s(X) and P^s_{(I')}(X). If S is the set of distributions that I affects, then the overall observed reliability on the basis of the verification of I with I' is the average of the observed reliability over the distributions in S. Then, for each ontological context, at time t when a chunk of information in a semantically related context may have been verified, the reliability estimate is revised towards the observed reliability at a learning rate weighted by the semantic distance between the two contexts. Over time, Π notes the ontological context of the various chunks of information received from Ω and calculates the relative frequency of these contexts. This leads to an overall expectation of the reliability that agent Π has for agent Ω: the expectation of the per-context reliability estimates with respect to those relative frequencies.
3.4 Valuing Information
A chunk of information is valued first by the way that it enables Π to do something. So information is valued in relation to the strategies that Π is executing. A strategy, s, is chosen for a particular goal g in the context of a particular representation, or environment, e. One way in which a chunk of information assists Π is by altering Π's world model M -- see Fig. 4. A model M consists of a set of probability distributions: M = {P_i}. As a chunk of information could be ``good'' for one distribution and ``bad'' for another, we first value information by its effect on each distribution. For a model M, the value to Π of a message received at time t is the resulting decrease in entropy in the distributions {P_i}.

In general, suppose that a set of stamped messages is received. The information I(m | P_i, s, g, e) in a message m at time t with respect to a particular distribution P_i, strategy s, goal g and environment e is the decrease in entropy H(P_i) that results from incorporating m, where the distributions are derived from the state of Π's repository. The environment could be determined by a need (if the evaluation is made in the context of a particular negotiation) or a relationship (in a broader context). It is reasonable to aggregate the information in m over the distributions used by s. That is, the information in m at time t with respect to strategy s, goal g and environment e is:

I(m | s, g, e) = Σ_i I(m | P_i, s, g, e)

and we aggregate again over all strategies to obtain the value of the information in a statement. That is, the value of the information in m with respect to goal g and environment e is:

I(m | g, e) = Σ_s P(s | g) × I(m | s, g, e)

where P(s | g) is a distribution over the set of strategies for goal g, denoting the probability that strategy s will be chosen for goal g based on historic frequency data. Aggregating once more over all goals gives the (potential) information in a statement. That is, the potential information in m with respect to environment e is:

I(m | e) = Σ_g P(g) × I(m | g, e)    (8)

where P(g) is a distribution over goals, denoting the probability that goal g will be triggered based on historic frequency data.
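Valuing a message by the resulting decrease in entropy across the distributions of a world model can be sketched as follows; the distributions are illustrative:

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a discrete distribution p."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def information_value(before, after):
    """before/after: the distributions of a world model M, before and after
    incorporating a message; value = total decrease in entropy."""
    return sum(entropy(b) - entropy(a) for b, a in zip(before, after))

before = [[0.5, 0.5], [0.25, 0.25, 0.25, 0.25]]
after = [[0.9, 0.1], [0.25, 0.25, 0.25, 0.25]]   # message sharpened the first
print(information_value(before, after))
```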
4 Virtual Institutions
This work is done in collaboration with the Spanish Government's IIIA Laboratory in Barcelona. Electronic Institutions are software systems composed of autonomous agents that interact according to predefined conventions on language and protocol, and that guarantee that certain norms of behaviour are enforced. Virtual Institutions enable rich interaction, based on natural language and embodiment of humans and software agents in a ``liveable'' vibrant environment. This view permits agents to behave autonomously and take their decisions freely up to the limits imposed by the set of norms of the institution. An important consequence of embedding agents in a virtual institution is that the predefined conventions on language and protocol greatly simplify the design of the agents. A Virtual Institution is in a sense a natural extension of the social concept of institutions as regulatory systems that shape human interactions [2].
Virtual Institutions are electronic environments designed to meet the following requirements towards their inhabitants:
- enable institutional commitments including structured language and norms of behaviour which enable reliable interaction between autonomous agents and between human and autonomous agents;
- enable rich interaction, based on natural language and embodiment of humans and software agents in a ``liveable'' vibrant environment.
The first requirement has been addressed to some extent by the Electronic Institutions (EI) methodology and technology for multi-agent systems, developed in the Spanish Government's IIIA Laboratory in Barcelona [2]. The EI environment is oriented towards the engineering of multiagent systems. The Electronic Institution is an environment populated by autonomous software agents that interact according to predefined conventions on language and protocol. Following the metaphor of social institutions, Electronic Institutions guarantee that certain norms of behaviour are enforced. This view permits agents to behave autonomously and make their decisions freely up to the limits imposed by the set of norms of the institution. The interaction in such an environment is regulated for software agents. The human, however, is ``excluded'' from the electronic institution.
The second requirement is supported to some extent by distributed 3D Virtual Worlds technology. Emulating and extending the physical world in which we live, Virtual Worlds offer a rich environment for a variety of human activities and multi-mode interaction. Both humans and software agents are embedded and visualised in such 3D environments as avatars, through which they communicate. The inhabitants of virtual worlds are aware of where they are and who is there -- elements of presence that are excluded from the current paradigm of e-Commerce environments. Following the metaphor of the physical world, these environments impose no regulations (in terms of language) on the interactions and no restrictions (in terms of norms of behaviour). While this encourages the social aspect of interactions and the establishment of networks, these environments provide no means of enforcing behavioural norms, for example, fulfilling commitments or penalising misbehaviour.
Virtual Institutions address both requirements, retaining the features and advantages of the two approaches discussed above, as illustrated in Figure 5. They can be seen as the logical evolution and merger of the two streams of development of environments that can host electronic markets as mixed societies of humans and software agents.
Technologically, Virtual Institutions are implemented following a three-layered framework that provides deep integration of Electronic Institution technology and Virtual Worlds technology [3]; the framework is illustrated in Figure 6. The Electronic Institution layer hosts the environments that support the Electronic Institutions technological components: the graphical EI specification designer ISLANDER and the runtime component AMELI [1]. At runtime, the Electronic Institution layer loads the institution specification and mediates agent interactions while enforcing institutional rules and norms.
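The mediation step described above -- loading a specification and then allowing only those utterances that the current scene's protocol permits -- can be pictured with a minimal sketch. This is purely illustrative: the class and method names are hypothetical and are not the ISLANDER or AMELI API.

```python
# Hypothetical sketch of an Electronic Institution runtime mediating agent
# interaction against a loaded specification (all names are illustrative).

class Scene:
    def __init__(self, name, allowed_messages):
        self.name = name
        self.allowed_messages = allowed_messages  # message types legal in this scene

class InstitutionRuntime:
    def __init__(self, specification, initial_scene):
        # 'specification' stands in for a loaded ISLANDER-style EI specification.
        self.scenes = {s.name: s for s in specification}
        self.location = {}                        # agent id -> current scene name
        self.initial_scene = initial_scene

    def admit(self, agent_id):
        self.location[agent_id] = self.initial_scene

    def utter(self, agent_id, message_type):
        """Deliver a message only if the agent's current scene's
        interaction protocol allows that message type."""
        scene = self.scenes[self.location[agent_id]]
        return message_type in scene.allowed_messages

spec = [Scene("registration", {"register"}), Scene("auction", {"bid"})]
runtime = InstitutionRuntime(spec, "registration")
runtime.admit("agent-1")
assert runtime.utter("agent-1", "register") is True
assert runtime.utter("agent-1", "bid") is False   # "bid" is not legal in this scene
```

The point of the sketch is only that norm enforcement is structural: an illegal utterance is never delivered, rather than being delivered and later penalised.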
Figure 5:
Relations between the three concepts
Figure 6:
The three layer architecture and its implementation
The Communication layer causally connects the Electronic Institution layer with the 3D representation of the institution, which resides in the Social layer. The causal connection is the integrator: it enables the Electronic Institution layer to respond to changes in the 3D representation (for example, to human activities there), and it passes the Electronic Institution layer's responses back in order to modify the corresponding 3D environment and maintain the consistency of the Virtual Institution.
The representation of a Virtual Institution is a graph, and its topology can structure the space of the virtual environment in different ways; this is the responsibility of the Social layer. In this implementation the layer is realised with 3D Virtual World technology, structured around rooms, avatars, doors (for transitions) and other graphical elements. Technically, the Social layer currently uses the Adobe Atmosphere virtual world technology. The design of the 3D world of a Virtual Institution is developed with the Annotation Editor, which ideally should take as input a specification of the Electronic Institution layer and produce an initial layout of the 3D space; currently, part of this work is done manually by a designer.
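The Annotation Editor's ideal behaviour -- deriving an initial 3D layout from the institution's scene graph, one room per scene and one door per transition -- can be sketched as follows. The function and field names are hypothetical; this is not the editor's actual code.

```python
# Illustrative sketch: derive an initial Social-layer layout (rooms and doors)
# from an Electronic Institution scene graph. All names are hypothetical.

def initial_layout(scenes, transitions):
    """scenes: list of scene names; transitions: list of (from, to) pairs.
    Returns rooms placed on a simple grid, plus one door per transition."""
    rooms = {}
    for i, scene in enumerate(scenes):
        # Place each room on a 10x10-unit grid, four rooms per row.
        rooms[scene] = {"x": (i % 4) * 10, "y": (i // 4) * 10}
    doors = [{"from": a, "to": b} for (a, b) in transitions]
    return {"rooms": rooms, "doors": doors}

layout = initial_layout(
    ["registration", "auction", "settlement"],
    [("registration", "auction"), ("auction", "settlement")],
)
```

A designer would then refine this automatically generated layout by hand, which matches the current, partly manual workflow described above.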
The core technology -- the Causal Connection Server -- enables the Communication layer to act in both directions. In the direction from the Electronic Institution layer, messages uttered by an agent have an immediate effect in the Social layer: a transition of the agent between scenes in the Electronic Institution layer, for example, makes the corresponding avatar move accordingly within the Virtual World space. In the other direction, events caused by the actions of a human's avatar in the Virtual World are transferred to the Electronic Institution layer and passed to an agent. This means that actions forbidden to the agent by the norms of the institution (encoded in the Electronic Institution layer) cannot be performed by the human. For example, if a human must register before entering the auction space, the corresponding agent is not allowed to leave the registration scene; consequently, the avatar is not permitted to open the corresponding door to the auction (see [3] for technical details of the implementation of the Causal Connection Server).
Virtual Institutions are immersive environments and as such go beyond the catalogue-style markets with form-based interaction that currently dominate the World Wide Web. Embedding traders (whether humans or software agents) as avatars in the electronic market space on the Web positions them literally ``in'' the World Wide Web rather than ``on'' it.
5 Conclusions
A demonstrable prototype e-Market system permits both human and software agents to trade with each other on the World Wide Web. The main contributions described are: the broadly-based and ``focussed'' data mining systems, the intelligent agent architecture founded on information theory, and the abstract synthesis of the virtual worlds and the electronic institutions paradigms to form ``virtual institutions''. These three technologies combine to present our vision of the World Wide Web marketplaces of tomorrow.
References
[1] Electronic institution development environment: http://e-institutor.iiia.csic.es/
[2] J. L. Arcos, M. Esteva, P. Noriega, J. A. Rodríguez, and C. Sierra. Environment engineering for multiagent systems. Engineering Applications of Artificial Intelligence, 18, 2005.
[3] A. Bogdanovych, H. Berger, S. Simoff, and C. Sierra. Narrowing the gap between humans and agents in e-commerce: 3D electronic institutions. In K. Bauknecht, B. Pröll, and H. Werthner, editors, E-Commerce and Web Technologies: Proceedings of the 6th International Conference, EC-Web 2005, pages 128-137, Copenhagen, Denmark, August 2005. Springer-Verlag: Heidelberg, Germany.
[4] J. Debenham. Auctions and bidding with information. In P. Faratin and J. Rodríguez-Aguilar, editors, Proceedings Agent-Mediated Electronic Commerce VI: AMEC, pages 15-28, July 2004.
[5] J. Debenham. Bargaining with information. In N. Jennings, C. Sierra, L. Sonenberg, and M. Tambe, editors, Proceedings Third International Conference on Autonomous Agents and Multi Agent Systems AAMAS-2004, pages 664-671. ACM, July 2004.
[6] P. Faratin, C. Sierra, and N. Jennings. Using similarity criteria to make issue trade-offs in automated negotiation. Artificial Intelligence, 142(2):205-237, 2003.
[7] E. Jaynes. Probability Theory: The Logic of Science. Cambridge University Press, 2003.
[8] D. MacKay. Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003.
[9] M. Ramoni and P. Sebastiani. Intelligent Data Analysis, chapter Bayesian methods, pages 132-168. Springer-Verlag: Heidelberg, Germany, 2003.
[10] D. Reis, P. B. Golgher, A. Silva, and A. Laender. Automatic web news extraction using tree edit distance. In Proceedings of the 13th International Conference on the World Wide Web, pages 502-511, New York, 2004.
[11] C. Sierra and J. Debenham. An information-based model for trust. In F. Dignum, V. Dignum, S. Koenig, S. Kraus, M. Singh, and M. Wooldridge, editors, Proceedings Fourth International Conference on Autonomous Agents and Multi Agent Systems AAMAS-2005, pages 497-504, Utrecht, The Netherlands, July 2005. ACM Press, New York.
[12] S. Simoff and J. Debenham. Curious negotiator. In M. Klusch, S. Ossowski, and O. Shehory, editors, Proceedings 6th International Workshop Cooperative Information Agents VI CIA-2002, pages 104-111, Madrid, Spain, September 2002. Springer-Verlag: Heidelberg, Germany.
[13] D. Zhang and S. Simoff. Informing the curious negotiator: Automatic news extraction from the Internet. In Proceedings 3rd Australasian Data Mining Conference, pages 55-72, Cairns, Australia, December 2004.
[14] D. Zhang, S. Simoff, and J. Debenham. Exchange rate modelling using news articles and economic data. In Proceedings of the 18th Australian Joint Conference on Artificial Intelligence, Sydney, Australia, December 2005. Springer-Verlag: Heidelberg, Germany.
Footnotes
1. http://e-markets.org.au
2. http://www.iiia.csic.es/
3. Otherwise called cross entropy or the Kullback-Leibler distance between the two probability distributions.
4. This is just one criterion for determining whether the should be used.