
A Framework For Collaboration

We propose a collaborative solution to the problems above. While a particular agent may not have any prior knowledge, there may exist a number of agents belonging to other users who do. Instead of each agent re-learning what other agents have already learned through experience, agents can simply ask for help in such cases. This gives each agent access to a potentially vast body of experience that already exists. Over time each agent builds up a trust relationship with each of its peers analogous to the way we consult different experts for help in particular domains and learn to trust or disregard the opinions of particular individuals.

Collaboration and communication between various agents can take many different forms. This paper is only concerned with those forms that aid an agent in making better predictions in the context of new situations. There are two general classes of such collaboration.

Desperation-based communication is invoked when a particular agent has insufficient experience to make a confident prediction. For example, suppose that a particular agent has just been activated with no prior knowledge, and its user receives a set of new mail messages. As the agent doesn't have any past experience from which to make predictions, it turns in desperation to other agents and asks them how their users would handle similar situations.

Exploratory communication, on the other hand, is initiated by an agent in a bid to find the best set of peer agents to ask for help in certain classes of situations. We envisage that future computing environments will contain multitudes of agents. As an agent has limited resources and can only deal with a small number of its peers at a given time, the issue of which ones to trust, and in what circumstances, becomes quite important. Exploratory communication is undertaken by agents to discover new (as yet untried) agents who are better predictors of their users' behaviors than the peers they have already tested.

Both forms of communication may occur at two orthogonal levels. At the situation level, desperation communication refers to an agent asking its peers for help in dealing with a new situation, while exploratory communication refers to an agent asking previously untested peers how they would deal with old situations for which it already knows the correct action, in order to determine whether these new agents are good predictors of its user's behavior. At the agent level, desperation communication refers to an agent asking its trusted peers to recommend another agent they trust, while exploratory communication refers to an agent asking peers for their evaluation of a particular agent, perhaps to see how well these peers' modelling of that agent corresponds with its own. Hence agents are not locked into turning for help to a fixed set of agents, but can pick and choose the set of peers they find to be most reliable.
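The two modes and two levels give four kinds of peer requests. The following Python fragment is a minimal illustrative sketch of such a request; the names (Mode, Level, PeerRequest, situation_class, payload) are our own and are not part of the protocol described here.

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    DESPERATION = "desperation"   # the agent needs help with a prediction now
    EXPLORATORY = "exploratory"   # the agent is probing untested peers or old situations

class Level(Enum):
    SITUATION = "situation"       # the request concerns a concrete situation
    AGENT = "agent"               # the request concerns another agent's reliability

@dataclass
class PeerRequest:
    mode: Mode
    level: Level
    situation_class: str          # e.g. "e-mail" or "news"
    payload: object               # the situation itself, or the agent under discussion
```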

For agents to communicate and collaborate they must speak a common language as well as follow a common protocol. We assume the existence of a default ontology for situations in a given domain (such as electronic news, e-mail, meeting scheduling, etc.). Our protocol does not preclude the existence of multiple ontologies for the same domain. This allows agent creators the freedom to decide which types of ontologies their agents will understand. As the primary task of an agent is to assist its particular user, the protocol for collaboration is designed to be flexible, efficient and non-binding. We briefly present the protocol below.

Agents model peers' abilities to predict their user's actions in different classes of situations by a trust value. For each class of situations an agent has a list of peers with associated trust values. Trust values vary between 0 and 1.
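As a concrete illustration of this bookkeeping, the sketch below keeps, for each situation class, a mapping from peer to trust value. The dictionary layout, the class names, and the initial trust value are our own assumptions for illustration; the text does not specify what the initial value should be.

```python
# Hypothetical per-agent trust store: for each situation class,
# a mapping from peer identifier to a trust value in [0, 1].
INITIAL_TRUST = 0.2   # illustrative starting value for untested peers (assumed)

trust_table: dict[str, dict[str, float]] = {
    "e-mail": {"peer_A": 0.7, "peer_B": 0.4},
    "news":   {"peer_A": 0.3},
}

def get_trust(situation_class: str, peer: str) -> float:
    # Untested peers fall back to the initial trust value.
    return trust_table.get(situation_class, {}).get(peer, INITIAL_TRUST)
```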

The trust values reflect the degree to which an agent is willing to trust a peer's prediction for a particular situation class. A trust value represents the probability that a peer's prediction will correspond with its user's action, based on a prior history of predictions from the peer. Agents may start out by picking a set of peers at random or by following their user's suggestion as to which peer agents to try first. Each previously untested peer agent has its trust level set to an initial value. As a peer responds to a prediction request with a prediction $p$, and the agent's user takes a particular action $a$, the agent updates the trust value $T$ of its peer in the appropriate situation class as follows:

$T \leftarrow \Theta(T + \beta \, c \, d)$, where $d = +1$ if $p = a$ and $d = -1$ otherwise,

and $T$ represents the trust level of the peer, $c$ represents the confidence the peer has in this particular prediction, $\beta$ is the trust learning rate, and $\Theta$ ensures that the value of $T$ always lies in $[0, 1]$. The rationale behind this model is as follows. An agent's trust in a peer rises when the peer makes a correct prediction and falls when the peer makes an incorrect one. The amount by which it rises or falls depends on how confident the peer was in its prediction: a peer that makes an incorrect prediction with a high confidence value is penalized more heavily than one that makes an incorrect prediction with a lower confidence value.
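A minimal sketch of this update in Python is given below, assuming that $\Theta$ simply clips the result to $[0, 1]$; the function name and the default learning rate are illustrative, not taken from the text.

```python
def update_trust(trust: float, confidence: float, predicted, actual,
                 beta: float = 0.1) -> float:
    # One trust update: T <- Theta(T + beta * c * d), where d = +1 for a
    # correct prediction and -1 otherwise, and Theta clamps to [0, 1].
    # beta (the trust learning rate) defaults to 0.1 purely for illustration.
    d = 1.0 if predicted == actual else -1.0
    return min(1.0, max(0.0, trust + beta * confidence * d))
```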

When an agent sends out a prediction request to more than one peer, it is likely to receive many replies, each with a potentially different prediction and confidence value. In addition, the agent has a trust value associated with each peer. This gives rise to many possible strategies that an agent can use to choose a prediction and a confidence value for that prediction. We believe that both trust and peer confidence should play a role in determining which prediction gets selected and with what confidence. Each predicted action is assigned a trust-confidence sum, which is the trust-weighted sum of the confidence values of all the peers predicting this action. The action with the highest trust-confidence sum is chosen. The confidence associated with the chosen action is currently that of the most confident peer suggesting this action. We are exploring more sophisticated trust-confidence combination strategies based on decision-theoretic and Bayesian methods.
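The selection rule just described can be sketched as follows; the function and variable names are ours, and trust_of stands for a lookup of the requesting agent's stored trust in a peer for the relevant situation class.

```python
from collections import defaultdict

def choose_prediction(replies, trust_of):
    # replies: iterable of (peer, predicted_action, confidence) tuples.
    # trust_of: callable mapping a peer to the agent's trust in it for this class.
    score = defaultdict(float)      # action -> trust-weighted sum of confidences
    best_conf = defaultdict(float)  # action -> highest single-peer confidence
    for peer, action, confidence in replies:
        score[action] += trust_of(peer) * confidence
        best_conf[action] = max(best_conf[action], confidence)
    # The action with the highest trust-confidence sum wins; its reported
    # confidence is that of the most confident peer suggesting it.
    winner = max(score, key=score.get)
    return winner, best_conf[winner]
```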




