
Results

The algorithm presented in this paper can be modelled by a system of differential equations. This system is, however, too complicated to solve, so exact predictions about the resulting action selection behavior are not possible. Nevertheless, important qualitative results can be obtained, for example about possible phase transitions as parameters such as the size of the network or the mean fanout of a node grow (Huberman & Hogg, 1987). We have evaluated the algorithm empirically in a wide series of experiments using several example applications. The networks tested had diverse properties: some were very `wide' or very `long'; others contained cycles, local concentrations of links, unlinked subnetworks, destructive modules, or conflicting and mutually conflicting modules. All of the problems presented were solved for large ranges of parameters.

The simulated societies cannot be said to show `jump-first, think-never' behavior; they do exhibit planning capabilities. They `consider', to some extent, the effects of a sequence of actions before actually embarking on its execution. If a sequence of competence modules exists that transforms the current situation into the goal state, then this sequence becomes highly activated through the cumulative effect of forward spreading (starting from the current state) and backward spreading (starting from the goals). If the sequence potentially implies negative effects, it is weakened by the inhibition rules. A sketch of the spreading mechanism follows.
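As a minimal sketch of how such cumulative forward and backward spreading might be realized, consider the following Python fragment. The Module fields, the update rule, and the parameters phi and gamma are illustrative assumptions rather than the paper's exact definitions, and decay/normalization of activation is omitted.

    class Module:
        """A competence module with a precondition list and an add list.
        (Simplified, assumed representation; not the paper's exact one.)"""
        def __init__(self, name, preconditions, add_list):
            self.name = name
            self.preconditions = set(preconditions)
            self.add_list = set(add_list)
            self.activation = 0.0

    def spread(modules, state, goals, phi=0.2, gamma=0.7):
        """One round of spreading; deltas accumulate, then apply at once."""
        delta = {m.name: 0.0 for m in modules}
        for m in modules:
            # Input from the current state (forward) and from the goals
            # (backward).
            delta[m.name] += phi * len(m.preconditions & state)
            delta[m.name] += gamma * len(m.add_list & goals)
        for m in modules:
            if m.preconditions <= state:
                # Executable: spread forward to modules whose unmet
                # preconditions this module can achieve.
                for n in modules:
                    shared = m.add_list & (n.preconditions - state)
                    delta[n.name] += phi * m.activation * len(shared)
            else:
                # Not executable: spread backward to modules that can
                # achieve this module's unmet preconditions.
                for n in modules:
                    shared = n.add_list & (m.preconditions - state)
                    delta[n.name] += gamma * m.activation * len(shared)
        for m in modules:
            m.activation += delta[m.name]

Running repeated rounds over, say, a two-module chain (one module achieving the precondition of a second module that achieves a goal) raises the activation of the whole sequence, which is the cumulative effect described above.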

More specifically, goal relevance of the selected action is obtained through the input from the goals and the backward spreading of activation. Situation relevance and opportunistic behavior are obtained through the input from the state and the forward spreading of activation. Conflicting and interacting goals are taken into account through inhibition by the protected goals and inhibition among conflicting modules. Further, local maxima in action selection are avoided, provided that the spreading of activation can go on long enough (i.e., the threshold is high enough) for the network to evolve towards the optimal activity pattern. Finally, the algorithm is automatically biased towards ongoing plans, because these tend to have a shorter distance between state and goals and are favored by the residue of past spreading activation patterns. Moreover, the global parameters serve as controls by which one can mediate smoothly among these different action selection characteristics.
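A companion sketch of the selection step, under the same caveats: the delete_list field, the inhibition rule, the parameter names theta and delta, and the 10% threshold reduction are assumptions for illustration, and in the full algorithm activation would be re-spread between threshold reductions rather than collapsed into one loop as here.

    from dataclasses import dataclass

    @dataclass
    class Module:
        name: str
        preconditions: set
        add_list: set
        delete_list: set
        activation: float = 0.0

    def select(modules, state, protected_goals, theta=1.0, delta=0.5):
        # Inhibition: a module that would undo a protected goal is weakened.
        for m in modules:
            m.activation -= delta * len(m.delete_list & protected_goals)
        executable = [m for m in modules if m.preconditions <= state]
        if not executable:
            return None                  # nothing can run yet; keep spreading
        while theta > 1e-9:
            ready = [m for m in executable if m.activation >= theta]
            if ready:
                winner = max(ready, key=lambda m: m.activation)
                winner.activation = 0.0  # the selected module resets
                return winner
            theta *= 0.9                 # lower the threshold by 10%, retry
        return None

Because phi, gamma, delta, and theta appear as explicit parameters, they play the role of the global controls mentioned above: a higher theta lets activation spread longer before a choice is made (more `thoughtfulness'), while raising phi relative to gamma biases selection towards the current situation (more opportunism).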

The notion of a plan here is very different from the classical one in AI. A network does not construct an explicit representation of a single plan; instead it expresses its `intention' or `urge' to take certain actions through high activation levels of the corresponding modules. Another important difference is that there is no centralized, preprogrammed search process. Instead, the operators (competence modules) themselves select the sequence of operators to be activated, in a non-hierarchical, highly distributed way. No search tree is constructed, i.e., no explicit representation is built of the state changes that would follow from taking certain actions.

Consequently, the system does not suffer from the disadvantages of search trees: information is duplicated in several parts of a tree, trees grow exponentially with the size of the problem, and trees permit only a rigid representation of plans, which makes it impossible to work with uncertainties. In addition, the spreading activation process is a much cheaper operation. Of course, these advantages are not cost-free: the action selection produced is less `rational' than that of the sophisticated deliberative planners built in AI. On the other hand, the latter systems, when applied in autonomous agents, suffer from brittleness and slowness. What is particularly interesting about the algorithm presented here is that it provides parameters to mediate between adaptivity, speed, and reactivity on the one hand and thoughtfulness and rationality on the other.

The following subsections discuss the observed results in detail.




