Although the system largely survived the stress test, a number of bugs were
discovered that caused numerous system crashes, and some data were lost as a
result. The system was used over a period of two weeks. The results for the
six users who used the system for more than half of the time they were
expected to are shown in the figures below.
The results contain some noise for a variety of reasons. There was a mild learning
curve in the initial stages. Users also experimented with ``what-if''
scenarios to get a feel for the system. Some data were lost due to unexpected
crashes of the code, as mentioned earlier. Some users would read interesting
articles but forget to provide positive feedback, or vice versa. Moreover, not
all users were consistent in providing feedback, trying variations in the early
stages of the interaction before their interests crystallized. Nevertheless, the
results are fairly promising and are presented here.
Each graph plots the performance of the whole system over time for a single user.
For each session, the graph plots the proportion of articles that received positive
feedback, the proportion of articles that received negative feedback, and the
difference between the two. For some users, the corresponding figures show a
definite pattern of consistent and improving performance.
For others, there is no observable pattern. Agent-based systems are highly
interactive, and their performance depends greatly on how each user uses them,
which explains some of the variance in the results.
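The per-session quantities plotted in these graphs can be sketched as follows. This is only an illustrative sketch: the feedback encoding (`'+'`, `'-'`, `None`), the helper name `session_metrics`, and the sample session data are assumptions, not the system's actual implementation.

```python
# Sketch of the per-session metric described above: the fraction of
# presented articles rated positively, the fraction rated negatively,
# and the difference between the two.

def session_metrics(feedback):
    """feedback: list with '+', '-', or None (no feedback) per article."""
    n = len(feedback)
    pos = sum(1 for f in feedback if f == '+') / n
    neg = sum(1 for f in feedback if f == '-') / n
    return pos, neg, pos - neg

# Hypothetical sessions: each entry is the feedback for one presented article.
sessions = [
    ['+', '-', None, '+'],   # session 1: 2 positive, 1 negative of 4 shown
    ['+', '+', '-', None],   # session 2
]
for i, s in enumerate(sessions, 1):
    pos, neg, diff = session_metrics(s)
    print(f"session {i}: pos={pos:.2f} neg={neg:.2f} diff={diff:+.2f}")
```

The difference of the two proportions serves as a single summary score per session, so a rising trend in it across sessions is what the graphs would show as improving performance.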