Ten years ago – in 1995 – intelligent agents saw an astonishing surge in popularity. After high-level programming languages, human-computer interfaces, networking and object-oriented programming, the software industry seemed to have found a new paradigm, and AI researchers claimed to be at the root of significant progress in complex software development.
Since the early 1990s, the production of research papers addressing the subject of agents had increased significantly. Following this trend, an impressive number of implemented systems appeared and were documented in the research community. Yet, despite all the promises of imminent release and of availability to everyone by putting intelligent agents on the World Wide Web, few systems actually reached the masses.
Nevertheless, the term intelligent agent has become widely used today, and there are in fact many systems that claim to be agent-based. The software industry has turned its attention to distributed component technology and has refined the notion of objects as units of software design that encapsulate, as well as expose, their state and services through interfaces on the Internet. The equation “a more or less smart Web component is an agent” has quickly made the rounds. It seems that the Internet business has forgotten some of the criteria that qualified the AI agents.
It is not easy to summarize the myriad of agent definitions, but there seems to be a consensus that an intelligent agent must be situated in an environment, autonomous, adaptive and sociable.
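The four consensus qualities can be made concrete as a minimal interface sketch. This is our own illustration, not an interface from any actual agent framework; all class and method names here are hypothetical.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Hypothetical sketch: one abstract method per consensus quality."""

    @abstractmethod
    def perceive(self, event): ...       # situatedness: receives events from an environment

    @abstractmethod
    def act(self): ...                   # autonomy: chooses its own next action

    @abstractmethod
    def adapt(self, outcome): ...        # adaptivity: adjusts behavior from experience

    @abstractmethod
    def tell(self, peer, message): ...   # sociability: interacts with peer agents

class ReflexAgent(Agent):
    """Toy concrete agent: remembers events and acts on the latest one."""
    def __init__(self):
        self.memory = []

    def perceive(self, event):
        self.memory.append(event)

    def act(self):
        return self.memory[-1] if self.memory else None

    def adapt(self, outcome):
        # Crude adaptation: forget the event that led to a failure.
        if outcome == "failure" and self.memory:
            self.memory.pop()

    def tell(self, peer, message):
        peer.perceive(message)
```

A Web component, in these terms, typically implements something like `perceive` (method invocation) but little of `adapt`.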
Most Web components probably satisfy situatedness: they are embedded in Web application containers and can be accessed from distant nodes on the Internet. On the other hand, most AI agents might miss this qualification, because their programs run in an isolated environment and cannot directly be made available on the Web.
At first glance, autonomy might seem satisfied by Web components: they sit in Web application containers and wait for method invocations. However, if something unexpected reaches a component, its autonomy to react is limited, and in most cases the component dies.
As in evolutionary theory, survival is linked to adaptation: if a being is to survive in a hostile environment (and the Web is a hostile environment!), it must be able to react to events, learn from experience and adapt its future behavior accordingly. Most Web components fail these criteria; most AI agents might fulfill them.
The last qualification, sociability, raises the bar even higher, and it can be interpreted in two ways. If sociable means interaction in a network of peers, Web components brilliantly satisfy the criterion: the Web of objects is per se a nest of remote method invocations (RPC/RMI) and message exchanges (MOM). If, however, sociable is to qualify behavior according to human social theory, all Web components and most AI agents still fall short. Sociability then means reacting by deliberating on beliefs, desires and intentions (the so-called BDI architectures in agent theory). Recent developments also associate emotions and wisdom with sociability.
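The BDI deliberation just described can be sketched as a simple cycle: perceive facts into beliefs, commit to desires whose plans are applicable, then execute an intention. This is a deliberately simplified illustration under our own assumptions; the names and the plan-selection rule are hypothetical and do not reproduce any specific BDI system such as PRS.

```python
class BDIAgent:
    """Minimal sketch of a BDI deliberation cycle (illustrative only)."""

    def __init__(self, desires, plans):
        self.beliefs = set()      # facts the agent currently holds true
        self.desires = desires    # goals it would like to achieve
        self.intentions = []      # goals it has committed to
        self.plans = plans        # goal -> (precondition facts, action)

    def perceive(self, fact):
        # Situated perception: new facts become beliefs.
        self.beliefs.add(fact)

    def deliberate(self):
        # Commit to each desire whose plan's preconditions are believed.
        for goal in self.desires:
            preconditions, _ = self.plans[goal]
            if preconditions <= self.beliefs and goal not in self.intentions:
                self.intentions.append(goal)

    def act(self):
        # Execute the action of the first committed intention, if any.
        if self.intentions:
            goal = self.intentions.pop(0)
            _, action = self.plans[goal]
            return action
        return None
```

For example, an agent desiring to greet a user commits to that goal only once it believes a user is present; a plain Web component, by contrast, would simply execute whatever method is invoked on it.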