"Intentionality in Autonomous Robots"
University of Dundee (Scotland)
This paper discusses three views on representational states in
agents; these views concern the representation of an environment.
It is argued that intentional autonomous agents can be embodied in
robots. I compare my view with that of Dretske, and I also relate my
view to the logic of context (Giunchiglia and Ghidini 1998). In
the first view and the third, learning is of crucial importance.
The first view is that of Dretske, particularly in his seminal paper,
Misrepresentation (Dretske, 1986). In that paper there is discussion
of how a relatively simple non-conceptual associative
learner has states to which the representation/misrepresentation
distinction may be applied. Dretske's discussion concerns organisms.
It can also be applied to artificial learning systems, for example
ones based upon genetic algorithms and classifier systems.
The second view is that of Evans (1982). I am concerned with his
so-called Generality Constraint, as a criterion for when an agent has
concepts.
The third is my own view (Young 1993, 1994a, 1994b, 1996). I develop
an account of autonomous agents with concepts of, and intentional
states about, their environment. An autonomous intentional agent
- can learn/develop its own autonomous concepts of the world,
- lacks concepts which others may have acquired,
- lacks a language of complete description of the world,
- can learn to extend its language by interaction with others.
- Dretske, F., 1986, "Misrepresentation", in Belief: Form, Content and
Function, ed. Bogdan, R.J., Oxford University Press
- Evans, G., 1982, The Varieties of Reference, Oxford University Press
- Giunchiglia, F. and Ghidini, C., 1998, paper in Proceedings of KR'98
- Young, R.A., 1993, "Learning and Intentionality", Contributions to
the International Wittgenstein Symposium
- Young, R.A., 1994a, "Robots and Intentionality", AISB workshop series
- Young, R.A., 1994b, "Mentality of Robots", Proceedings of the
- Young, R.A., 1996, "Embodied agents as interactive agents", AAAI
Fall Symposium on Embodiment