Kondadadi, R., and S. Franklin. 2001. Deliberative Decision Making in "Conscious" Software Agents. In Proceedings of the Sixth International Symposium on Artificial Life and Robotics (AROB-01).
When we humans are faced with a problem to solve, we often create in our minds different strategies or possible solutions. We imagine the effects of executing each strategy or trial solution without actually doing so. It's a kind of internal virtual reality. Eventually, we decide upon one strategy or trial solution and try solving the problem using it. This whole process is called deliberation. During the deliberation process several, possibly conflicting, ideas compete to be selected as the strategy or solution to the problem. One such idea is chosen voluntarily. In 1890 William James proposed a model of this voluntary decision-making, calling it the ideo-motor theory. In this theory the mind is considered to be the seat of many ideas related to each other either favorably or antagonistically. Whenever an idea prompts an action by becoming conscious, antagonistic ideas may object to it, also by becoming conscious, and try to block that action. Or, other favorable ideas may become conscious to support it, trying to push its selection. While this conflict is going on among several ideas we are said to be "deliberating". Software agents, so equipped, should be able to make voluntary decisions much as we humans do. This paper describes a computational mechanism for this deliberation process, incorporating James' ideo-motor theory of voluntary action. It also describes an implementation of the mechanism in a software agent. Some preliminary experimentation is also reported.
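The ideo-motor loop sketched above — a proposal becomes conscious, antagonistic ideas may object, favorable ideas may support it, and an unblocked or sufficiently supported proposal is acted on — can be illustrated with a minimal toy simulation. This is not the paper's actual mechanism; the `Idea` class, the supporter/objector lists, and the selection rule are all illustrative assumptions for the sake of a concrete sketch.

```python
import random

class Idea:
    """A proposed action with supporting and objecting ideas (illustrative)."""
    def __init__(self, name, supporters=(), objectors=()):
        self.name = name
        self.supporters = list(supporters)   # favorable ideas pushing selection
        self.objectors = list(objectors)     # antagonistic ideas trying to block

def deliberate(ideas, max_rounds=10):
    """Toy ideo-motor loop: each round, one idea 'comes to consciousness'.
    If no antagonistic idea objects, it is acted on immediately; if objectors
    appear but supporters outnumber them, it is still selected. Otherwise the
    conflict continues and deliberation moves to another round."""
    for _ in range(max_rounds):
        proposal = random.choice(ideas)          # an idea becomes conscious
        if not proposal.objectors:               # no objection: act at once
            return proposal.name
        if len(proposal.supporters) > len(proposal.objectors):
            return proposal.name                 # support outweighs objection
    return None                                  # no decision reached in time

# Hypothetical example: two competing plans, one unopposed, one blocked.
ideas = [
    Idea("plan-A", supporters=["fits-goal"]),
    Idea("plan-B", objectors=["resource-conflict"]),
]
```

In this sketch `plan-B` can never win (one objector, no supporters), so any run that selects a plan selects `plan-A` — mirroring James' point that an idea prompts action unless a conscious antagonist blocks it.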