Fundamental Motivation and Perception for a Systems-Level Cognitive Architecture
A comprehensive systems-level cognitive architecture attempts to provide a blueprint for generally capable intelligent software agents or cognitive robots. While such architectures can be conceived and studied at a high level of abstraction, such study often abstracts away the low-level algorithms underlying the architecture. For instance, one might study logical reasoning or decision-making while ignoring, or making simplifying assumptions about, the perceptual processes that produce the representations those higher-level processes operate on. In contrast, I argue that critical aspects of cognitive architectures lie in the low-level details of the traditionally identified, abstract modules and processes.
Natural selection has imbued biological agents with motivations driving them to act for survival and reproduction. Artificial agents likewise require motivations to act in a goal-directed manner. In this context, I present a motivational extension to the LIDA cognitive architecture, integrated within LIDA's cognitive cycle at a fundamental level. This motivational extension provides a repertoire of motivational capacities including alarms, feelings, affective valence, incentive salience, emotion, appraisal, reinforcement learning, and model-free and model-based learning. A LIDA-based agent implementing the proposed motivational extension replicates a reinforcer devaluation experiment, testing its ability to learn, and later revise, the reward-predicting attributes of stimuli that drive its behavior.
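As an illustration of why reinforcer devaluation distinguishes model-free from model-based learning, the following sketch contrasts two toy agents. This is not LIDA code; the agent classes, parameter values, and the "lever press for food" scenario are hypothetical, chosen only to show the behavioral signature: after the outcome is devalued outside the instrumental context, a model-free agent keeps acting on its cached value, while a model-based agent stops immediately.

```python
class ModelFreeAgent:
    """Caches a scalar action value learned from past rewards (hypothetical)."""
    def __init__(self, lr=0.5):
        self.q = 0.0   # cached value of the "press lever" action
        self.lr = lr

    def update(self, reward):
        # Standard delta-rule update toward the experienced reward
        self.q += self.lr * (reward - self.q)

    def will_act(self):
        return self.q > 0  # decides from the cached value alone


class ModelBasedAgent:
    """Stores the action->outcome contingency plus the outcome's current value."""
    def __init__(self):
        self.outcome_value = 0.0  # current worth of the food outcome

    def update(self, reward):
        self.outcome_value = reward

    def will_act(self):
        # Decides by consulting the outcome's *current* value
        return self.outcome_value > 0


mf, mb = ModelFreeAgent(), ModelBasedAgent()

# Training: lever pressing repeatedly yields a valued food reward.
for _ in range(10):
    mf.update(reward=1.0)
    mb.update(reward=1.0)

# Devaluation: the food is made aversive *outside* the lever context,
# so the model-free cache for the action is not updated, but the
# stored outcome value is.
mb.update(reward=-1.0)

print(mf.will_act())  # True  - model-free agent keeps pressing
print(mb.will_act())  # False - model-based agent stops immediately
```

The revision tested in the devaluation experiment corresponds to the model-based agent's update: learning a new value for the outcome, and letting that revised value drive behavior without further instrumental experience.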
Intelligent software agents must also autonomously navigate complex, dynamic, uncertain environments with bounded resources. In my view, this requires that they continually update a hierarchical, dynamic, uncertain internal model of their current situation, via approximate Bayesian inference, incorporating both the sensory data and a generative model of their causes. To explicate my approach, I identify perceptual principles for cognitive architectures concerning perceptual representation, perceptual inference, and the associated learning processes. Guided by these, I propose a predictive coding extension to the HTM Cortical Learning Algorithms, termed PC-CLA, as a potential foundational building block for the systems-level LIDA cognitive architecture. PC-CLA fleshes out LIDA's internal representations, memory, learning, and attentional processes, and takes an initial step toward the comprehensive use of distributed, probabilistic (uncertain) representations throughout the architecture. I conclude with reports on a battery of new tests of the original CLA, as well as proof-of-concept tests of PC-CLA.