
Abstract

This study integrates Hebbian learning and Q-learning within a unified cognitive framework to facilitate efficient decision-making in dynamic environments. By merging these learning paradigms, we emulate human cognitive processes and analyze how various cognitive mechanisms can enhance agent behavior. We compare our approach to the "Active Inference in Hebbian Learning Networks" study, which employs Hebbian learning within an active inference (AIF) framework for controlling dynamic agents. That study uses two Hebbian ensembles: a posterior network for inferring latent states from observations and a state-transition network for predicting future states from current state-action pairs. Its experimental results in the Mountain Car environment show that Hebbian AIF outperforms Q-learning, highlighting the efficiency of Hebbian learning without replay buffers. In our approach, Hebbian learning is applied to memory encoding within a cognitive model, strengthening connections between frequently co-activated nodes and transforming sensory input into a storable format. Q-learning is implemented as a reinforcement learning mechanism using a traditional table-based method, integrated with memory retrieval and attentional selection. Our system architecture combines multiple cognitive mechanisms, including memory systems, reinforcement learning, and attentional processes, aiming for adaptive intelligence and efficient decision-making based on feedback and learning. We present early experimental results demonstrating the effectiveness of the agent's performance in a randomized maze environment with dynamic objects; performance in this environment suggests that aspects of this approach may improve computational efficiency in learning and adaptation. We also integrate a computational function based on Robert Worden's Requirement Equation, referred to here as the Worden RE Subsystem.
We also discuss the impact of distress dynamics on memory encoding and retrieval, highlighting the role of distress states in cognitive processes. The paper concludes by examining the distinctions and similarities between our approach and the referenced study, emphasizing the importance of unsupervised learning and biological plausibility. By incorporating distress states, belief dynamics, and related operations into the learning process, our model attempts a holistic representation of engineered active inference, aiming to enhance the overall performance and decision-making capabilities of cognitive computing models in complex, real-world environments.
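To make the two learning mechanisms described above concrete, the following is a minimal sketch of (a) Hebbian memory encoding that strengthens weights between co-activated nodes, and (b) a traditional table-based Q-learning update. This is an illustrative sketch, not the authors' implementation: the class names, learning rates, and decay term are assumptions, and the integration with memory retrieval and attentional selection described in the abstract is not modeled here.

```python
import numpy as np

class HebbianMemory:
    """Memory encoding via Hebb's rule: co-activated nodes grow stronger links."""
    def __init__(self, n_nodes, lr=0.01, decay=0.001):
        self.w = np.zeros((n_nodes, n_nodes))  # associative weight matrix
        self.lr = lr        # Hebbian learning rate (illustrative value)
        self.decay = decay  # mild weight decay to keep weights bounded

    def encode(self, activation):
        # Hebb's rule: dw_ij = lr * a_i * a_j, then decay all weights slightly.
        self.w += self.lr * np.outer(activation, activation)
        self.w *= (1.0 - self.decay)
        np.fill_diagonal(self.w, 0.0)  # no self-connections

    def retrieve(self, cue):
        # One-step associative retrieval: project a (possibly partial) cue
        # through the learned weights to reactivate associated nodes.
        return self.w @ cue

class TabularQ:
    """Classic table-based Q-learning with epsilon-greedy action selection."""
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95, eps=0.1):
        self.q = np.zeros((n_states, n_actions))
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, s, rng):
        # Explore with probability eps, otherwise exploit the current table.
        if rng.random() < self.eps:
            return int(rng.integers(self.q.shape[1]))
        return int(np.argmax(self.q[s]))

    def update(self, s, a, r, s_next):
        # Standard one-step Q-learning target: r + gamma * max_a' Q(s', a').
        target = r + self.gamma * np.max(self.q[s_next])
        self.q[s, a] += self.alpha * (target - self.q[s, a])
```

In a full agent along the lines sketched in the abstract, the Hebbian store would encode observations into retrievable memory traces while the Q-table drives action selection over the resulting state representation; the coupling between the two is a design choice left open here.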
