Publication · Preprint · 2017

Learning Based on CC1 and CC4 Neural Networks

Subhash Kak
Open Access · English
  • Published: 22 Dec 2017
We propose that a general learning system should have three kinds of agents corresponding to sensory, short-term, and long-term memory that implicitly will facilitate context-free and context-sensitive aspects of learning. These three agents perform mutually complementary functions that capture aspects of the human cognition system. We investigate the use of CC1 and CC4 networks as models of short-term and sensory memory.
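The CC4 network mentioned in the abstract is trained in a single pass: each binary training vector becomes one hidden neuron whose input weights are read directly off the bits (+1 for a 1, −1 for a 0), with a bias weight of r − s + 1, where s is the number of 1s in the vector and r is the radius of generalization; a hidden neuron then fires for any input within Hamming distance r of its training vector. A minimal sketch in Python/NumPy, following the corner-classification scheme of Tang and Kak [7] for a single binary output; the function names and array layout here are illustrative, not from the paper:

```python
import numpy as np

def train_cc4(X, y, r=0):
    """One-pass CC4 training: one hidden neuron per training sample.

    X: binary array (n_samples, n_inputs); y: binary labels (n_samples,).
    r: radius of generalization (Hamming distance within which a
    hidden neuron still fires). An always-on bias input is appended.
    """
    X = np.asarray(X)
    W_h = np.where(X == 1, 1, -1)       # input->hidden weights read off the bits
    s = X.sum(axis=1)                   # number of 1s in each training vector
    bias = r - s + 1                    # bias weight per hidden neuron
    W_h = np.column_stack([W_h, bias])
    W_o = np.where(np.asarray(y) == 1, 1, -1)  # hidden->output weights
    return W_h, W_o

def predict_cc4(W_h, W_o, X):
    """Forward pass with binary-step neurons (fire when net input > 0)."""
    X = np.column_stack([np.asarray(X), np.ones(len(X), dtype=int)])
    h = (X @ W_h.T > 0).astype(int)     # hidden layer: net = r + 1 - d, d = Hamming distance
    return (h @ W_o > 0).astype(int)
```

For a stored vector the net input to its hidden neuron is r + 1, and each bit of Hamming distance subtracts 1, so with r = 0 the network memorizes the training set exactly, while r > 0 generalizes to nearby inputs.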
Keywords: Computer Science – Neural and Evolutionary Computing
References (24 in total; 15 listed below)

[1] M.M. Chun and Y. Jiang, Contextual cueing: Implicit learning and memory of visual context guides spatial attention. Cognitive Psychology 36: 28-71, 1998.

[2] L.W. Barsalou, Context-independent and context-dependent information in concepts. Memory and Cognition 10: 82-93, 1982.

[3] R. Keller, The sociology of knowledge approach to discourse, Human Studies 34 (1), 43-65, 2011.

[4] S. Kak, On training feedforward neural networks. Pramana, vol. 40, pp. 35-42, 1993.

[5] S. Kak, New algorithms for training feedforward neural networks. Pattern Recognition Letters, 15, 295-298, 1994.

[6] S. Kak, Three languages of the brain: quantum, reorganizational, and associative. In Learning as Self-Organization, K. Pribram and J. King, eds., Lawrence Erlbaum, Mahwah, N.J., 185-219, 1996.

[7] K.W. Tang, S. Kak, Fast classification networks for signal processing. Circuits, Systems, Signal Processing. 21, 207-224, 2002.

[8] S. Kak, Faster web search and prediction using instantaneously trained neural networks. IEEE Intelligent Systems. 14, 79-82, November/December, 1999.

[9] Z. Zhang et al., TextCC: New feedforward neural network for classifying documents instantly. Advances in Neural Networks ISNN 2005. Lecture Notes in Computer Science 3497: 232-237, 2005.

[10] Z. Zhang et al., Document Classification Via TextCC Based on Stereographic Projection and for deep learning, International Conference on Machine Learning and Cybernetics, Dalian, 2006.

[11] J. Zhu and G. Milne, Implementing Kak Neural Networks on a Reconfigurable Computing Platform, Lecture Notes in Computer Science Volume 1896: 260-269, 2000.

[12] A. Shortt, J.G. Keating, L. Moulinier, C.N. Pannell, Optical implementation of the Kak neural network, Information Sciences 171: 273-287, 2005.

[13] Y. Bengio, Learning deep architectures for AI. Foundations and Trends in Machine Learning. 2 (1): 1-127, 2009.

[14] D. C. Ciresan, U. Meier, J. Schmidhuber. Multi-column Deep Neural Networks for Image Classification. IEEE Conf. on Computer Vision and Pattern Recognition CVPR, 2012.

[15] Y. LeCun, Y. Bengio, G. Hinton, Deep learning. Nature. 521: 436-444, 2015.
