
handle: 11250/251546
Catastrophic forgetting is a behavior seen in artificial neural networks (ANNs) in which new information overwrites old information so thoroughly that the old information is no longer usable. Because this happens very rapidly in ANNs, it poses both major practical problems and difficulties in using such networks as models of the human brain. In this thesis I approach the problem from the practical viewpoint and attempt to provide rules, guidelines, datasets and analysis methods that help researchers better analyze new ANN models in terms of catastrophic forgetting, and thus lead to better solutions. I suggest two methods of analysis that measure the overlap between input patterns in the input space, and I show strong indications that these measurements can predict whether a back-propagation network will retain information well or poorly. I also provide source code, implemented in Matlab, for analyzing datasets, both with the newly suggested measurements and with existing ones, and for running experiments that measure catastrophic forgetting.
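The abstract does not define the two overlap measurements, so as a purely illustrative sketch (not the thesis's actual method, and in Python rather than the thesis's Matlab), one simple way to quantify overlap between the input patterns of two tasks is the mean pairwise cosine similarity across tasks; higher overlap between task inputs is the kind of quantity the thesis relates to forgetting:

```python
import numpy as np

def mean_pattern_overlap(task_a, task_b):
    """Mean pairwise cosine similarity between the input patterns of two
    tasks. This is one plausible overlap measure, used here only for
    illustration; the thesis's own two measurements may differ."""
    a = np.asarray(task_a, dtype=float)
    b = np.asarray(task_b, dtype=float)
    # Normalize each pattern to unit length, then average all cross-task
    # dot products (i.e., all pairwise cosine similarities).
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return float(np.mean(a @ b.T))

# Orthogonal task inputs give overlap 0; comparing a task with itself
# averages the self-similarities (1.0) with the cross-pattern ones (0.0).
t1 = [[1, 0, 0, 0], [0, 1, 0, 0]]
t2 = [[0, 0, 1, 0], [0, 0, 0, 1]]
print(mean_pattern_overlap(t1, t2))  # 0.0
print(mean_pattern_overlap(t1, t1))  # 0.5
```

Under the hypothesis the abstract describes, a low value of such a measure would suggest less interference between tasks, and hence better retention in a back-propagation network.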
SIF2 datateknikk, Intelligente systemer, ntnudaim
