
doi: 10.2333/bhmk.26.107
Artificial Neural Networks (ANNs) are able, in general and in principle, to learn complex tasks. Interpretation of the models induced by ANNs, however, is often extremely difficult due to their non-linear and non-symbolic nature. To enable better interpretation of the way knowledge is represented in ANNs, we present BP-SOM, a neural network architecture and learning algorithm. BP-SOM is a combination of a multi-layered feed-forward network (MFN) trained with the back-propagation learning rule (BP), and Kohonen's self-organising maps (SOMs). The involvement of the SOM in learning leads to highly structured knowledge representations both at the hidden layer and on the SOMs. We focus on a particular phenomenon within trained BP-SOM networks, viz. that the SOM part acts as an organiser of the learning material into instance subsets that tend to be homogeneous with respect to both class labelling and subsets of attribute values. We show that this structured knowledge representation can either be exploited directly for rule extraction, or be used to explain a generic type of checksum solution found by the network for learning M-of-N tasks.
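The organising role the abstract attributes to the SOM can be illustrated with a toy sketch. The following is not the authors' implementation: it is a minimal one-dimensional SOM, in pure Python, clustering hypothetical hidden-layer activation vectors while tallying the class labels that land on each cell. The data, cell count, learning rate, and neighbourhood radius are all illustrative assumptions; in BP-SOM proper the SOM would additionally feed an error signal back into back-propagation.

```python
import random

random.seed(0)

def bmu(som, x):
    # best-matching unit: index of the SOM cell closest to vector x
    return min(range(len(som)),
               key=lambda i: sum((w - xi) ** 2 for w, xi in zip(som[i], x)))

def som_update(som, x, lr=0.2, radius=1):
    # pull the winning cell and its neighbours towards x
    b = bmu(som, x)
    for i in range(len(som)):
        if abs(i - b) <= radius:
            som[i] = [w + lr * (xi - w) for w, xi in zip(som[i], x)]
    return b

# hypothetical hidden-layer activations paired with class labels
data = [([0.9, 0.1], 'A'), ([0.8, 0.2], 'A'),
        ([0.1, 0.9], 'B'), ([0.2, 0.8], 'B')]

som = [[random.random(), random.random()] for _ in range(4)]
labels = [{} for _ in som]  # per-cell class-label counts

for _ in range(50):
    for x, y in data:
        b = som_update(som, x)
        labels[b][y] = labels[b].get(y, 0) + 1

# after training, the two classes map to different SOM cells, and each
# winning cell's label tally is dominated by a single class -- the kind of
# class-homogeneous clustering the abstract describes
```

The label tallies per cell are the point of the sketch: a cell whose counts are dominated by one class corresponds to a homogeneous instance subset of the kind the abstract says can be exploited for rule extraction.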
