
Human learning frequently involves learning several tasks simultaneously; in particular, humans compare and contrast similar tasks when solving a problem. Nevertheless, most approaches to machine learning focus on learning a single isolated task, known as Single Task Learning (STL). Many of these problems can be reformulated as learning several tasks related to the main task at the same time while using a shared representation, known as Multitask Learning (MTL). This type of learning improves generalization performance on a main task by using the information contained in other related tasks. In this article, we examine distinct schemes used in MTL, propose a new network architecture, and test each scheme on two different problems. The proposed scheme makes use of private subnetworks to improve the performance of MTL.
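The core MTL idea described above, a shared representation feeding several task-specific (private) subnetworks, can be sketched as follows. This is a minimal illustrative example with NumPy; the layer sizes, the two-task setup, and the use of plain linear heads are assumptions for illustration, not the architecture proposed in the article.

```python
import numpy as np

# Minimal sketch of hard parameter sharing in multitask learning (MTL):
# one shared hidden layer feeds several task-specific ("private") heads.
# All dimensions below are illustrative assumptions.

rng = np.random.default_rng(0)

n_in, n_shared, n_tasks = 4, 8, 2

# Shared representation: a single weight matrix used by every task.
W_shared = rng.normal(size=(n_in, n_shared))
# Private subnetworks: one small head per task.
W_private = [rng.normal(size=(n_shared, 1)) for _ in range(n_tasks)]

def forward(x):
    """Return one prediction per task from a single shared representation."""
    h = np.tanh(x @ W_shared)                  # shared hidden features
    return [float(h @ W) for W in W_private]  # each task's private head

x = rng.normal(size=n_in)
preds = forward(x)
print(len(preds))  # one output per task
```

Because the shared weights receive gradient signal from every task during training, the related tasks act as an inductive bias that can improve generalization on the main task.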
This work is partially funded by the Ministerio de Educación y Ciencia through project TIC2002-03033.
Multitask learning, Signal Theory and Communications, Asymmetric architecture, Neural architectures, Dual models
