
Handle: 20.500.14243/222232, 2158/920331
In this work we consider the problem of minimizing a sum of continuously differentiable functions. The vector of variables is partitioned into two blocks, and the objective function is assumed to be convex with respect to one of the blocks. Problems with this structure arise, for instance, in machine learning. To exploit the structure of the objective function, and because the number of terms in the sum may be huge, we propose a decomposition algorithm combined with an incremental gradient strategy. Global convergence of the proposed algorithm is proved. Computational experiments on large-scale real problems show the effectiveness of the proposed approach compared with existing algorithms.
Keywords: Decomposition, Incremental gradient methods, Large-scale unconstrained optimization
| Indicator | Description | Value |
| --- | --- | --- |
| Citations | An alternative to the "Influence" indicator, also reflecting the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 2 |
| Popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Average |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Average |
