
doi: 10.1007/bf02204858
An optimization problem \(\min f(x)\), \(x\in X\), is considered where \(f(x)\) may be evaluated only with essential errors. An approximation method is proposed in which, at each iteration \(k\), an approximation \(f^k(x)\) of \(f(x)\) is used. Combinations of the approximation method with reduced gradient, feasible direction, and Lagrange multiplier algorithms are analyzed, and convergence theorems are formulated. The results are applied to stochastic programs with recourse.
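The iterative idea described in the abstract, replacing the exactly unavailable \(f(x)\) by an increasingly accurate approximation \(f^k(x)\) at each iteration, can be sketched in a toy form. Everything below (the quadratic objective, Gaussian evaluation noise, sample averaging as the surrogate \(f^k\), the diminishing step size, and the projection onto \(X\)) is an assumed illustrative setup, not the paper's actual algorithm.

```python
import random

# Toy problem (assumed for illustration): minimize f(x) = x^2 over
# X = [-1, 2], where f can only be evaluated with additive noise.

def noisy_f(x):
    # Evaluation of f with "essential error" (assumed Gaussian noise)
    return x * x + random.gauss(0.0, 0.1)

def approx_f(x, k):
    # Surrogate f^k(x): average of k noisy evaluations; the error of the
    # approximation shrinks as k grows, mimicking f^k -> f
    return sum(noisy_f(x) for _ in range(k)) / k

def approx_grad(x, k, h=0.1):
    # Central finite-difference gradient of the surrogate f^k
    return (approx_f(x + h, k) - approx_f(x - h, k)) / (2.0 * h)

def project(x, lo=-1.0, hi=2.0):
    # Projection onto the feasible set X = [lo, hi]
    return max(lo, min(hi, x))

random.seed(0)
x = 2.0
for k in range(1, 201):
    step = 1.0 / k                      # diminishing step size
    x = project(x - step * approx_grad(x, k))
print(x)  # iterate stays feasible and settles near the minimizer 0
```

The key point the sketch illustrates is that convergence can be obtained even though every single evaluation is wrong, provided the approximations \(f^k\) improve fast enough relative to the step sizes.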
Numerical mathematical programming methods, Nonlinear programming, feasible direction, approximation method, Computational methods for problems pertaining to operations research and mathematical programming, Stochastic programming, Lagrange multiplier, reduced gradient, stochastic programs with recourse
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator. | 3 |
| Popularity | Reflects the current impact/attention (the "hype") of the article in the research community at large, based on the underlying citation network. | Average |
| Influence | Reflects the overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of the article directly after its publication, based on the underlying citation network. | Average |
