
A new approach to exact penalization of a constrained, nonlinear optimization problem is introduced. It is motivated by the desire to remedy the following perceived failures of other exact penalty methods: 1. nonsmoothness is avoided; 2. the penalized objective remains bounded below under mild assumptions; 3. the method is flexible in its use of available Lagrange multiplier estimates; and 4. regularization techniques are explicitly included in the analysis of the penalized problem. The problem
\[ \min \{ f(x) \mid F(x)=0, \ x \in [u,v] \}, \qquad [u,v] := \{ x \in \mathbb{R}^{n} \mid u \leq x \leq v \}, \]
with \(F(x) = (F_{1}(x), \dots, F_{m}(x))\), is first reformulated as
\[ \min \{ f(x) \mid F_{i}(x) = \varepsilon w_{i}, \ i=1,\dots,m, \ x \in [u,v], \ \varepsilon = 0 \}. \]
This motivates the following penalty function
\[ f_{\sigma}(x,\varepsilon) = \begin{cases} f(x) & \text{if } \varepsilon = \Delta(x,\varepsilon) = 0, \\ f(x) + \dfrac{1}{2\varepsilon}\,\dfrac{\Delta(x,\varepsilon)}{1 - q\,\Delta(x,\varepsilon)} + \sigma \beta(\varepsilon) & \text{if } \varepsilon > 0 \text{ and } \Delta(x,\varepsilon) < q^{-1}, \\ +\infty & \text{otherwise,} \end{cases} \]
where \(q > 0\) and \(\beta: [0,\overline{\varepsilon}] \to [0,\infty)\) is continuous and continuously differentiable on \((0,\overline{\varepsilon}]\) with \(\beta(0)=0\). This leads to the penalty problem
\[ \min \{ f_{\sigma}(x, \varepsilon) \mid (x,\varepsilon) \in [u,v] \times [0,\overline{\varepsilon}] \}. \]
The denominator term \(1 - q\,\Delta(x,\varepsilon)\) is included to keep the level sets of \(f_{\sigma}\) in a bounded region within which it is differentiable and the penalty function remains bounded below. The term \(\sigma \beta(\varepsilon)\) allows a simultaneous optimization over \((x,\varepsilon)\), and appropriate choices of \(\beta(\cdot)\) make all local minimizers of \(f_{\sigma}\) occur at points with \(\varepsilon = 0\). In particular, it is shown that under mild assumptions, for \(\sigma\) sufficiently large, all local minimizers of \(f_{\sigma}\) are of the form \((x^{\ast},0)\) with \(x^{\ast}\) a local minimizer of the original problem. Suggestions are made as to how to use the dependence on \(\varepsilon\) to solve mildly nonsmooth problems: within the formulation of the problem, elementary nonsmooth functions are replaced by approximating smooth functions (converging as \(\varepsilon \downarrow 0\)). The case when \(f\) and \(F\) are Lipschitz continuous in \(x\) and one-sidedly Lipschitz continuous in \(\varepsilon\) at zero is considered. Under more stringent assumptions on the properties of a minimizer \(x^{\ast}\) of this problem, it is shown that \((x^{\ast},0)\) is a local minimizer of \(f_{\sigma}\). In the last section the method is extended to the problem
\[ \min \{ f(x) \mid x \in [u,v], \quad F_{l} \leq F(x) \leq F_{u} \}, \]
where \(F_{l} \leq F_{u}\) are vectors of possibly infinite bounds for the constraint functions.
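To make the case structure of \(f_{\sigma}\) concrete, the following minimal sketch evaluates the penalty on a toy equality-constrained problem. The abstract leaves \(\Delta(x,\varepsilon)\) unspecified and only constrains \(\beta\), so the choices \(\Delta(x,\varepsilon) := \|F(x)\|^{2}\) and \(\beta(\varepsilon) := \varepsilon\) here are illustrative assumptions, not the paper's definitions, and the crude grid search stands in for any actual solver.

```python
import numpy as np

# Toy instance: min x1 + x2  s.t.  x1^2 + x2^2 - 1 = 0,  x in [-2, 2]^2.
def f(x):
    return x[0] + x[1]

def F(x):
    return np.array([x[0]**2 + x[1]**2 - 1.0])

# Illustrative (assumed) choices -- the abstract does not fix Delta or beta:
def Delta(x, eps):
    return float(F(x) @ F(x))      # assume Delta(x, eps) = ||F(x)||^2

def beta(eps):
    return eps                     # continuous, C^1 on (0, eps_bar], beta(0) = 0

q, sigma, eps_bar = 1.0, 10.0, 1.0

def f_sigma(x, eps):
    d = Delta(x, eps)
    if eps == 0.0:
        return f(x) if d == 0.0 else np.inf  # feasible case: plain objective
    if q * d < 1.0:                          # region where 1 - q*Delta > 0
        return f(x) + d / (2.0 * eps * (1.0 - q * d)) + sigma * beta(eps)
    return np.inf                            # keeps the level sets bounded

# Crude grid search over (x, eps) in [-2, 2]^2 x [0, eps_bar]:
xs = np.linspace(-2.0, 2.0, 81)
eps_grid = np.linspace(0.0, eps_bar, 21)
best = min((f_sigma(np.array([a, b]), e), a, b, e)
           for a in xs for b in xs for e in eps_grid)
print(best)  # expect x near (-1/sqrt(2), -1/sqrt(2)) and small eps
```

With \(\sigma\) this large the \(\sigma\beta(\varepsilon)\) term dominates for large \(\varepsilon\) while the \(1/(2\varepsilon)\) factor penalizes infeasibility for small \(\varepsilon\), so the grid minimum lands near the feasible set with \(\varepsilon\) close to zero, consistent with the claim that local minimizers of \(f_{\sigma}\) occur at \(\varepsilon = 0\).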
Keywords: penalty functions; constrained nonlinear optimization; nonlinear programming; exact penalty; other numerical methods in calculus of variations
