Optimization

Lagrange multiplier. Visualization of the method of Lagrange multipliers. The red line represents the set on which the constraint g(x, y) = c is satisfied. The blue lines are contour lines of f for various values. At the point where f is maximal subject to the constraint, the constraint curve runs tangentially to a contour line of f, and the gradients ∇f and ∇g, shown as blue and red arrows respectively, are collinear. The same problem as above, with the values of f plotted on the vertical axis. Description. To understand how the method works, consider the two-dimensional case with a single constraint.

We want to maximize f(x, y), where for some constant c the constraint g(x, y) = c has to be satisfied. As we move along the constraint curve g(x, y) = c, we touch or cross contour lines of f. A point on the constraint curve lying on a contour line of f can only be a solution of the optimization problem if, at that point, motion along the constraint curve is tangential to that contour line: otherwise we could increase or decrease the value of f by moving forward or backward along the g contour line without violating the constraint. At such a point the gradients of f and g are parallel, that is, ∇f(x, y) = λ ∇g(x, y) for some λ.
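The following sketch illustrates this tangency condition numerically. The particular functions f(x, y) = x + y and g(x, y) = x² + y² = 1, and the known maximizer, are chosen purely for illustration and are not taken from the text above.

    import numpy as np

    # Illustrative problem: maximize f(x, y) = x + y
    # subject to g(x, y) = x**2 + y**2 = 1.
    def grad_f(x, y):
        return np.array([1.0, 1.0])

    def grad_g(x, y):
        return np.array([2.0 * x, 2.0 * y])

    # The constrained maximum lies at x = y = 1/sqrt(2).
    x_star = y_star = 1.0 / np.sqrt(2.0)
    gf = grad_f(x_star, y_star)
    gg = grad_g(x_star, y_star)

    # Collinearity: the 2x2 determinant of [grad f | grad g] vanishes,
    # i.e. grad f = lambda * grad g for some lambda.
    det = gf[0] * gg[1] - gf[1] * gg[0]
    lam = gf[0] / gg[0]          # here lambda is about 0.707
    print(det, lam)              # det is ~0, confirming tangency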

Augmented Lagrangian method. Augmented Lagrangian methods replace a constrained optimization problem by a sequence of unconstrained problems, adding to the objective a penalty term together with a term designed to mimic a Lagrange multiplier. Viewed differently, the unconstrained objective is the Lagrangian of the constrained problem, with an additional penalty term (the augmentation). The method was originally known as the method of multipliers and was studied extensively in the 1970s and 1980s as a good alternative to penalty methods.

It was first discussed by Magnus Hestenes in 1969[1] and by Powell in 1969.[2] The method was studied by R. Tyrrell Rockafellar in relation to Fenchel duality, particularly in relation to proximal-point methods, Moreau–Yosida regularization, and maximal monotone operators; these methods were used in structural optimization. The method was also studied and implemented by Dimitri Bertsekas, notably in his 1982 book,[3] and with respect to entropic regularization (which accelerates the rate of convergence for his "exponential method of multipliers").

General method. Let us say we are solving the following constrained problem: minimize f(x) subject to c_i(x) = 0 for all i ∈ I. The method solves a sequence of unconstrained subproblems: at step k it minimizes the augmented objective Φ_k(x) = f(x) + (μ_k/2) Σ_i c_i(x)² − Σ_i λ_i c_i(x), then updates the multipliers according to λ_i ← λ_i − μ_k c_i(x_k), where x_k is the minimizer of the k-th subproblem, and solves the next subproblem starting from x_k (using the old solution as the initial guess or "warm start").

Karush–Kuhn–Tucker conditions. The KKT conditions were originally named after Harold W. Kuhn and Albert W. Tucker, who first published the conditions in 1951.[2] Later scholars discovered that the necessary conditions for this problem had been stated by William Karush in his master's thesis in 1939.[3][4]

Nonlinear optimization problem. Consider the following nonlinear optimization problem: minimize f(x) subject to g_i(x) ≤ 0 for i = 1, …, m and h_j(x) = 0 for j = 1, …, ℓ, where x is the optimization variable, f is the objective or cost function, g_i are the inequality constraint functions, and h_j are the equality constraint functions.
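As an illustration of the method of multipliers described under General method above, here is a minimal Python sketch. The objective, the single equality constraint, the penalty schedule, and the use of scipy.optimize.minimize for the inner unconstrained solves are assumptions chosen for the example, not part of the original text.

    import numpy as np
    from scipy.optimize import minimize

    # Assumed problem: minimize f(x) = x0^2 + x1^2
    # subject to the single equality constraint c(x) = x0 + x1 - 1 = 0.
    f = lambda x: x[0]**2 + x[1]**2
    c = lambda x: x[0] + x[1] - 1.0

    def augmented_lagrangian(f, c, x0, mu=10.0, lam=0.0, iters=10):
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            # Unconstrained subproblem: Lagrangian term plus quadratic penalty.
            phi = lambda x: f(x) - lam * c(x) + 0.5 * mu * c(x)**2
            # Warm-start the inner solve from the previous solution.
            x = minimize(phi, x).x
            # Multiplier update of the method of multipliers.
            lam = lam - mu * c(x)
            mu *= 2.0          # optionally increase the penalty weight
        return x, lam

    x_opt, lam_opt = augmented_lagrangian(f, c, x0=[0.0, 0.0])
    print(x_opt, lam_opt)      # approaches x = (0.5, 0.5)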

Necessary conditions. Suppose that the objective function f and the constraint functions g_i and h_j are continuously differentiable at a point x*. If x* is a local minimum that satisfies some regularity conditions (see below), then there exist constants μ_i (i = 1, …, m) and λ_j (j = 1, …, ℓ), called KKT multipliers, such that the following hold (cf. the inequality constraint diagram for optimization problems):
Stationarity: for maximizing f(x), ∇f(x*) = Σ_i μ_i ∇g_i(x*) + Σ_j λ_j ∇h_j(x*); for minimizing f(x), −∇f(x*) = Σ_i μ_i ∇g_i(x*) + Σ_j λ_j ∇h_j(x*).
Primal feasibility: g_i(x*) ≤ 0 for all i, and h_j(x*) = 0 for all j.
Dual feasibility: μ_i ≥ 0 for all i.
Complementary slackness: μ_i g_i(x*) = 0 for all i. In the particular case m = 0, i.e. when there are no inequality constraints, the KKT conditions reduce to the Lagrange conditions.
Regularity conditions (or constraint qualifications). For a minimum point x* to satisfy the above KKT conditions, the problem should satisfy some regularity conditions, such as linear independence of the gradients of the active constraints.
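As a small worked check of these conditions, consider the assumed example of minimizing x² subject to x ≥ 1, written with the single inequality constraint g(x) = 1 − x ≤ 0; the example problem and multiplier value below are illustrative, not from the text.

    # Assumed example: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0.
    # The minimizer is x* = 1 with KKT multiplier mu = 2.
    x_star, mu = 1.0, 2.0

    df = 2.0 * x_star          # gradient of the objective at x*
    dg = -1.0                  # gradient of the constraint g(x) = 1 - x

    print(df + mu * dg == 0.0)         # stationarity: grad f + mu * grad g = 0
    print(1.0 - x_star <= 0.0)         # primal feasibility: g(x*) <= 0
    print(mu >= 0.0)                   # dual feasibility
    print(mu * (1.0 - x_star) == 0.0)  # complementary slackness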

Lagrange multiplier. Figure 1: Find x and y to maximize f(x, y) subject to a constraint (shown in red) g(x, y) = c. Figure 2: Contour map of Figure 1. The red line shows the constraint g(x, y) = c. The blue lines are contours of f(x, y). The point where the red line tangentially touches a blue contour is our solution. Since d1 > d2, the solution is a maximization of f(x, y). For instance (see Figure 1), consider the optimization problem: maximize f(x, y) subject to g(x, y) = c. We need both f and g to have continuous first partial derivatives. We introduce a new variable λ, called a Lagrange multiplier, and study the Lagrange function defined by Λ(x, y, λ) = f(x, y) + λ · (g(x, y) − c), where the λ term may be either added or subtracted. Introduction. One of the most common problems in calculus is that of finding maxima or minima (in general, "extrema") of a function, but it is often difficult to find a closed form for the function being extremized. Consider the two-dimensional problem introduced above: maximize f(x, y) subject to g(x, y) = c. We can visualize contours of f given by f(x, y) = d for various values of d, and the contour of g given by g(x, y) = c. The constrained extrema occur where the contour of g touches a contour of f tangentially, that is, where the gradients of f and g are parallel: ∇f(x, y) = λ ∇g(x, y) for some λ.
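The stationary points of Λ can be found symbolically. The sketch below, with f(x, y) = x·y and the constraint x + y = 8 chosen purely as an example, uses SymPy to solve ∇Λ = 0.

    import sympy as sp

    x, y, lam = sp.symbols('x y lam', real=True)

    # Assumed example: maximize f(x, y) = x*y subject to g(x, y) = x + y = 8.
    f = x * y
    g = x + y
    c = 8

    # Lagrange function with the lambda term added.
    Lam = f + lam * (g - c)

    # Stationarity of Lam in x, y and lam enforces that grad f is
    # parallel to grad g and that the constraint g(x, y) = c holds.
    solutions = sp.solve([sp.diff(Lam, v) for v in (x, y, lam)], [x, y, lam], dict=True)
    print(solutions)   # [{x: 4, y: 4, lam: -4}] -> constrained maximum at (4, 4)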

Optimization problem. Continuous optimization problem. The standard form of a (continuous) optimization problem is[1]: minimize f(x) subject to g_i(x) ≤ 0 for i = 1, …, m and h_j(x) = 0 for j = 1, …, p, where f is the objective function to be minimized over the variable x, the conditions g_i(x) ≤ 0 are called inequality constraints, and the conditions h_j(x) = 0 are called equality constraints. By convention, the standard form defines a minimization problem.
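For illustration, the following sketch writes a small problem in this standard form and solves it numerically. The particular objective, constraints, and the use of scipy.optimize.minimize with the SLSQP solver are assumptions for the example, not part of the text.

    import numpy as np
    from scipy.optimize import minimize

    # Assumed example in standard form:
    #   minimize    f(x) = (x0 - 1)^2 + (x1 - 2)^2
    #   subject to  g(x) = x0 + x1 - 2 <= 0   (inequality constraint)
    #               h(x) = x0 - x1 = 0        (equality constraint)
    f = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2

    constraints = [
        {'type': 'ineq', 'fun': lambda x: -(x[0] + x[1] - 2.0)},  # SciPy expects fun(x) >= 0
        {'type': 'eq',   'fun': lambda x: x[0] - x[1]},
    ]

    res = minimize(f, x0=np.zeros(2), method='SLSQP', constraints=constraints)
    print(res.x)   # approximately [1, 1]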

A maximization problem can be treated by negating the objective function. Combinatorial optimization problem. Formally, a combinatorial optimization problem is a quadruple (I, f, m, g), where I is a set of instances; given an instance x ∈ I, f(x) is the set of feasible solutions; given an instance x and a feasible solution y of x, m(x, y) denotes the measure of y, which is usually a positive real; and g is the goal function, which is either min or max. The goal is then to find, for some instance x, an optimal solution, that is, a feasible solution y with m(x, y) = g{ m(x, y′) : y′ ∈ f(x) }. For each combinatorial optimization problem, there is a corresponding decision problem that asks whether there is a feasible solution for some particular measure m_0; for example, given a graph which contains vertices u and v, the optimization problem of finding a shortest path from u to v corresponds to the decision problem of whether there is a path from u to v of at most a given length. A sketch of the quadruple appears below.
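The toy Python sketch below represents this quadruple for a hypothetical shortest-path instance on a small graph; the graph and the names feasible_solutions, measure, and goal are all illustrative assumptions.

    # A combinatorial optimization problem as a quadruple (I, f, m, g):
    # instances, feasible solutions, measure, and goal (min or max).

    # Assumed instance: a small graph with vertices u and v; feasible
    # solutions are simple paths from u to v; the measure is path length.
    graph = {'u': ['a', 'b'], 'a': ['v'], 'b': ['a', 'v'], 'v': []}

    def feasible_solutions(graph, start='u', end='v'):
        """Enumerate all simple paths from start to end (the set f(x))."""
        def walk(node, path):
            if node == end:
                yield path
                return
            for nxt in graph[node]:
                if nxt not in path:
                    yield from walk(nxt, path + [nxt])
        yield from walk(start, [start])

    measure = len          # m(x, y): number of vertices on the path
    goal = min             # g: here a minimization problem

    # An optimal solution is a feasible solution with extremal measure.
    best = goal(feasible_solutions(graph), key=measure)
    print(best)            # ['u', 'a', 'v'] -- a shortest u-v path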