Cutting-plane method

In mathematical optimization, the cutting-plane method is an umbrella term for optimization methods which iteratively refine a feasible set or objective function by means of linear inequalities, termed cuts. Such procedures are popularly used to find integer solutions to mixed integer linear programming (MILP) problems, as well as to solve general, not necessarily differentiable convex optimization problems. The use of cutting planes to solve MILP was introduced by Ralph E. Gomory.

Cutting-plane methods for MILP work by solving a non-integer linear program, the linear relaxation of the given integer program. The theory of linear programming dictates that, under mild assumptions (the linear program has an optimal solution, and the feasible region does not contain a line), one can always find an extreme point (corner point) that is optimal. The optimum obtained is tested for being an integer solution. If it is not, there is guaranteed to exist a linear inequality that separates the optimum from the convex hull of the true feasible set. Finding such an inequality is the separation problem, and such an inequality is a cut. A cut can be added to the relaxed linear program; the current non-integer solution is then no longer feasible for the relaxation. This process is repeated until an optimal integer solution is found.

Cutting-plane methods for general convex continuous optimization and variants are known under various names: Kelley's method, Kelley-Cheney-Goldstein method, and bundle methods. They are popularly used for non-differentiable convex minimization, where a convex objective function and its subgradient can be evaluated efficiently but the usual gradient methods for differentiable optimization cannot be used. This situation is most typical for the concave maximization of Lagrangian dual functions. Another common situation is the application of Dantzig-Wolfe decomposition to a structured optimization problem, which yields formulations with an exponential number of variables. Generating these variables on demand by means of delayed column generation is identical to performing a cutting-plane method on the respective dual problem.


Gomory's cut

Cutting planes were proposed by Ralph Gomory in the 1950s as a method for solving integer programming and mixed-integer programming problems. However, most experts, including Gomory himself, considered them impractical due to numerical instability, as well as ineffective because many rounds of cuts were needed to make progress towards the solution. This changed in the mid-1990s, when Cornuejols and co-workers showed them to be very effective in combination with branch-and-cut and found ways to overcome the numerical instabilities. Nowadays, all commercial MILP solvers use Gomory cuts in one way or another. Gomory cuts are very efficiently generated from a simplex tableau, whereas many other types of cuts are either expensive or even NP-hard to separate. Among other general cuts for MILP, lift-and-project cuts are notable for dominating Gomory cuts.

Let an integer programming problem be formulated as

\mbox{maximize } c^T x
\mbox{subject to } Ax = b,
x \geq 0,\ x_i \mbox{ all integers}.

The method proceeds by first dropping the requirement that the x_i be integers and solving the associated linear programming problem to obtain a basic feasible solution. Geometrically, this solution will be a vertex of the convex polytope consisting of all feasible points. If this vertex is not an integer point, then the method finds a hyperplane with the vertex on one side and all feasible integer points on the other. This is then added as an additional linear constraint to exclude the vertex found, creating a modified linear program. The new program is then solved and the process is repeated until an integer solution is found.
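The loop just described can be sketched on a toy one-variable problem (a hypothetical instance chosen for illustration: maximize x subject to 2x ≤ 7 with x a non-negative integer; here each cut is the simple rounding cut x ≤ ⌊x*⌋ rather than a full Gomory cut):

```python
import math

def solve_relaxation(cuts):
    # LP relaxation of  max x  s.t.  2x <= 7  plus any added cuts  x <= u:
    # with a single variable, the optimum is simply the tightest upper bound.
    return min([7 / 2] + cuts)

cuts = []
while True:
    x_star = solve_relaxation(cuts)
    if x_star == math.floor(x_star):   # relaxation optimum is integral: done
        break
    cuts.append(math.floor(x_star))    # add the cutting plane x <= floor(x*)
```

The relaxation optimum 7/2 = 3.5 is fractional, so the cut x ≤ 3 is added; re-solving the tightened relaxation gives the integer optimum x = 3.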

Using the simplex method to solve a linear program produces a set of equations of the form

x_i+\sum \bar a_{i,j}x_j=\bar b_i

where x_i is a basic variable and the x_j are the nonbasic variables (the bars indicate coefficients from the final tableau). Rewrite this equation so that the integer parts are on the left side and the fractional parts are on the right side:

x_i+\sum \lfloor \bar a_{i,j} \rfloor x_j - \lfloor \bar b_i \rfloor  = \bar b_i - \lfloor \bar b_i \rfloor - \sum ( \bar a_{i,j} -\lfloor \bar a_{i,j} \rfloor) x_j.

For any integer point in the feasible region the right side of this equation is less than 1 and the left side is an integer; therefore the common value must be less than or equal to 0. So the inequality

\bar b_i - \lfloor \bar b_i \rfloor - \sum ( \bar a_{i,j} -\lfloor \bar a_{i,j} \rfloor) x_j \le 0

must hold for any integer point in the feasible region. Furthermore, if \bar b_i is not an integer for the basic solution x,

\bar b_i - \lfloor \bar b_i \rfloor - \sum ( \bar a_{i,j} -\lfloor \bar a_{i,j} \rfloor) x_j = \bar b_i - \lfloor \bar b_i \rfloor > 0.

So the inequality above excludes the basic feasible solution and thus is a cut with the desired properties. Introducing a new slack variable xk for this inequality, a new constraint is added to the linear program, namely

x_k + \sum (\lfloor \bar a_{i,j} \rfloor - \bar a_{i,j}) x_j = \lfloor \bar b_i \rfloor - \bar b_i,\, x_k \ge 0,\, x_k \mbox{ an integer}.
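In code, generating this constraint from a tableau row is a small computation. A minimal sketch in pure Python (the helper name gomory_cut is ours; the input row is taken from the worked example that follows):

```python
import math

def gomory_cut(row_coeffs, rhs):
    """From a tableau row  x_i + sum_j a_j x_j = b  with fractional b, build
    the new constraint  x_k + sum_j (floor(a_j) - a_j) x_j = floor(b) - b
    (with a fresh slack x_k >= 0), returned as (coefficients, right-hand side)."""
    cut_coeffs = [math.floor(a) - a for a in row_coeffs]
    cut_rhs = math.floor(rhs) - rhs
    return cut_coeffs, cut_rhs

# The first tableau row of the worked example below:
#   x_1 + (1/2)x_4 + (1/2)x_5 - (1/2)x_6 = 1/2
coeffs, rhs = gomory_cut([0.5, 0.5, -0.5], 0.5)
# This yields the cut  -(1/2)x_4 - (1/2)x_5 - (1/2)x_6 + x_7 = -1/2.
```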


Consider the integer optimization problem

Subject to
x_1+x_2 \le 1
x_1+x_3 \le 1
x_2+x_3 \le 1
x_i \geq 0, x_i \mbox{ an integer for } i=1,2,3

Introduce slack variables xi, i=4, 5, 6 to produce the standard form

Subject to
x_1 + x_2 + x_4 = 1
x_1 + x_3 + x_5 = 1
x_2 + x_3 + x_6 = 1
x_i \geq 0, x_i \mbox{ an integer for } i=1,2,\ldots,6

Solving this with the simplex method gives the basic solution x_1 = x_2 = x_3 = 1/2 (with the slacks x_4 = x_5 = x_6 = 0) and the equations

x_1 + \tfrac{1}{2} x_4 + \tfrac{1}{2} x_5 - \tfrac{1}{2} x_6 = \tfrac{1}{2}
x_2 + \tfrac{1}{2} x_4 - \tfrac{1}{2} x_5 + \tfrac{1}{2} x_6 = \tfrac{1}{2}
x_3 - \tfrac{1}{2} x_4 + \tfrac{1}{2} x_5 + \tfrac{1}{2} x_6 = \tfrac{1}{2}

Each of these equations produces the same cutting plane, and with the introduction of a new slack variable x7 it can be written as a new constraint

-\tfrac{1}{2} x_4 -\tfrac{1}{2} x_5 -\tfrac{1}{2} x_6 + x_7= -\tfrac{1}{2}.

An analysis of the updated linear program quickly shows that the original three constraints are now redundant and the corresponding slack variables can be eliminated, leaving the simplified problem

Subject to
x_1 + x_2 + x_3 + x_7 = 1
x_i \geq 0, x_i \mbox{ an integer for } i=1,2,3,7.

This is easily solved, giving the three optimal solutions (x_1, x_2, x_3) = (1, 0, 0), (0, 1, 0), and (0, 0, 1).
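The example can also be checked by brute force. The sketch below assumes the objective is to maximize x_1 + x_2 + x_3 (an assumption on our part, consistent with the three optimal solutions above), enumerates all integer points, and verifies both the optimum and that the derived cut, equivalently x_4 + x_5 + x_6 ≥ 1 in the original slacks, holds at every integer-feasible point while cutting off the fractional LP optimum (1/2, 1/2, 1/2):

```python
from itertools import product

# Assumed objective: maximize x1 + x2 + x3.
feasible = [(x1, x2, x3)
            for x1, x2, x3 in product(range(2), repeat=3)
            if x1 + x2 <= 1 and x1 + x3 <= 1 and x2 + x3 <= 1]
best = max(x1 + x2 + x3 for x1, x2, x3 in feasible)
optima = sorted(p for p in feasible if sum(p) == best)

def cut_holds(x1, x2, x3):
    # Slacks of the original constraints; the derived Gomory cut
    # reads x4 + x5 + x6 >= 1 in these slacks.
    x4, x5, x6 = 1 - x1 - x2, 1 - x1 - x3, 1 - x2 - x3
    return x4 + x5 + x6 >= 1

valid = all(cut_holds(*p) for p in feasible)   # cut keeps all integer points
cuts_off = not cut_holds(0.5, 0.5, 0.5)        # cut excludes the LP optimum
```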

Convex optimization

Cutting plane methods are also applicable in nonlinear programming. The underlying principle is to approximate the feasible region of a nonlinear (convex) program by a finite set of closed half spaces and to solve a sequence of approximating linear programs.
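As a one-dimensional illustration of this idea (Kelley's method, mentioned earlier; the objective |x - 0.7|, the interval, and the tolerance are all illustrative choices), each iteration adds the subgradient cut t ≥ f(x_k) + g_k(x - x_k) and minimizes the resulting piecewise-linear lower model:

```python
def kelley_1d(f, subgrad, lo, hi, tol=1e-6, max_iter=100):
    """Minimize a convex f on [lo, hi] with Kelley's cutting-plane method."""
    cuts = []                      # each cut is (a, b), meaning t >= a*x + b
    x = lo                         # arbitrary starting point
    best_x, best_f = x, f(x)
    for _ in range(max_iter):
        fx, g = f(x), subgrad(x)
        if fx < best_f:
            best_x, best_f = x, fx
        cuts.append((g, fx - g * x))            # linearization of f at x
        # Minimize the model m(x) = max_k (a_k*x + b_k) over [lo, hi]:
        # the minimizer is an endpoint or an intersection of two cuts.
        candidates = [lo, hi]
        for i, (a1, b1) in enumerate(cuts):
            for a2, b2 in cuts[i + 1:]:
                if a1 != a2:
                    xc = (b2 - b1) / (a1 - a2)
                    if lo <= xc <= hi:
                        candidates.append(xc)
        x, lower = min(((c, max(a * c + b for a, b in cuts)) for c in candidates),
                       key=lambda p: p[1])
        if best_f - lower < tol:   # model lower bound has met the best value
            return best_x
    return best_x

# Illustrative problem: minimize f(x) = |x - 0.7| on [0, 2].
x_star = kelley_1d(lambda x: abs(x - 0.7),
                   lambda x: 1.0 if x >= 0.7 else -1.0,  # a subgradient of f
                   0.0, 2.0)
```

Because the model is an outer approximation of f, its minimum is a valid lower bound, which gives the natural stopping test used above.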

References

Avriel, Mordecai (2003). Nonlinear Programming: Analysis and Methods. Dover Publications. ISBN 0-486-43227-0.

Cornuejols, Gerard (2008). "Valid Inequalities for Mixed Integer Linear Programs". Mathematical Programming, Ser. B, 112: 3-44.

Cornuejols, Gerard (2007). "Revival of the Gomory Cuts in the 1990s". Annals of Operations Research, 149: 63-66.

Wikimedia Foundation. 2010.
