Linear programming

Introduce by example: Maximizing profit
  Making hammers & rubber mallets, using steel & rubber.
  Profit: $2 per hammer, $1 per mallet
    maximize 2h + 1m
  Limited supply of materials:
    Hammer = 3 steel + 1 rubber
    Mallet = 1 steel + 2 rubber
    Have 9 steel, 8 rubber
  Can't make negative things
  How to represent these constraints?
    3h + 1m <= 9
    1h + 2m <= 8
    h, m >= 0
  We'll ignore the real-world issue of integrality.
    E.g., consider h, m to be millions of units each, instead.
    Integer linear programming is much harder (NP-complete).
  Emphasize the linear form of the objective and constraints.

Diagram as n-dimensional polytope (n = #vars)
  Intersection of the constraints' half-spaces (half-planes in 2-D)
  "feasible" space
  Diagram for above example
  Suggestions on how to solve above example?
  Polytope has at most m faces (m = #inequalities)
    Some inequalities may be redundant
    How many vertices are possible?
  Under-constrained -> polytope unbounded
  Over-constrained  -> polytope empty
  Diagram gradients of goal formula
    Optimum, if any, at some vertex.
    Check vertices for above example

Example: Network flow
  What values are we trying to find?  variables = edge flows
  How to describe what we are trying to maximize?
    Total flow into sink, or equivalently, total flow out of source.
  Flow constraints already in appropriate form.

General form
  Maximize c.x subject to Ax <= b and x >= 0
  What are A, b for the hammer/mallet problem?  (see the code sketch below)
  How to represent a minimization problem?
  How to represent equality constraints?
  How to represent strict inequality constraints?
  What if we want x >= 3 instead of x >= 0?
  What if we want x >= -3 instead of x >= 0?
  Integral values only?  E.g., can't make/sell fractional items.  Discuss later.

How to solve?
  Simplex -- Dantzig 1947
    Move vertex to vertex along edges
    Fairly simple linear algebra
    Can be optimized a lot
    Potentially exponential (#vertices), but good in practice
  Ellipsoid method -- Khachiyan 1979
    Complicated
    First polynomial-time algorithm for LP, but generally impractical
  Karmarkar's alg. (an interior point method) -- Karmarkar 1984
    Complicated
    Competitive with simplex in practice
    O(n^3.5 L^2 log L log log L), where L = #bits in input
      logarithmic factors are for the efficiency of L-bit multiplication
  Other interior point methods since then.
  In practice, simplex & interior point algorithms are both used, often in the
  same package.  They tend to work best on different problems.
  We'll look at simplex.
  Also numerous parallel algorithms; still an active research area.

Importance
  Many real-world optimization problems can be posed as LP problems
    Intro examples are good, albeit tiny, examples
    Real-world problems often have thousands of variables & constraints
  Other real-world optimization problems can be approximated as LP problems
    Non-linear or integer linear problems are hard to solve
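As a concrete check of the general form above, here is a small sketch that
feeds the hammer/mallet numbers to an off-the-shelf solver (Python with
SciPy's linprog, which minimizes, so we negate c to maximize); the array
names and the choice of SciPy are illustrative, not part of the notes.

    import numpy as np
    from scipy.optimize import linprog

    # Hammer/mallet problem in general form: maximize c.x s.t. Ax <= b, x >= 0.
    c = np.array([2.0, 1.0])        # profit per hammer, per mallet
    A = np.array([[3.0, 1.0],       # steel:  3h + 1m <= 9
                  [1.0, 2.0]])      # rubber: 1h + 2m <= 8
    b = np.array([9.0, 8.0])

    # linprog minimizes, so minimize -c.x; bounds of (0, None) encode x >= 0.
    res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)          # roughly [2. 3.] and 7.0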
Present simplex by example...
  Use above example.

  Slack variables
    3h + 1m + x1 = 9
    1h + 2m + x2 = 8
  Standard form
  Basic & nonbasic variables

    x1 = 9 - 3h - 1m
    x2 = 8 - 1h - 2m
    ----------------
    z  =     2h + 1m

  Represents feasible solution h=0, m=0, with goal=0.

  Pivot
    Entering & leaving variables
    Entering variables
      Pick a variable that, when increased, increases z.
      Equivalent to picking the polytope edge to move along.
      Which can enter?  h, m
      Choose h
        Heuristic: largest coefficient in goal -- increasing h increases the
        goal fastest.
    Leaving variables
      Pick the variable that places the greatest restriction on how much the
      entering variable can increase.
      Equivalent to picking the polytope vertex at the end of the edge we move
      along.
      Mathematically: choose the equation whose ratio b_eqn / a_var,eqn is the
      smallest positive one (the minimum-ratio test).
      Often ties among multiple variables, as multiple faces can intersect at
      a vertex.
      Which can leave?  x1, x2
      Choose x1
        Heuristic: provides the tightest bound on h.

    x1 = 9 - 3h - 1m  ->  h = 1/3 (9 - 1m - x1) = 3 - 1/3 m - 1/3 x1
    x2 = 8 - 1(3 - 1/3 m - 1/3 x1) - 2m = 5 - 5/3 m + 1/3 x1
    z  = 2(3 - 1/3 m - 1/3 x1) + 1m = 6 + 1/3 m - 2/3 x1

    h  = 3 - 1/3 m - 1/3 x1
    x2 = 5 - 5/3 m + 1/3 x1
    -----------------------
    z  = 6 + 1/3 m - 2/3 x1

  Represents feasible solution m=0, h=3, goal=6.

  Only m can enter.  Either h or x2 can leave.
  Choose x2
    Heuristic: provides the tightest bound on m.

    x2 = 5 - 5/3 m + 1/3 x1  ->  m = 3/5 (5 + 1/3 x1 - x2) = 3 + 1/5 x1 - 3/5 x2
    h  = 3 - 1/3 (3 + 1/5 x1 - 3/5 x2) - 1/3 x1
       = 2 - 1/15 x1 - 5/15 x1 + 1/5 x2 = 2 - 2/5 x1 + 1/5 x2
    z  = 6 + 1/3 (3 + 1/5 x1 - 3/5 x2) - 2/3 x1
       = 7 + 1/15 x1 - 10/15 x1 - 1/5 x2 = 7 - 3/5 x1 - 1/5 x2

    m = 3 + 1/5 x1 - 3/5 x2
    h = 2 - 2/5 x1 + 1/5 x2
    -----------------------
    z = 7 - 3/5 x1 - 1/5 x2

  Represents feasible solution m=3, h=2, goal=7.
  Nothing can enter.  Optimal solution.
  (A code sketch of this pivoting loop appears after the worst-case discussion
  below.)

Finds optimal solution
  No non-global local optima when following vertices,
    since the polytope is convex and the goal formula is linear.
  Algorithm keeps increasing the goal formula's value
    ...well, not quite, it only never decreases it...
  When can a pivot not increase the goal?
  How to avoid cycling, with the goal never increasing?
    In practice, rare.
    Simple heuristics for choosing entering and/or leaving variables can avoid
    it.
      Example: pick the smallest index among tied variables.  (Proof omitted.)

How can the algorithm detect that there is no optimum?
  If an entering variable is unbounded.
  Example

    x2 = 5 + 2x3 -  x4 - 3x1
    x5 = 7       - 3x4 - 4x1
    ------------------------
    z  = 5 +  x3 -  x4 -  x1

  Only x3 can enter, but no equation bounds how far x3 can increase (its
  coefficients are nonnegative), so z is unbounded.

A bit more about worst case
  When can the worst case happen?
  Note that the algorithm depends on a couple of heuristics for pivoting.
  For Dantzig's original heuristics:
    Klee & Minty, 1972, showed an n-variable LP problem with a 2^n-vertex
    polytope such that every vertex is visited.
  For many other heuristics: similar examples.
  But it is unknown whether such examples exist for all pivoting heuristics.
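Here is a minimal code sketch (Python/NumPy) of the dictionary pivoting traced
above, using the largest-coefficient entering heuristic and the minimum-ratio
leaving rule.  It assumes the origin is feasible (b >= 0) and handles ties and
degeneracy naively, so it is not cycle-proof; the function name simplex_max and
the dictionary representation are illustrative choices, not part of the notes.

    import numpy as np

    def simplex_max(c, A, b, tol=1e-12):
        """Maximize c.x subject to A x <= b, x >= 0, assuming b >= 0
        (so the all-slack dictionary is feasible).  Dictionary form:
        each basic variable = const + D . (nonbasic variables)."""
        m, n = A.shape
        nonbasic = list(range(n))          # original variables start nonbasic (= 0)
        basic = list(range(n, n + m))      # slack variables start basic (= b)
        D = -A.astype(float)               # x_slack = b - A x
        const = b.astype(float)
        zrow = c.astype(float).copy()      # z = z0 + zrow . (nonbasic variables)
        z0 = 0.0
        while True:
            e = int(np.argmax(zrow))       # entering: largest goal coefficient
            if zrow[e] <= tol:
                break                      # nothing can enter: optimal
            # Leaving: minimum-ratio test over rows that bound the entering var.
            ratios = [const[i] / -D[i, e] if D[i, e] < -tol else np.inf
                      for i in range(m)]
            l = int(np.argmin(ratios))
            if ratios[l] == np.inf:
                raise ValueError("unbounded: nothing bounds the entering variable")
            # Pivot: solve row l for the entering variable...
            piv = D[l, e]
            D[l, e] = -1.0                 # old basic variable moves to column e
            D[l, :] /= -piv
            const[l] /= -piv
            # ...then substitute it into the other rows and the goal row.
            for i in range(m):
                if i != l:
                    coef = D[i, e]
                    D[i, e] = 0.0
                    D[i, :] += coef * D[l, :]
                    const[i] += coef * const[l]
            coef = zrow[e]
            zrow[e] = 0.0
            zrow += coef * D[l, :]
            z0 += coef * const[l]
            basic[l], nonbasic[e] = nonbasic[e], basic[l]
        x = np.zeros(n + m)                # nonbasic variables are 0;
        for i, v in enumerate(basic):      # basic variables equal their constants
            x[v] = const[i]
        return x[:n], z0

    # Hammer/mallet example: prints (approximately) [2. 3.] and 7.0,
    # matching the dictionaries traced above.
    x, best = simplex_max(np.array([2.0, 1.0]),
                          np.array([[3.0, 1.0], [1.0, 2.0]]),
                          np.array([9.0, 8.0]))
    print(x, best)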
What if the origin isn't feasible? -- Initialization
  Explain by example.  Original problem (max c.x, Ax <= b, x >= 0):

    max  x1 -  x2 +  x3
    2x1 -  x2 + 3x3 <=  4
    2x1 - 3x2 +  x3 <= -5
    -x1 +  x2 - 2x3 <= -1
    x1, x2, x3 >= 0

  Note that x1=x2=x3=0 doesn't satisfy the constraints.
  Create "auxiliary problem"  min x0  s.t.  Ax - x0 <= b,  x, x0 >= 0,
  with new variable x0.
    Auxiliary problem is feasible: set x0 large enough.
    Original problem feasible iff aux. problem feasible with x0 = 0.

    max -x0
    -x0 + 2x1 -  x2 + 3x3 <=  4
    -x0 + 2x1 - 3x2 +  x3 <= -5
    -x0 -  x1 +  x2 - 2x3 <= -1
    x0, x1, x2, x3 >= 0

    x4 =  4 + x0 - 2x1 +  x2 - 3x3
    x5 = -5 + x0 - 2x1 + 3x2 -  x3
    x6 = -1 + x0 +  x1 -  x2 + 2x3
    ------------------------------
    w  =    - x0

  Initial dictionary of the aux. problem is infeasible, but one pivot will
  always make it feasible.  (Proof omitted.)

  First pivot is special: x0 enters, and the "most infeasible" basic variable,
  x5 (most negative constant), leaves.

    x4 = 9        - 2x2 - 2x3 + x5
    x0 = 5 + 2x1  - 3x2 +  x3 + x5
    x6 = 4 + 3x1  - 4x2 + 3x3 + x5
    ------------------------------
    w  = -5 - 2x1 + 3x2 -  x3 - x5

  Only x2 can enter.  Choose x6 (tightest bound on x2) to leave.

    x4 = 7 - 1.5  x1 - 3.5  x3 + 0.5  x5 + 0.5  x6
    x0 = 2 - 0.25 x1 - 1.25 x3 + 0.25 x5 + 0.75 x6
    x2 = 1 + 0.75 x1 + 0.75 x3 + 0.25 x5 - 0.25 x6
    -----------------------------------------------
    w  = -2 + 0.25 x1 + 1.25 x3 - 0.25 x5 - 0.75 x6

  Choose x3 (largest coefficient) to enter.  Choose x0 (tightest bound) to
  leave.

    x4 = 1.4 + 2.8 x0 - 0.8 x1 - 0.2 x5 - 1.6 x6
    x3 = 1.6 - 0.8 x0 - 0.2 x1 + 0.2 x5 + 0.6 x6
    x2 = 2.2 - 0.6 x0 + 0.6 x1 + 0.4 x5 + 0.2 x6
    --------------------------------------------
    w  = - x0

  Done: the optimum of the aux. problem is x0 = 0, so the original problem is
  feasible, at x1=0, x2=2.2, x3=1.6.

  Return to the original problem.
    For an initial dictionary, we could plug those values into the original
    problem and simplify.
    Easier and equivalent: just modify the last dictionary of the aux. problem
    (drop x0, restore z).

    x4 = 1.4 - 0.8 x1 - 0.2 x5 - 1.6 x6
    x3 = 1.6 - 0.2 x1 + 0.2 x5 + 0.6 x6
    x2 = 2.2 + 0.6 x1 + 0.4 x5 + 0.2 x6
    -----------------------------------
    z  = ???

    z = x1 - x2 + x3
      = x1 - (2.2 + 0.6 x1 + 0.4 x5 + 0.2 x6) + (1.6 - 0.2 x1 + 0.2 x5 + 0.6 x6)
      = -0.6 + 0.2 x1 - 0.2 x5 + 0.4 x6

    x4 = 1.4 - 0.8 x1 - 0.2 x5 - 1.6 x6
    x3 = 1.6 - 0.2 x1 + 0.2 x5 + 0.6 x6
    x2 = 2.2 + 0.6 x1 + 0.4 x5 + 0.2 x6
    ------------------------------------
    z  = -0.6 + 0.2 x1 - 0.2 x5 + 0.4 x6

  And proceed as usual... (details omitted)

Duality

Integer LP
  Often want only integral results.
    Special case: 0/1 (binary) results only.
  Unfortunately, the problem is then NP-complete, as we'll see later.
    Demo Excel solver speed.
  How can we come close to a solution?
    Simplest: ignore integrality, then round.
    Problem?  The rounded solution might not be feasible.
      Often close enough in practice.
      If the feasible region is oddly shaped, the nearest feasible integral
      point may be distant and hard to find.

Non-linear Programming
  Generalize the constraints and the objective function to non-linear
  polynomials.
  Much harder, and simplex no longer applies.
  But one technique is to approximate the program linearly to narrow the
  feasible region, and repeat.
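A minimal sketch of that linearize-and-repeat idea (a crude form of sequential
linear programming), assuming Python with SciPy's linprog; the concave
objective f, its gradient, and the trust-region updating below are
illustrative choices for this sketch, not something from the notes.  It keeps
the hammer/mallet linear constraints but maximizes a non-linear "profit".

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical concave objective to maximize (diminishing returns on h, m).
    def f(x):
        return -(x[0] - 2.0) ** 2 - (x[1] - 1.0) ** 2

    def grad(x):
        return np.array([-2.0 * (x[0] - 2.0), -2.0 * (x[1] - 1.0)])

    A = np.array([[3.0, 1.0], [1.0, 2.0]])   # same steel/rubber constraints
    b = np.array([9.0, 8.0])

    x = np.zeros(2)       # start at a feasible point (the origin)
    radius = 1.0          # trust region: how far we trust the linearization
    for _ in range(100):
        g = grad(x)
        # Linearize f around x and solve the LP over the original constraints,
        # restricted to a box around x (and to x >= 0).
        lo = np.maximum(x - radius, 0.0)
        hi = x + radius
        res = linprog(-g, A_ub=A, b_ub=b, bounds=list(zip(lo, hi)))
        if not res.success:
            break
        if f(res.x) > f(x) + 1e-12:
            x = res.x             # improvement: accept the step
        else:
            radius *= 0.5         # no improvement: trust the linearization less
        if radius < 1e-6:
            break
    print(x, f(x))                # ends up near (2, 1) for this example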