Graphical Method of LP
OPERATIONS RESEARCH
College of Business & Economics, Bahir Dar University
Lecture Notes: Graphical Technique for Linear Optimization
Instructor: Anteneh E.
Date: March 2017
Overview

There are two major techniques for solving an LP problem: the simplex method and the interior point method. Of the two, we will focus in this course on the simplex method. However, for simple models with two decision variables, we can also use the graphic method to solve linear programs. Although the graphic method does not help solve real-life problems with several variables, it illustrates how the algebraic simplex method works to find a solution. Therefore, a brief study of the method sharpens your understanding of the underlying logic of the simplex method. This section introduces the graphic technique and illustrates its application to simple two-variable problems. Several useful concepts will also be discussed in the process.

The graphic method
To see how we can use the graphic method, let's consider a simple example. Suppose a company produces metal doors and windows using three plants (1, 2, 3). For a given week the company can operate plant 1 up to 4 hours, plant 2 up to 12 hours, and plant 3 up to 18 hours. Producing a metal door requires 1 hour of processing in plant 1, 0 hours in plant 2, and 3 hours in plant 3. Producing a window requires no processing in plant 1, but 2 hours of processing in each of the other plants. The profits per unit of door and window are 3 thousand and 5 thousand birr, respectively. The table below summarizes the relevant data.

Table: data for the problem

                             Hours per unit produced
    Plant                    Doors       Windows       Hours available
    Plant 1                  1           0             4
    Plant 2                  0           2             12
    Plant 3                  3           2             18
    Unit profit (thousands)  3           5
We first need to develop the mathematical LP model of the problem based on the data provided above. We proceed as follows.
Decision variables

The decision to be made regards the number of doors and windows to be produced. So, let

x1 = number of doors
x2 = number of windows
The objective function

The objective is to maximize the profit from the sale of the two products: doors and windows. Mathematically,

Maximize Z = 3x1 + 5x2
Constraints

We have constraints on the available machine hours for each of the three plants. These constraints are:

x1 ≤ 4            (Plant 1 capacity constraint)
2x2 ≤ 12          (Plant 2 capacity constraint)
3x1 + 2x2 ≤ 18    (Plant 3 capacity constraint)
Non-negativity Restrictions
Let’s impose a non-negativity requirement on the decision variables. Thus, we have the additional constraints:
x1 ≥ 0 and x2 ≥ 0

Model summary
The LP model for this problem is the following:

Maximize    Z = 3x1 + 5x2

Subject to

x1 ≤ 4            (Plant 1 capacity constraint)
2x2 ≤ 12          (Plant 2 capacity constraint)
3x1 + 2x2 ≤ 18    (Plant 3 capacity constraint)

and

x1 ≥ 0, x2 ≥ 0
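As a quick numerical check of this model (not part of the graphic method itself; it assumes Python with SciPy installed), the problem can be passed to scipy.optimize.linprog. Since linprog minimizes, the objective is negated:

    # Minimal sketch (assumes SciPy): solve the door/window model numerically.
    from scipy.optimize import linprog

    c = [-3, -5]                  # linprog minimizes, so maximize Z = 3x1 + 5x2 via -Z
    A_ub = [[1, 0],               # x1         <= 4   (plant 1)
            [0, 2],               #       2x2  <= 12  (plant 2)
            [3, 2]]               # 3x1 + 2x2  <= 18  (plant 3)
    b_ub = [4, 12, 18]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)        # expected: [2. 6.] and 36.0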
Solving the problem using the graphic method gives the results: x1 = 2, x2 = 6, Z = 36. This very small problem has only two decision variables and therefore only two dimensions, so a graphical procedure can be used to solve it. This procedure involves constructing a graph with x1 and x2 as the axes. The first step is to identify the values of (x1, x2) that are permitted by the restrictions. This is done by drawing each line that borders the range of permissible values for one restriction. To begin, note that the non-negativity restrictions x1 ≥ 0 and x2 ≥ 0 require (x1, x2) to lie on the positive side of the axes (including actually on either axis), i.e., in the first quadrant. Similarly, we can draw the graphs for the other constraints in the model. The resulting region of permissible values of (x1, x2) is called the feasible region. A sketch of this construction is given below.
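The following sketch (an illustration only, assuming matplotlib and NumPy are available) draws the three constraint boundary lines and shades the resulting feasible region for this example:

    # Sketch of the feasible region for the example (assumes matplotlib and NumPy).
    import numpy as np
    import matplotlib.pyplot as plt

    x1 = np.linspace(0, 8, 400)
    plt.axvline(4, color="C0", label="x1 = 4 (plant 1)")
    plt.axhline(6, color="C1", label="2x2 = 12 (plant 2)")
    plt.plot(x1, (18 - 3 * x1) / 2, color="C2", label="3x1 + 2x2 = 18 (plant 3)")

    # Feasible region: 0 <= x1 <= 4 and 0 <= x2 <= min(6, (18 - 3*x1)/2)
    x1f = np.linspace(0, 4, 200)
    plt.fill_between(x1f, 0, np.minimum(6, (18 - 3 * x1f) / 2), alpha=0.3, label="feasible region")

    plt.xlim(0, 8)
    plt.ylim(0, 10)
    plt.xlabel("x1 (doors)")
    plt.ylabel("x2 (windows)")
    plt.legend()
    plt.show()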
Steps of the graphic method

We can summarize the steps in using the graphic method to solve simple LP problems as follows.

Step 1: Draw the constraint boundaries
First, treat all constraints as equations. Then draw the line for each constraint equation, including the non-negativity constraint equations.

Definition: A constraint boundary is a line that forms the boundary of what is permitted by the corresponding constraint.
The constraint boundary equation for any constraint is obtained by replacing its ≤ or ≥ sign by an = sign. The form of a constraint boundary equation is

a1x1 + a2x2 = b    for functional constraints, and
xj = 0             for non-negativity constraints.
Each such equation defines a line in two-dimensional space. This line forms the constraint boundary for the corresponding constraint. When the constraint has an = sign, only the points on the constraint boundary satisfy the constraint. Next, take these lines as the boundaries of the respective constraints and shade the relevant regions as suggested by the direction of the inequalities of the constraints.
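As a small illustration of Step 1 (a sketch only; the helper name is made up for these notes), the axis intercepts of each boundary equation a1x1 + a2x2 = b give the points through which to draw its line:

    # Hypothetical helper: intercepts of a constraint boundary line a1*x1 + a2*x2 = b.
    def boundary_intercepts(a1, a2, b):
        """Axis intercepts of the boundary (None if the line is parallel to that axis)."""
        x1_intercept = b / a1 if a1 != 0 else None   # crosses the x1-axis at (b/a1, 0)
        x2_intercept = b / a2 if a2 != 0 else None   # crosses the x2-axis at (0, b/a2)
        return x1_intercept, x2_intercept

    print(boundary_intercepts(1, 0, 4))    # (4.0, None)  -> vertical line x1 = 4
    print(boundary_intercepts(0, 2, 12))   # (None, 6.0)  -> horizontal line x2 = 6
    print(boundary_intercepts(3, 2, 18))   # (6.0, 9.0)   -> line through (6, 0) and (0, 9)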
Step 2: Identify the feasible region

When we shade the relevant region for each of the constraints in the problem, we will find a region that satisfies all the constraints in the problem. This region is called the feasible region. It is from the points in this region that we can choose to optimize our objective function. In passing, note that it is possible for a problem to have no feasible region at all. The feasible region consists of feasible solutions. A feasible solution is defined as follows.

Definition: A feasible solution is a solution for which all the constraints are satisfied. An infeasible solution is a solution for which at least one constraint is violated.
In other words, a feasible solution is a solution that is possible given the restrictions imposed in the model. The feasible region consists of all possible (feasible) solutions to the model.

Definition: The feasible region is the collection of all feasible solutions.
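To make the definition concrete for this example, a point is feasible exactly when it satisfies every constraint, including non-negativity. A minimal sketch (the function name is made up for illustration):

    # Hypothetical helper: check feasibility for the door/window example.
    def is_feasible(x1, x2):
        """A feasible solution satisfies all constraints, including non-negativity."""
        return (x1 >= 0 and x2 >= 0            # non-negativity
                and x1 <= 4                    # plant 1
                and 2 * x2 <= 12               # plant 2
                and 3 * x1 + 2 * x2 <= 18)     # plant 3

    print(is_feasible(2, 6))   # True  -- lies in (on the boundary of) the feasible region
    print(is_feasible(5, 1))   # False -- violates x1 <= 4, so it is an infeasible solution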
The boundary of the feasible region contains just those feasible solutions that satisfy one or more of the constraint boundary equations.

Step 3: Determine the optimal point
Finally, we will choose the point in the feasible region that will give us the optimal value for the objective function.

Definition: An optimal solution is a feasible solution that has the most favorable value of the objective function. The most favorable value is the largest value if the objective function is to be maximized, whereas it is the smallest value if the objective function is to be minimized.
Determination of the optimal point can be done in two alternative ways.

Alternative 1: The first alternative involves the following two steps (a sketch follows this list).
1. Plot the objective function by assigning an arbitrary value to Z; then move this line in the direction that Z increases (or decreases, if the objective is minimization) to locate the optimal solution point. The optimal solution point in this case is the last point the objective function line touches as it leaves the feasible solution area.
2. Solve the simultaneous equations at the solution point to find the optimal solution values.
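As an illustration of Alternative 1 (a sketch only, assuming matplotlib and NumPy), successively larger values of Z shift the objective line 3x1 + 5x2 = Z outward; the last position at which it still touches the shaded feasible region is the optimal point (2, 6):

    # Sketch of Alternative 1: move the objective line until it last touches the region.
    import numpy as np
    import matplotlib.pyplot as plt

    x1 = np.linspace(0, 8, 400)
    x1f = np.linspace(0, 4, 200)
    plt.fill_between(x1f, 0, np.minimum(6, (18 - 3 * x1f) / 2), alpha=0.3, label="feasible region")

    for Z in (15, 25, 36):                   # iso-profit lines 3x1 + 5x2 = Z
        plt.plot(x1, (Z - 3 * x1) / 5, "--", label=f"3x1 + 5x2 = {Z}")

    plt.plot(2, 6, "ko")                     # last touch point: the optimal solution (2, 6)
    plt.xlim(0, 8)
    plt.ylim(0, 10)
    plt.xlabel("x1 (doors)")
    plt.ylabel("x2 (windows)")
    plt.legend()
    plt.show()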
Alternative 2: The second alternative involves the following two steps (a sketch is given after this discussion).
1. Solve the simultaneous equations at each corner point to find the solution values at each corner point.
2. Substitute these values into the objective function to find the set of values that results in the optimum Z value.

What the second alternative suggests is that in the process of finding an optimal solution for an LP problem, one can simply check the corner points of the feasible region. This is because of three fundamental properties that we will discuss below. The first property is that for any LP problem with a bounded feasible region, if there is an optimal solution, it must be a corner-point feasible solution. Second, there are a finite number of corner-point feasible solutions. The third property serves as an optimality check and says that for any linear programming problem that possesses at least one optimal solution, if a CPF solution has no adjacent CPF solutions that are better (as measured by Z), then it must be an optimal solution.
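Alternative 2 can be carried out mechanically: take each pair of constraint boundary equations, solve the resulting 2x2 system, keep only the feasible intersections (the CPF solutions), and evaluate Z at each. The sketch below (assuming NumPy; the helper names are made up) does this for the example:

    # Sketch of Alternative 2: enumerate corner points and evaluate Z at the feasible ones.
    import itertools
    import numpy as np

    # Constraint boundary equations a1*x1 + a2*x2 = b (functional constraints and the axes).
    boundaries = [((1, 0), 4),     # x1 = 4
                  ((0, 2), 12),    # 2x2 = 12
                  ((3, 2), 18),    # 3x1 + 2x2 = 18
                  ((1, 0), 0),     # x1 = 0
                  ((0, 1), 0)]     # x2 = 0

    def feasible(x1, x2, tol=1e-9):
        return (x1 >= -tol and x2 >= -tol and x1 <= 4 + tol
                and 2 * x2 <= 12 + tol and 3 * x1 + 2 * x2 <= 18 + tol)

    best = None
    for (a, b), (c, d) in itertools.combinations(boundaries, 2):
        A = np.array([a, c], dtype=float)
        if abs(np.linalg.det(A)) < 1e-12:        # parallel boundaries: no corner point
            continue
        x1, x2 = np.linalg.solve(A, np.array([b, d], dtype=float))
        if feasible(x1, x2):                      # corner-point feasible (CPF) solution
            Z = 3 * x1 + 5 * x2
            print(f"CPF solution ({x1:g}, {x2:g})  Z = {Z:g}")
            if best is None or Z > best[2]:
                best = (x1, x2, Z)

    print("optimal:", best)                       # expected: (2.0, 6.0, 36.0)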
Corner points

The points of intersection of the constraint equations in an LP problem are the corner-point solutions of the problem.

Definition: A corner point in two-dimensional space is a point that is obtained as the solution of the simultaneous equations of two constraint equations.
We can generalize this definition to models with more than two decision variables. For any linear programming problem with n decision variables, each corner-point solution lies at the intersection of n constraint boundaries; i.e., it is the simultaneous solution of a system of n constraint boundary equations. For instance, if the problem consists of three decision variables, then a corner point in this model is obtained as the simultaneous solution of three constraint equations.

Corner-point feasible (CPF) solutions
Note that a corner point may be either in the feasible region or outside of it (infeasible). The corner points that lie in the feasible region are called corner-point feasible (CPF) solutions. A corner-point feasible (CPF) solution is a feasible solution that does not lie on any line segment connecting two other feasible solutions. Formally, we define a CPF solution as follows.

Definition: A CPF solution is a solution that lies at a corner of the feasible region. It is a point that cannot be expressed as a convex combination of two other points in the feasible region.
As this definition implies, a feasible solution that does lie on a line segment connecting two other feasible solutions is not a CPF solution. In this connection it may be useful to give a formal definition of a convex combination.
Definition: A point x in a convex set C (the feasible region) is said to be an extreme point (corner-point feasible solution) of C if there are no two distinct points x1 and x2 in C such that x = αx1 + (1−α)x2 for some 0 < α < 1. A corner-point feasible solution is thus a point that does not lie strictly between any two other points of the feasible region.
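To make the definition concrete (a small sketch; the helper is made up for illustration): the point (2, 3) is feasible but not a CPF solution, because it is a convex combination of the feasible points (0, 3) and (4, 3); by contrast, the corner (2, 6) cannot be written as such a combination of two distinct points of the feasible region.

    # Hypothetical helper: form a convex combination alpha*p + (1 - alpha)*q.
    import numpy as np

    def convex_combination(p, q, alpha):
        """Return alpha*p + (1 - alpha)*q for 0 <= alpha <= 1."""
        return alpha * np.asarray(p, float) + (1 - alpha) * np.asarray(q, float)

    # (2, 3) lies midway between the feasible points (0, 3) and (4, 3),
    # so it is feasible but NOT a corner-point feasible solution.
    print(convex_combination((0, 3), (4, 3), 0.5))   # [2. 3.]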