Solving Linear Programming Problems
This chapter introduces fundamental methodologies for solving Linear Programming Problems (LPPs), essential for optimizing resource allocation. Mastery of graphical and simplex methods is crucial for CUET PG, as these techniques frequently appear in problem-solving and theoretical questions.
---
Chapter Contents
| # | Topic |
|---|-------|
| 1 | Graphical Method |
| 2 | Basic Feasible Solutions |
| 3 | Simplex Method |
---
We begin with the Graphical Method.
Part 1: Graphical Method
The graphical method provides an intuitive approach to solving Linear Programming Problems (LPPs) involving two decision variables. We employ this technique to visualize the feasible region defined by the constraints and to identify the optimal solution that maximizes or minimizes the objective function. This method is fundamental for understanding the geometric interpretation of LPPs, which is crucial for higher-dimensional solution techniques.
---
Core Concepts
1. Fundamentals of Linear Programming Problems (LPPs)
An LPP seeks to optimize (maximize or minimize) a linear objective function subject to a set of linear constraints. These constraints typically include non-negativity restrictions on the decision variables.
Objective Function: The linear function that we aim to maximize or minimize.
Decision Variables: The variables representing the quantities to be determined. In the graphical method, we typically consider two decision variables.
Constraints: A set of linear inequalities or equalities that limit the possible values of the decision variables. These include structural constraints and non-negativity constraints.
Feasible Region: The set of all points that satisfy all the constraints of the LPP. We construct this region by plotting each constraint as a line and identifying the area that satisfies the inequality.
Quick Example: Plotting a Feasible Region
Consider the constraints:
Step 1: Plot the lines corresponding to the equality of the constraints.
> For : Points are and .
> For : Points are and .
Step 2: Determine the feasible region for each inequality.
> For : Test , which is true. Shade below the line.
> For : Test , which is true. Shade below the line.
> For : Shade to the right of the y-axis.
> For : Shade above the x-axis.
Step 3: Identify the intersection of all shaded regions. This common region is the feasible region.
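The point-testing idea in Step 2 can be sketched in code. The constraints below (2x + 3y ≤ 12 and x + y ≤ 5) are illustrative, not the ones from the example above:

```python
# Sketch: testing whether points lie in a feasible region.
# The constraints 2x + 3y <= 12 and x + y <= 5 are illustrative.

def is_feasible(x, y):
    """Return True if (x, y) satisfies every constraint."""
    return (2 * x + 3 * y <= 12 and   # structural constraint 1
            x + y <= 5 and            # structural constraint 2
            x >= 0 and y >= 0)        # non-negativity

# The origin satisfies all constraints, so it lies in the feasible region.
print(is_feasible(0, 0))   # True
print(is_feasible(3, 3))   # False: 2*3 + 3*3 = 15 > 12
```

Testing a convenient point such as the origin is exactly the shading rule described in Step 2.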
:::question type="MCQ" question="Which of the following points is NOT part of the feasible region defined by , , and ?" options=["","","",""] answer="" hint="Plot the constraints and test each point against all inequalities." solution="Step 1: We evaluate each given point against the constraints: , , .
* (True)
* (False)
Since is false, does not satisfy all constraints and is therefore not in the feasible region.
* (True)
* (True)
* (True)
This point is in the feasible region.
* (True)
* (True)
* (True)
This point is in the feasible region.
* (True)
* (True)
* (True)
This point is in the feasible region.
Step 2: Identify the point that is not in the feasible region.
Therefore, the point is not part of the feasible region.
Answer: "
:::
2. The Corner Point Method
The fundamental theorem of LPP states that if an optimal solution to an LPP exists, it must occur at a corner point (or vertex) of the feasible region. The corner point method systematically evaluates the objective function at each vertex.
If an LPP has an optimal solution, it will always be found at one of the corner points of the feasible region. If the feasible region is bounded and non-empty, an optimal solution always exists.
Steps for Solving an LPP Graphically:
Worked Example (Standard Maximization - similar to PYQ 1, 6):
Maximize
Subject to:
Step 1: Plot the constraint lines.
> For : Points are and .
> For : Points are and .
Step 2: Identify the feasible region.
The feasible region is bounded by , , , and . It is the area satisfying all inequalities and .
Step 3: Determine the corner points.
>
> (intersection of and )
> (intersection of and )
> Subtracting from yields . Substituting into yields . So .
> (intersection of and )
Step 4: Evaluate at each corner point.
> At :
> At :
> At :
> At :
Step 5: Identify the maximum value.
> The maximum value of is at point .
Answer:
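The corner-point workflow above can be sketched for a hypothetical LPP (maximize Z = 3x + 2y subject to x + y ≤ 4, x + 3y ≤ 6, x, y ≥ 0; these numbers are illustrative), with the line intersection computed by Cramer's rule:

```python
# Sketch of the corner-point method on an illustrative LPP:
# maximize Z = 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.

def intersect(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None                      # parallel lines: no unique point
    return ((c1 * b2 - c2 * b1) / det,
            (a1 * c2 - a2 * c1) / det)

# Corner points: origin, axis intercepts, and the two lines' intersection.
corners = [(0, 0), (4, 0), (0, 2), intersect(1, 1, 4, 1, 3, 6)]

Z = lambda p: 3 * p[0] + 2 * p[1]
best = max(corners, key=Z)
print(best, Z(best))                     # (4, 0) 12
```

Evaluating Z at every corner and taking the maximum mirrors Steps 4 and 5 above.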
:::question type="MCQ" question="Maximize
* : Points
* : Points
* (axes)
Step 2: Identify the feasible region.
The feasible region is bounded by , , and . It's the area satisfying (below the line), (below , or ), and .
Step 3: Determine the corner points.
* (intersection of )
* (intersection of and )
* (intersection of and )
* Substitute into . So . Thus .
Step 4: Evaluate at each corner point.
* At :
* At :
* At :
Step 5: Identify the maximum value.
The maximum value of is at point .
Answer: "
:::
3. Minimization Problems
The graphical method for minimization problems follows the same steps as maximization. The key difference lies in identifying the optimal solution: we select the corner point that yields the smallest value of the objective function. Minimization problems often lead to unbounded feasible regions.
Worked Example (Standard Minimization - similar to PYQ 7):
Minimize
Subject to:
Step 1: Plot the constraint lines.
> For : Points and .
> For : Points and .
Step 2: Identify the feasible region.
The feasible region is bounded by , , , and . It is the area satisfying all inequalities and . This region is unbounded.
Step 3: Determine the corner points.
> (intersection of and )
> (intersection of and )
> Subtracting from yields . Substituting into yields . So .
> (intersection of and )
Step 4: Evaluate at each corner point.
> At :
> At :
> At :
Step 5: Identify the minimum value.
> The minimum value of is at point .
> Note: For unbounded feasible regions, we must ensure that the objective function does not go to (for minimization) or (for maximization) within the feasible region. We can check this by drawing an isocost line (e.g., ) and seeing if it can be moved further into the feasible region while decreasing . Here, moving it further to the left/down would reduce , but it would leave the feasible region.
Answer:
:::question type="NAT" question="Minimize
What is the minimum value of ?" answer="27" hint="Plot the lines, identify the unbounded feasible region, find corner points, and evaluate the objective function." solution="Step 1: Plot the constraint lines.
* : Points
* : Points
* (axes)
Step 2: Identify the feasible region.
This is an unbounded region above and to the right of the constraint lines.
Step 3: Determine the corner points.
* (intersection of and )
* (intersection of and )
* Multiply by 3: .
* Subtract from : .
* Substitute into : . So .
* (intersection of and )
Step 4: Evaluate at each corner point.
* At :
* At :
* At :
Step 5: Identify the minimum value.
The minimum value of is at point .
Answer: "
:::
---
Advanced Applications and Special Cases
1. Unbounded Solutions
An LPP has an unbounded solution if the feasible region is unbounded and the objective function can be increased (for maximization) or decreased (for minimization) indefinitely within this region without violating any constraints.
Worked Example (Unbounded Maximization - similar to PYQ 3, 5):
Maximize
Subject to:
Step 1: Plot the constraint lines.
> For : Points .
> For : Points .
Step 2: Identify the feasible region.
The feasible region is defined by , , and . This region is unbounded in the positive and directions.
Step 3: Determine the corner points.
> (intersection of and )
> (intersection of and , also and )
Step 4: Evaluate at the corner points.
> At :
> At :
Step 5: Check for unboundedness.
The feasible region extends infinitely in the direction where both and increase. Consider a point like in the feasible region: . Consider : . As we move further into the feasible region, continues to increase without bound.
Answer:
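The unboundedness check in Step 5 amounts to evaluating Z along a ray that stays inside the feasible region. A minimal sketch with an illustrative objective Z = 2x + 3y (not the original coefficients):

```python
# Demonstrating unboundedness: along the ray (t, t), the illustrative
# objective Z = 2x + 3y grows without bound, while for constraints of
# the form x + y >= 1 the whole ray remains feasible.

def Z(x, y):
    return 2 * x + 3 * y   # illustrative objective

values = [Z(t, t) for t in (1, 10, 100, 1000)]
print(values)   # [5, 50, 500, 5000] -> Z increases indefinitely
```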
:::question type="MCQ" question="Consider the LPP:
Maximize
Subject to:
Which of the following describes the solution?" options=["Unique optimal solution","Multiple optimal solutions","Unbounded solution","No feasible solution"] answer="Unbounded solution" hint="Plot the feasible region. If it is unbounded and the objective function's direction of increase points into the unbounded region, the solution is unbounded." solution="Step 1: Plot the constraint lines.
* : Points , etc.
* : A vertical line at .
* : The x-axis.
Step 2: Identify the feasible region.
* : Below or on the line .
* : To the right of or on the line .
* : Above or on the x-axis.
The feasible region is unbounded, extending to infinity in the positive and directions, starting from and .
Step 3: Determine the corner points.
* (intersection of and )
* (intersection of and )
Step 4: Evaluate at the corner points.
* At :
* At :
Step 5: Check for unboundedness.
The feasible region extends indefinitely towards positive and . For example, take point (which is feasible: , , ). . As and increase, will also increase indefinitely.
Answer: "
:::
2. No Feasible Solution (Infeasible LPP)
An LPP is infeasible if there is no point that satisfies all the constraints simultaneously. Graphically, this means the feasible region is empty; the shaded regions for the constraints do not overlap.
Worked Example (Infeasible LPP - similar to PYQ 2):
Maximize
Subject to:
Step 1: Plot the constraint lines.
> For : Points .
> For : Points .
Step 2: Identify the feasible region.
For , we shade below the line .
For , we shade above the line .
Since these two regions are on opposite sides of parallel lines, they do not overlap. There is no common region that satisfies both and .
Answer:
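Infeasibility from opposing parallel constraints can be sanity-checked numerically. The constraints x + y ≤ 2 and x + y ≥ 5 below are illustrative:

```python
# Sketch: no point can satisfy x + y <= 2 and x + y >= 5 simultaneously,
# so the feasible region is empty. (Illustrative coefficients.)

def feasible(x, y):
    return x + y <= 2 and x + y >= 5 and x >= 0 and y >= 0

# A coarse grid search finds no feasible point.
points = [(x, y) for x in range(6) for y in range(6) if feasible(x, y)]
print(points)   # []
```

Algebraically this is immediate: x + y cannot be at most 2 and at least 5 at the same time.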
:::question type="MCQ" question="The LPP:
Maximize
Subject to:
has:" options=["A unique optimal solution","Multiple optimal solutions","An unbounded solution","No feasible solution"] answer="No feasible solution" hint="Check if there is any region that satisfies all constraints simultaneously." solution="Step 1: Plot the constraint lines.
* : A vertical line at .
* : A vertical line at .
* : The x-axis.
Step 2: Identify the feasible region.
* : To the left of or on the line .
* : To the right of or on the line .
* : Above or on the x-axis.
There is no value that can be simultaneously less than or equal to AND greater than or equal to . Therefore, the constraints and are contradictory.
There is no common region that satisfies all constraints.
Answer: "
:::
3. Multiple Optimal Solutions
Multiple optimal solutions occur when the objective function is parallel to one of the binding constraints (a constraint that forms part of the boundary of the feasible region at the optimal solution). In this case, every point on the line segment formed by that binding constraint within the feasible region yields the same optimal value.
Worked Example (Multiple Optimal Solutions - similar to PYQ 4):
Maximize
Subject to:
Step 1: Plot the constraint lines.
> For : Points .
> For : Points .
Step 2: Identify the feasible region.
The feasible region is bounded by , , , and .
Step 3: Determine the corner points.
>
> (intersection of and )
> (intersection of and )
> Subtracting from yields . Substituting into yields . So .
> (intersection of and )
Step 4: Evaluate at each corner point.
> At :
> At :
> At :
> At :
Step 5: Identify the maximum value.
We observe that both corner points and yield the maximum value of . Since the objective function has a slope of , which is the same as the slope of the constraint (also ), the entire line segment represents optimal solutions.
Answer:
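The slope-comparison test for multiple optima can be sketched as follows; the coefficients (objective Z = 2x + 4y against constraints x + 2y ≤ c₁ and 3x + y ≤ c₂) are illustrative:

```python
# Sketch: flagging candidates for multiple optimal solutions by comparing
# the objective's slope with each constraint boundary's slope.
from fractions import Fraction

def slope(a, b):
    """Slope of the line a*x + b*y = c (None if the line is vertical)."""
    return None if b == 0 else Fraction(-a, b)

objective = (2, 4)                 # Z = 2x + 4y, slope -1/2
constraints = [(1, 2), (3, 1)]     # x + 2y <= c1 (slope -1/2), 3x + y <= c2

parallel = [ab for ab in constraints if slope(*ab) == slope(*objective)]
print(parallel)   # [(1, 2)]: this constraint can yield multiple optima
```

`Fraction` avoids the floating-point pitfalls of comparing slopes like 1/3 exactly.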
:::question type="MSQ" question="Which of the following LPPs can exhibit multiple optimal solutions?
* Objective function slope: . Slope is .
* Constraint . Slope is .
Since the objective function's slope is identical to the slope of a constraint, this LPP is a candidate for multiple optimal solutions. Upon plotting, the feasible region's corner points are , , , .
* At , .
* At , .
* At , . (Intersection of and )
* At , .
Both and yield . Thus, all points on the line segment connecting and are optimal. This LPP has multiple optimal solutions.
* Objective function slope: . Slope is .
* Constraint : Slope is .
* Constraint : Slope is .
None of the constraint slopes match the objective function slope. This LPP will have a unique optimal solution. (Intersection , ).
* Objective function slope: . Slope is .
* Constraint : Slope is .
* Constraint : Slope is .
The objective function's slope matches the constraint. This is a candidate for multiple optimal solutions.
Corner points are , , .
* At , .
* At , . (Intersection of and )
* At , .
Both and yield . Thus, all points on the line segment connecting and are optimal. This LPP has multiple optimal solutions.
* Objective function slope: . Slope is .
* Constraint : Slope is .
* Constraint : Vertical line, undefined slope.
No matching slopes. This LPP will have a unique optimal solution. (Corner points: , , , . Max at ).
Step 2: Identify LPPs with multiple optimal solutions.
Based on our analysis, both LPP 1 and LPP 3 can exhibit multiple optimal solutions.
Answer: "
:::
4. Redundant Constraints
A constraint is considered redundant if its removal does not alter the feasible region of the LPP. Such a constraint never binds at the optimal solution and does not affect the set of feasible points. Consequently, redundant constraints do not affect the optimal solution; graphically, they can be identified because their boundary lines do not cut into the feasible region formed by the other constraints.
Worked Example (Redundant Constraint - similar to PYQ 7):
Minimize
Subject to:
Consider adding a redundant constraint: .
Step 1: Plot the primary constraint lines.
> For : Points .
> For : Points .
Step 2: Identify the feasible region (without ).
The corner points are , , .
(intersection of and )
Step 3: Evaluate at these corner points.
> At :
> At :
> At :
The minimum is at .
Step 4: Now, consider the constraint .
The line passes through and .
The original feasible region (above and ) already satisfies because all points in the region have . Since , any point satisfying also satisfies .
Thus, the constraint is redundant. It does not cut off any part of the original feasible region.
Answer:
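A quick numerical sanity check for redundancy: sample points from the region defined by the other constraints and verify that none violates the candidate. The constraints below (x + 2y ≥ 4 and 3x + y ≥ 6, with candidate x + y ≥ 1) are illustrative:

```python
# Sketch: a sampling check for redundancy. Every sampled point satisfying
# x + 2y >= 4 and 3x + y >= 6 (with x, y >= 0) is tested against the
# candidate constraint x + y >= 1. (Illustrative coefficients.)

def base_region(x, y):
    return x + 2 * y >= 4 and 3 * x + y >= 6 and x >= 0 and y >= 0

candidate = lambda x, y: x + y >= 1

grid = [(x / 2, y / 2) for x in range(20) for y in range(20)]
violations = [p for p in grid if base_region(*p) and not candidate(*p)]
print(len(violations))   # 0 -> the candidate cuts off nothing on this sample
```

A sampling check is only evidence, not a proof; here the implication also follows algebraically, since x + 2y ≥ 4 with x, y ≥ 0 forces x + y ≥ 2 ≥ 1.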
:::question type="MCQ" question="For the LPP:
Minimize
Subject to:
Which of the following constraints is redundant?" options=["","","",""] answer="" hint="A constraint is redundant if its removal does not change the feasible region. Visually, it means the constraint's boundary line does not 'cut into' the existing feasible region." solution="Step 1: Plot the binding constraints to determine the initial feasible region.
* : Points
* : Points
* (axes)
Step 2: Identify the feasible region's corner points.
* (intersection of and )
* (intersection of and )
* Subtracting from yields .
* Substitute into . So .
* (intersection of and )
The feasible region is unbounded, starting from these corner points.
Step 3: Test each option for redundancy.
* : This is a non-negativity constraint. Removing it would extend the feasible region into negative values, so it's not redundant.
* : This is a non-negativity constraint. Removing it would extend the feasible region into negative values, so it's not redundant.
* : The line passes through and . Our feasible region is defined by and .
Any point satisfying will also satisfy (since ).
Any point satisfying will also have (e.g., if , ; if , ).
Thus, the region completely encompasses the existing feasible region. Adding does not restrict the feasible region further. This constraint is redundant.
* : The line . Our feasible region includes points like and . The point has , which is less than . If we add , it would cut off parts of the feasible region (e.g., the segment from to on ). So, this is not redundant.
Answer: "
:::
---
Problem-Solving Strategies
- Sketch Quickly: For MCQs, a rough sketch of the feasible region and objective function's slope can often quickly eliminate options or point to the correct type of solution (bounded, unbounded, infeasible). Precise plotting is only needed for exact corner points.
- Isoprofit/Isocost Line (Alternative to Corner Point): Draw a line representing an arbitrary value of the objective function (e.g., ). Then, slide this line parallel to itself in the direction of increasing (for maximization) or decreasing (for minimization). The last point(s) it touches in the feasible region before leaving it will be the optimal solution. This can be faster for identifying unboundedness or multiple solutions.
- Focus on Binding Constraints: The optimal solution (if unique and bounded) always occurs at the intersection of two binding constraints. For multiple optimal solutions, the objective function is parallel to one binding constraint.
- Check All Corner Points: Even if you suspect an answer early, always evaluate the objective function at all identified corner points of the feasible region to avoid errors, especially with complex regions or tricky slopes.
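These strategies combine into a compact vertex-enumeration sketch: intersect every pair of constraint boundaries (axes included), keep the feasible intersections, and evaluate Z at each. The LPP below is illustrative:

```python
# Sketch: maximize Z = 3x + 2y subject to x + y <= 4, x + 3y <= 6,
# x, y >= 0 (illustrative). Constraints are stored as a*x + b*y <= c;
# the axes appear as -x <= 0 and -y <= 0.
from itertools import combinations

constraints = [(1, 1, 4), (1, 3, 6), (-1, 0, 0), (0, -1, 0)]

def feasible(x, y, tol=1e-9):
    return all(a * x + b * y <= c + tol for a, b, c in constraints)

vertices = set()
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue                                  # parallel boundaries
    x = (c1 * b2 - c2 * b1) / det                 # Cramer's rule
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        vertices.add((round(x, 9), round(y, 9)))

Z = lambda v: 3 * v[0] + 2 * v[1]
best = max(vertices, key=Z)
print(sorted(vertices))
print(best, Z(best))                              # (4.0, 0.0) 12.0
```

This automates Steps 1 through 5 of the corner-point method for any two-variable LPP with a bounded optimum.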
---
Common Mistakes
❌ Incorrect Shading: Students often shade the wrong side of a constraint line, especially with inequalities or when variables are negative (e.g., ).
✅ Correct Approach: Always test a point (like if it's not on the line) in the inequality. If it satisfies the inequality, shade the region containing that point. Otherwise, shade the opposite side. For , shade right of y-axis; for , shade above x-axis.
❌ Missing Corner Points: Failing to identify all vertices of the feasible region, particularly those on the axes or intersections of multiple lines.
✅ Correct Approach: Systematically list all intersection points of the constraint lines with each other and with the axes. Then, verify which of these points actually lie on the boundary of the feasible region.
❌ Misinterpreting Unbounded Regions: Assuming an unbounded feasible region always implies an unbounded solution.
✅ Correct Approach: An unbounded feasible region can still have a bounded optimal solution (especially in minimization problems). For a maximum/minimum to be unbounded, the objective function must be able to increase/decrease indefinitely within the feasible region. Always check the values of at the corner points and consider the direction of the objective function's optimization.
❌ Confusing Maximization and Minimization: Selecting the minimum value for a maximization problem or vice-versa.
✅ Correct Approach: Clearly note whether the problem is maximization or minimization and select the appropriate optimal value from the corner point evaluations.
---
Practice Questions
:::question type="MCQ" question="Maximize subject to:
" options=["","","",""] answer="" hint="Plot the feasible region, identify all corner points, and evaluate the objective function at each." solution="Step 1: Plot the constraint lines:
* : Points
* : Points
* : A vertical line at
* (axes)
Step 2: Identify the feasible region and its corner points:
The feasible region is bounded by , , , , and .
*
* (intersection of and )
* (intersection of and )
* (intersection of and )
* Subtract from .
* Substitute into . So .
* (intersection of and )
Step 3: Evaluate at each corner point:
* At :
* At :
* At :
* At :
* At :
Step 4: Determine the maximum value.
The maximum value of is at point .
Answer: "
:::
:::question type="NAT" question="Minimize subject to:
What is the minimum value of ?" answer="18" hint="This is a minimization problem with an unbounded feasible region. Find the corner points and evaluate ." solution="Step 1: Plot the constraint lines:
* : Points
* : Points
* (axes)
Step 2: Identify the feasible region and its corner points:
The feasible region is unbounded, defined by , , and .
* (intersection of and )
* (intersection of and )
* Subtract from .
* Substitute into . So .
* (intersection of and )
Step 3: Evaluate at each corner point:
* At :
* At :
* At :
Step 4: Determine the minimum value.
The minimum value of is at point .
Answer: "
:::
:::question type="MCQ" question="The feasible region for an LPP is defined by:
Which of the following points is a corner point of this feasible region?" options=["","","",""] answer="" hint="Plot the lines and identify their intersections. Then, check if these intersections satisfy all constraints to be a corner point." solution="Step 1: Plot the constraint lines:
* : Points
* : A vertical line at
* : A horizontal line at
* (axes)
Step 2: Identify the feasible region.
The feasible region is bounded by , , and .
The non-negativity constraints are implied by .
Step 3: Find the intersection points of these lines:
* Intersection of and : .
* Check: (True). So is a corner point.
* Intersection of and :
* Substitute . So .
* Check: (True). So is a corner point.
* Intersection of and :
* Substitute . So .
* Check: (True). So is a corner point.
Step 4: Compare with the given options.
* : Violates . Not a corner point.
* : Violates . Not a corner point.
* : This is one of the corner points we identified.
* : Violates (). Not a corner point.
Answer: "
:::
:::question type="MCQ" question="An LPP has the objective function Maximize . The feasible region is a polygon with corner points , , , . The optimal value of is:" options=["","","",""] answer="" hint="Evaluate the objective function at each given corner point and select the maximum." solution="Step 1: List the given corner points.
*
*
*
*
Step 2: Evaluate at each corner point.
* At :
* At :
* At :
* At :
Step 3: Determine the maximum value.
The maximum value of is at point .
Answer: "
:::
:::question type="MSQ" question="Consider the LPP:
Maximize
Subject to:
Which of the following statements are correct?
* : Points
* : Points
* (axes)
Step 2: Identify the feasible region.
* : Above or on the line .
* : Above or on the line .
* : In the first quadrant.
The feasible region starts from the intersection of and .
Add the two equations: .
Substitute into .
So, is a corner point.
The feasible region is bounded by the lines (from -axis to ), (from to -axis), and then extends unboundedly outwards.
The corner points are , , .
Step 3: Evaluate the statements.
* Graphically, the region extends indefinitely in the positive and directions. For example, satisfies (True), (True), (True). This point is in the feasible region. As increase, the region continues. Thus, the feasible region is unbounded. (Statement 1 is incorrect).
* Objective function: .
* At corner point : .
* At corner point : .
* At corner point : .
* Consider a point further in the unbounded region, e.g., . .
* Since the objective function can increase indefinitely as and increase within the feasible region, the LPP has an unbounded solution. (Statement 2 is correct).
* Our identified corner points are , , .
* is on , but , so it's not feasible.
* is on , but , so it's not feasible.
* Thus, statement 3 is incorrect.
* For an unbounded maximization problem, the maximum value tends to infinity. In such cases, a finite optimal solution does not exist. (Statement 4 is correct).
Answer: "
:::
---
Summary
| # | Formula/Concept | Expression |
|---|----------------|------------|
| 1 | LPP Standard Form | |
| 2 | Corner Point Theorem | Optimal solution, if it exists, occurs at a vertex of the feasible region. |
| 3 | Feasible Region | The set of points satisfying all constraints. |
| 4 | Types of Solutions | Unique Optimal: one corner point yields the best value. Multiple Optimal: objective function parallel to a binding constraint; all points on a segment are optimal. Unbounded: feasible region is open and the objective can increase/decrease infinitely. No Feasible Solution: constraints are contradictory; the feasible region is empty. |
---
What's Next?
This topic connects to:
- Simplex Method: A more general algebraic algorithm for solving LPPs with more than two variables, which extends the principles observed in the graphical method.
- Duality in LPP: Understanding how every LPP (primal problem) has a corresponding dual problem, and how their optimal solutions are related. This is crucial for theoretical understanding and advanced solution techniques.
- Sensitivity Analysis: Investigating how changes in the objective function coefficients or constraint values affect the optimal solution.
---
Proceeding to Basic Feasible Solutions.
---
Part 2: Basic Feasible Solutions
In linear programming, we seek to optimize an objective function subject to linear constraints. The concept of a Basic Feasible Solution (BFS) is fundamental to solving Linear Programming Problems (LPPs), particularly through the Simplex method. We define and explore the properties of these solutions, which correspond to the corner points of the feasible region.
---
Core Concepts
1. Feasible Solution
A feasible solution to an LPP is a set of values for the decision variables that satisfies all the problem's constraints simultaneously, including the non-negativity restrictions. The collection of all feasible solutions forms the feasible region.
A vector is a feasible solution if it satisfies all constraints (or for standard form) and .
Quick Example:
Consider the constraints:
Is a feasible solution?
Step 1: Check constraints
Step 2: Check non-negativity
Answer: Yes, is a feasible solution.
:::question type="MCQ" question="Given the constraints: , , . Which of the following points is a feasible solution?" options=["","","",""] answer="" hint="Substitute each point into all constraints and non-negativity conditions." solution="Let's check each option:
For :
Thus, is a feasible solution.
For : violates . Not feasible.
For : . Not feasible.
For : . Not feasible."
:::
---
2. Basic Solution
For a system of m linear equations in n variables, A x = b, where A is an m × n matrix of rank m (assuming n > m), a basic solution is obtained by setting n − m variables to zero and solving for the remaining m variables. These m variables are called basic variables, and the n − m variables set to zero are called non-basic variables.
Given A x = b with rank(A) = m and n > m, a basic solution is formed by:
- Choosing m linearly independent columns of A. The corresponding m variables are basic variables.
- Setting the remaining n − m variables (non-basic variables) to zero.
- Solving the resulting m × m system for the basic variables.
Quick Example:
Consider the system:
Here . We need to set variables to zero.
Let (non-basic variables). The system becomes:
Step 1: Solve the reduced system
Multiply the second equation by 2:
>
Subtract the first equation from this:
>
Step 2: Substitute back into
>
Answer: A basic solution is
Here, are basic variables and are non-basic variables.
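The substitute-and-solve procedure can be sketched on a hypothetical 2 × 4 system (the coefficients below are illustrative, not the ones from the example above):

```python
# Sketch: one basic solution of the illustrative system
#   x1 +  x2 + x3      = 6
#  2x1 +  x2      + x4 = 8
# obtained by setting the non-basic variables x3 = x4 = 0 and solving
# the remaining 2x2 system with Cramer's rule.

def solve_2x2(a1, b1, c1, a2, b2, c2):
    det = a1 * b2 - a2 * b1
    assert det != 0, "chosen columns are linearly dependent: no basis"
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

x1, x2 = solve_2x2(1, 1, 6, 2, 1, 8)    # basic variables
x3, x4 = 0, 0                            # non-basic variables
print((x1, x2, x3, x4))                  # (2.0, 4.0, 0, 0)
```

If the determinant were zero, the chosen columns could not form a basis, which is exactly the singular-submatrix caveat discussed below.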
:::question type="MCQ" question="For the system:
If and are chosen as basic variables, what is the value of ?" options=["3","2","1","4"] answer="3" hint="Set non-basic variables to zero and solve the resulting system for and ." solution="The system has equations and variables. If are basic variables, then are non-basic variables, so we set and .
The system becomes:
Add equation (1) and (2):
Substitute into (1):
The basic solution is . Thus, .
Answer: \boxed{3}"
:::
---
3. Number of Basic Solutions
The maximum number of basic solutions for a system of m equations in n variables (n > m) is given by the combination formula C(n, m) = n! / (m! (n − m)!). This represents the number of ways to choose m variables out of n to be basic variables.
Quick Example:
For a system with equations and variables, what is the maximum number of basic solutions?
Step 1: Identify and
>
>
Step 2: Apply the formula
>
Answer: There can be a maximum of 10 basic solutions.
❌ Assuming all combinations will result in a unique basic solution.
✅ A basic solution only exists if the submatrix corresponding to the chosen basic variables is non-singular (i.e., its determinant is non-zero). If it is singular, then the chosen variables cannot form a basis, and no unique solution exists for that particular combination.
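The count itself is a single call to Python's `math.comb`. The (m, n) pairs below are illustrative, including the 4-constraint, 6-variable case treated in this section:

```python
# Computing C(n, m), the maximum number of basic solutions.
import math

print(math.comb(5, 2))   # 10: a system with m = 2 equations, n = 5 variables
print(math.comb(6, 4))   # 15: m = 4 equations, n = 6 variables
```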
:::question type="MCQ" question="A linear programming problem has 4 constraints and 6 variables. Assuming the rank of the constraint matrix is equal to the number of constraints, what is the maximum possible number of basic solutions?" options=["15","24","30","60"] answer="15" hint="Use the combination formula C(n, m), where n is the number of variables and m is the number of constraints." solution="We are given m = 4 (number of constraints) and n = 6 (number of variables).
The maximum number of basic solutions is C(6, 4) = 6! / (4! · 2!) = 15.
Therefore, the maximum possible number of basic solutions is 15.
Answer: \boxed{15}"
:::
---
4. Basic Feasible Solution (BFS)
A basic solution is a Basic Feasible Solution (BFS) if all the basic variables satisfy the non-negativity constraint (i.e., they are ). BFS are crucial because they correspond to the corner points (vertices) of the feasible region. For any LPP with a finite optimal solution, at least one optimal solution occurs at a BFS.
A basic solution for is a Basic Feasible Solution if all its components .
Quick Example:
Consider the system:
From a previous example, we found a basic solution by setting :
Step 1: Check non-negativity of all components
>
>
>
>
Answer: Since all components are non-negative, this is a Basic Feasible Solution.
:::question type="MCQ" question="Consider the system:
with for all . Which of the following is a Basic Feasible Solution?" options=["","","",""] answer="" hint="For each option, first verify if it's a basic solution (by identifying non-basic variables and checking consistency), then check non-negativity." solution="Let's analyze each option:
This is not a solution.
This is not a solution.
All components are non-negative. This is a basic solution and a BFS.
Therefore, is a Basic Feasible Solution.
Answer: \boxed{(0, 0, 3, 2)}"
:::
---
5. Degenerate and Non-Degenerate BFS
A Basic Feasible Solution can be further classified based on the values of its basic variables. This distinction is important for understanding the behavior of algorithms like the Simplex method.
A Basic Feasible Solution is degenerate if at least one of its basic variables has a value of zero.
A Basic Feasible Solution is non-degenerate if all its basic variables have positive values (i.e., strictly greater than zero).
Quick Example:
Consider the system:
Let's find a basic solution by setting .
The system becomes:
The basic solution is
Here, are basic variables, and are non-basic. Since all components are , this is a BFS.
Are any basic variables zero? No, .
Therefore, is a non-degenerate BFS.
Now, let's find another basic solution by setting .
The system becomes:
Substitute into the first equation:
The basic solution is
Here, are basic variables, and are non-basic. All components are , so this is a BFS.
Are any basic variables zero? Yes, .
Therefore, is a degenerate BFS.
Degeneracy means that a basic feasible solution has fewer than m strictly positive variables. This can lead to issues like cycling in the Simplex method, where the algorithm might repeatedly visit the same degenerate BFS without improving the objective function.
:::question type="MCQ" question="For the system:
with . Which of the following basic feasible solutions is degenerate?" options=["","","",""] answer="" hint="Identify the basic variables for each BFS and check if any of them are zero." solution="The system has equations and variables. For a basic solution, variables are non-basic (set to 0), and variables are basic.
Therefore, is a degenerate BFS.
Answer: \boxed{(2, 0, 0, 0)}"
:::
---
Advanced Applications
We now consider a more complex system to find all basic feasible solutions and identify their properties.
Worked Example:
Find all basic feasible solutions for the system:
Here . The maximum number of basic solutions is
We need to set variables to zero.
Case 1: (non-basic)
BFS: . Basic variables . Both positive. Non-degenerate BFS.
Case 2: (non-basic)
Substitute :
This solution is not feasible because . Not a BFS.
Case 3: (non-basic)
Substitute :
BFS: . Basic variables . Both positive. Non-degenerate BFS.
Case 4: (non-basic)
Substitute :
This solution is not feasible because . Not a BFS.
Case 5: (non-basic)
Substitute :
BFS: . Basic variables . Both positive. Non-degenerate BFS.
Case 6: (non-basic)
Multiply the first equation by 2:
Subtract this from the second equation:
This solution is not feasible because . Not a BFS.
Answer: The basic feasible solutions are , , and . All are non-degenerate.
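The case-by-case analysis above can be automated. The sketch below enumerates all C(4, 2) = 6 bases of a hypothetical system (the numbers differ from the worked example) and classifies each candidate; note how three different bases produce the same degenerate point:

```python
# Sketch: enumerating all basic solutions of the illustrative system
#   x1 +  x2 + x3      = 4
#   x1 + 2x2      + x4 = 4,   all xi >= 0.
from itertools import combinations

A = [[1, 1, 1, 0],
     [1, 2, 0, 1]]
b = [4, 4]
n, m = 4, 2

results = []
for i, j in combinations(range(n), m):
    # 2x2 basis matrix formed by columns i and j of A.
    det = A[0][i] * A[1][j] - A[1][i] * A[0][j]
    if det == 0:
        continue                        # singular basis: no basic solution
    xi = (b[0] * A[1][j] - b[1] * A[0][j]) / det   # Cramer's rule
    xj = (A[0][i] * b[1] - A[1][i] * b[0]) / det
    x = [0.0] * n
    x[i], x[j] = xi, xj
    if min(x) < 0:
        label = "basic but infeasible"
    elif min(xi, xj) == 0:
        label = "degenerate BFS"        # a basic variable equals zero
    else:
        label = "non-degenerate BFS"
    results.append((tuple(x), label))

for x, label in results:
    print(x, label)
```

Here the degenerate point (4, 0, 0, 0) arises from three distinct bases, which is exactly the situation that can cause cycling in the Simplex method.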
:::question type="NAT" question="Consider the system:
Set non-basic variables and :
Add equation (1) and (2):
Substitute into equation (1):
The basic solution is . All components are non-negative, so this is a BFS.
We need to calculate :
Answer: \boxed{5}"
:::
---
Problem-Solving Strategies
To efficiently identify a Basic Feasible Solution (BFS) in multiple-choice questions:
- Check Feasibility First: Ensure all components of the given solution are non-negative ($x_j \ge 0$) and satisfy all original constraints. If not, it is not feasible.
- Check Basicity: Count the number of non-zero variables. If this count is at most $m$ (the number of equations) and the columns corresponding to these variables are linearly independent, it is a basic solution. Alternatively, identify the variables that are zero (non-basic variables), substitute these zeros into the original equations, and solve for the remaining variables. If the given values match and the basis submatrix is non-singular, it is a basic solution.
- Combine: If both feasibility and basicity conditions are met, it is a BFS.
When a basic solution is found, immediately check if any of the basic variables are zero. If so, the BFS is degenerate. This is a common trap, as non-basic variables are always zero by definition; degeneracy refers specifically to basic variables.
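The feasibility-then-basicity-then-degeneracy checklist above can be sketched as a classifier. This is a minimal sketch under my own assumptions: the example system and the `classify`/`rank` helper names are mine, and a solution with fewer than $m$ non-zero entries is reported as a degenerate BFS (assuming its support extends to a valid basis):

```python
from fractions import Fraction

def rank(M):
    """Rank of a small rational matrix via Gaussian elimination."""
    M = [[Fraction(v) for v in row] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        pivot = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def classify(A, b, x):
    """Classify a candidate solution x of Ax = b, x >= 0."""
    m = len(A)
    # Step 1: feasibility -- non-negativity and Ax = b.
    if any(v < 0 for v in x):
        return "not feasible"
    if any(sum(A[i][j] * x[j] for j in range(len(x))) != b[i] for i in range(m)):
        return "not feasible"
    # Step 2: basicity -- at most m non-zero variables with independent columns.
    support = [j for j, v in enumerate(x) if v != 0]
    if len(support) > m:
        return "feasible, not basic"
    cols = [[A[i][j] for j in support] for i in range(m)]
    if support and rank(cols) < len(support):
        return "feasible, not basic"
    # Step 3: degeneracy -- fewer than m positive variables.
    return "degenerate BFS" if len(support) < m else "non-degenerate BFS"

# Illustrative system (my own example): x1 + x2 + x3 = 4,  x1 + 2x2 + x4 = 6
A = [[1, 1, 1, 0], [1, 2, 0, 1]]
b = [4, 6]
print(classify(A, b, [2, 2, 0, 0]))   # basic, all basic variables positive
print(classify(A, b, [1, 1, 2, 3]))   # satisfies Ax = b but has 4 > m non-zeros
```

Note how the second point feasibly satisfies every constraint yet fails basicity, exactly the trap flagged in Common Mistakes below.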
---
Common Mistakes
❌ A common error is to assume any solution satisfying $Ax = b$ and $x \ge 0$ is a BFS.
✅ A solution must also satisfy the basicity condition (non-zero variables corresponding to linearly independent columns, with $n - m$ variables explicitly set to zero) to be a basic solution. Only then can it be classified as a BFS if all components are non-negative.
❌ Simply calculating $\binom{n}{m}$ and assuming all combinations yield a valid basic solution.
✅ $\binom{n}{m}$ gives the maximum possible number. A particular choice of $m$ variables forms a basis only if the corresponding $m \times m$ submatrix is non-singular. If it is singular, no unique basic solution exists for that choice.
❌ Believing a solution is degenerate if any variable is zero.
✅ Degeneracy specifically refers to a situation where one or more basic variables are zero. Non-basic variables are always zero by definition.
---
Practice Questions
:::question type="MCQ" question="Given the system of equations:
The maximum number of distinct basic solutions is given by $\binom{n}{m}$.
Thus, there are at most $\binom{n}{m} = 6$ distinct basic solutions."
:::
:::question type="MSQ" question="For the system
Let's find the BFS:
* Case 1: (non-basic)
Solution . Basic variables . Since a basic variable () is zero, this is a degenerate BFS. So, 'The solution is a degenerate BFS' is Correct.
* Case 2: (non-basic)
Solution . Basic variables . Both positive. Non-degenerate BFS.
* Case 3: (non-basic)
Solution . This is the same as Case 1.
* Case 4: (non-basic)
* Case 5: (non-basic)
Solution . This is the same as Case 1.
* Case 6: (non-basic)
Substitute into the first equation:
Solution . Basic variables . Both positive. Non-degenerate BFS.
Now evaluate the other options:
* 'The solution is a BFS.'
Check if it's a solution:
(True)
(True)
All components are . So it is a feasible solution.
Check basicity: It has 3 non-zero variables. For a basic solution, there can be at most $m = 2$ non-zero (basic) variables. Since there are 3 non-zero variables, this is not a basic solution, and thus not a BFS. So this statement is Incorrect.
* 'All basic feasible solutions for this system are non-degenerate.'
We found to be a degenerate BFS. So this statement is Incorrect.
Therefore, the correct statements are 'The solution is a degenerate BFS.' and 'There are at most 6 basic solutions.' "
:::
:::question type="MCQ" question="A feasible solution to a linear programming problem must satisfy which of the following conditions?" options=["It must optimize the objective function.","It must be a basic solution.","It must satisfy all constraints and non-negativity conditions.","It must be unique."] answer="It must satisfy all constraints and non-negativity conditions." hint="Recall the fundamental definition of a feasible solution in LPP." solution="A feasible solution is any point within the feasible region, meaning it satisfies all the problem's constraints (including equality/inequality constraints and non-negativity conditions) simultaneously.
- Optimizing the objective function describes an optimal solution, not just any feasible solution.
- Being a basic solution describes a specific type of feasible solution (a BFS), not all feasible solutions.
- Feasible solutions are not necessarily unique; there can be infinitely many feasible solutions."
:::question type="NAT" question="Consider the system:
We have equations and variables. If is chosen as a non-basic variable, we set . The remaining variables will be basic variables.
Substitute into the equations:
Substitute from (2) into (1):
The basic solution is . Since all components are non-negative, this is a Basic Feasible Solution. The value of is 3.
Answer: \boxed{3}"
:::
---
Summary
| # | Formula/Concept | Expression |
|---|----------------|------------|
| 1 | Feasible Solution | $x \ge 0$ and satisfies $Ax = b$ (or the inequality constraints) |
| 2 | Basic Solution | $n - m$ variables set to zero (non-basic); remaining $m$ variables solved from $Ax = b$ (basic) |
| 3 | Max Basic Solutions | $\binom{n}{m}$ |
| 4 | Basic Feasible Solution (BFS) | A basic solution where all components are $\ge 0$ |
| 5 | Degenerate BFS | A BFS where at least one basic variable is zero |
| 6 | Non-Degenerate BFS | A BFS where all $m$ basic variables are positive ($> 0$) |
---
What's Next?
This topic connects to:
- Simplex Method: BFS are the corner points of the feasible region, and the Simplex method systematically moves from one BFS to an adjacent one to find the optimal solution.
- Graphical Method: For LPPs with two variables, BFS correspond directly to the vertices of the feasible region, which can be visualized.
- Duality Theory: Understanding BFS is crucial for formulating and solving the dual problem and interpreting sensitivity analysis in LPP.
---
Proceeding to Simplex Method.
---
Part 3: Simplex Method
The Simplex Method is a fundamental algebraic procedure for solving Linear Programming Problems (LPPs). We employ an iterative approach to move from one basic feasible solution to another, systematically improving the objective function value until an optimal solution is attained.
---
Core Concepts
1. Standard Form of a Linear Programming Problem
For application of the Simplex Method, an LPP must first be transformed into its standard form. This involves expressing the objective function as a maximization problem and all constraints as equalities, while ensuring all variables are non-negative.
An LPP is in standard form if:
- The objective function is to be maximized.
- All constraints are equality constraints.
- All decision variables are non-negative.
- The right-hand side (RHS) of each constraint is non-negative.
Conversion Rules:
* Minimization to Maximization: To minimize $Z$, maximize $Z' = -Z$; the optima are related by $\min Z = -\max(-Z)$.
* $\le$ Constraints: Introduce a non-negative slack variable ($s_i \ge 0$) to convert $\sum_j a_{ij} x_j \le b_i$ to the equality $\sum_j a_{ij} x_j + s_i = b_i$.
Slack variables represent unused resources.
* $\ge$ Constraints: Introduce a non-negative surplus variable (or excess variable, $s_i \ge 0$) to convert $\sum_j a_{ij} x_j \ge b_i$ to the equality $\sum_j a_{ij} x_j - s_i = b_i$.
Surplus variables represent the amount by which the LHS exceeds the RHS.
* Unrestricted Variables: If $x_j$ is unrestricted in sign, substitute $x_j = x_j' - x_j''$, where $x_j', x_j'' \ge 0$.
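The bookkeeping implied by these rules (which auxiliary variable each constraint sense needs) can be sketched as a small counting helper. The function name is my own, and the artificial-variable count anticipates the Big M / Two-Phase methods covered later in this part:

```python
def standard_form_variables(senses):
    """Count the auxiliary variables needed to bring constraints with the
    given senses ('<=', '>=', '=') into standard equality form:
      '<=' -> one slack variable,
      '>=' -> one surplus variable (plus one artificial under Big M / Two-Phase),
      '='  -> no slack or surplus (one artificial under Big M / Two-Phase)."""
    slack = sum(1 for s in senses if s == "<=")
    surplus = sum(1 for s in senses if s == ">=")
    artificial = sum(1 for s in senses if s in (">=", "="))
    return {"slack": slack, "surplus": surplus, "artificial": artificial}

# Two '>=' constraints and one '<=' constraint need 1 slack and 2 surplus:
counts = standard_form_variables([">=", ">=", "<="])
assert counts == {"slack": 1, "surplus": 2, "artificial": 2}
```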
Quick Example: Convert the following LPP to standard form:
Subject to:
, unrestricted
Step 1: Convert minimization to maximization.
Step 2: Convert to non-negative variables.
Let , where .
Step 3: Convert constraints to equalities.
For , introduce slack variable :
For , introduce surplus variable :
Answer:
Subject to:
:::question type="MCQ" question="Convert the following LPP into standard form for maximization:
Subject to:
How many slack variables and surplus variables are introduced, respectively?" options=["1 slack, 2 surplus","2 slack, 1 surplus","1 slack, 1 surplus","2 slack, 2 surplus"] answer="1 slack, 2 surplus" hint="Identify the type of inequality for each constraint and the corresponding variable type." solution="Step 1: Convert minimization to maximization.
Step 2: Convert constraints to equalities.
For , introduce surplus variable :
(1 surplus variable)
For , introduce surplus variable :
(1 surplus variable)
For , introduce slack variable :
(1 slack variable)
Thus, 1 slack variable and 2 surplus variables are introduced.
Final Standard Form:
Subject to:"
:::
---
2. Basic Feasible Solutions (BFS)
A basic feasible solution is a fundamental concept in the Simplex Method. It represents a corner point of the feasible region.
Given a system of $m$ linear equations with $n$ variables ($n > m$) in standard form, a solution obtained by setting $n - m$ variables to zero and solving for the remaining $m$ variables is called a basic solution. If all variables are non-negative, it is a basic feasible solution (BFS). The non-zero variables are called basic variables, and the zero variables are non-basic variables.
For the Simplex Method, we begin with an initial BFS. Often, this is achieved by using slack variables as initial basic variables, as they naturally form an identity matrix in the constraint coefficients.
Quick Example: Consider the system:
where .
Step 1: Identify $n$ (number of variables) and $m$ (number of equations). We need to set $n - m$ variables to zero.
Step 2: Find an initial BFS by setting decision variables to zero.
Set .
Then, and .
Since , this is a BFS.
Basic variables: . Non-basic variables: .
Answer: A BFS is .
:::question type="MCQ" question="Given the system of equations from an LPP in standard form:
Which of the following is a basic feasible solution?" options=["","","",""] answer="" hint="A BFS must have $n - m$ variables set to zero and all basic variables must be non-negative. Here $n - m = 2$, so 2 variables must be zero." solution="We have $n$ variables and $m$ equations. A BFS requires $n - m$ variables to be zero.
Let's check each option:
* : . . All non-negative. This is a BFS.
* : .
So this solution is , not . This is not a correct solution.
* : .
Substitute into the first equation: .
So this solution is . All non-negative. This is a BFS.
* : .
Multiply the first equation by 3: .
Subtract the second equation from this:
Substitute into : .
So this solution is . All non-negative. This is a BFS, but the option given is , which is incorrect.
The only correct BFS from the options is .
"
:::
---
3. Simplex Tableau
The Simplex Method is efficiently implemented using a tabular representation known as the Simplex Tableau. This matrix format systematizes the iterative process of moving between BFS.
A Simplex Tableau organizes all relevant information of an LPP in standard form:
Here, $c_j$ are the coefficients of the variables in the objective function, and $z_j = \sum_i c_{B_i} a_{ij}$, where $c_{B_i}$ are the objective function coefficients of the basic variables.
Initial Tableau Construction:
Quick Example: Construct the initial simplex tableau for:
Subject to:
Step 1: Convert to standard form.
Introduce slack variables .
Subject to:
Step 2: Identify initial BFS.
Set . Then . Basic variables: .
Step 3: Construct the tableau.
The objective coefficients are for .
The basic variables have .
Answer: The tableau above is the initial simplex tableau.
:::question type="MCQ" question="Construct the initial simplex tableau for the LPP:
Subject to:
What are the values in the row for respectively?" options=["","","",""] answer="" hint="For the initial tableau with slack variables as basic, values are typically zero, so equals ." solution="Step 1: Convert to standard form.
Introduce slack variables .
Subject to:
Step 2: Identify initial BFS.
Set . Then . Basic variables: .
The objective coefficients for basic variables are .
Step 3: Calculate $z_j$ and $c_j - z_j$ for the initial tableau.
For each column $j$: $z_j = \sum_i c_{B_i} a_{ij}$.
Since the $c_{B_i}$ are all 0 for the initial basic variables (the slack variables), all $z_j$ values will be 0.
Therefore, $c_j - z_j = c_j$.
The values for are respectively.
So, the row will be .
"
:::
---
4. Optimality Condition (Maximization)
The Simplex Method iterates by moving from one BFS to an adjacent one, improving the objective function value at each step. The optimality condition determines when this process should stop.
For a maximization problem, an optimal solution is reached when all values in the $c_j - z_j$ row (also known as the net evaluation row or relative profit row) are less than or equal to zero ($c_j - z_j \le 0$) for all non-basic variables.
If there are positive values in the $c_j - z_j$ row, the current BFS is not optimal.
The entering variable (or pivot column) is chosen as the non-basic variable with the most positive $c_j - z_j$ value. This variable will enter the basis, as it offers the largest potential increase in the objective function per unit.
Quick Example: Consider the row of a simplex tableau:
for variables .
Step 1: Examine the values.
We observe values of for .
Step 2: Identify the most positive value.
The largest positive value is , corresponding to variable .
Answer: The current solution is not optimal, and is the entering variable.
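This selection rule is a one-pass scan over the net evaluation row. A minimal sketch (the $c_j - z_j$ values below are hypothetical):

```python
def entering_variable(net_eval):
    """For a maximization tableau, return the index of the entering variable
    (most positive c_j - z_j), or None when the tableau is already optimal
    (all net evaluations <= 0)."""
    best = max(range(len(net_eval)), key=lambda j: net_eval[j])
    return best if net_eval[best] > 0 else None

# Hypothetical net evaluation row for four variables:
assert entering_variable([3, 5, 0, -2]) == 1   # index 1 has the most positive value
assert entering_variable([-1, 0, -4]) is None  # optimal: no positive entries
```

Returning `None` doubles as the stopping test for the optimality condition stated above.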
:::question type="MCQ" question="Given the row of a simplex tableau for a maximization problem:
corresponding to variables .
Which variable should enter the basis?" options=["","","",""] answer="" hint="The entering variable is the non-basic variable with the largest positive value." solution="We are looking for the most positive value in the row.
The values are:
The positive values are (for ) and (for ).
The largest positive value is , which corresponds to the variable .
Therefore, should enter the basis."
:::
---
5. Feasibility Condition (Min-Ratio Test)
Once the entering variable is determined, we must identify which basic variable will leave the basis to maintain feasibility. This is done using the min-ratio test.
To determine the leaving variable (or pivot row), we calculate the ratio of the Right-Hand Side (RHS) values to the corresponding coefficients in the pivot column for each constraint.
The leaving variable corresponds to the row with the smallest non-negative ratio.
The ratio is calculated as $\theta_i = \dfrac{b_i}{a_{ij}}$ for all rows $i$ where the pivot-column coefficient $a_{ij}$ is positive. If all coefficients in the pivot column are zero or negative, the solution is unbounded.
The element at the intersection of the pivot row and pivot column is called the pivot element.
Quick Example: Consider a tableau snippet:
Entering variable is .
Step 1: Identify the pivot column.
The pivot column is . Its coefficients are (for ) and (for ).
Step 2: Calculate ratios.
For row : .
For row : .
Step 3: Identify the smallest non-negative ratio.
The ratios are and . The smallest is .
Answer: The leaving variable is (corresponding to the row with ratio ), and the pivot element is (at the intersection of row and column).
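The min-ratio test can be sketched directly from the rule above: take ratios only over rows with a positive pivot-column coefficient, and report unboundedness when there are none. The RHS and column values below are hypothetical:

```python
from fractions import Fraction

def leaving_variable(rhs, pivot_col):
    """Min-ratio test: return the index of the leaving row, or None when the
    problem is unbounded (no positive coefficient in the pivot column)."""
    ratios = {i: Fraction(rhs[i], pivot_col[i])
              for i in range(len(rhs)) if pivot_col[i] > 0}
    if not ratios:
        return None  # unbounded: the entering variable can grow without limit
    return min(ratios, key=ratios.get)

# Hypothetical values: ratios are 6/2 = 3 and 8/4 = 2, so row 1 leaves.
assert leaving_variable([6, 8], [2, 4]) == 1
# No positive pivot-column coefficient -> unbounded.
assert leaving_variable([6, 8], [-1, 0]) is None
```

Using `Fraction` keeps the ratios exact, which matters when several ratios tie (the degeneracy situation discussed later).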
:::question type="MCQ" question="In a simplex tableau for a maximization problem, is the entering variable. The relevant part of the tableau is:
Which variable leaves the basis?" options=["","","",""] answer="" hint="Perform the min-ratio test by dividing the RHS by the positive coefficients in the pivot column ( column)." solution="Step 1: Identify the pivot column.
The entering variable is , so the pivot column is the column under . The coefficients are (in row) and (in row).
Step 2: Calculate ratios of RHS to pivot column coefficients.
For row : .
For row : .
Step 3: Identify the smallest non-negative ratio.
The ratios are and . The smallest non-negative ratio is .
Step 4: Determine the leaving variable.
The smallest ratio () corresponds to the basic variable .
Therefore, is the leaving variable.
Answer: "
:::
---
6. Pivoting
Pivoting is the core algebraic operation in each Simplex iteration, transforming the tableau to a new BFS. It involves row operations to make the pivot element 1 and other elements in the pivot column 0.
- Pivot Row Normalization: Divide the entire pivot row by the pivot element to make the pivot element 1.
- Other Row Operations: For every other row (including the row), perform the operation:
New Row = Old Row - (coefficient of the pivot column in the Old Row) × New Pivot Row.
This makes all other elements in the pivot column zero.
These operations are equivalent to Gaussian elimination steps, ensuring that the new basic variable has a coefficient of 1 in its row and 0 in all other constraint rows, maintaining the identity matrix structure for basic variables.
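Since these are ordinary Gaussian-elimination row operations, a single pivot is a short function. A minimal sketch with exact arithmetic (the tableau values below are hypothetical, columns ordered as variables followed by the RHS):

```python
from fractions import Fraction

def pivot(tableau, row, col):
    """One simplex pivot on tableau[row][col]: normalize the pivot row, then
    eliminate the pivot column from every other row (including the
    net-evaluation row, if it is stored as the last row)."""
    T = [[Fraction(v) for v in r] for r in tableau]
    p = T[row][col]
    T[row] = [v / p for v in T[row]]                 # pivot element becomes 1
    for i in range(len(T)):
        if i != row and T[i][col] != 0:
            f = T[i][col]
            T[i] = [a - f * b for a, b in zip(T[i], T[row])]
    return T

# Hypothetical 2-constraint tableau [x1, x2, s1, s2 | RHS]; pivot on row 0, col 0:
T = [[2, 1, 1, 0, 8],
     [1, 3, 0, 1, 9]]
T2 = pivot(T, 0, 0)
assert T2[0] == [1, Fraction(1, 2), Fraction(1, 2), 0, 4]
assert T2[1] == [0, Fraction(5, 2), Fraction(-1, 2), 1, 5]
```

After the pivot, the new basic variable's column is a unit column, preserving the identity-matrix structure for the basis.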
Worked Example: Perform a pivot operation given the tableau:
Pivot element is (in row, column). enters, leaves.
Step 1: Normalize the pivot row ( row).
New row (now row) = Old row / .
Step 2: Update other rows.
* For row:
Coefficient of in row is .
New row = Old row - New row.
Old row:
New row:
New row:
* For row:
Coefficient of in row is .
New row = Old row - New row.
Old row: (Note: the last is for the value in the row, not )
For calculation, the value is .
New row:
(The last element is the new value of , so ).
Answer: The updated tableau after one pivot operation is:
:::question type="NAT" question="Given the following simplex tableau for a maximization problem, where is the entering variable and is the leaving variable (pivot element = 2):
What is the new value in the row under the column after the pivot operation?" answer="2.5" hint="Normalize the pivot row first, then use it to zero out the coefficient in the row." solution="Step 1: Normalize the pivot row ( row, which becomes row).
New row = Old row / .
Old row:
New row:
Step 2: Update the row.
The coefficient of the pivot column () in the row is .
New row = Old row - New row.
Old row:
New row:
New row:
The new value in the row under the column is .
Answer: "
:::
---
7. Two-Phase Method and Big M Method
When an LPP contains $\ge$ or $=$ constraints, slack variables alone do not provide an initial BFS (surplus variables enter with $-1$ coefficients, and equality constraints have no slack or surplus). We introduce artificial variables to obtain an initial identity matrix for the basis. These artificial variables must be driven to zero in the optimal solution.
Non-negative variables introduced into $\ge$ or $=$ constraints to obtain an initial basic feasible solution (an identity matrix in the basis). They have no physical meaning and must be eliminated from the basis in the final optimal solution.
7.1. Big M Method
In the Big M method, a very large penalty (denoted by ) is assigned to artificial variables in the objective function to force them out of the basis.
For maximization: Maximize $Z' = Z - M \sum_i a_i$
For minimization: Minimize $Z' = Z + M \sum_i a_i$
Where $a_i$ are artificial variables and $M$ is a very large positive number.
Quick Example: Set up the initial tableau for Big M method:
Maximize
Subject to:
Step 1: Convert to standard form and introduce artificial variables.
(Introduce surplus and artificial )
(Introduce artificial )
.
Step 2: Formulate the Big M objective function.
Maximize .
Step 3: Construct the initial tableau.
Basic variables: . Their objective coefficients are .
Objective coefficients .
Answer: The tableau above is the initial Big M tableau.
:::question type="MCQ" question="Consider the LPP:
Minimize
Subject to:
When converting this to standard form for the Big M method, how many artificial variables are required?" options=["1","2","3","0"] answer="2" hint="Artificial variables are introduced for '=' constraints and '' constraints after introducing surplus variables." solution="Step 1: Convert minimization to maximization.
Maximize .
Step 2: Convert constraints:
* : This is an equality constraint. It requires an artificial variable. Let's call it .
Equation: .
* : This is a '$\ge$' constraint. It requires a surplus variable () and an artificial variable ().
Equation: .
* : This is a '$\le$' constraint. It requires a slack variable ().
Equation: .
Step 3: Count artificial variables.
We introduced and . Thus, 2 artificial variables are required.
Answer: "
:::
7.2. Two-Phase Method
The Two-Phase method separates the problem into two distinct phases.
Phase I: Optimize a new objective function that minimizes the sum of all artificial variables. If the minimum value of this objective function is zero, a feasible solution to the original LPP is found, and we proceed to Phase II. If it is positive, the original LPP has no feasible solution.
Phase II: Use the final tableau from Phase I (after removing artificial variable columns and replacing the objective function with the original one) to optimize the original LPP.
Quick Example: Formulate the Phase I objective for the LPP:
Maximize
Subject to:
Step 1: Convert to standard form and introduce artificial variables.
.
Step 2: Formulate the Phase I objective function.
The objective of Phase I is to minimize the sum of the artificial variables.
Minimize $W = \sum_i a_i$.
Convert to maximization for the simplex: Maximize $W' = -\sum_i a_i$.
Answer: The Phase I objective function for maximization is Maximize $W' = -\sum_i a_i$.
:::question type="MCQ" question="For an LPP with artificial variables , what is the objective function for Phase I of the Two-Phase Simplex method, when formulated for maximization?" options=["Maximize ","Maximize ","Minimize ","Maximize "] answer="Maximize " hint="Phase I minimizes the sum of artificial variables. For Simplex, this is converted to maximization." solution="Phase I of the Two-Phase method aims to minimize the sum of artificial variables.
So, the objective is: Minimize $W = \sum_i a_i$.
For the Simplex method, we always work with maximization problems. Therefore, we convert the minimization problem into a maximization problem by multiplying the objective function by -1.
Maximize $W' = -\sum_i a_i$.
The coefficients of the original decision variables and slack/surplus variables in the Phase I objective are zero, as they do not affect the feasibility of the artificial variables.
Thus, the correct formulation for maximization is: Maximize .
The option 'Maximize ' is technically correct but 'Maximize ' explicitly shows all variables in the objective function, which is the standard way to represent it in the tableau.
Answer: "
:::
---
Advanced Applications
8. Special Cases in Simplex Method
The Simplex Method can reveal specific characteristics of an LPP beyond just an optimal solution.
8.1. Degeneracy
Degeneracy occurs when a basic feasible solution has fewer than $m$ positive basic variables (i.e., one or more basic variables have a value of zero). In the Simplex Tableau, this typically arises from a tie in the min-ratio test: whichever tied row does not leave the basis retains a basic variable with value zero in the next tableau.
Degeneracy can lead to cycling, where the Simplex method might endlessly cycle through a sequence of non-improving BFS without reaching an optimal solution. While rare in practice, methods like Bland's Rule can prevent cycling.
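Bland's Rule replaces the "most positive $c_j - z_j$" entering choice with a smallest-index choice (and breaks leaving-variable ties by smallest index as well, not shown here). A minimal sketch of the entering-variable half, with hypothetical net-evaluation values:

```python
def blands_entering(net_eval):
    """Bland's rule: among columns with positive net evaluation c_j - z_j,
    choose the one with the SMALLEST index, not the most positive value.
    This anti-cycling rule guarantees the simplex method terminates."""
    for j, v in enumerate(net_eval):
        if v > 0:
            return j
    return None  # optimal

# With net evaluations [0, 2, 5], the usual rule would pick index 2 (value 5),
# but Bland's rule picks index 1, the smallest eligible index:
assert blands_entering([0, 2, 5]) == 1
assert blands_entering([0, -1, -3]) is None
```

The trade-off: Bland's rule may take more iterations on non-degenerate problems, but it can never cycle.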
Quick Example: Identify degeneracy.
Consider a tableau after a min-ratio test:
Entering variable . Ratios: , .
The smallest ratio is .
Answer: Since the smallest non-negative ratio is , and it corresponds to a basic variable () that will leave, the next BFS will be degenerate.
:::question type="MCQ" question="Degeneracy in a Simplex tableau is typically indicated by which of the following conditions?" options=["All $c_j - z_j \le 0$ for non-basic variables.","A tie in the min-ratio test, where one of the tied ratios is zero.","All coefficients in the pivot column are negative or zero.","Multiple optimal solutions exist."] answer="A tie in the min-ratio test, where one of the tied ratios is zero." hint="Degeneracy relates to basic variables having zero values. The min-ratio test reveals this." solution="Let's analyze the options:
* 'All $c_j - z_j \le 0$ for non-basic variables.' This indicates an optimal solution, not degeneracy.
* 'A tie in the min-ratio test, where one of the tied ratios is zero.' Degeneracy occurs when a basic variable takes a zero value. If a basic variable's RHS is zero and it is chosen to leave (smallest ratio of zero), the new BFS will have a basic variable at zero, indicating degeneracy. A tie in the min-ratio test *can* lead to degeneracy, especially if one of the tied ratios is zero, or if any basic variable in the current BFS is zero. The most direct indication is a basic variable having a value of zero, which often arises from a zero ratio in the min-ratio test.
* 'All coefficients in the pivot column are negative or zero.' This indicates an unbounded solution, not degeneracy.
* 'Multiple optimal solutions exist.' This occurs when a non-basic variable has $c_j - z_j = 0$ at optimality, not degeneracy.
The most precise answer related to the process is when a basic variable becomes zero, which can happen if the min-ratio test results in a zero ratio.
Answer: "
:::
8.2. Unbounded Solution
An LPP has an unbounded solution if the objective function can be increased indefinitely without violating any constraints. In the Simplex Tableau, this is identified when, for an entering variable (i.e., one with $c_j - z_j > 0$), all coefficients in its pivot column are negative or zero.
If all coefficients in the pivot column are $\le 0$, no finite positive ratio can be computed, meaning no variable can leave the basis. The entering variable can be increased indefinitely, leading to an unbounded objective function.
Quick Example: Identify an unbounded solution.
Consider a tableau where is the entering variable ().
Pivot column :
Step 1: Identify the pivot column.
The pivot column is . Its coefficients are (for ) and (for ).
Step 2: Check coefficients in the pivot column.
All coefficients in the column ( and ) are negative.
Answer: Since is the entering variable and all coefficients in its column are negative, the problem has an unbounded solution.
:::question type="MCQ" question="In a maximization LPP, an unbounded solution is indicated in the Simplex tableau when:" options=["All $c_j - z_j \le 0$ for non-basic variables.","There is a tie in the min-ratio test.","For an entering variable, all coefficients in its column are negative or zero.","An artificial variable remains in the basis at a positive level after Phase I."] answer="For an entering variable, all coefficients in its column are negative or zero." hint="An unbounded solution means the objective can increase infinitely. This occurs when no basic variable can limit the increase of the entering variable." solution="Let's analyze the options:
* 'All $c_j - z_j \le 0$ for non-basic variables.' This indicates an optimal solution.
* 'There is a tie in the min-ratio test.' This can indicate degeneracy or multiple optimal solutions in certain contexts, but not directly unboundedness.
* 'For an entering variable, all coefficients in its column are negative or zero.' If an entering variable (one with $c_j - z_j > 0$) has no positive coefficients in its column, then no basic variable can be driven to zero by increasing the entering variable. This means the entering variable can be increased infinitely without violating feasibility, leading to an unbounded objective function. This is the definition of an unbounded solution.
* 'An artificial variable remains in the basis at a positive level after Phase I.' This indicates that the original LPP has no feasible solution.
Answer: "
:::
8.3. Multiple Optimal Solutions
Multiple optimal solutions exist when the objective function can achieve the same maximum value at more than one basic feasible solution. In the Simplex Tableau, this is indicated when, at optimality (all $c_j - z_j \le 0$), there is at least one non-basic variable with $c_j - z_j = 0$.
If such a non-basic variable enters the basis, the objective function value will not change, leading to an alternative optimal BFS. Any convex combination of these optimal BFS will also be optimal.
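The alternative-optima check above is a scan of the net evaluation row restricted to non-basic columns. A minimal sketch (function name, row values, and basic-index set are my own, for illustration):

```python
def alternative_optima_exist(net_eval, basic_indices):
    """At optimality (all c_j - z_j <= 0), alternative optimal BFS exist when
    some NON-basic variable has c_j - z_j exactly zero: entering it changes
    the basis without changing the objective value."""
    assert all(v <= 0 for v in net_eval), "tableau must be optimal first"
    return any(net_eval[j] == 0 for j in range(len(net_eval))
               if j not in basic_indices)

# Hypothetical optimal row: columns 1 and 2 are basic; non-basic column 0 has
# a net evaluation of exactly zero, so an alternative optimum exists.
assert alternative_optima_exist([0, 0, 0, -3], basic_indices={1, 2})
assert not alternative_optima_exist([-1, 0, 0, -3], basic_indices={1, 2})
```

Excluding basic columns matters: basic variables always have $c_j - z_j = 0$, so counting them would falsely signal multiple optima in every tableau.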
Quick Example: Identify multiple optimal solutions.
Consider a final simplex tableau for a maximization problem:
for variables .
The current basic variables are .
Step 1: Check for optimality.
All $c_j - z_j \le 0$. The tableau is optimal.
Step 2: Look for zero $c_j - z_j$ values among the non-basic variables.
A non-basic variable has $c_j - z_j = 0$.
Answer: Since a non-basic variable has $c_j - z_j = 0$ at optimality, there are multiple optimal solutions.
:::question type="MCQ" question="In a maximization LPP, the existence of multiple optimal solutions is indicated by which condition in the final Simplex tableau?" options=["All $c_j - z_j < 0$ for all non-basic variables.","An artificial variable remains in the basis at a positive level.","A non-basic variable has $c_j - z_j = 0$ at optimality.","The objective function value is zero."] answer="A non-basic variable has $c_j - z_j = 0$ at optimality." hint="A zero $c_j - z_j$ for a non-basic variable at optimality means it can enter the basis without changing the objective value." solution="Let's analyze the options:
* 'All $c_j - z_j < 0$ for all non-basic variables.' This indicates a unique optimal solution.
* 'An artificial variable remains in the basis at a positive level.' This indicates no feasible solution.
* 'A non-basic variable has $c_j - z_j = 0$ at optimality.' If a non-basic variable has $c_j - z_j = 0$ in an optimal tableau, bringing this variable into the basis will not change the optimal objective function value, thus leading to an alternative optimal solution. This is the correct indicator for multiple optimal solutions.
* 'The objective function value is zero.' This simply means the optimal value of $Z$ is zero, which is possible but does not, by itself, indicate multiple solutions.
Answer: "
:::
8.4. No Feasible Solution
An LPP has no feasible solution if there is no set of values for the variables that satisfies all constraints and non-negativity conditions. In the Simplex Method (especially using Big M or Two-Phase), this is indicated if, at the end of Phase I or Big M method, at least one artificial variable remains in the basis with a positive value.
In the Big M method, if an artificial variable remains in the basis at a positive level in the optimal tableau, it implies that no feasible solution exists for the original problem (assuming $M$ is sufficiently large). In the Two-Phase method, if the minimum value of $W$ (the sum of the artificial variables) at the end of Phase I is positive, then no feasible solution exists.
Quick Example: Identify no feasible solution.
Consider the final tableau of Phase I for a minimization problem (which sought to minimize $W = \sum_i a_i$):
The objective value (sum of artificials) is .
Artificial variable is in the basis with value . is non-basic.
Answer: Since the minimum value of is (which is positive) and (an artificial variable) remains in the basis at a positive level, the original LPP has no feasible solution.
:::question type="MCQ" question="Which of the following conditions in a final Simplex tableau (after applying Big M or Two-Phase method) indicates that the LPP has no feasible solution?" options=["All $c_j - z_j \le 0$ for non-basic variables.","An artificial variable remains in the basis at a positive level.","A non-basic variable has $c_j - z_j = 0$ at optimality.","All coefficients in the pivot column are negative or zero for an entering variable."] answer="An artificial variable remains in the basis at a positive level." hint="Artificial variables are introduced to aid in finding an initial BFS. If they cannot be driven out or to zero, it means no valid BFS exists for the original problem." solution="Let's analyze the options:
* 'All $c_j - z_j \le 0$ for non-basic variables.' This indicates an optimal solution (unique or multiple).
* 'An artificial variable remains in the basis at a positive level.' This is the definitive indicator of no feasible solution. If an artificial variable, which has no physical meaning, cannot be driven out of the basis or reduced to zero, it means the original constraints cannot be satisfied.
* 'A non-basic variable has $c_j - z_j = 0$ at optimality.' This indicates multiple optimal solutions.
* 'All coefficients in the pivot column are negative or zero for an entering variable.' This indicates an unbounded solution.
Answer: "
:::
9. Shadow Prices (Dual Prices)
The concept of shadow price is crucial for sensitivity analysis and economic interpretation of the optimal solution. It is directly related to the dual problem in linear programming.
The shadow price (or dual price) for a constraint represents the change in the optimal value of the objective function for a one-unit increase in the right-hand side (RHS) of that constraint, assuming all other parameters remain constant.
Reading Shadow Prices from the Simplex Tableau (Maximization LPP):
* For a $\le$ constraint (with a slack variable $s_i$): The shadow price is the absolute value of the $c_j - z_j$ value corresponding to the slack variable in the final optimal tableau. Equivalently, since $c_j = 0$ for a slack variable, it is the $z_j$ value of the slack-variable column.
* For a $\ge$ constraint (with a surplus variable $s_i$ and an artificial variable $a_i$): The shadow price is the absolute value of the $c_j - z_j$ value corresponding to the surplus variable in the final optimal tableau (or the $z_j$ value of the surplus-variable column).
* For an $=$ constraint (with an artificial variable $a_i$): The shadow price is read from the artificial variable's column in the final optimal tableau (its $z_j$ value, ignoring the $M$ term in the Big M method).
Quick Example: Interpret a shadow price.
Suppose the optimal tableau for a maximization LPP shows:
$z_j$ for $s_1$ (the slack for constraint 1, whose RHS is 10) is 5.
The optimal objective value is $Z^* = 150$.
Step 1: Identify the shadow price for constraint 1.
The shadow price for constraint 1 is 5.
Step 2: Interpret the meaning.
If the RHS of constraint 1 increases by one unit (from 10 to 11), the optimal objective function value will increase by 5 (from 150 to 155).
Answer: The shadow price of 5 for constraint 1 indicates that an additional unit of the resource associated with this constraint would increase the maximum profit by 5.
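Shadow prices can also be read off numerically from an LP solver's dual values. Below is a minimal sketch using SciPy's `linprog` (assuming SciPy is available; the problem data is illustrative and not taken from the example above):

```python
from scipy.optimize import linprog

# Illustrative maximization LPP: max Z = 3x + 2y
# subject to 2x + y <= 10, x + y <= 7, x, y >= 0.
# linprog minimizes, so we negate the objective coefficients.
res = linprog(c=[-3, -2], A_ub=[[2, 1], [1, 1]], b_ub=[10, 7], method="highs")

print(-res.fun)  # optimal Z of the maximization problem

# With the HiGHS backend, res.ineqlin.marginals holds the duals of the
# minimization; negating them gives the shadow prices of the max problem.
shadow_prices = [-m for m in res.ineqlin.marginals]
print(shadow_prices)
```

Each shadow price tells us how much the optimal $Z$ would improve per unit increase in the corresponding constraint's RHS.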
:::question type="MCQ" question="The term 'shadow price' in linear programming refers to:" options=["The cost of adding one unit to objective function.","The value of non-negativity constraint.","The cost of adding one unit to the right-hand side of a constraint.","The cost of remaining constraint."] answer="The cost of adding one unit to the right-hand side of a constraint." hint="Shadow price measures the marginal value of a resource or constraint." solution="The shadow price (or dual price) in linear programming quantifies the change in the optimal objective function value for a one-unit increase in the right-hand side of a constraint, assuming all other parameters remain constant. It represents the marginal value of relaxing a constraint or the marginal cost of tightening it.
* 'The cost of adding one unit to objective function.' This is not the definition of shadow price.
* 'The value of non-negativity constraint.' Non-negativity constraints usually have shadow prices of zero if the variable is positive, or reflect the cost of forcing a variable to be zero if it would naturally be positive. This option is too vague.
* 'The cost of adding one unit to the right-hand side of a constraint.' This is precisely the definition of a shadow price. It tells us how much the objective function's optimal value would change if we had one more unit of a particular resource or relaxed a constraint by one unit. For a maximization problem, it's the marginal profit; for minimization, it's the marginal cost.
* 'The cost of remaining constraint.' This is not a standard term in linear programming.
Therefore, the most accurate description is 'The cost of adding one unit to the right-hand side of a constraint.'
Answer: The cost of adding one unit to the right-hand side of a constraint."
:::
---
Problem-Solving Strategies
When solving full Simplex problems in CUET PG, focus on accuracy in arithmetic. Even a small calculation error can propagate. For multiple-choice questions, sometimes you only need to perform one or two iterations to identify the entering/leaving variable or to check for special cases like unboundedness. For questions asking for the final solution, work systematically.
The initial $c_j - z_j$ (net evaluation) row for a maximization problem with only slack variables as basic variables is simply the row of objective coefficients itself, since every $z_j = 0$ when all basic variables have zero cost. This is a common shortcut for the first tableau.
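This shortcut can be checked directly: with only zero-cost slacks in the basis, every $z_j$ is zero, so the net evaluation row reduces to the objective coefficients. A small sketch with hypothetical problem data:

```python
# Objective coefficients for x1, x2, s1, s2 (slacks cost nothing).
c = [3, 2, 0, 0]
# Columns of the initial constraint matrix (one list per variable).
columns = [[2, 1], [1, 2], [1, 0], [0, 1]]
# Costs of the current basic variables: the two slacks, both 0.
c_B = [0, 0]

# z_j = sum over rows of (basic cost * column entry); net row = c_j - z_j.
z = [sum(cb * a for cb, a in zip(c_B, col)) for col in columns]
net_row = [cj - zj for cj, zj in zip(c, z)]
print(net_row)  # equals c itself: [3, 2, 0, 0]
```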
---
Common Mistakes
❌ Mistake: Calculating ratios with negative or zero coefficients in the pivot column.
✅ Correct Approach: Only calculate ratios for positive coefficients in the pivot column. If all coefficients in the pivot column are negative or zero, the solution is unbounded (for maximization).
❌ Mistake: Forgetting to include the '$-M$' objective coefficients for artificial variables when calculating the $z_j$ and $c_j - z_j$ rows. This often leads to incorrect entering variables.
✅ Correct Approach: Remember $M$ is a very large positive number, so terms involving $M$ dominate. For maximization (artificial cost $-M$), the entering variable has the most positive $c_j - z_j$; for minimization (artificial cost $+M$), it has the most positive $z_j - c_j$.
---
Practice Questions
:::question type="MCQ" question="Consider the Simplex tableau for a maximization problem:
| Basis | $x_1$ | $x_2$ | RHS |
|-------|-------|-------|-----|
| $s_1$ | 1 | 2 | 8 |
| $s_2$ | 3 | 1 | 6 |
| $c_j - z_j$ | 4 | 5 | |
What is the pivot element for the next iteration?" options=["1 (in $s_1$ row, $x_1$ col)","2 (in $s_1$ row, $x_2$ col)","3 (in $s_2$ row, $x_1$ col)","1 (in $s_2$ row, $x_2$ col)"] answer="2 (in $s_1$ row, $x_2$ col)" hint="First, identify the entering variable (most positive $c_j - z_j$). Then, apply the min-ratio test to find the leaving variable." solution="Step 1: Identify the entering variable.
The $c_j - z_j$ values are 4 for $x_1$ and 5 for $x_2$. The most positive is 5, so $x_2$ is the entering variable (pivot column).
Step 2: Identify the leaving variable using the min-ratio test.
Ratios are calculated as RHS / (Pivot column coefficient).
For the $s_1$ row: $8/2 = 4$.
For the $s_2$ row: $6/1 = 6$.
The smallest non-negative ratio is 4, which corresponds to the $s_1$ row. So, $s_1$ is the leaving variable (pivot row).
Step 3: Identify the pivot element.
The pivot element is at the intersection of the pivot row ($s_1$ row) and the pivot column ($x_2$ column). This value is 2.
Answer: 2 (in $s_1$ row, $x_2$ col)"
:::
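The entering/leaving selection used in a maximization Simplex step can be sketched as a small helper (the function name and tableau values here are illustrative, my own):

```python
def choose_pivot(rows, net_row):
    """Return (pivot_row, pivot_col) for one maximization Simplex step.

    rows: each basic-variable row as [coefficients..., RHS].
    net_row: the c_j - z_j values of the candidate columns.
    Returns None if already optimal, or 'unbounded' if the pivot
    column has no positive coefficient.
    """
    # Entering variable: column with the most positive c_j - z_j.
    col = max(range(len(net_row)), key=lambda j: net_row[j])
    if net_row[col] <= 0:
        return None  # all c_j - z_j <= 0: current BFS is optimal
    # Min-ratio test over rows with a positive pivot-column coefficient.
    ratios = [(r[-1] / r[col], i) for i, r in enumerate(rows) if r[col] > 0]
    if not ratios:
        return "unbounded"
    _, row = min(ratios)
    return row, col

# Illustrative tableau: s1 row [1, 2 | 8], s2 row [3, 1 | 6], net row [4, 5].
print(choose_pivot([[1, 2, 8], [3, 1, 6]], [4, 5]))  # (0, 1)
```

Here `(0, 1)` means row 0 ($s_1$) leaves and column 1 ($x_2$) enters, so the pivot element is the entry at that intersection.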
:::question type="NAT" question="A company produces two products, A and B. Product A requires 2 hours of labor and 1 unit of raw material. Product B requires 1 hour of labor and 2 units of raw material. The company has 100 hours of labor and 120 units of raw material available. The profit per unit of A is 3 and the profit per unit of B is 2. If $x_1$ is the number of units of A and $x_2$ is the number of units of B, what is the initial value of the objective function (profit) when the initial BFS is formed using slack variables?" answer="0" hint="The initial BFS in Simplex typically assumes decision variables are zero, and slack variables take the RHS values. The objective function is calculated based on these values." solution="Step 1: Formulate the LPP.
Maximize $Z = 3x_1 + 2x_2$
Subject to:
$2x_1 + x_2 \le 100$ (Labor constraint)
$x_1 + 2x_2 \le 120$ (Raw material constraint)
$x_1, x_2 \ge 0$
Step 2: Convert to standard form.
Introduce slack variables $s_1, s_2 \ge 0$.
Maximize $Z = 3x_1 + 2x_2 + 0s_1 + 0s_2$
Subject to:
$2x_1 + x_2 + s_1 = 100$
$x_1 + 2x_2 + s_2 = 120$
Step 3: Determine the initial BFS.
In the initial Simplex tableau, we typically set the decision variables to zero.
So, $x_1 = 0$ and $x_2 = 0$.
From the constraints: $s_1 = 100$ and $s_2 = 120$.
The initial BFS is $(x_1, x_2, s_1, s_2) = (0, 0, 100, 120)$.
Step 4: Calculate the initial objective function value.
Substitute the BFS values into the objective function:
$Z = 3(0) + 2(0) = 0$.
Answer: 0"
:::
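The initial-BFS arithmetic above can be mirrored in a few lines (the profit figures are assumed for illustration):

```python
# Maximize Z = 3*x1 + 2*x2 subject to
# 2*x1 + x2 <= 100 (labor), x1 + 2*x2 <= 120 (raw material).
profits = [3, 2]      # assumed illustrative profits for A and B
rhs = [100, 120]      # labor hours, raw material units

# Initial BFS: decision variables at zero, slacks absorb the RHS.
x = [0, 0]
slacks = list(rhs)    # s1 = 100, s2 = 120
Z = sum(p * xi for p, xi in zip(profits, x))
print(x + slacks, Z)  # [0, 0, 100, 120] 0
```

Whatever the profit coefficients are, $Z = 0$ at this starting point because both decision variables are zero.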
:::question type="MSQ" question="Which of the following scenarios indicate that an LPP has no feasible solution when using the Two-Phase Simplex method?" options=["The optimal value of the original objective function is zero.","At the end of Phase I, the minimum value of the artificial objective function is positive.","An artificial variable remains in the basis with a positive value at the end of Phase I.","All $c_j - z_j \le 0$ for non-basic variables in the final tableau of Phase II."] answer="At the end of Phase I, the minimum value of the artificial objective function is positive.,An artificial variable remains in the basis with a positive value at the end of Phase I." hint="The purpose of Phase I is to eliminate artificial variables. If this cannot be achieved, it means the original problem is infeasible." solution="Let's analyze each option:
* 'The optimal value of the original objective function is zero.' This simply means the maximum (or minimum) profit/cost is zero, which is a valid feasible solution. It does not indicate infeasibility.
* 'At the end of Phase I, the minimum value of the artificial objective function is positive.' The objective of Phase I is to minimize the sum of artificial variables. If this minimum value is positive, it means that at least one artificial variable could not be driven to zero, implying no feasible solution for the original LPP. This is a correct indicator.
* 'An artificial variable remains in the basis with a positive value at the end of Phase I.' This is an equivalent statement to the previous point. If an artificial variable is still basic and positive, its value contributes to a positive sum of artificial variables, indicating infeasibility. This is also a correct indicator.
* 'All $c_j - z_j \le 0$ for non-basic variables in the final tableau of Phase II.' This indicates that an optimal solution has been found for the original LPP in Phase II. It does not indicate infeasibility.
Answer: At the end of Phase I, the minimum value of the artificial objective function is positive; An artificial variable remains in the basis with a positive value at the end of Phase I."
:::
:::question type="MCQ" question="In the final optimal tableau of a maximization LPP, if a non-basic variable has a $c_j - z_j$ value of zero, what does this imply?" options=["The solution is degenerate.","The solution is unbounded.","There are multiple optimal solutions.","The problem has no feasible solution."] answer="There are multiple optimal solutions." hint="A zero $c_j - z_j$ for a non-basic variable at optimality means it can enter the basis without changing the objective value." solution="Let's analyze the implications:
* Degeneracy: Occurs when a basic variable has a value of zero, often identified by a zero ratio in the min-ratio test.
* Unbounded Solution: Occurs when an entering variable has all negative or zero coefficients in its column, meaning it can be increased indefinitely.
* Multiple Optimal Solutions: If, at optimality (all $c_j - z_j \le 0$), a non-basic variable has a $c_j - z_j$ value of zero, it means that this variable can be introduced into the basis without changing the optimal objective function value. This leads to an alternative optimal basic feasible solution, hence multiple optimal solutions.
* No Feasible Solution: Indicated by an artificial variable remaining in the basis with a positive value.
Therefore, a non-basic variable with $c_j - z_j = 0$ at optimality implies multiple optimal solutions.
Answer: There are multiple optimal solutions."
:::
:::question type="NAT" question="A final simplex tableau for a maximization LPP shows that the shadow price for the first constraint (a $\le$ constraint) is 0.75. If the RHS of this constraint increases by 4 units, by how much will the optimal objective function value increase?" answer="3" hint="The shadow price indicates the change in the objective function per unit increase in the RHS. Multiply the shadow price by the change in RHS." solution="Step 1: Understand the definition of shadow price.
The shadow price of 0.75 for the first constraint means that for every one-unit increase in the RHS of that constraint, the optimal objective function value will increase by 0.75.
Step 2: Calculate the total increase.
The RHS of the constraint increases by 4 units.
Total increase in objective function = Shadow price $\times$ Change in RHS
Total increase = $0.75 \times 4 = 3$.
Answer: 3"
:::
---
Summary
| # | Formula/Concept | Expression |
|---|----------------|------------|
| 1 | Standard Form Objective (Max) | Maximize $Z = c_1x_1 + c_2x_2 + \dots + c_nx_n + 0s_1 + \dots + 0s_m$ |
| 2 | Optimality Condition (Max) | $c_j - z_j \le 0$ for all non-basic variables |
| 3 | Entering Variable (Max) | Non-basic variable with most positive $c_j - z_j$ |
| 4 | Leaving Variable (Min-Ratio) | Basic variable corresponding to $\min\{b_i / a_{ij} : a_{ij} > 0\}$ |
| 5 | Big M Method Objective | Max $Z = \sum_j c_j x_j - M \sum_i A_i$ (for max) |
| 6 | Unbounded Solution | Entering variable has all coefficients $\le 0$ in its column |
| 7 | Multiple Optimal Solutions | Optimal tableau has a non-basic variable with $c_j - z_j = 0$ |
| 8 | No Feasible Solution | Artificial variable remains basic with positive value (Big M/Phase I) |
| 9 | Shadow Price | Change in optimal objective for one-unit increase in constraint RHS |
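The unbounded and infeasible cases summarised above can be observed numerically. SciPy's `linprog` reports them via status codes (a sketch, assuming SciPy is available; the tiny LPPs here are my own illustrations):

```python
from scipy.optimize import linprog

# Unbounded: minimize -x with only x >= 0; x can grow without limit.
unbounded = linprog(c=[-1], bounds=[(0, None)], method="highs")
print(unbounded.status)  # 3 = problem is unbounded

# Infeasible: x <= 1 and x >= 3 cannot both hold.
infeasible = linprog(c=[1], A_ub=[[1], [-1]], b_ub=[1, -3], method="highs")
print(infeasible.status)  # 2 = problem is infeasible
```

Status 0 would indicate that an optimal solution was found; the Simplex-tableau symptoms in rows 6 and 8 of the table correspond to these solver statuses.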
---
What's Next?
This topic connects to:
- Duality Theory: The Simplex method provides direct insights into the dual problem, and shadow prices are the optimal dual variable values.
- Sensitivity Analysis: Understanding how changes in objective coefficients or RHS values affect the optimal solution, building upon the concept of shadow prices and ranges of optimality/feasibility.
- Integer Programming: For problems where variables must take integer values, the Simplex method forms the basis for algorithms like Branch and Bound.
Chapter Summary
LPP Formulation: Linear Programming Problems (LPPs) involve optimizing a linear objective function (maximization or minimization) subject to a set of linear equality or inequality constraints, along with non-negativity restrictions on decision variables.
Graphical Method: Applicable for LPPs with two decision variables, this method involves plotting constraints to define a feasible region (a convex polygon). The optimal solution always lies at one of the corner points of this region (Corner Point Theorem).
Basic Feasible Solutions (BFS): These correspond to the corner points of the feasible region in the context of the algebraic solution. A BFS is obtained by setting $n - m$ variables to zero (non-basic variables) and solving for the remaining $m$ variables (basic variables), where $n$ is the number of variables and $m$ is the number of constraints.
Simplex Method: An iterative algebraic algorithm for solving LPPs with more than two variables. It systematically moves from one BFS to an adjacent, improved BFS until an optimal solution is identified, or it's determined that the problem is unbounded or infeasible.
Simplex Tableau & Operations: The Simplex method utilizes a tableau to organize coefficients, slack, surplus, and artificial variables. Key steps involve identifying the entering variable (the most positive $c_j - z_j$ for maximization), the leaving variable (using the minimum ratio test), and performing pivot operations to update the tableau.
Duality Theory: Every LPP (primal problem) has a corresponding dual problem. Solving the dual can sometimes be computationally easier, and the optimal solutions of the primal and dual problems are closely related (Strong Duality Theorem), providing valuable insights into sensitivity analysis.
---
Chapter Review Questions
:::question type="MCQ" question="Consider the LPP: Maximize $Z = 3x + 2y$ subject to $2x + y \le 10$, $x + y \le 7$, $x, y \ge 0$. Using the graphical method, what is the maximum value of $Z$?" options=["14", "17", "21", "24"] answer="17" hint="Identify the feasible region and evaluate the objective function at its corner points." solution="The corner points of the feasible region are (0,0), (5,0), (3,4), and (0,7). Evaluating Z at these points:
Z(0,0) = 0
Z(5,0) = 3(5) + 2(0) = 15
Z(3,4) = 3(3) + 2(4) = 9 + 8 = 17
Z(0,7) = 3(0) + 2(7) = 14
The maximum value of Z is 17.
Answer: 17"
:::
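The corner-point evaluation in this solution is easy to mechanise:

```python
# Corner points of the feasible region and the objective Z = 3x + 2y,
# as in the worked solution above.
corners = [(0, 0), (5, 0), (3, 4), (0, 7)]

def Z(x, y):
    return 3 * x + 2 * y

values = {pt: Z(*pt) for pt in corners}
best = max(values, key=values.get)
print(best, values[best])  # (3, 4) 17
```

By the Corner Point Theorem, checking only the vertices suffices: the maximum over the whole feasible region is attained at one of them.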
:::question type="NAT" question="To convert a constraint of the form $a_1x_1 + a_2x_2 \ge b$ into an equality for the Simplex method, how many surplus variables are required?" answer="1" hint="Surplus variables are subtracted from $\ge$ constraints to convert them into equalities." solution="A surplus variable is subtracted from a 'greater than or equal to' ($\ge$) constraint to convert it into an equality. Therefore, one surplus variable is required: $a_1x_1 + a_2x_2 - s_1 = b$, with $s_1 \ge 0$.
Answer: 1"
:::
:::question type="MCQ" question="Which of the following statements about Basic Feasible Solutions (BFS) in Linear Programming is FALSE?" options=["Every corner point of the feasible region corresponds to a BFS.", "A degenerate BFS has at least one basic variable equal to zero.", "The Simplex method always moves from one BFS to an adjacent, improved BFS.", "An optimal solution must always be a non-degenerate BFS."] answer="An optimal solution must always be a non-degenerate BFS." hint="Consider cases where an optimal solution might be degenerate or lie on an edge." solution="An optimal solution does not necessarily have to be a non-degenerate BFS. It can be a degenerate BFS, or in cases of multiple optima, it can lie on an edge connecting two BFSs, meaning any point on that edge is optimal.
Answer: An optimal solution must always be a non-degenerate BFS."
:::
:::question type="NAT" question="Consider an LPP with 3 decision variables and 2 constraints (excluding non-negativity). If both constraints are binding at the unique optimal solution, how many slack/surplus variables will be non-basic (at zero) in the optimal BFS?" answer="2" hint="In a non-degenerate optimal BFS, the number of binding constraints equals the number of basic variables. If the 2 constraints are binding, their corresponding slack/surplus variables will be zero and thus non-basic." solution="In a standard Simplex tableau, for an LPP with $m$ constraints, if the optimal solution lies at the intersection of $m$ binding constraints, then the corresponding slack/surplus variables will be non-basic (i.e., equal to zero).
Here, we have $m = 2$ constraints. If both of these constraints are binding at the optimal solution, then their corresponding slack/surplus variables will be zero. These zero-valued slack/surplus variables are non-basic variables.
Therefore, 2 slack/surplus variables (those for the binding constraints) will be at zero in the optimal BFS.
Answer: 2"
:::