Sensitivity Analysis


After finding an optimal solution to a linear-programming formulation, one often has to solve slightly different versions of the original problem repeatedly. In practice, such scenarios could arise from parameter changes, the introduction of new technologies, the availability of updated information on the input parameters, or "what-if" questions.

Consider, for example, the formulations discussed in earlier sections. In the product-mix problem, the technology for making the product may change over time, so that the material requirements can become different; the availability of resources may vary from week to week; or the profit per unit sold may be subject to revisions. In the production-planning problem, we may have a "rolling" planning horizon of twelve months, so that the same problem needs to be solved every month with new demand data; or the production capacity may be subject to fluctuations. In the investment problem, the structure of the investment scheme may change over time, different investment opportunities may arise, or the interest rate may vary from day to day.

Of course, one can always re-solve the revised problem from scratch for each new scenario, but this is clearly undesirable. Therefore, an extremely important question is: Can we avoid solving the entire problem again after making minor changes to the original problem parameters? The answer, it turns out, is yes: for most scenarios, only minor revisions to the original solution are necessary.

The ability to avoid having to solve the entire problem from scratch is particularly important for large-scale problems and for problems that have to be solved repeatedly.

At other times, we may be interested in obtaining quick answers to what-if questions such as: What would be the impact of a reduction or an increase in the availability of a resource? In other words, we want to know how valuable each resource is to a given model. This gives rise to the concept of "shadow prices" of resources. Analysis of questions like these, regarding the impact of small perturbations of the input parameters on the optimal solution, is called sensitivity analysis.
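As a concrete (if informal) illustration of a shadow price, the following Python sketch uses a hypothetical two-variable product-mix LP, not one taken from the text, and estimates how valuable the third resource is by re-solving the problem with one extra unit of that resource and comparing optimal profits:

# A minimal sketch with made-up data: estimate the shadow price of a
# resource by re-solving a small product-mix LP with the resource
# availability increased by one unit.
from scipy.optimize import linprog

c = [-3.0, -5.0]                  # linprog minimizes, so negate the unit profits
A = [[1.0, 0.0],                  # usage of resource 1
     [0.0, 2.0],                  # usage of resource 2
     [3.0, 2.0]]                  # usage of resource 3
b = [4.0, 12.0, 18.0]             # resource availabilities

base = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)], method="highs")

# Perturb resource 3 by one unit and re-solve.
bumped = linprog(c, A_ub=A, b_ub=[4.0, 12.0, 19.0],
                 bounds=[(0, None), (0, None)], method="highs")

shadow_price_resource_3 = -(bumped.fun - base.fun)   # change in maximum profit
print(f"Optimal profit: {-base.fun:.2f}")
print(f"Shadow price of resource 3: {shadow_price_resource_3:.2f}")

The difference between the two optimal objective values is the shadow price of that resource, provided the one-unit change is small enough that the current optimal basis remains optimal.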

The purpose of this section is to uncover a powerful insight regarding the Simplex method that can be used to answer such questions in an efficient manner, i.e., one that does not involve a complete solution from scratch.

We will undertake a careful reexamination of the algebraic structure of the Simplex algorithm. Conceptually, we can think of the Simplex algorithm as a "black box" that accepts as its input the parameters that describe a linear program and produces as its output the optimal solution to the input problem. This is visualized in Figure LP-11 below.

What we will develop is a very compact description of the operations that are "hidden" inside this black box. Such a description will enable us to adapt the output solution in response to revisions in the input in a systematic manner.

This compact description of what the Simplex algorithm does is called the fundamental insight, and it will be presented at the next link.

The Fundamental Insight

At the next link, we will demonstrate the usefulness of the fundamental insight by working out the details of the sensitivity analysis of a simple example. This example is taken from Exercise 6.7-1 in our text.

Sensitivity Analysis: An Example

In conducting sensitivity analysis for the example above, we observed that the matrix P is a powerful tool for calculating the necessary revisions in the final tableau in response to a given revision in the initial tableau. This observation, in fact, applies to the tableau generated by every Simplex pivot, as noted in the earlier discussion on the fundamental insight. An important consequence of this latter statement is that the Simplex tableau at the end of each iteration is determined completely by the initial tableau and a matrix P that is associated with that particular tableau. In other words, we can produce any column in a Simplex tableau by multiplying the corresponding column in the initial tableau by the associated matrix P. It follows that if we update only the matrix P as we proceed from iteration to iteration, then the other entries in the "current" tableau can easily be produced on a need-to-know basis. Clearly, doing so results in a more efficient procedure; and in fact, all commercial implementations of the Simplex algorithm are based on this idea.
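To make this concrete, here is a small NumPy sketch, again using the hypothetical product-mix data from the earlier sketch rather than the example above, in which P is read from the slack-variable columns of the final tableau and used to reproduce columns of that tableau from the initial tableau:

# A minimal sketch with assumed data: any column of the current tableau is
# the product of P with the corresponding column of the initial tableau,
# so only P needs to be stored and updated.
import numpy as np

A = np.array([[1.0, 0.0],          # original constraint coefficients
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])    # original right-hand sides
initial = np.hstack([A, np.eye(3), b.reshape(-1, 1)])   # initial tableau body [A | I | b]

# P for the optimal basis of this small example (basic variables s1, x2, x1
# in rows 1-3), read from the slack columns of the final tableau.
P = np.array([[1.0,  1.0/3.0, -1.0/3.0],
              [0.0,  1.0/2.0,  0.0    ],
              [0.0, -1.0/3.0,  1.0/3.0]])

print("Final RHS:", P @ initial[:, -1])        # values of the basic variables
print("Final x1 column:", P @ initial[:, 0])   # column of x1 in the final tableau

c_B = np.array([0.0, 5.0, 3.0])                # objective coefficients of s1, x2, x1
print("Shadow prices:", c_B @ P)               # row-0 entries under the slack variables

Note that the product of the basic-variable objective coefficients with P recovers the row-0 entries under the slack variables, i.e., the same shadow prices that the perturbation experiment above approximates.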

This improved version of the Simplex algorithm is called the revised Simplex method. In the interest of brevity, we will skip the details of the revised Simplex method, which can be found in Section 5.2 of the text.

An important fact regarding the revised Simplex method is that the total amount of computational effort for a given problem is proportional to the size of the matrix P, which, in turn, is determined by the number of functional constraints. Therefore, the primary determinant of the required computational effort for the solution of a linear program is the number of functional constraints.
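The following sketch, which does not follow the text's notation (Section 5.2 has the details), suggests why the effort depends on the number of functional constraints: each iteration replaces a full tableau update by a single update of P, costing one elementary row operation per functional constraint.

# A minimal sketch with assumed pivot data: carry only P from iteration to
# iteration and update it with an "eta" matrix after each pivot.
import numpy as np

def update_P(P, entering_column, pivot_row):
    """Return the updated P after pivoting on the chosen entering column."""
    d = P @ entering_column                 # entering column as it appears in the current tableau
    m = P.shape[0]                          # m = number of functional constraints
    E = np.eye(m)
    E[:, pivot_row] = -d / d[pivot_row]     # eta column: eliminates d from the other rows
    E[pivot_row, pivot_row] = 1.0 / d[pivot_row]
    return E @ P

# Example: starting from P = I (the initial tableau) in the small LP above,
# bring x2 into the basis by pivoting on row 2 (index 1).
P0 = np.eye(3)
a_x2 = np.array([0.0, 2.0, 2.0])            # original column of x2
print(update_P(P0, a_x2, pivot_row=1))

Each such update touches only an m-by-m matrix, where m is the number of functional constraints, regardless of how many variables the problem has.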

Finally, we will use LINDO to generate a sensitivity-analysis output, and we will discuss how to interpret this output. This is given at the next link.

Sensitivity Analysis: Interpretation of a Sample LINDO Output

Assignment No. 5

Chapter 6: 6.7-2, 6.7-6.

For problem 6.7-6, do not reoptimize. In addition, you might find the following two Excel files helpful: Matrix Multiplication, Fundamental Insight.