Purpose of Sensitivity Analysis
All the coefficients in this model (the cⱼ, aᵢⱼ, and bᵢ) represent numeric constants. So, when we formulate and solve an LP problem, we implicitly assume that we can specify the exact values of these coefficients. However, in the real world these coefficients may change from day to day or even minute to minute. For example, the price a company charges for its products can change on a daily, weekly, or monthly basis. Similarly, if a skilled machinist calls in sick, a manufacturer might have less capacity to produce items on a given machine than was originally planned.
Realizing that such uncertainties exist, a manager should consider how sensitive an LP model’s solution is to changes or estimation errors that might occur in: (1) the objective function coefficients (the cⱼ), (2) the constraint coefficients (the aᵢⱼ), and (3) the RHS values of the constraints (the bᵢ). A manager might also ask a number of “what if?” questions about these values. For example: What if the cost of a product increases by 7%? What if a reduction in setup time allows for additional capacity on a given machine? What if a worker’s suggestion results in a product requiring only two hours of labor rather than three? Sensitivity analysis addresses these issues by assessing the sensitivity of the solution to uncertainty or errors in the model coefficients, as well as the solution’s sensitivity to changes in the coefficients that might occur because of human intervention. In particular, sensitivity analysis can be used to determine the following (a brief numerical sketch appears after this list):
• The range of values the objective function coefficients can assume without changing
the optimal solution.
• The impact on the optimal objective function value of increases or decreases in the
availability of various constrained resources.
• The impact on the optimal objective function value of forcing changes in the values of certain decision variables away from their optimal values.
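To make these “what if?” questions concrete, the following sketch solves a small, hypothetical product-mix LP with scipy.optimize.linprog (assuming SciPy 1.7+ with the HiGHS solver is available) and reads off the shadow prices, which estimate how the optimal objective value changes per unit increase in each constraint’s RHS. All of the data are illustrative only.

```python
from scipy.optimize import linprog

# Hypothetical product-mix problem: maximize 350x1 + 300x2.
# linprog minimizes, so the objective coefficients are negated.
c = [-350, -300]

# Resource constraints, A_ub @ x <= b_ub (all data hypothetical).
A_ub = [[1, 1],        # units of a shared component
        [9, 6],        # labor hours
        [12, 16]]      # feet of tubing
b_ub = [200, 1566, 2880]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")

print("Optimal production plan:", res.x)
print("Optimal profit:", -res.fun)

# Shadow prices: the marginals are reported for the minimization form,
# so they are negated here to express the change in maximum profit per
# unit increase in each RHS value.
print("Shadow prices:", [-m for m in res.ineqlin.marginals])
```

Increasing one of the bᵢ values slightly and re-solving shows the objective changing by approximately the corresponding shadow price per unit, which is exactly the kind of question sensitivity analysis answers.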
SAMPLE PROBLEMS ON SENSITIVITY ANALYSIS:

(1) Max P: 2x₁ + 4x₂
    Subject to: -x₁ + 2x₂ ≤ 8
                 x₁ + 2x₂ ≤ 12
                 x₁ + x₂ ≥ 2
                 x₁, x₂ ≥ 0

(2) Min C: 5x₁ + 3x₂ + 4x₃
    Subject to:  x₁ + x₂ + 2x₃ ≥ 2
                5x₁ + 3x₂ + 2x₃ ≥ 1
                 x₁, x₂, x₃ ≥ 0
SOLUTIONS:

(1) Rewriting the ≥ constraint as a ≤ constraint puts the problem in standard form for finding the dual:
    Max P: 2x₁ + 4x₂
    Subject to: -x₁ + 2x₂ ≤ 8
                 x₁ + 2x₂ ≤ 12
                -x₁ - x₂ ≤ -2
                 x₁, x₂ ≥ 0

    Dual:
    Min D: 8y₁ + 12y₂ - 2y₃
    Subject to: -y₁ + y₂ - y₃ ≥ 2
                2y₁ + 2y₂ - y₃ ≥ 4
                 y₁, y₂, y₃ ≥ 0

(2) The minimization problem is already in standard form:
    Min C: 5x₁ + 3x₂ + 4x₃
    Subject to:  x₁ + x₂ + 2x₃ ≥ 2
                5x₁ + 3x₂ + 2x₃ ≥ 1
                 x₁, x₂, x₃ ≥ 0

    Dual:
    Max D: 2y₁ + y₂
    Subject to:  y₁ + 5y₂ ≤ 5
                 y₁ + 3y₂ ≤ 3
                2y₁ + 2y₂ ≤ 4
                 y₁, y₂ ≥ 0
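As a quick numerical check on problem (1) and its dual, the sketch below solves both with scipy.optimize.linprog (again assuming SciPy with the HiGHS solver is available); by strong duality the two optimal objective values should coincide, and the dual solution gives the shadow prices of the primal constraints.

```python
from scipy.optimize import linprog

# Primal of sample problem (1): max 2x1 + 4x2 (objective negated for minimization).
primal = linprog(c=[-2, -4],
                 A_ub=[[-1, 2], [1, 2], [-1, -1]],
                 b_ub=[8, 12, -2],
                 bounds=[(0, None)] * 2, method="highs")

# Dual: min 8y1 + 12y2 - 2y3 subject to A'y >= c, rewritten as -A'y <= -c.
dual = linprog(c=[8, 12, -2],
               A_ub=[[1, -1, 1], [-2, -2, 1]],
               b_ub=[-2, -4],
               bounds=[(0, None)] * 3, method="highs")

print("Primal optimum P* =", -primal.fun)   # maximum of P
print("Dual optimum   D* =", dual.fun)      # minimum of D; equals P* at optimality
```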
GOAL PROGRAMMING:
The optimization techniques presented have always assumed that the constraints in
the model are hard constraints, or constraints that cannot be violated. For example, labor
constraints indicated that the amount of labor used to produce a variety of products could not
exceed some fixed amount (such as 1,566 hours). As another example, monetary constraints indicated that the amount of money invested in a number of projects could not exceed some budgeted amount (such as $850,000).
Hard constraints are appropriate in many situations; however, these constraints might
be too restrictive in others. For example, when you buy a new car you probably have in mind
a maximum purchase price that you do not want to exceed. We might call this your goal.
However, you will probably find a way to spend more than this amount if it is impossible to
acquire the car you really want for your goal amount. So the goal you have in mind is not a
hard constraint that cannot be violated. We might view it more accurately as a soft constraint
representing a target you would like to achieve.
Numerous managerial decision-making problems can be modeled more accurately
using goals rather than hard constraints. Often, such problems do not have one explicit
objective function to be maximized or minimized over a constraint set but, instead, can be
stated as a collection of goals that might also include hard constraints. These types of
problems are known as goal programming (GP) problems.
The technique of linear programming can help a decision maker analyze and solve GP problems. The following example illustrates the concepts and modeling techniques used in GP problems.
In this problem, the fundamental decision facing the hotel owner is how many small, medium, and large conference rooms to include in the conference center expansion. These quantities are represented by X₁, X₂, and X₃, respectively.
Rather than one specific objective, this problem involves a number of goals, which
are stated (in no particular order) as:
Notice that the word “approximately” appears in each goal. This word underscores
the fact that these are soft goals rather than hard constraints. For example, if the first four
goals could be achieved at a cost of $1,001,000, it is very likely that the hotel owner would
not mind paying an extra $1,000 to achieve such a solution. However, we must determine whether we can find a solution that exactly meets all of the goals in this problem and, if not, what trade-offs can be made among the goals to arrive at an acceptable solution. We can formulate an LP model for this GP problem to help us make this determination.
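Because the hotel’s specific goal targets are not reproduced above, the sketch below illustrates the standard GP modeling device on a made-up two-goal example: each soft goal is written as an equality with under- and over-achievement deviation variables (d⁻ and d⁺), and the objective minimizes a weighted sum of the undesirable deviations. All coefficients, targets, and weights here are hypothetical.

```python
from scipy.optimize import linprog

# Decision variables, in order:
#   x1, x2       -> decision quantities
#   d1m, d1p     -> under- / over-achievement of goal 1 (target 12)
#   d2m, d2p     -> under- / over-achievement of goal 2 (target 5)
#
# Goal constraints (soft, written as equalities with deviation variables):
#   2x1 + 3x2 + d1m - d1p = 12
#    x1 +  x2 + d2m - d2p = 5
A_eq = [[2, 3, 1, -1, 0, 0],
        [1, 1, 0, 0, 1, -1]]
b_eq = [12, 5]

# Objective: minimize the undesirable deviations (weights are hypothetical).
# Both deviations from goal 1 and only over-achievement of goal 2 are penalized.
c = [0, 0, 1, 1, 0, 2]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6, method="highs")
x1, x2, d1m, d1p, d2m, d2p = res.x
print(f"x1={x1:.2f}, x2={x2:.2f}")
print(f"goal 1 missed by -{d1m:.2f}/+{d1p:.2f}, goal 2 exceeded by {d2p:.2f}")
```

When the deviations in the optimal solution are all zero, every goal is met exactly; otherwise the nonzero deviations show which goals are traded off and by how much.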
LP MODEL PROBLEMS:
(2) The Springer Dog Food Company makes dry dog food from two ingredients. The two ingredients (A and B) provide different amounts of protein and vitamins. Ingredient A provides 16 units of protein and 4 units of vitamins per pound. Ingredient B provides 8 units of protein and 8 units of vitamins per pound. Ingredients A and B cost $0.50 and $0.20 per pound, respectively. The company wants its dog food to contain at least 12 units of protein and 6 units of vitamins per pound and be economical to produce.
SOLUTIONS:

Information:

                  Generators (X)   Alternators (Y)   Time Available
Hrs of Wiring           2                 3               260
Hrs of Testing          1                 2               140
Profit ($)            250               150

Information:

                  Ingredient A (X)   Ingredient B (Y)   Minimum Required (per lb)
Protein                 16                  8                    12
Vitamins                 4                  8                     6
Cost ($/lb)           0.50               0.20
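From the second table, the dog-food model is: minimize 0.50X + 0.20Y subject to 16X + 8Y ≥ 12, 4X + 8Y ≥ 6, and X, Y ≥ 0. The sketch below solves it numerically, again assuming scipy.optimize.linprog is available.

```python
from scipy.optimize import linprog

# Springer Dog Food: minimize ingredient cost per pound of mix.
c = [0.50, 0.20]                 # cost of ingredients A and B ($/lb)

# ">=" requirements rewritten as "<=" by negating both sides.
A_ub = [[-16, -8],               # protein:  16X + 8Y >= 12
        [-4,  -8]]               # vitamins:  4X + 8Y >= 6
b_ub = [-12, -6]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")

print("Pounds of A and B to use:", res.x)
print("Minimum cost: $%.4f" % res.fun)
```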
MARKOV ANALYSIS:
The major drawback of Markov methods is that Markov diagrams for large systems are generally exceedingly large, complicated, and difficult to construct. However, Markov models may be used to analyse smaller systems with strong dependencies requiring accurate evaluation. Other analysis techniques, such as fault tree analysis, may be used to evaluate large systems using simpler probabilistic calculation techniques. Large systems which exhibit strong component dependencies in isolated and critical parts of the system may be analysed using a combination of Markov analysis and simpler quantitative models.
The state transition diagram identifies all the discrete states of the system and the possible transitions between those states. In a Markov process the transition frequencies between states depend only on the current state probabilities and the constant transition rates between states. In this way the Markov model does not need to know about the history of how the state probabilities have evolved in time in order to calculate future state probabilities.
Although a true Markovian process would only consider constant transition rates, computer
programs such as FaultTree+ and MKV allow time-varying transition rates to be defined.
These time-varying rates must be defined with respect to absolute time or phase time (the
time elapsed since the beginning of the current phase).
As the size of the Markov diagram increases, the task of evaluating the expressions for time-dependent unavailability by hand becomes impractical. Computerised numerical methods may be employed, however, to provide a fast solution to large and complicated Markov systems. In addition, these numerical methods may be extended to allow the modelling of phased behaviour and time-dependent transition rates.
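As a minimal illustration of such a numerical solution (not the FaultTree+ or MKV implementation), the sketch below integrates the state equations of a single repairable component with constant failure rate λ and repair rate μ, giving its time-dependent unavailability; the values of λ and μ are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-state Markov model: state 0 = working, state 1 = failed.
lam, mu = 1e-3, 1e-1             # constant failure and repair rates (per hour)

# Generator matrix Q: dP/dt = P @ Q, where P is the row vector of state probabilities.
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])

def odes(t, P):
    return P @ Q

t_end = 200.0                    # hours
sol = solve_ivp(odes, (0.0, t_end), y0=[1.0, 0.0], dense_output=True)

# Time-dependent unavailability = probability of being in the failed state.
for t in (10, 50, 200):
    print(f"t = {t:5.0f} h  unavailability = {sol.sol(t)[1]:.5f}")

# Steady-state unavailability for comparison: lam / (lam + mu)
print("steady state:", lam / (lam + mu))
```

The same integration scheme extends to larger state spaces, phased behaviour, and time-varying transition rates by updating Q as a function of time.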