
REPUBLIC OF CAMEROON
Peace – Work – Fatherland
MINISTRY OF HIGHER EDUCATION
THE UNIVERSITY OF NGAOUNDERE

ECOLE NATIONALE SUPERIEURE DES SCIENCES AGRO-INDUSTRIELLES
NATIONAL ADVANCED SCHOOL OF AGRO-INDUSTRIAL SCIENCES

TPE in Scientific Production Management, Logistics and Scheduling (Gestion Scientifique de la Production, Logistique et Ordonnancement)

Topic: LINEAR PROGRAMMING

Presented by: AGBOR Elizabeth EGBE 18I001EN


ATEMBEH Gilzelle NKEMATAH 18I005EN
BINUI VERA TAKU 18I009EN
BOUHARI IBRAHIM 18I011EN
KEUMEJIO TSOBENG LORAINA JOYCE 181020EN

Lecturer: Pr MFOPAIN

Academic year 2020-2021


TABLE OF CONTENTS
INTRODUCTION ....................................................................................................................................................... 2

GENESIS OF LINEAR PROGRAMMING ..................................................................................................................... 3

THE PRINCIPLES OF LINEAR PROGRAMMING ......................................................................................................... 5

METHODS AND THEIR IMPLEMENTATION .............................................................................................................. 9

1-The graphical method ................................................................................................................................... 10

1.1 Enumerative method or vertices of points ............................................................................................ 10

1.2 Methods of parallel lines ....................................................................................................................... 10

1.3 Application case by graphic resolution .................................................................................................. 10

2-Simplex method ............................................................................................................................................ 13

2.1 Economic interpretation ........................................................................................................................ 13

2.2 Marginal cost .......................................................................................................................................... 13

2.3. Application of the simplex method ...................................................................................................... 14

Formulation of the problem in a linear programming ................................................................................ 14

3-The Dual method ........................................................................................................................................... 21

3.1 Practical rules for switching from primal to dual x ............................................................................... 22

3.2 Case of application of the DUAL method............................................................................................... 23

3.3 APPLICATION EXAMPLE ...................................................................................................... 24

4-The Two phase simplex method ................................................................................................................... 26

ADVANTAGES AND LIMITATIONS OF LINEAR PROGRAMMING ............................................................................ 36

CONCLUSION ......................................................................................................................................................... 37

REFERENCE ............................................................................................................................................................ 40
FIGURE LIST
Figure 1: graphical method ...................................................................................................... 11
INTRODUCTION
Linear programming is not a programming language like C++, Java, or Visual Basic.
According to William J. BAUMOL, linear programming is a mathematical technique of
optimization (maximization or minimization) of a function with a linear objective under
constraints having the form of linear inequalities. It aims to select, from among different
actions, the one that will most likely achieve the target.

Robert DORFMAN and Paul SAMUELSON add that linear programming is a method of
determining the best course of action to achieve given objectives in a situation where
resources are limited. It is therefore a method of solving the economic problem, whether in the
context of a global economy, of the public sector, or of a particular enterprise.

Nowadays, mathematical optimization methods are widely used in the fields of industrial
engineering and management. These methods make it possible to take into account
constraints given in the form of inequalities, unlike classical approaches such as Lagrange
multipliers. In general, we call mathematical programming the search for the optimum of a
function of several variables linked by constraints in the form of equalities or inequalities.
Many problems boil down to a mathematical programming model. Here, we are interested in
the case where the function to be optimized and the constraints are linear; we then have to
deal with a linear programming problem. The applications of linear programming are very
varied, as with many methods of operations research. It is often treated as a standard
computational technique, and understanding the essentials of linear programming will be the
focus of this work.

GENESIS OF LINEAR PROGRAMMING

The first mathematicians who dealt with problems that were not yet called "linear
programs" (LP) at the time were LAPLACE (1749-1827) and Baron FOURIER.
The theory of linear programming was first developed by Leonid Kantorovich in 1939, during
World War II, to plan army expenditures and returns with the aim of reducing costs and
increasing the losses inflicted on the enemy. The development of the theory and tools of linear
programming really took off from the 1940s, even if the underlying mathematical structures
as well as some algorithmic elements had seen the light of day during the period 1870-1930,
with the work of J.B. Fourier (foundations of linear programming and of the simplex method),
T. Motzkin (theory of elimination, duality), Farkas (duality), Minkowski (duality),
Caratheodory (polyhedra and polytopes) and de la Vallée Poussin (Motzkin's elimination
method). The foundation of linear programming as a field of study is mainly credited to G.B.
Dantzig, author of the simplex algorithm in 1947 in the context of the SCOOP project
(Scientific Computation of Optimal Programs) and of the military-industrial complex installed
within the US Air Force at the Pentagon. The algorithm had to meet the planning needs of
transport during military operations, modeled as linear programming problems. Note that
L.V. Kantorovich, a Soviet mathematician and economist, proposed linear programming
models for industrial applications in a planned economy, as well as an experimental method
of resolution by dual variables, unfortunately not supported by a formal theory. This earned
him a share of the Nobel Prize in Economics with T. Koopmans in 1975. The importance of
optimization and the need for a simple tool to model decision problems, whether economic,
military or other, made linear programming one of the most active research fields in the
middle of the previous century. The first works, in 1947, were those of George B. Dantzig and
his associates in the United States Air Force.

Linear programming problems are generally linked to problems of allocating limited
resources in the best possible way, so as to maximize profit or minimize cost. The term best
refers to the possibility of having a set of possible decisions that achieve the same profit. These
decisions are usually the results of a mathematical problem. As powerful and widely used as
the simplex algorithm is, the fact that it belongs to the class of non-polynomial algorithms
prompted researchers to propose other algorithms whose polynomiality could be proven. The
first polynomial algorithm for linear programming is derived from the general ellipsoid
method defined by A. Nemirovski, D. B. Yudin and N. Shor in the 1970s. L. Khachiyan then
built an ellipsoid algorithm adapted to linear programming in 1979, whose merit lies more in
its contribution to complexity theory, and thus in opening the way towards polynomial
methods, than in its practical efficiency, which is judged poor. A further decisive advance was
made in 1984 by N. Karmarkar, an IBM researcher who proposed, for the first time, an
interior-point method whose worst-case polynomial complexity he demonstrated. The
majority of current software solutions are built around the simplex algorithm.

THE PRINCIPLES OF LINEAR PROGRAMMING

The two general principles of linear programming are:

• Relaxation. It is often useful to "relax" a minimization problem, looking for a
measure of probability over an appropriate phase space.
• Duality. Once the relaxed problem has been found, we can sometimes apply the
theory of duality of linear programming. Associated with every linear program is
another linear program called the dual. The given linear program is called the primal;
if the primal is a maximization problem, then the dual will be a minimization problem,
and vice versa.

A linear program consists of a set of variables, a linear objective function indicating the
contribution of each variable to the desired outcome, and a set of linear constraints
describing the limits on the values of the variables. The “answer” to a linear program is a set
of values for the problem variables that results in the best — largest or smallest — value of
the objective function and yet is consistent with all the constraints.

Linear programming as a model admits assumptions (conditions) that the decision maker
must validate before being able to use them to model his problem. These assumptions are:
1. The decision variables of the problem are positive or zero.
2. The criterion for selecting the best decision is described by a linear function of these
variables, i.e., the function cannot contain, for example, a cross product of two of these
variables. The function which represents the selection criterion is called the objective
function (economic function).
3. Restrictions on decision variables (e.g., resource limitations) can be expressed by a set of
linear equations or inequalities. These form the set of constraints.
4. The parameters of the problem outside the decision variables have a value known with
certainty.

To use the linear programming model, the following steps must be followed:
• Define the decision variables: the group of variables that characterize the situation to be
modeled; these are the elements on which we will act:

X1, X2, X3, …, Xn.

• Specify the objective function: a function of the decision variables that represents what we
want to optimize:

Z = c1x1 + c2x2 + c3x3 + … + cnxn.

• Specify the constraints of the problem: these are the equations and inequalities in the
decision variables that limit the admissible decisions:
a11x1 + a12x2 + a13x3 + … + a1nxn (≤, =, ≥) b1
a21x1 + a22x2 + a23x3 + … + a2nxn (≤, =, ≥) b2
…
am1x1 + am2x2 + am3x3 + … + amnxn (≤, =, ≥) bm
Non-negativity constraints:
Xj ≥ 0; j = 1, 2, 3, …, n
with Xj = unknown decision variables and aij, bi, cj = linear program parameters.
• Specify the parameters of the model: these are the constants associated with the
constraints and with the objective function.

When an organization or business uses operations research to solve a problem through linear
programming, the seven steps constituting the procedure must be followed.
1. Formulation of the problem:
Formulation is the process of translating a real-world problem into a linear program.
Once a problem has been formulated as a linear program, a computer program can be
used to solve the problem.
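
As an illustration, the short sketch below formulates and solves a small linear program with Python and SciPy's linprog routine. The choice of SciPy and the numerical data are assumptions made purely for this example; the report does not prescribe any particular software.

```python
# A minimal sketch of solving a formulated linear program with SciPy
# (assumed tooling). The numbers below are purely illustrative.
import numpy as np
from scipy.optimize import linprog

# Maximize Z = 3 x1 + 5 x2  subject to  x1 <= 4,  2 x2 <= 12,  3 x1 + 2 x2 <= 18,  x >= 0
c = np.array([3.0, 5.0])           # objective coefficients c_i
A = np.array([[1.0, 0.0],          # constraint coefficients a_ij
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])    # right-hand sides b_j

# linprog minimizes, so we negate c to maximize; default bounds give x_i >= 0.
res = linprog(-c, A_ub=A, b_ub=b, method="highs")
print("optimal x:", res.x)         # expected (2, 6)
print("optimal Z:", -res.fun)      # expected 36
```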

The basic steps in formulation are:

 Identify the decision variables


The variables in a linear program are a set of quantities that need to be determined in order to
solve the problem; i.e., the problem is solved when the best values of the variables have been
identified. The variables are sometimes called decision variables because the problem is to
decide what value each variable should take. Typically, the variables represent the amount of
a resource to use or the level of some activity.

 Formulate the objective function;

The objective of a linear programming problem will be to maximize or to minimize some
numerical value. This value may be the expected net present value of a project or a forest
property; or it may be the cost of a project; it could also be the amount of wood produced, the
expected number of visitor-days at a park, the number of endangered species that will be
saved, or the amount of a particular type of habitat to be maintained.

The objective function indicates how each variable contributes to the value to be optimized in
solving the problem. It takes the following general form:

min or max  Z = ∑ (i = 1 to n) ci Xi

where ci = the objective function coefficient corresponding to the ith variable, and
Xi = the ith decision variable.

The coefficients of the objective function indicate the contribution to the value of
the objective function of one unit of the corresponding variable.

 Identify and formulate the constraints.

The constraints define the possible values that the variables of a linear programming problem
may take. They typically represent resource constraints, or the minimum or maximum level
of some activity or condition. They take the following general form:

∑ (i = 1 to n) aj,i Xi ≤ bj,   for j = 1, 2, …, m

where Xi = the ith decision variable, aj,i = the coefficient on Xi in constraint j,
and bj = the right-hand-side coefficient of constraint j.

 A trivial step, but one you should not forget, is writing out the non-negativity
constraints.

The non-negativity constraints: for technical reasons beyond the scope of this work, the
variables of linear programs must always take non-negative values (i.e., they must be greater
than or equal to zero). We first define the problem of the company or organization; defining
the problem means specifying the objective of the organization or business.

2. Observe the system: Collect data to estimate the parameters that affect the situation of the
organization or business. These estimates will be used to develop (step 3) and evaluate
(step 4) a mathematical model of the organization's problem.
3. Formulation of the mathematical model of the problem: At this stage, we develop the
mathematical model of the problem.
Usually there are three basic steps to follow in order to build the model of a linear program:
 Identify the variables of the problem with unknown values (decision variables) and
represent them in symbolic form (e.g., X1, X2).
 Identify the restrictions (constraints) of the problem and express them by a system
of linear equations.
 Identify the objective or the selection criterion and represent it in a linear form as a
function of the decision variables. Specify whether the selection criterion is to be
maximized or minimized.

4. Checking the model and using the model for prediction: we now try to determine if the
mathematical model developed in step 3 is an exact representation of reality.
5. Select the appropriate alternative given a model and a set of alternatives: here we
choose the alternative that suits the purpose of the organization or company.
6. Present the results and conclusion of the study to the organization: In this step the
model and recommendation of step 5 are presented to an individual or a group of decision-
makers.
7. Implement and evaluate the recommendations if the organization or company has
accepted the study. The system must be constantly monitored to ensure that the
recommendations are followed and help it achieve its goal.

METHODS AND THEIR IMPLEMENTATION

Linear programming - Approach

 Analyze and understand the situation

 Identify the problem with available data

 Building a model

 Solve the model

 Place the results in the context of the initial problem

 Perform decision feedback-loop analyses

FORMALIZATION OF A LINEAR PROGRAM

It is a question here of carrying out a mathematical transcription of an economic, technical
and/or financial problem. To achieve this, one must:

1-Choose the decision variables in a suitable way; these variables designate the quantities
on which it is necessary to act, during a supply or a production process, in order to achieve
an objective which one has set. These variables are also called action variables or activity
variables.

2-Write the main relationships between the variables; these relationships must conform to
the letter and the spirit of the problem. They generally reflect the main constraints and the
economic objective to be achieved.

3-Finalize the transcription by presenting, on the one hand, the economic function to be
optimized and, on the other hand, the system of constraints generated by the problem.

A linear program is therefore represented by the economic function provided with its number
and then by the system of constraints.

1-The graphical method
The resolution by the graphical method is carried out in two steps:

1st step

It consists in solving the system of constraints and in determining the set of admissible
solutions of the problem. In the case of a standard maximization problem, this set is a closed
and convex polygonal domain.
2nd step

Among all the solutions which satisfy the system of constraints, it is necessary to find the one
which optimizes the economic function. Two methods make it possible to find this solution:
one can either evaluate all the vertices of the zone of admissible solutions (the enumerative
method), or have recourse to the method known as parallel lines.

1.1 Enumerative method or vertices of points


The zone of admissible solutions, also called the zone of acceptability, is delimited by line
segments or half-lines. The points of intersection of these half-lines or segments, called the
vertices of the domain, have coordinates which can be obtained by simple reading but
preferably by calculation; for this, it suffices to solve the system formed by the equations of
the straight lines which meet at each vertex.

For each vertex, the value of the economic function is calculated, and the pair of values which
gives Z its optimum value (maximum or minimum) is retained as the optimal program.

1.2 Methods of parallel lines


An arbitrary value is given to Z, which makes it possible to represent the economic
function in the same coordinate system used for the system of constraints.
Starting from this line, we trace others parallel to it. We must find the line which
passes through the vertex furthest from the origin (in the case of maximization) or closest
to the origin (in the case of minimization).

1.3 Application case by graphic resolution


Production problem

A workshop manufactures two models, X and Y. Product X cannot be sold in more than 400
units, and product Y cannot be sold in more than 600 units. Manufacturing one X takes 3 hours
of labor and one Y takes 2 hours, knowing that the company has no more than 1,800 hours of
labor available.

The margin on variable cost made on the sale of an X is €30, and on the sale of a Y it is €50.
What is the production mix that maximizes the margin on variable cost?
The definition of the linear program is as follows:

technical constraints: the production of an X consumes 3 hours of labor, the production of a Y
consumes 2 hours. The capacity of this workshop is limited to 1,800 hours, hence the
following inequality: 3x + 2y ≤ 1,800,

Market constraints: it is not possible to sell for product X more than 400 units and for product
Y more than 600 units, hence the following inequalities: x ≤ 400 and y ≤ 600,

logical constraints: the quantities produced cannot be negative, hence the following
inequalities: x ≥ 0 and y ≥ 0

economic function to be maximized: MAX B = 30x + 50y, that's the goal to be achieved.

The graphic representation is as follows:

Figure 1: graphical method

(1): 3x + 2y ≤ 1 800 (2): x ≤ 400 (3): y ≤ 600 ∆: y = - 30/50 x

The feasible region (shown in yellow in Figure 1) is delimited by the lines passing through the
points (0; 0), (400; 0), (400; 300), (200; 600) and (0; 600).

The MAX function is represented by the green line (∆), which allows one to find, by parallel
translation, the furthest point of the feasible region. That point is M (200; 600).

So to reach the optimum, the quantities to be produced are: x = 200, y = 600. The maximum
margin will be (30 x 200) + (50 x 600) = 36,000 €

We observe that the commercial constraint of product X is not saturated, we could have sold
200 more units; the commercial constraint of product Y is saturated, the market was limited
to 600 units. Likewise, the technical constraint concerning productive capacity is saturated (3
x 200) + (2 x 600) = 1 800. The logical constraints are respected, namely x ≥ 0 and y ≥ 0
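
For readers who wish to verify this result numerically, the short sketch below applies the enumerative (vertex) method of section 1.1 to the vertices listed above; Python is assumed as a tool here and is not part of the original resolution.

```python
# Evaluate the economic function B = 30x + 50y at each vertex of the feasible region
# and keep the vertex that maximizes it (enumerative method).
vertices = [(0, 0), (400, 0), (400, 300), (200, 600), (0, 600)]

def margin(x, y):
    return 30 * x + 50 * y

best = max(vertices, key=lambda v: margin(*v))
print("optimum at", best, "with margin", margin(*best))   # (200, 600) and 36000
```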

2-Simplex method

In this application we will study notions relating to linear programs such as the dual program,
marginal costs as well as techniques for validating the solution of a linear program, i.e.
sensitivity analysis.

2.1 Economic interpretation


The key elements of a standard linear program are:

The so-called economic objective function: this function can represent a cost, a profit, etc.

The constraints are made up of the coefficients aij of the matrix A, called the
technological matrix, and of the constants bi which form the right-hand-side vector.
The right-hand side can represent the availability of resources, demand levels, etc.

The deviation (slack) variables can represent, for example in the farmer's problem, the surplus
of each of the resources: land, water, working hours, irrigation office. They are said to be
surplus variables.

When a deviation variable is zero, we say that the corresponding constraint is saturated. In the
farmer's problem, the land and labor constraints are saturated. They are also said to be
restrictive, because a variation of the right-hand side, for example, generates a change in the
value of the optimal solution.

Any constraint unsaturated at the optimum is not restrictive for the problem, i.e. it has no
influence on the solution considered.

2.2 Marginal cost


By definition, the marginal cost of a good is the minimal increase in expenditure, compared
with the optimal solution, which would result from the use of an additional unit of this good,
when the problem posed consists of producing goods at the lowest cost.

If the problem is to transform goods in order to sell the production at the best profit, the
maximum increase in income which results from the possibility of having an additional unit
of one of the goods is the marginal value of this good. Very often, the term marginal cost is
also used in this case.

If the deviation variable is not zero in the optimal solution, the corresponding good is already
in surplus. Therefore, having an additional unit of this good will not affect income. We then
say that this good has a marginal value of zero or, by extension, that the deviation variable
associated with this good has a marginal value of zero.

On the other hand, if a deviation variable is zero in the optimal solution, the corresponding
good is used completely. Consequently, a variation in its availability will generally have an
influence on income. This is why this deviation variable, zero in the optimal solution, has a
non-zero marginal value, and this marginal value specifies the variation of the economic
function resulting from the use of an additional unit of the associated good.

2.3. Application of the simplex method


A farmer wants to allocate 150 hectares of irrigable area between growing celery and garlic.
He has 480 hours of labor and 440 m3 of water. One hectare of celery requires 1 hour of labor,
4 m3 of water and gives a net profit of 100 dinars. One hectare of garlic requires 4 hours of
labor, 2 m3 of water and gives a net profit of 200 dinars.

The irrigated perimeter office wants to protect the price of celery and limits its cultivation to
90 hectares. What is the best allocation of the farmer's resources?

Formulation of the problem in a linear programming


Step1: Identification of decision variables. The two activities that the farmer must determine
are the areas to be allocated for the cultivation of celery and garlic:

• X1: the area allocated to growing celery;

• X2: the area allocated to growing garlic.

It is verified that decision variables x1 and x2 are positive: x1≥0, x2≥0.

Step2: Identification of constraints. In this problem the constraints represent the availability
of the factors of production:
• Land: the farmer has 150 hectares of land, so the constraint linked to the
limitation of the land area is x1 + x2 ≤ 150

• Water: the cultivation of one hectare of celery requires 4m3 of water and that
of a hectare of garlic requires 2m3 but the farmer only has 440m3. The constraint
which expresses the limitations of water resources is 4x1 + 2x2 ≤ 440.

• Labor: the 480 hours of labor will be shared (not necessarily in full)
between growing celery and garlic. Knowing that a hectare of celery requires one
hour of labor and a hectare of garlic requires 4 hours of labor, the constraint
representing the limitation of human resources is x1 + 4x2 ≤ 480.

• Irrigated area office limitations: these limitations require the farmer to cultivate
no more than 90 hectares of celery. The constraint which represents this
restriction is x1 ≤ 90.

Step 3: Identification of the objective function. The objective function is to maximize the
profit brought by the cultivation of celery and garlic. The respective contributions 100 and
200 of the two decision variables x1 and x2 are proportional to their value. The objective
function is therefore Z = 100x1 + 200x2.

The linear program that models the farming problem is:

Max 100x1 + 200x2

X1 + X2 ≤ 150

4X1 + 2X2 ≤ 440

X1+ 4X2 ≤ 480

X1 ≤90

x1≥0, x2≥0

b. Standard form

The standardization consists in introducing additional variables (one for each constraint) so as
to rewrite the inequalities (≤) as equalities. Each of these variables represents the quantity of
unused resource. They are called deviation (slack) variables. The standard form is therefore
written:
Max (Z = 100x1 + 200x2)

X1 + x2 + e1 = 150

4x1+ 2x2 + e2 = 440

x1 + 4x2 + e3 = 480

x1+ e4 = 90

x1, x2, e1, e2, e3, e4 ≥0

Finally, an algebraic procedure for solving linear programs must be able to choose among the
feasible solutions those which maximize the objective function.

For the simplex method an initial basic feasible solution is required. Such a solution can be
found by setting all the decision variables to zero; this corresponds in our example to the
origin O.

From this point the simplex method will successively generate basic feasible solutions of our
system of equations, ensuring that the value of the objective function increases, until it locates
the optimal solution of the problem, which is an extreme point of the feasible region and
therefore a basic feasible solution.

Table 1

Base \ Variables    X1    X2    e1    e2    e3    e4  |    R
e1                   1     1     1     0     0     0  |   150
e2                   4     2     0     1     0     0  |   440
e3                   1     4     0     0     1     0  |   480
e4                   1     0     0     0     0     1  |    90
Z                  100   200     0     0     0     0  |     0

(Non-base variables: X1 and X2.)

Comment

 We find by column the coefficients of each of the variables of the standard form;

 With the exception of the sign constraints, the constraint system is made up of four
equations. We then find ourselves in a four-dimensional space generated by four
vectors, for example (i, j, k, l), having as associated matrix:

1 0 0 0

0 1 0 0

0 0 1 0

0 0 0 1

 In the table we find this structure at the level of the variables e1, e2, e3 and e4. These
variables are therefore basic starting variables.

 As soon as a variable is not in the base, it is said to be outside the base; this is the case
for activity variables X1 and X2 in this first table, with respect to the base (e1, e2, e3,
e4) they are equal to zero.

 At the level of the column noted R, one then reads the values associated respectively
with the basic variables, that is to say e1 = 150, e2 = 440, e3 = 480 and e4 = 90.

 Taking these values into account, the economic function can be calculated; here it is 0.

 The economic function acquires its maximum value if all the values of the line noted Z
are negative or zero; otherwise the value of the economic function can be improved.

3-Improvement of the economic function by change of base.

One of the variables of the previous base must be replaced by a non-base variable; we choose
as the entering variable the one with the largest positive coefficient in the line noted Z. Here
the variable X2 meets the selection criterion (it is the non-base variable with the largest
positive coefficient in line Z, +200).

Once the entering variable has been selected, we can then find the variable that must leave
the base; this is the outgoing variable. For this, the ratios between the elements of column R
and the corresponding elements of the column of the entering variable are established,
namely 150/1, 440/2, 480/4 and 90/0 (this last ratio is undefined and is ignored). The smallest
positive ratio determines the outgoing variable. Here it is the variable e3 which is the
outgoing variable.

NOTES

 The line of the outgoing variable is called a pivot line denoted LP;

 The column of the entering variable is called the pivot column noted CP;

 We call pivot number, the number located at the intersection of the pivot line and the
pivot column. Here 4.

The next base will then be composed of the variables (e1, e2, X2 and e4) where e3 has been
replaced by X2

4-Moving from one table to the next.

The procedure is repeated as long as the optimum of the economic function has not been
reached. Among other procedures, we will retain the one which performs a linear combination
of two lines:

 Divide the pivot line LP by the pivot number; we obtain a new line L'P in which the
entry corresponding to the pivot number is equal to 1.

 Replace each line Li, other than the pivot line, as follows.

Let aip be the number located at the intersection of row Li and column Cp.

We replace the line Li by L'i given by the formula: L'i = Li - aip x L'P
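
The sketch below (Python with NumPy, an assumed tool) applies exactly this rule to Table 1, with the pivot identified above (column of X2, row of e3, pivot number 4), and reproduces Table 2.

```python
# One change of base (pivot) implementing L'_i = L_i - a_ip * L'_p on Table 1.
# Columns: X1, X2, e1, e2, e3, e4, R.
import numpy as np

T = np.array([
    [1.0,   1.0,   1.0, 0.0, 0.0, 0.0,  150.0],   # e1
    [4.0,   2.0,   0.0, 1.0, 0.0, 0.0,  440.0],   # e2
    [1.0,   4.0,   0.0, 0.0, 1.0, 0.0,  480.0],   # e3 (pivot row)
    [1.0,   0.0,   0.0, 0.0, 0.0, 1.0,   90.0],   # e4
    [100.0, 200.0, 0.0, 0.0, 0.0, 0.0,    0.0],   # Z
])
p_row, p_col = 2, 1                     # pivot row (e3), pivot column (X2); pivot number = 4

T[p_row] = T[p_row] / T[p_row, p_col]   # L'_p: divide the pivot line by the pivot number
for i in range(T.shape[0]):
    if i != p_row:
        T[i] = T[i] - T[i, p_col] * T[p_row]   # L'_i = L_i - a_ip * L'_p
print(T)                                # reproduces Table 2 (X2 has entered the base)
```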

Application to Table 1

Table 2

Base \ Variables    X1    X2    e1    e2    e3    e4  |     R
e1                 3/4     0     1     0  -1/4     0  |    30
e2                 7/2     0     0     1  -1/2     0  |   200
X2                 1/4     1     0     0   1/4     0  |   120
e4                   1     0     0     0     0     1  |    90
Z                   50     0     0     0   -50     0  | -24000

(Non-base variables: X1 and e3.)

The pivot row of Table 1 is divided by the pivot number 4; we obtain a new line L'2
associated with the variable X2 and composed of the values:

¼ 1 0 0 1/4 0 120

In the pivot column of Table 1 are the values 1, 2, 4, 0 and 200. The new line L'2 is therefore
multiplied respectively by -1, -2, -4, 0 and -200. The product by -1, added to L1, gives L'1:

-1·L'2 :          -1/4   -1    0    0   -1/4    0   -120
L1 :                 1    1    1    0      0    0    150
L'1 = L1 - L'2 :   3/4    0    1    0   -1/4    0     30

The product of the new line L'2 by -200, added to the line noted Z, gives the new line
associated with Z:

-200·L'2 :         -50  -200   0    0    -50    0   -24000
Z :                100   200   0    0      0    0        0
New Z line :        50     0   0    0    -50    0   -24000

Once the table is completed, it must be analyzed in order to know if the optimum has been
reached and what are, at each stage, the values of the variables and that of the economic
function.

Following this second table, we can indicate that:

 X1 and e3 are non-base variables and are therefore both zero.

 X2, e1, e2 and e4 are in the base and are respectively equal to 120, 30, 200 and 90.

 The economic function Z is equal to 24000 (the opposite of the value -24000 read in
column R).

 Since line Z still contains strictly positive values, the maximum of the function has
not yet been reached. We have to repeat the improvement process on Z.

 The largest positive value of the Z line is 50, so X1 is the variable which must
enter the base. The ratios between the values in column R and the corresponding
coefficients of the entering variable give:

 30 / (3/4) = 40; 200 / (7/2) ≈ 57.14; 120 / (1/4) = 480; 90 / 1 = 90. The smallest
strictly positive ratio is 40.

 The outgoing variable is therefore e1; the next base is then composed of X1, e2, X2
and e4.

 The row of variable e1 is the pivot row and the column of X1 is the pivot column. The
pivot number is 3/4.

Table 3

Base \ Variables    X1    X2      e1    e2      e3    e4  |      R
X1                   1     0     4/3     0    -1/3     0  |     40
e2                   0     0   -14/3     1     2/3     0  |     60
X2                   0     1    -1/3     0     1/3     0  |    110
e4                   0     0    -4/3     0     1/3     1  |     50
Z                    0     0  -200/3     0  -100/3     0  | -26000

(Non-base variables: e1 and e3.)

Comment

It can be seen that all the values in line Z are negative or zero; the optimum is therefore
reached. The variables in the base are X1, e2, X2 and e4; according to column R, the
respective values associated with these variables are 40, 60, 110 and 50. The variables e1 and
e3 are out of the base and are therefore both zero.

The net effect associated with the non-base variables e1 and e3 is negative. This means that
bringing either of these two variables into the base would generate a decrease in the value of
the objective function. So there is no other basic feasible solution that can lead to a better
profit. Consequently, this last solution is said to be optimal and this last simplex table is called
an optimal table.

At the optimum, the economic function has the value 26000 (the opposite of -26000 read in
column R).

This optimum is reached when X1 = 40 hectares are allocated to growing celery and X2 = 110
hectares are allocated to growing garlic, for an optimal profit of 26,000 dinars.

Taking into account the values obtained by the different variables, the land and labor
constraints are saturated at the optimum (their deviation variables e1 and e3 are zero), whereas
the water constraint and the 90-hectare restriction are not saturated (e2 = 60 and e4 = 50).
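
As a cross-check (with SciPy assumed as tooling, outside the original resolution), the sketch below solves the farmer's linear program directly and recomputes the deviation variables, confirming the areas, the profit of 26,000 dinars and which constraints are saturated.

```python
# Solve the farmer's problem: max 100 x1 + 200 x2 subject to the four constraints.
import numpy as np
from scipy.optimize import linprog

c = np.array([100.0, 200.0])                 # profit per hectare of celery, garlic
A = np.array([[1.0, 1.0],                    # land
              [4.0, 2.0],                    # water
              [1.0, 4.0],                    # labor
              [1.0, 0.0]])                   # celery area restriction
b = np.array([150.0, 440.0, 480.0, 90.0])

res = linprog(-c, A_ub=A, b_ub=b, method="highs")   # linprog minimizes, hence -c
x = res.x
print("x1, x2 =", x)                                # expected (40, 110)
print("Z =", c @ x)                                 # expected 26000
print("deviation variables e1..e4 =", b - A @ x)    # expected (0, 60, 0, 50)
```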

3-The Dual method

To each maximization (or minimization) problem corresponds a minimization (or
maximization) problem involving the same data; in addition, there is a close link between
their optimal solutions.

3.1 Practical rules for switching from primal to dual
Let (P) be a linear program and (D) its dual;

To go from (P) to (D), we must observe the following five rules:

-R1: The direction of the optimization is reversed, i.e. a maximization problem in (P) becomes
a minimization problem in (D) and vice versa. The decision variables remain positive or zero.

-R2: The row vector of the coefficients of the economic function of (D) is the transpose of the
column vector of the right-hand-side constants of the principal constraints of (P).

-R3: The column vector of the right-hand-side constants of the principal constraints of (D) is
the transpose of the row vector of the coefficients of the economic function of (P).

-R4: The principal constraint matrix of (D) is the transpose of the principal constraint matrix
of (P). In these constraints the direction of the inequalities is reversed with respect to the
primal.

-R5: The decision and deviation variables of the two linear programs must be denoted by
different letters in order to avoid any confusion.
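
The sketch below (Python/NumPy assumed) applies rules R1 to R4 mechanically to the farmer's primal of section 2.3, simply by transposing the data; it only builds and prints the dual, without solving it.

```python
# Building the dual (D) of a primal (P) of the form  max c·x, A x <= b, x >= 0.
import numpy as np

# Primal (P): the farmer's problem.
c = np.array([100.0, 200.0])
A = np.array([[1.0, 1.0],
              [4.0, 2.0],
              [1.0, 4.0],
              [1.0, 0.0]])
b = np.array([150.0, 440.0, 480.0, 90.0])

# Dual (D): min b·y subject to A^T y >= c, y >= 0 (R1: the optimization direction is reversed).
dual_objective = b      # R2: objective coefficients of (D) = right-hand sides of (P)
dual_rhs = c            # R3: right-hand sides of (D) = objective coefficients of (P)
dual_matrix = A.T       # R4: constraint matrix of (D) = transpose of that of (P)

print("min", dual_objective, "· y  subject to")
print(dual_matrix, ">=", dual_rhs, ", y >= 0")
```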

3.2 Case of application of the DUAL method
Primal (P)                               Dual (D)

Max (Z = 100x1 + 200x2)                  Min (∆ = 150y1 + 440y2 + 480y3 + 90y4)

x1 + x2 ≤ 150                            y1 + 4y2 + y3 + y4 ≥ 100
4x1 + 2x2 ≤ 440                          y1 + 2y2 + 4y3 ≥ 200
x1 + 4x2 ≤ 480                           y1 ≥ 0, y2 ≥ 0, y3 ≥ 0, y4 ≥ 0
x1 ≤ 90
x1 ≥ 0, x2 ≥ 0

Main results of duality

We will admit the following important theorem:

Let (P) be a linear program, (D) its dual

Let Z be the economic function in (P) and ∆ its counterpart in (D).

 The linear program (P) admits an optimal solution if and only if (D) admits one and in this
case Zopt = ∆opt

 If a decision variable in (D) has a non-zero optimal value, then the gap variable associated
with it in (P) is necessarily zero at the optimum

 If a deviation variable of (D) has a non-zero optimal value, then the decision variable
associated with it in (P) is necessarily zero at the optimum.

 The optimal values of the decision variables in (P) are the marginal values of the
deviation variables associated with them in (D).

 The optimal values of the deviation variables of (P) are indicators of the optimum of the
decision variables associated with them in (D) on the row of ∆.

 The marginal values of the deviation variables of (P) are the optimal values of the
decision variables associated with them in (D).
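
These results can be illustrated numerically on the farmer's problem: the sketch below (SciPy assumed as tooling) solves the primal and its dual separately and checks that Zopt = ∆opt = 26,000, the optimal dual variables being the marginal values of the primal resources.

```python
# Check Zopt = Dopt on the farmer's problem by solving (P) and (D) independently.
import numpy as np
from scipy.optimize import linprog

c = np.array([100.0, 200.0])
A = np.array([[1.0, 1.0], [4.0, 2.0], [1.0, 4.0], [1.0, 0.0]])
b = np.array([150.0, 440.0, 480.0, 90.0])

primal = linprog(-c, A_ub=A, b_ub=b, method="highs")     # max c·x, A x <= b, x >= 0
dual = linprog(b, A_ub=-A.T, b_ub=-c, method="highs")    # min b·y, A^T y >= c, y >= 0

print("Zopt =", -primal.fun)   # expected 26000
print("Dopt =", dual.fun)      # expected 26000
print("y =", dual.x)           # expected about (66.67, 0, 33.33, 0): the marginal values of
                               # land and labor are non-zero, matching the saturated constraints
```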

3.3 APPLICATION EXAMPLE

On Mrs. BARNEY's farm, she ensures that her hens absorb a minimum of 216 units of calcium
and 72 units of vitamins every day. One measure of corn provides 2 units of calcium and 5 units
of vitamins. One measure of bone-based food provides 4 units of calcium and 1 unit of vitamins.
One measure of millet provides 2 units of calcium and 1 unit of vitamins.

How should Mrs. BARNEY mix these three foods so as to meet the daily ingestion requirements
of her hens at the lowest cost, knowing that one measure of these foods costs respectively
4000, 2000 and 6000 francs?

SOLUTION

The daily calcium and vitamin requirements are obtained from a mixture of three products:
corn, bone-based food and millet. Choice of variables: X1 for corn, X2 for bone-based food
and X3 for millet.

Min (Z = 4000X1 + 2000X2 + 6000X3)

2X1 + 4X2 + 2X3 ≥ 216 (calcium requirement)

5X1 + X2 + X3 ≥ 72 (vitamin requirement)

X1, X2, X3 ≥ 0

Switching to the standard form by adding deviation variables would not provide a basic
feasible solution from the start; artificial variables would have to be added. To avoid the
complexity of this procedure, we solve the dual of the primal program:

Dual                                     Standard form
Max (∆ = 216Y1 + 72Y2)                   Max (∆ = 216Y1 + 72Y2)
2Y1 + 5Y2 ≤ 4000                         2Y1 + 5Y2 + e1 = 4000
4Y1 + Y2 ≤ 2000                          4Y1 + Y2 + e2 = 2000
2Y1 + Y2 ≤ 6000                          2Y1 + Y2 + e3 = 6000
Y1, Y2 ≥ 0                               Y1, Y2, e1, e2, e3 ≥ 0

Table 1

Base \ Variables    Y1    Y2    e1    e2    e3  |     R  | Ratio
e1                   2     5     1     0     0  |  4000  |  2000
e2                   4     1     0     1     0  |  2000  |   500
e3                   2     1     0     0     1  |  6000  |  3000
Z                  216    72     0     0     0  |     0  |

We solve here with the simplex method by looking for the pivot row and the pivot column.

Table 2

Base \ Variables    Y1    Y2    e1    e2    e3  |       R
e1                   0   9/2     1  -1/2     0  |    3000
Y1                   1   1/4     0   1/4     0  |     500
e3                   0   1/2     0  -1/2     1  |    5000
Z                    0    18     0   -54     0  | -108000

Since the values of line Z are not all negative or zero, we repeat the same procedure to obtain
Table 3 below.
Table 3

Base \ Variables    Y1    Y2     e1     e2    e3  |        R
Y2                   0     1    2/9   -1/9     0  |   2000/3
Y1                   1     0  -1/18   5/18     0  |   1000/3
e3                   0     0   -1/9   -4/9     1  |  14000/3
Z                    0     0     -4    -52     0  |  -120000

Conclusion: From the resolution of the dual program, the optimal value of the economic
function is ∆opt = 120,000 (read, in absolute value, in column R of line Z). The marginal
values read on line Z of the optimal table are 4 for e1, 52 for e2 and 0 for e3, while those of
Y1 and Y2 are zero. Let us make the correspondence between primal and dual variables:

Primal    X1    X2    X3    T1    T2
Dual      e1    e2    e3    Y1    Y2
Value      4    52     0     0     0

It is therefore necessary to acquire 4 measures of corn, 52 measures of bone-based food and
no measure of millet, for a minimum cost of 120,000.
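
As a check on the reading of the dual table (SciPy assumed as tooling), the sketch below solves the primal feed problem directly and recovers the same program: 4 measures of corn, 52 measures of bone-based food, no millet, for 120,000.

```python
# Minimum-cost diet: min c·x subject to A x >= b, x >= 0.
import numpy as np
from scipy.optimize import linprog

c = np.array([4000.0, 2000.0, 6000.0])            # cost per measure of corn, bone meal, millet
A = np.array([[2.0, 4.0, 2.0],                    # calcium per measure
              [5.0, 1.0, 1.0]])                   # vitamins per measure
b = np.array([216.0, 72.0])                       # daily requirements

res = linprog(c, A_ub=-A, b_ub=-b, method="highs")   # ">=" constraints rewritten as "<="
print("x =", res.x)        # expected (4, 52, 0)
print("cost =", res.fun)   # expected 120000
```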

4-The Two phase simplex method

When a basic feasible solution is not readily available, the two-phase simplex method can
be employed as an alternative to the Big M method. In the two-phase simplex method, we add
artificial variables to the same constraints as in the Big M method; we then seek a basic
feasible solution to the original LP by solving the phase I LP. In the phase I LP, the objective
is to minimize the sum of all artificial variables. Upon completion of phase I, we reintroduce
the objective function of the original LP and determine the optimal solution to the
original LP.

The steps involved here are:

Step 1: Modify the constraints so that the right-hand side of each constraint is non-negative.
This requires that each constraint with a negative right-hand side be multiplied through by -1.

Step 2: Identify each constraint which is now (after step 1) an = or ≥ constraint. In step 4, an
artificial variable will be added to each such constraint.

Step 3: Convert each inequality constraint to standard form. If constraint i is a ≤ constraint,
add a slack variable si. If constraint i is a ≥ constraint, subtract an excess variable ei.

Step 4: If (after step 1) constraint i is a ≥ or = constraint, add an artificial variable ai. Also
add the restriction ai ≥ 0.

Step 5: For now, ignore the objective function of the original LP. Instead, solve an LP whose
objective function is min w' = (sum of all artificial variables). This is called the phase I LP.
Solving the phase I LP will drive the artificial variables towards zero.

Case 1: The optimal value of w' is greater than 0. In this case, the original LP has no feasible
solution.

Case 2: The optimal value of w' is zero, and no artificial variable is in the optimal basis of
phase I. In this case, we drop all the columns of the optimal phase I table which correspond to
artificial variables. We then combine the original objective function with the constraints of the
optimal phase I table; this yields the phase II LP. The optimal solution to the phase II LP is the
optimal solution to the original LP.

Case 3: The optimal value of w' is zero and at least one artificial variable is in the optimal
basis of phase I. In this case, we can still find the optimal solution to the original LP if, at the
end of phase I, we drop from the optimal phase I table all non-basic artificial variables and
every variable of the original problem that has a negative coefficient in row 0 of the optimal
phase I table.
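
The sketch below illustrates this phase I / phase II decomposition in Python, assuming the constraints have already been put in equality form A x = b with b ≥ 0 (steps 1-3) and adding one artificial variable per row for simplicity. SciPy's linprog stands in for the simplex iterations themselves, and the small numerical example at the end is hypothetical.

```python
# Two-phase decomposition (structural sketch): phase I minimizes the sum of the
# artificial variables; if that minimum is positive the LP is infeasible (Case 1),
# otherwise the original objective is reintroduced and solved (Cases 2/3).
import numpy as np
from scipy.optimize import linprog

def two_phase(c, A, b, tol=1e-9):
    m, n = A.shape
    # Phase I: min w' = sum of artificials a, over [x, a] with A x + I a = b, x, a >= 0.
    A1 = np.hstack([A, np.eye(m)])
    c1 = np.concatenate([np.zeros(n), np.ones(m)])
    phase1 = linprog(c1, A_eq=A1, b_eq=b, method="highs")
    if phase1.fun > tol:
        return None                      # Case 1: no feasible solution
    # Phase II: drop the artificial columns and optimize the original objective.
    return linprog(c, A_eq=A, b_eq=b, method="highs")

# Hypothetical example: min x1 + 2 x2  s.t.  x1 + x2 = 4,  x1 - x2 = 0,  x >= 0.
res = two_phase(np.array([1.0, 2.0]),
                np.array([[1.0, 1.0], [1.0, -1.0]]),
                np.array([4.0, 0.0]))
print(res.x, res.fun)                    # expected (2, 2) and 6
```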

EXAMPLE: Bevco makes an orange-flavored soft drink called Oranj by combining orange soda
and orange juice. Each ounce of orange soda contains 0.5 ounce of sugar and 1 mg of vitamin C.
Each ounce of orange juice contains 0.25 ounce of sugar and 3 mg of vitamin C. It costs Bevco
2¢ to produce one ounce of orange soda and 3¢ to produce one ounce of orange juice. Bevco's
sales department has decided that each 10-oz bottle of Oranj should contain at least 20 mg of
vitamin C and at most 4 ounces of sugar. Employ linear programming to determine how Bevco
can meet the requirements of the sales department at minimum cost.

First we use the two-phase simplex method to solve the Bevco problem stated above. Recall
that Bevco's problem is

min z = 2x1 + 3x2
subject to  0.5x1 + 0.25x2 ≤ 4   (sugar constraint)
            x1 + 3x2 ≥ 20        (vitamin C constraint)
            x1 + x2 = 10         (10 oz in a bottle of Oranj)
            x1, x2 ≥ 0

where x1 = number of ounces of orange soda and x2 = number of ounces of orange juice in a
bottle.

SOLUTION

As in the Big M method, steps 1-3 transform the constraints into

0.5x1 + 0.25x2 + s1 = 4
x1 + 3x2 - e2 = 20
x1 + x2 = 10

Steps 4 and 5 then give the following phase I LP:

min w' = a2 + a3
subject to  0.5x1 + 0.25x2 + s1 = 4
            x1 + 3x2 - e2 + a2 = 20
            x1 + x2 + a3 = 10
            x1, x2, s1, e2, a2, a3 ≥ 0

This set of equations yields the starting basic feasible solution for phase I (s1 = 4, a2 = 20,
a3 = 10). Note, however, that row 0 of this table (w' - a2 - a3 = 0) contains the basic variables
a2 and a3. As in the Big M method, we have to eliminate a2 and a3 from row 0 before we can
solve phase I.

To eliminate a2 and a3 from row 0, simply add row 2 and row 3 to row 0:

Combining the new row 0 with the constraints of phase I yields the initial phase I table.
Since the phase I problem is always a minimization problem (even if the original LP is a
maximization problem), we enter into the base the variable with the most positive coefficient
in row 0, namely x2. The ratio test indicates that x2 will enter the base in row 2, with a2
leaving the base. After performing the necessary EROs, we get the table in Table 9. Since
5 < 20 and 5 < 28, x1 then enters the base in row 3; thus, a3 will leave the base. Since a2 and
a3 will be nonbasic after the current pivot is completed, we already know that the next table
will be optimal for phase I.

Since w' = 0, phase I is complete. The basic feasible solution s1 = 1/4, x2 = 5, x1 = 5 has been
found. No artificial variables are in the optimal basis of phase I, so the problem is an example
of Case 2. We now drop the columns of the artificial variables a2 and a3 (we no longer need
them) and reintroduce the original objective function.

Since x1 and x2 are both in the optimal basis of phase I, they must be eliminated from row 0
of phase II. To do so, we add 3·(row 2) + 2·(row 3) of the optimal phase I table to row 0.

We now begin phase II with the following set of equations:

TABLE 8

TABLE 9

TABLE 10

This table is optimal. Thus, in this problem, phase II does not require any pivot to find an
optimal solution. If row 0 of the initial phase II table had not indicated an optimal table, we
would simply have continued with the simplex until an optimal row 0 was obtained. In
summary, our optimal phase II table shows that the optimal solution to Bevco's problem is
z = 25, x1 = 5, x2 = 5, s1 = 1/4, and e2 = 0. This conforms, of course, to the optimal solution
found by the Big M method.
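
The same optimum can be checked directly (SciPy assumed as tooling) by solving Bevco's LP in a single call:

```python
# min 2 x1 + 3 x2  s.t.  0.5 x1 + 0.25 x2 <= 4,  x1 + 3 x2 >= 20,  x1 + x2 = 10,  x >= 0.
from scipy.optimize import linprog

res = linprog([2.0, 3.0],
              A_ub=[[0.5, 0.25],          # sugar
                    [-1.0, -3.0]],        # vitamin C (">= 20" rewritten as "<= -20")
              b_ub=[4.0, -20.0],
              A_eq=[[1.0, 1.0]],          # 10-oz bottle
              b_eq=[10.0],
              method="highs")
print(res.x, res.fun)                     # expected (5, 5) and 25
```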

To illustrate Case 1, we now modify Bevco's problem so that 36 mg of vitamin C are required.
This problem is infeasible, since a 10-oz bottle can contain at most 3 × 10 = 30 mg of
vitamin C; the optimal phase I solution should therefore have w' > 0 (Case 1). To show this,
we start with the original problem:

TABLE 11

TABLE 12

After performing steps 1-4 of the two-phase simplex method, we get the following phase I
problem:

From this set of equations, we see that the initial phase I basic feasible solution is s1 = 4,
a2 = 36, and a3 = 10. Since the basic variables a2 and a3 occur in the phase I objective
function, they must be eliminated from row 0 of phase I. To do this, we add rows 2 and 3 to
row 0:

With the new row 0, the initial phase I table is as shown in Table 11. Since 4 > 2, we should
enter x2 into the base. The ratio test indicates that x2 should enter the base in row 3, forcing
a3 to leave the base. The resulting table is shown in Table 12. No variable in row 0 has a
positive coefficient, so this is an optimal phase I table, and since the optimal value of w' is
6 > 0, the original LP must have no feasible solution. This is reasonable, because if the
original LP had a feasible solution, it would have been feasible in the phase I LP (after setting
a2 = a3 = 0). This feasible solution would have yielded w' = 0. Since the simplex could not
find a phase I solution with w' = 0, the original LP has no feasible solution; a numerical check
is sketched below.
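
Here is the numerical check announced above (SciPy assumed as tooling): the solver reports the 36 mg version as infeasible, which is exactly Case 1.

```python
# Same Bevco model with the vitamin C requirement raised to 36 mg: infeasible.
from scipy.optimize import linprog

res = linprog([2.0, 3.0],
              A_ub=[[0.5, 0.25], [-1.0, -3.0]],
              b_ub=[4.0, -36.0],
              A_eq=[[1.0, 1.0]], b_eq=[10.0],
              method="highs")
print(res.success, res.status)            # expected: False, 2 (problem is infeasible)
```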
To illustrate Case 3, we will use the two-phase simplex method to solve the following LP:

SOLUTION:

We can use x4 as the basic variable for the fourth constraint and the artificial variables a1, a2,
and a3 as the basic variables for the first three constraints. Our phase I objective is to
minimize w = a1 + a2 + a3. After adding the first three constraints to w - a1 - a2 - a3 = 0, we
get the initial phase I table shown in Table 13. Even though x5 has the most positive
coefficient in row 0, we choose to enter x3 into the base (as the basic variable in row 3). We
see that this will immediately yield w = 0. Our final phase I table is shown in Table 14.

Since w = 0, we now have an optimal phase I table. Two artificial variables (a1 and a2)
remain in the base at zero level. We can drop the artificial variable a3 from our first phase II
table. The only original variable with a negative coefficient in row 0 of the optimal phase I
table is x1, so we can drop x1 from all future tables: from the optimal phase I table we find
w = x1, which implies that x1 can never become positive during phase II. Since
z - 40x1 - 10x2 - 7x5 - 14x6 = 0 does not contain any basic variables, our initial table for
phase II is as shown in Table 15.

TABLE 13

TABLE 14

TABLE 15

TABLE 16

We now enter x6 into the base in row 4 and obtain the optimal table shown in Table 16. The
optimal solution to our original LP is z = 7, x3 = 7/2, x4 = 1/2, x1 = x2 = x5 = x6 = 0.

ADVANTAGES AND LIMITATIONS OF LINEAR PROGRAMMING

Since managing production so as to minimize cost or maximize profit is one of the main
concerns of an enterprise, this method presents numerous advantages, as it helps enterprises
solve complex problems; notwithstanding this, it is also limited in some areas by other
constraining factors.

ADVANTAGES OF LINEAR PROGRAMMING

Even though linear programming has a number of disadvantages, it is a versatile technique
that can be used to represent a number of real-world situations. Outlined below are some of
the advantages of linear programming.

Linear programming techniques help in attaining the optimum use of productive resources.
They also indicate how a decision-maker can employ productive factors effectively by
selecting and allocating these resources. They improve the quality of decisions: the
decision-making approach of the user of this technique becomes more objective and less
subjective. This method may help businesses simplify their operations so they can get more
done in less time and at lower cost. Furthermore, it allows for better decision making in a
wide range of situations. For example, companies can use it to analyze financial or industrial
problems, identify solutions and make adjustments based on the results.

Linear programming techniques are used to solve problems that involve multiple variables
and constraints.

A significant advantage of this technique is the highlighting of bottlenecks in the production
process. For example, when a bottleneck occurs, some machines cannot meet demand while
others remain idle for part of the time.

Linear programming also helps in re-evaluating a basic plan under changing conditions. If
conditions change when the plan is partly carried out, the changes can be assessed so as to
adjust the remainder of the plan for the best results. Thus, linear programming analysis can
help both in determining whether management plans are feasible and, in unbounded cases
where the value of the solution grows infinitely large without violating any of the constraints,
in warning that the problem is wrongly formulated.

LIMITATIONS OF LINEAR PROGRAMMING

Linear programming may fail to provide workable and practical solutions, since there might
be other constraints operating outside the problem which must be taken into account. Just
because we can produce so many units does not mean that they can all be sold. Thus, the
mathematical solution may need to be modified for the convenience of the decision-maker.

It is not easy to define a specific objective function.

Even if a specific objective function is laid down, it may not be so easy to find out various
technological, financial and other constraints which may be operative in pursuing the given
objective.

Given a specific objective and a set of constraints, it is possible that the constraints may not be
directly expressible as linear inequalities.

Even if the above problems are surmounted, a major problem is that of estimating the relevant
values of the various constant coefficients that enter into a linear programming model, i.e.,
prices, etc.

This technique is based on the assumption of linear relations between inputs and outputs. This
means that inputs and outputs can be added, multiplied and divided. But the relations between
inputs and outputs are not always linear. In real life, most of the relations are non-linear.

This technique assumes perfect competition in product and factor markets. But perfect
competition is not a reality.

The LP technique is based on the assumption of constant returns. In reality, there are either
diminishing or increasing returns which a firm experiences in production.

It is a highly mathematical and complicated technique. The solution of a problem with linear
programming requires the maximization or minimization of a clearly specified variable. The
solution of a linear programming problem is also arrived at with such a complicated method
as the simplex method, which involves a large number of mathematical calculations.

Mostly, linear programming models present trial-and-error solutions and it is difficult to find out
really optimal solutions to the various economic problems.

CONCLUSION

Ultimately, it follows from our work that linear programming is a dynamic and flexible
problem-solving tool with many dimensions. Arising from operations research, it makes it
possible to model a large number of problems in the form of linear equations and inequalities
and to solve them exactly. One of the most widely used methods in practice is the simplex
method; although it is theoretically inefficient because of its non-polynomial worst-case
complexity, it is a good method in practice. Linear programming offers industry a way to
inform decision makers of all the important information and the most favorable decision. It is
an asset to companies in today's growing economy.

REFERENCE

 Management control, production budgets, linear programming, 2 pages.
 F. Clautiaux, Université de Bordeaux, Programmes linéaires, modélisation et résolution
graphique, 67 pages.
 EL OSROUTI Mohamed, Mr SAADI, ZOUGAGH Soufyane, Linear Programming,
Université Mohamed Premier, 2015/2016, 29 pages.
 Anderson, Sweeney and Williams, An Introduction to Management Science, 2003.
 Elwood S. Buffa, Operations Management: Problems and Models, University of
California, Los Angeles, 1963.
 Jessica Faith Worrell, Linear Programming, 27 pages.
