Simultaneous Linear Algebraic Equations

Chapter 4, Computer Oriented Numerical Methods, V. RAJARAMAN

Solving Simultaneous Linear Algebraic Equations

After reading this chapter, you should be able to:

- Write an algorithm to solve simultaneous linear algebraic equations using the Gaussian elimination method
- Explain pivoting in solving simultaneous linear algebraic equations and why it is needed
- Explain what ill-conditioned simultaneous linear algebraic equations are and how to solve them by refining the Gaussian elimination method
- Write an algorithm to solve simultaneous linear algebraic equations using the Gauss-Seidel iterative method
- Compare the Gaussian elimination method with the Gauss-Seidel iterative method and explain how to select the appropriate method to solve simultaneous linear algebraic equations

§4.1 INTRODUCTION

Simultaneous linear algebraic equations occur often in diverse fields of science and engineering and are an important area of study. In this chapter we will be mainly concerned with solving a set of n linear algebraic equations in n unknowns.

We are normally taught (in school) a method of solving simultaneous equations, based on the evaluation of determinants, called Cramer's rule. For a small number of simultaneous equations (namely, 3 or 4) this rule is satisfactory. The number of multiplication operations needed when this method is used, however, increases very rapidly as the number of equations increases. For example, for 10 equations it is nearly 70 million arithmetic operations, and with 50 equations the count grows so large that it is impractical even for a fast computer. A different approach is thus needed for solving such equations on a computer.

In this chapter we will consider two techniques of solving simultaneous equations which are suitable for computers and which need far fewer arithmetic operations: the Gauss elimination method, a direct method which transforms the equations to a triangular form, and the Gauss-Seidel method, which is a successive approximation (iterative) method. Each method has its advantages and disadvantages, and an understanding of both is needed to make a judicious choice when a given set of equations is to be solved.
§4.2 THE GAUSS ELIMINATION METHOD

To illustrate the method we shall consider three simultaneous equations in three unknowns:

    a11 x1 + a12 x2 + a13 x3 = a14        (4.1)
    a21 x1 + a22 x2 + a23 x3 = a24        (4.2)
    a31 x1 + a32 x2 + a33 x3 = a34        (4.3)

The first step is to eliminate x1 from Equations (4.2) and (4.3). In order to do this, Equation (4.1) is divided by a11, multiplied by a21 and subtracted from (4.2). This eliminates x1 from (4.2). If we call k2 = a21/a11, the result may be written as

    (a22 - k2 a12) x2 + (a23 - k2 a13) x3 = a24 - k2 a14        (4.4)

Similarly, multiplying Equation (4.1) by k3 = a31/a11 and subtracting it from (4.3) gives

    (a32 - k3 a12) x2 + (a33 - k3 a13) x3 = a34 - k3 a14        (4.5)

Observe that the coefficients of x1 in (4.4) and (4.5) are both zero. We have assumed that a11 is not zero; the case when a11 is zero will be considered later so that too many issues are not confused at this stage.

The above elimination procedure is expressed algorithmically below.

ALGORITHM 4.1 Elimination of x1 from 3 Equations

    1  for i = 1 to 3 and j = 1 to 4 in steps of 1 do Read aij endfor
    2  for i = 2 to 3 in steps of 1 do
    3      u <- ai1/a11
    4      for j = 1 to 4 in steps of 1 do
    5          aij <- aij - u * a1j
           endfor
       endfor

In the above algorithm instructions 2 and 4 both command that the instructions following them be repeated a certain number of times. The way this is executed is as follows: instruction 2 commands that all instructions up to and including 5 be repeated first with i = 2 and next with i = 3. Thus i is set equal to 2 and the next instruction is taken up for execution. This instruction sets u = a21/a11. Instruction 4 commands that instruction 5 be done first with j = 1, then with j = 2, next with j = 3 and lastly with j = 4. Thus when instruction 5 is reached, i = 2 and j = 1. After executing 5 with these values of i and j, the inner loop (consisting of instruction 5) is executed again with i = 2 and j = 2, 3, 4. After the inner loop is executed for all values of j, the outer loop command (namely, instruction 2) is taken up for execution.
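Algorithm 4.1 can be transcribed almost line for line into Python. The sketch below uses 0-based indexing rather than the 1-based indexing of the text, and takes its test data from Example 4.1 later in the chapter (the third right-hand side, 26, is consistent with the worked solution x1 = 1, x2 = 2, x3 = 3):

```python
# A sketch of Algorithm 4.1: eliminate x1 from equations 2 and 3.
# a is the 3 x 4 augmented matrix; a[i][3] is the right-hand side.

def eliminate_x1(a):
    """Zero out a[1][0] and a[2][0] by subtracting multiples of row 0."""
    for i in range(1, 3):                   # instruction 2: i = 2 to 3
        u = a[i][0] / a[0][0]               # instruction 3: u <- a_i1/a_11
        for j in range(4):                  # instruction 4: j = 1 to 4
            a[i][j] = a[i][j] - u * a[0][j] # instruction 5
    return a

a = [[2.0, 3.0, 5.0, 23.0],
     [3.0, 4.0, 1.0, 14.0],
     [6.0, 7.0, 2.0, 26.0]]
eliminate_x1(a)
# The first column below row 0 is now zero:
# a[1] == [0.0, -0.5, -6.5, -20.5], a[2] == [0.0, -2.0, -13.0, -43.0]
```

Note that the inner loop runs over j = 0, 1, 2, 3, matching the nested-loop execution order described above.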
Thus i is set equal to 3 and with this value the inner loop is again executed for all values of j, namely 1, 2, 3 and 4. This type of structure of loops in an algorithm is called nested loops. In nested loops, if the outer loop is to be repeated for i = 1 to n and the inner loop for j = 1 to m, then the instructions inside the inner loop will be executed mn times and those in the outer loop n times. The sequence of values of (i, j) is shown below:

    i = 1 1 1 ... 1   2 2 2 ... 2   ...   n n n ... n
    j = 1 2 3 ... m   1 2 3 ... m   ...   1 2 3 ... m

The execution of Algorithm 4.1 is traced in the table below:

    i |       u        | j | operation
    2 | u <- a21/a11   | 1 | a21 <- a21 - u a11 (= 0)
      |                | 2 | a22 <- a22 - u a12
      |                | 3 | a23 <- a23 - u a13
      |                | 4 | a24 <- a24 - u a14
    3 | u <- a31/a11   | 1 | a31 <- a31 - u a11 (= 0)
      |                | 2 | a32 <- a32 - u a12
      |                | 3 | a33 <- a33 - u a13
      |                | 4 | a34 <- a34 - u a14

The reduced equations are:

    a11 x1 + a12 x2 + a13 x3 = a14        (4.6)
              a22 x2 + a23 x3 = a24       (4.7)
              a32 x2 + a33 x3 = a34       (4.8)

where the coefficients in (4.7) and (4.8) are the updated values left by Algorithm 4.1. The next step is to eliminate a32 x2 from Equation (4.8). This is done by multiplying Equation (4.7) by u = a32/a22 and subtracting the resulting equation from (4.8). Algorithmically this is represented below.

ALGORITHM 4.2 Eliminating x2 from the Third Equation

    1  i <- 3
    2  u <- a32/a22
    3  for j = 2 to 4 in steps of 1 do
    4      a3j <- a3j - u * a2j
       endfor

Tracing the above algorithm we get the table:

    i | j | operation
    3 | 2 | a32 <- a32 - u a22 (= 0)
    3 | 3 | a33 <- a33 - u a23
    3 | 4 | a34 <- a34 - u a24

Thus at the end of this step the equations are (4.6), (4.7) and

    a33 x3 = a34        (4.9)

The above set of equations is said to be in a triangular form.

ALGORITHM 4.3 Elimination of x1, Extended to n Unknowns

    1  for i = 1 to n in steps of 1 and j = 1 to (n + 1) in steps of 1 do Read aij endfor
    2  for i = 2 to n in steps of 1 do
    3      u <- ai1/a11
    4      for j = 1 to (n + 1) in steps of 1 do
    5          aij <- aij - u * a1j
           endfor
       endfor

ALGORITHM 4.4 Elimination of x(n-1) from the nth Equation

    1  i <- n
    2  u <- a(n, n-1)/a(n-1, n-1)
    3  for j = (n - 1) to (n + 1) in steps of 1 do
    4      a(n, j) <- a(n, j) - u * a(n-1, j)
       endfor

In general, while eliminating x_k the procedure would be:

ALGORITHM 4.5 Eliminating x_k from n Equations

    1  for i = (k + 1) to n in steps of 1 do
    2      u <- a(i, k)/a(k, k)
    3      for j = k to (n + 1) in steps of 1 do
    4          a(i, j) <- a(i, j) - u * a(k, j)
           endfor
       endfor

Combining these ideas we may evolve the general algorithm, given below, to triangularize a set of n equations in n unknowns.

ALGORITHM 4.6 Triangularizing n Equations in n Unknowns

    1  for i = 1 to n in steps of 1 and j = 1 to (n + 1) in steps of 1 do Read aij endfor
    2  for k = 1 to (n - 1) in steps of 1 do
    3      for i = (k + 1) to n in steps of 1 do
    4          u <- a(i, k)/a(k, k)
    5          for j = k to (n + 1) in steps of 1 do
    6              a(i, j) <- a(i, j) - u * a(k, j)
               endfor
           endfor
       endfor

The student is urged to check the above algorithm for n = 3 by making a table.

From the triangular form obtained in Equations (4.1), (4.7) and (4.9) the values of x1, x2 and x3 are obtained by back substitution as follows:

    x3 = a34/a33                               (4.10)
    x2 = (a24 - a23 x3)/a22                    (4.11)
    x1 = (a14 - a12 x2 - a13 x3)/a11           (4.12)

Extending the above to n equations, back substitution may be expressed by the following equations:

    xn = a(n, n+1)/a(n, n)                     (4.13)

and, for i = (n - 1), (n - 2), ..., 1,

    xi = ( a(i, n+1) - sum over j = (i+1) to n of a(i, j) xj ) / a(i, i)        (4.14)

The above equations may be expressed as Algorithm 4.7.
ALGORITHM 4.7 Back Substitution

    1  xn <- a(n, n+1)/a(n, n)
    2  for i = (n - 1) to 1 in steps of -1 do
    3      sum <- 0
    4      for j = (i + 1) to n in steps of 1 do
    5          sum <- sum + a(i, j) * xj
           endfor
    6      xi <- (a(i, n+1) - sum)/a(i, i)
       endfor

EXAMPLE 4.1

Given the simultaneous equations shown below, (i) triangularize them and (ii) use back substitution to solve for x1, x2, x3:

    2 x1 + 3 x2 + 5 x3 = 23
    3 x1 + 4 x2 +   x3 = 14
    6 x1 + 7 x2 + 2 x3 = 26

Solution: The augmented matrix of coefficients is

    Row 1:  2   3   5   23
    Row 2:  3   4   1   14
    Row 3:  6   7   2   26

Triangularization proceeds as follows:

Step 1: Divide Row 1 by 2, multiply it by 3 and subtract from Row 2. Row 2 becomes (0, -0.5, -6.5, -20.5).
Step 2: Divide Row 1 by 2, multiply it by 6 and subtract from Row 3. Row 3 becomes (0, -2, -13, -43).
Step 3: Divide Row 2 by -0.5, multiply it by -2 and subtract from Row 3. Row 3 becomes (0, 0, 13, 39).

At the end of these steps the matrix is

    Row 1:  2    3     5     23
    Row 2:  0   -0.5  -6.5  -20.5
    Row 3:  0    0    13     39

Using Algorithm 4.7 for back substitution:

    x3 <- a34/a33 = 39/13 = 3
    i = 2, j = 3:  sum <- 0 + a23 x3 = -6.5 * 3 = -19.5
                   x2 <- (a24 - sum)/a22 = (-20.5 + 19.5)/(-0.5) = 2
    i = 1, j = 2:  sum <- 0 + a12 x2 = 3 * 2 = 6
    i = 1, j = 3:  sum <- 6 + a13 x3 = 6 + 15 = 21
                   x1 <- (a14 - sum)/a11 = (23 - 21)/2 = 1

Thus x1 = 1, x2 = 2 and x3 = 3.

§4.3 PIVOTING

In the triangularization Algorithm 4.6, instruction 4 is u <- a(i, k)/a(k, k). In this instruction it has been assumed that a(k, k) is not zero. If it happens to be zero or nearly zero, the algorithm will lead to no results or meaningless results. An example of this type was considered in Chapter 2, Sec. 2.5 for two simultaneous equations. It was seen there that the two equations, when interchanged, gave a better solution. Similarly, if any of the a(k, k)'s is small it would be necessary to re-order the equations. Observe that the values of the a(k, k)'s are modified during the elimination process and there is no way of predicting their values at the start of the procedure.

Referring to Equations (4.1), (4.2) and (4.3) and Algorithm 4.1, it is seen that all elements except a11 in column 1 are made zero by that algorithm. The element a11 is called the pivot element.
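Triangularization (Algorithm 4.6), the pivot-selection idea just introduced, and back substitution (Algorithm 4.7) together amount to a complete solver. A minimal Python sketch follows (0-based indices; the function name `gauss_solve` and the threshold `eps` are illustrative choices, not from the text), applied to the data of Example 4.1:

```python
# A compact sketch combining triangularization with partial pivoting
# and back substitution. a is the n x (n+1) augmented matrix.

def gauss_solve(a, eps=1e-12):
    n = len(a)
    for k in range(n - 1):
        # Pivoting: bring the row with the largest |a[m][k]|, m >= k, to row k.
        p = max(range(k, n), key=lambda m: abs(a[m][k]))
        if abs(a[p][k]) <= eps:
            raise ValueError("Ill-conditioned equations")
        if p != k:
            a[k], a[p] = a[p], a[k]
        for i in range(k + 1, n):          # eliminate x_k from rows below
            u = a[i][k] / a[k][k]
            for j in range(k, n + 1):
                a[i][j] -= u * a[k][j]
    # Back substitution (Algorithm 4.7).
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (a[i][n] - s) / a[i][i]
    return x

# Example 4.1: 2x1 + 3x2 + 5x3 = 23, 3x1 + 4x2 + x3 = 14, 6x1 + 7x2 + 2x3 = 26
a = [[2.0, 3.0, 5.0, 23.0],
     [3.0, 4.0, 1.0, 14.0],
     [6.0, 7.0, 2.0, 26.0]]
print(gauss_solve(a))      # close to [1.0, 2.0, 3.0]
```

With pivoting enabled, the first pivot chosen here is 6 (row 3), illustrating that row interchange leaves the solution unchanged.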
After a21 and a31 are eliminated, the next step in triangularization is to eliminate all the terms below a22 in the second column (in this case a32 only). The pivot in this case is a22. In the elimination procedure the pivot should not be zero or a small number. In fact, for maximum precision the pivot element should be the largest in absolute value of all the elements below it in its column. In other words, a11 should be picked as the maximum of |a11|, |a21| and |a31|, and a22 should be the larger of |a22| and |a32|. In general, the pivot element a(k, k) should be the largest of {|a(m, k)|} for m = k to n. Thus during the Gauss elimination procedure the |a(m, k)|'s should be searched and the equation with the maximum value of |a(m, k)| interchanged with the current equation. For example, if during elimination we have the following situation:

    x1 + 2 x2 + 3 x3 = 4        (4.15)
       0.3 x2 + 4 x3 = 5        (4.16)
        -8 x2 + 3 x3 = 6        (4.17)

then, as |-8| > 0.3, Equations (4.16) and (4.17) should be interchanged to yield

    x1 + 2 x2 + 3 x3 = 4        (4.18)
        -8 x2 + 3 x3 = 6        (4.19)
       0.3 x2 + 4 x3 = 5        (4.20)

It should be remembered that interchange of equations does not affect the solution. The algorithm for picking the largest element as the pivot and interchanging the equations is called pivotal condensation. This algorithm is to be inserted into Algorithm 4.6 for triangularization as shown below.

ALGORITHM 4.8 Pivotal Condensation

Remarks: To be inserted between instructions 2 and 3 of Algorithm 4.6.

    2      for k = 1 to (n - 1) in steps of 1 do
    2.01       max <- |a(k, k)|
    2.02       p <- k
    2.03       for m = (k + 1) to n in steps of 1 do
    2.04           if (|a(m, k)| > max) then
                   begin
    2.05               max <- |a(m, k)|
    2.06               p <- m
                   end
               endfor
    2.07       if (max <= e) then
               begin
    2.08           Write 'Ill-conditioned equations'
    2.09           Stop
               end
    2.10       else if (p = k) then Exit to 3
    2.11       for q = k to (n + 1) in steps of 1 do
    2.12           temp <- a(k, q)
    2.13           a(k, q) <- a(p, q)
    2.14           a(p, q) <- temp
               endfor
    3      for i = (k + 1) to n in steps of 1 do

In the above algorithm an interesting new feature to be noted is the way two numbers are interchanged. In this procedure a(k, q) is to be interchanged with a(p, q). If we wrote a(k, q) <- a(p, q) first, the original value of a(k, q) would be lost. It is therefore temporarily stored in temp; a(p, q) then replaces a(k, q), and the original value of a(k, q), saved in temp, is moved into a(p, q), as shown in Figure 4.1.

[Figure 4.1: Illustrating interchange of values.]

The student is urged to trace through the above algorithm with a set of 4 simultaneous equations in 4 unknowns.

§4.4 ILL-CONDITIONED EQUATIONS

If at any time during pivotal condensation it is found that all values of {|a(m, k)|} for m = k to n are less than a preassigned small quantity e, then the equations are ill-conditioned and no useful solution is obtained. Even when this does not happen the equations would be ill-conditioned if the determinant of the coefficients of the equations is small. (Remember Cramer's rule.) The value of the determinant is available as a byproduct after triangularization, as it is equal to the product a11 a22 a33 ... a(n, n) of the diagonal elements. Under these conditions some special techniques have to be used to get reasonably useful answers. One of these will be discussed in the next section.

§4.5 REFINEMENT OF THE SOLUTION OBTAINED BY GAUSSIAN ELIMINATION

Even with row interchanges the solution will have some rounding error. We will discuss a method called iterative refinement which leads to reduced rounding errors; it often yields a reasonable solution even for some ill-conditioned problems.

Let x1(0), x2(0), x3(0) be the approximate solution obtained from the Gaussian elimination procedure for Equations (4.1), (4.2) and (4.3).
If we substitute this solution in the left-hand sides of these equations we obtain

    a11 x1(0) + a12 x2(0) + a13 x3(0) = a14(0)        (4.21)
    a21 x1(0) + a22 x2(0) + a23 x3(0) = a24(0)        (4.22)
    a31 x1(0) + a32 x2(0) + a33 x3(0) = a34(0)        (4.23)

If we call the differences (a14 - a14(0)) = d1(0), (a24 - a24(0)) = d2(0) and (a34 - a34(0)) = d3(0), and these differences are large, then x1(0), x2(0), x3(0) are not good solutions. (Even if these differences are all small, it does not guarantee that the solutions are correct.) If we subtract Equations (4.21), (4.22) and (4.23) from (4.1), (4.2) and (4.3) respectively, we obtain

    a11 e1(0) + a12 e2(0) + a13 e3(0) = d1(0)        (4.24)
    a21 e1(0) + a22 e2(0) + a23 e3(0) = d2(0)        (4.25)
    a31 e1(0) + a32 e2(0) + a33 e3(0) = d3(0)        (4.26)

where e1(0) = x1 - x1(0), e2(0) = x2 - x2(0) and e3(0) = x3 - x3(0). Equations (4.24), (4.25) and (4.26) may be solved by Gaussian elimination and the values of the e_i's obtained. A new approximation to the solution is then

    xi(1) = xi(0) + ei(0)

This procedure may be repeated by substituting the xi(1)'s in (4.1), (4.2), (4.3) and obtaining further refinements of the solution. It has been found that a few cycles of this refinement procedure lead to more accurate solutions even in ill-conditioned systems.

§4.6 THE GAUSS-SEIDEL ITERATIVE METHOD

The Gauss-Seidel method will first be illustrated with two simultaneous equations. We will later generalize it to n equations in n unknowns. Consider the simultaneous equations given below:

      x1 +    x2 = 2        (4.27)
    3 x1 - 10 x2 = 3        (4.28)

Start with an initial value x2(0) = 0. Substitute this in Equation (4.27) and obtain the value x1(1) = 2. Use this guessed value in Equation (4.28) to obtain a refined guess for x2: x2(1) = (3 x1(1) - 3)/10 = (6 - 3)/10 = 0.3. Use this in Equation (4.27) to get another value of x1, and so on. The progress of the iteration is shown in Table 4.1.

TABLE 4.1 Illustrating the Iterative Method

    Iteration number  |  x1    |  x2
    0 (initial guess) |   -    |  0
    1                 |  2     |  0.3
    2                 |  1.7   |  0.21
    3                 |  1.79  |  0.237
    4                 |  1.763 |  0.229
    5                 |  1.771 |  0.231
    6                 |  1.769 |  0.231
    7                 |  1.769 |  0.231
Observe that the values of x1 and x2 converge to 1.769 and 0.231 respectively, which is the solution. As this is an iterative method, iterations are stopped when successive values of x1 as well as of x2 are 'close enough'. This is a successive approximation method and the technique is illustrated graphically in Figure 4.2. The student is urged to compare this with the iterative technique presented in Chapter 3, Section 3.7.

[Figure 4.2: Illustrating the Gauss-Seidel method.]

The convergence of the iterative method for two simultaneous equations will be investigated now. Consider the equations

    a11 x1 + a12 x2 = a13        (4.29)
    a21 x1 + a22 x2 = a23        (4.30)

The iterative method to proceed from the kth to the (k+1)th iteration is shown below:

    a11 x1(k+1) = a13 - a12 x2(k)          (4.31)
    a22 x2(k+1) = a23 - a21 x1(k+1)        (4.32)

Eliminating x2(k) between Equations (4.31) and (4.32) we get

    a11 x1(k+1) = a13 - (a12/a22)[ a23 - a21 x1(k) ]        (4.33)

Similarly, for the (k+1)th to (k+2)th iteration we get

    a11 x1(k+2) = a13 - (a12/a22)[ a23 - a21 x1(k+1) ]      (4.34)

Subtracting (4.33) from (4.34) and calling (x1(k+1) - x1(k)) = e1(k) and (x1(k+2) - x1(k+1)) = e1(k+1), we get

    e1(k+1) = [ (a12 a21)/(a11 a22) ] e1(k)        (4.35)

Thus, if the difference between successive iterations is to decrease, we must satisfy the condition

    | (a12 a21)/(a11 a22) | < 1        (4.36)

If we eliminate x1 instead and work with x2, calling the difference between successive iterations in x2 (x2(k+1) - x2(k)) = e2(k), then we obtain the equation

    e2(k+1) = [ (a12 a21)/(a11 a22) ] e2(k)        (4.37)

Again, for the convergence of the iterative procedure, we should ensure that condition (4.36) is satisfied. Convergence will thus be ensured if the diagonal terms are larger than the off-diagonal terms: if |a11| > |a12| and |a22| >= |a21|, or if |a11| >= |a12| and |a22| > |a21|, condition (4.36) would be satisfied. For Equations (4.27) and (4.28) we have

    | (a12 a21)/(a11 a22) | = | (1)(3) / (1)(-10) | = 0.3

and the procedure should converge. If the same equations are interchanged, we get

    3 x1 - 10 x2 = 3
      x1 +    x2 = 2

For these equations

    | (a12 a21)/(a11 a22) | = | (-10)(1) / (3)(1) | = 3.33

Thus the same iterative procedure would diverge!
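The iteration traced in Table 4.1 can be sketched in Python. The stopping rule (absolute change below `tol`) and the names used here are illustrative choices, not from the text; as in the table, each newly computed value is used immediately:

```python
# A sketch of the Gauss-Seidel iteration for the pair of equations
# (4.27) x1 + x2 = 2 and (4.28) 3*x1 - 10*x2 = 3.

def gauss_seidel_2x2(max_iter=50, tol=1e-4):
    x1, x2 = 0.0, 0.0                      # initial guess x2 = 0
    for _ in range(max_iter):
        new_x1 = 2.0 - x2                  # from (4.27): x1 = 2 - x2
        new_x2 = (3.0 * new_x1 - 3.0) / 10.0   # from (4.28), latest x1 used
        if abs(new_x1 - x1) < tol and abs(new_x2 - x2) < tol:
            return new_x1, new_x2
        x1, x2 = new_x1, new_x2
    return x1, x2                          # 'safety exit' after max_iter

x1, x2 = gauss_seidel_2x2()
print(round(x1, 3), round(x2, 3))          # 1.769 0.231, as in Table 4.1
```

Swapping the roles of the two equations (updating x1 from (4.28) and x2 from (4.27)) makes the same loop diverge, mirroring the ratio test above.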
The student is urged to draw the straight lines corresponding to these equations and satisfy himself that the procedure diverges, and to answer why it diverges.

§4.7 AN ALGORITHM TO IMPLEMENT THE GAUSS-SEIDEL METHOD

Consider the set of n equations in n unknowns:

    a11 x1 + a12 x2 + ... + a1n xn = a(1, n+1)        (4.38)
    a21 x1 + a22 x2 + ... + a2n xn = a(2, n+1)        (4.39)
    a31 x1 + a32 x2 + ... + a3n xn = a(3, n+1)        (4.40)
    ...
    an1 x1 + an2 x2 + ... + ann xn = a(n, n+1)        (4.41)

In the Gauss-Seidel method, x2, x3, ..., xn are set equal to zero and x1(1) is calculated from (4.38). This value of x1 and zeros for x3, ..., xn are substituted in (4.39) and x2(1) is computed. The values of x1(1) and x2(1), along with zeros for x4, ..., xn, are used in (4.40) to compute x3(1). Finally x1(1), x2(1), ..., x(n-1)(1) are used in (4.41) to compute xn(1). With this the first iteration ends. In the second iteration the latest values are used in the same way to compute x1(2), x2(2), and so on. The main point to observe is that the latest approximations to the values of the variables are always used in an iteration.

The complete algorithm to implement the Gauss-Seidel iteration is given as Algorithm 4.9.

ALGORITHM 4.9 Gauss-Seidel Iterative Method

    1   for i = 1 to n in steps of 1 and j = 1 to (n + 1) in steps of 1 do Read aij endfor
    2   Read e, maxit
        Remarks: e is the allowed relative error in the result vector; maxit is the maximum
        number of iterations allowed for the solution to converge.
    3   for i = 1 to n in steps of 1 do xi <- 0 endfor
        Remarks: All xi's are initialised to zero. The following for loop sets the limit maxit
        on the maximum number of iterations to be allowed.
    4   for iter = 1 to maxit do
    5       big <- 0
    6       for i = 1 to n in steps of 1 do
    7           sum <- 0
    8           for j = 1 to n in steps of 1 do
    9               if (j != i) then sum <- sum + aij * xj
                endfor
    10          temp <- (a(i, n+1) - sum)/aii
    11          relerror <- |(xi - temp)/temp|
    12          if (relerror > big) then big <- relerror
    13          xi <- temp
            endfor
    14      if (big <= e) then
            begin
                Write 'Converges to a solution'
                for i = 1 to n in steps of 1 do Write xi endfor
                Stop
            end
        endfor
    15  Write 'Does not converge in maxit iterations'
    16  for i = 1 to n in steps of 1 do Write xi endfor
    17  Stop

The main part of Algorithm 4.9 starts with statement 5. The variable big stores the largest magnitude of the relative error in the variables; it is initialised to 0 in statement 5. The loop consisting of statements 8 and 9 calculates, for a given i = k,

    sum = sum over j = 1 to n, j != k, of a(k, j) xj

In statement 10 the latest value of xk is calculated and temporarily stored:

    temp = latest value of xk = ( a(k, n+1) - sum ) / a(k, k)

Statement 11 calculates the relative error in xi (for i = k) between the two latest iterates. Statement 12 picks the biggest relative error among all the xi's. In statement 13, xi is set to the latest calculated value of the variable. The loop between statements 6 and 13 calculates the latest trial value of xi for all i's. On coming out of the loop, big holds the highest relative error. When big is smaller than the assigned error e, the solution has converged. Observe that if the relative error does not fall below the assigned error limit within a total number of iterations equal to maxit, then the procedure stops after giving an appropriate message. Such a 'safety exit' is essential in iterative methods. Typically the maximum number of iterations set would be 'reasonable', say, 25 iterations for a (20 x 20) set of simultaneous equations.

Observe that in the inner loop the sum is always formed with the latest calculated values of the variables. The analysis given to test the convergence of the Gauss-Seidel method for two equations may be extended to n simultaneous equations: the iterations converge if
    |aii| >= sum over j != i of |aij|   for all i

and

    |aii| > sum over j != i of |aij|    for at least one i

These conditions are normally satisfied when the diagonal elements dominate. Such cases occur particularly in equations which arise from discretizing partial differential equations. In such applications most of the off-diagonal elements are zero, and sometimes the non-zero elements form a pattern. When such a pattern is perceived, the Gauss-Seidel algorithm may be modified to take advantage of it and reduce the number of arithmetic operations.

§4.8 COMPARISON OF DIRECT AND ITERATIVE METHODS

Both direct and iterative methods have their strengths and weaknesses, and a choice is based on the particular set of equations to be solved. Gauss elimination will lead to a solution in a finite number of steps for any set of equations, provided the determinant of the coefficients is not very small. The disadvantages are:

(i) The computational effort is approximately (2n^3)/3 arithmetic operations for triangularizing n equations.
(ii) The amount of book-keeping necessary (such as row interchanges) is considerable.
(iii) The rounding errors may become quite large, particularly for ill-conditioned equations.
(iv) Any special structure in the matrix of coefficients is difficult to preserve during elimination. Thus savings cannot be made in the calculations if there are many zero coefficients.

In contrast to the elimination method, iterative methods may not always converge; in fact, convergence can be guaranteed only under special conditions. When it converges, however, the iterative method is superior to the elimination method due to the following advantages:

(i) The computational effort is approximately 2n^2 arithmetic operations per iteration; if convergence is achieved in fewer than about n/3 iterations, it is significantly superior to the elimination method. Further, a special pattern of zeros in the coefficient matrix can be used to tailor a procedure with reduced calculation effort.
(ii) Another important advantage of the iterative method is the small rounding error, the rounding error being only the one committed in the last iteration. Thus for an ill-conditioned system an iterative method is a good choice.

(iii) In the Gauss-Seidel method (Algorithm 4.9), xi is set equal to the newly calculated value of the iterate stored in temp. Instead, if we set

    xi <- xi + w * (temp - xi)

then with w = 1 this reduces to xi <- temp as before. It has been found in many problems that if w is suitably chosen, convergence of the Gauss-Seidel method is faster. Unfortunately there is no general way to find the best value of w; it is normally found by trial and error. With such an acceleration the iterative method becomes quite attractive.

EXERCISES

4.1 Solve the following sets of equations by Gauss elimination:

    (i)   x1 + x2 + x3 = 3
          2 x1 + 3 x2 + x3 = 6
          x1 - x2 - x3 = ...   (right-hand side illegible in the source)

    (ii)  2 x1 + 4 x2 + 2 x3 = 15
          2 x1 + x2 + 2 x3 = -5
          4 x1 + x2 - 2 x3 = 0

Is row interchange necessary for the above equations?

4.2 Write a computational procedure to evaluate the determinants given in the text. Check your procedure by hand calculation.

4.3 Solve the following simultaneous equations with complex coefficients. Develop a computer procedure to solve n equations in n unknowns with complex coefficients by extending Gauss elimination.

    (1 + 2i) x1 + (1 - 2i) x2 = 3 + 7i
    (3 + 5i) x1 + (2 - 4i) x2 = -2 - 4i

4.4 In the Gauss elimination procedure the equations are triangularized by eliminating the coefficients of the variables below the pivot in a given column, as shown:

    a11 x1 + a12 x2 + a13 x3 = a14
              a22 x2 + a23 x3 = a24
                       a33 x3 = a34

It would be possible to eliminate the coefficients both above and below the pivot in a given column, as shown below:

    a11 x1 = a14
    a22 x2 = a24
    a33 x3 = a34

By this method the equations are diagonalized and the back-substitution step is eliminated. This procedure is called Gauss-Jordan elimination. Write a computational algorithm for the general case of n equations in n unknowns.
4.5 Solve the following simultaneous equations by the Gauss-Jordan elimination procedure explained in Problem 4.4:

    2 x1 + 6 x2 - x3 = -14
    5 x1 - x2 + 2 x3 = 29
    -3 x1 - 4 x2 + x3 = 4

4.6 Modify Algorithms 4.6 and 4.7 for Gauss elimination to incorporate iterative refinement. There must be provision to refine the solution several times.

4.7 Solve the two simultaneous equations of Example 2.22 of Section 2.5, without row interchange, by iterative refinement.

4.8 Repeat Problem 4.1 using the Gauss-Seidel iterative method. Get your answer to 4 significant digits. Compare your answer with that obtained using the elimination method.

4.9 Solve the following equations by the Gauss-Seidel procedure. The answer should be correct to 3 significant digits.

    9 x1 + 2 x2 + 4 x3 = 20
    x1 + 10 x2 + 4 x3 = 6
    2 x1 - 4 x2 + 10 x3 = -15

4.10 Apply the accelerated convergence technique to the example of Section 4.6 with various values of w. Comment.

4.11 Solve the following simultaneous equations

    2.5 x1 + 5.2 x2 = 6.2
    1.251 x1 + 2.605 x2 = 3.152

by Gauss elimination using floating-point arithmetic and get your answer to 4 significant digits. Improve the solution by iterative refinement.

4.12 Repeat Problem 4.11 using the Gauss-Seidel method. Compare the answers obtained.

4.13 Modify Algorithm 4.9 to include the idea of accelerated convergence of the Gauss-Seidel method discussed in the last section.

4.14 Consider the following sparse set of equations:

    2 x1 - 2 x2 = 1
    -x1 + 2 x2 - 3 x3 = -2
    -2 x2 + 2 x3 - 4 x4 = -1
    x3 - x4 = 3

(i) Are the zero coefficients preserved as zeros during Gauss elimination?
(ii) If yes, how would you modify the Gauss elimination algorithm to reduce the number of arithmetic operations?

4.15 Modify the Gauss-Seidel algorithm to solve tridiagonal linear systems of equations, of the form shown below, so that unnecessary arithmetic operations are not performed.
    a11 x1 + a12 x2                                                  = b1
    a21 x1 + a22 x2 + a23 x3                                         = b2
             a32 x2 + a33 x3 + a34 x4                                = b3
                      ...
             a(n-1, n-2) x(n-2) + a(n-1, n-1) x(n-1) + a(n-1, n) xn  = b(n-1)
                                  a(n, n-1) x(n-1)   + a(n, n) xn    = bn
