Triangular Factorization in Linear Systems (Lecture notes)

The process of obtaining the triangular factorization of a matrix using Gaussian elimination, and how to use it to solve linear systems. It includes a detailed example and a theorem about the factorization of a matrix into a lower-triangular and an upper-triangular matrix.
SEC.3.5 TRIANGULAR FACTORIZATION 143
Triangular Factorization
We now discuss how to obtain the triangular factorization. If row interchanges are not necessary when using Gaussian elimination, the multipliers m_{ij} are the subdiagonal entries in L.
Example 3.21. Use Gaussian elimination to construct the triangular factorization of the matrix

$$A = \begin{bmatrix} 4 & 3 & -1 \\ -2 & -4 & 5 \\ 1 & 2 & 6 \end{bmatrix}.$$

The matrix L will be constructed from an identity matrix placed at the left. For each row operation used to construct the upper-triangular matrix, the multipliers m_{ij} will be put in their proper places at the left. Start with

$$A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 4 & 3 & -1 \\ -2 & -4 & 5 \\ 1 & 2 & 6 \end{bmatrix}.$$

Row 1 is used to eliminate the elements of A in column 1 below a_{11}. The multiples m_{21} = -0.5 and m_{31} = 0.25 of row 1 are subtracted from rows 2 and 3, respectively. These multipliers are put in the matrix at the left and the result is

$$A = \begin{bmatrix} 1 & 0 & 0 \\ -0.5 & 1 & 0 \\ 0.25 & 0 & 1 \end{bmatrix} \begin{bmatrix} 4 & 3 & -1 \\ 0 & -2.5 & 4.5 \\ 0 & 1.25 & 6.25 \end{bmatrix}.$$

Row 2 is used to eliminate the elements in column 2 below the diagonal of the second factor of A in the above product. The multiple m_{32} = -0.5 of the second row is subtracted from row 3, the multiplier is entered in the matrix at the left, and we have the desired triangular factorization of A:

$$(8)\qquad A = \begin{bmatrix} 1 & 0 & 0 \\ -0.5 & 1 & 0 \\ 0.25 & -0.5 & 1 \end{bmatrix} \begin{bmatrix} 4 & 3 & -1 \\ 0 & -2.5 & 4.5 \\ 0 & 0 & 8.5 \end{bmatrix}.$$
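As a quick check on the factorization in (8), a few lines of plain Python (a sketch, not part of the original text) multiply the two factors and recover the original matrix A:

```python
# Factors from equation (8) of Example 3.21.
L = [[1.0, 0.0, 0.0],
     [-0.5, 1.0, 0.0],
     [0.25, -0.5, 1.0]]
U = [[4.0, 3.0, -1.0],
     [0.0, -2.5, 4.5],
     [0.0, 0.0, 8.5]]

def matmul(X, Y):
    """Triple-loop product of two square matrices."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = matmul(L, U)  # reproduces the original matrix of Example 3.21
```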
Theorem 3.10 (Direct Factorization A = LU: No Row Interchanges). Suppose that Gaussian elimination, without row interchanges, can be performed successfully to solve the general linear system AX = B. Then the matrix A can be factored as the product of a lower-triangular matrix L and an upper-triangular matrix U:

A = LU.

Furthermore, L can be constructed to have 1's on its diagonal and U will have nonzero diagonal elements. After finding L and U, the solution X is computed in two steps:

1. Solve LY = B for Y using forward substitution.
2. Solve UX = Y for X using back substitution.
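The two-step solve can be sketched in a few lines of Python, using the factors of Example 3.21. The right-hand side b below is hypothetical, chosen so that the exact solution is x = (1, 1, 1):

```python
L = [[1.0, 0.0, 0.0], [-0.5, 1.0, 0.0], [0.25, -0.5, 1.0]]
U = [[4.0, 3.0, -1.0], [0.0, -2.5, 4.5], [0.0, 0.0, 8.5]]
b = [6.0, -1.0, 9.0]  # hypothetical right-hand side B

def forward_sub(L, b):
    """Step 1: solve LY = B top down; no divisions since diag(L) = 1."""
    y = []
    for r in range(len(b)):
        y.append(b[r] - sum(L[r][c] * y[c] for c in range(r)))
    return y

def back_sub(U, y):
    """Step 2: solve UX = Y from the bottom row up."""
    n = len(y)
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(U[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (y[r] - s) / U[r][r]
    return x

x = back_sub(U, forward_sub(L, b))  # x == [1.0, 1.0, 1.0]
```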


144 CHAP. 3 SOLUTION OF LINEAR SYSTEMS AX = B

Proof. We will show that, when the Gaussian elimination process is followed and B is stored in column N + 1 of the augmented matrix, the result after the upper-triangularization step is the equivalent upper-triangular system UX = Y. The matrices L, U, B, and Y will have the form

$$L = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ m_{21} & 1 & 0 & \cdots & 0 \\ m_{31} & m_{32} & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ m_{N1} & m_{N2} & m_{N3} & \cdots & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} a^{(1)}_{1\,N+1} \\ a^{(1)}_{2\,N+1} \\ a^{(1)}_{3\,N+1} \\ \vdots \\ a^{(1)}_{N\,N+1} \end{bmatrix},$$

$$U = \begin{bmatrix} a^{(1)}_{11} & a^{(1)}_{12} & a^{(1)}_{13} & \cdots & a^{(1)}_{1N} \\ 0 & a^{(2)}_{22} & a^{(2)}_{23} & \cdots & a^{(2)}_{2N} \\ 0 & 0 & a^{(3)}_{33} & \cdots & a^{(3)}_{3N} \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & a^{(N)}_{NN} \end{bmatrix}, \qquad Y = \begin{bmatrix} a^{(1)}_{1\,N+1} \\ a^{(2)}_{2\,N+1} \\ a^{(3)}_{3\,N+1} \\ \vdots \\ a^{(N)}_{N\,N+1} \end{bmatrix}.$$

Remark. To find just L and U, the (N + 1)st column is not needed.

Step 1. Store the coefficients in the augmented matrix. The superscript on a^{(1)}_{rc} means that this is the first time that a number is stored in location (r, c).

$$\begin{bmatrix} a^{(1)}_{11} & a^{(1)}_{12} & a^{(1)}_{13} & \cdots & a^{(1)}_{1N} & a^{(1)}_{1\,N+1} \\ a^{(1)}_{21} & a^{(1)}_{22} & a^{(1)}_{23} & \cdots & a^{(1)}_{2N} & a^{(1)}_{2\,N+1} \\ a^{(1)}_{31} & a^{(1)}_{32} & a^{(1)}_{33} & \cdots & a^{(1)}_{3N} & a^{(1)}_{3\,N+1} \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ a^{(1)}_{N1} & a^{(1)}_{N2} & a^{(1)}_{N3} & \cdots & a^{(1)}_{NN} & a^{(1)}_{N\,N+1} \end{bmatrix}$$

Step 2. Eliminate x_1 in rows 2 through N and store the multiplier m_{r1}, used to eliminate x_1 in row r, in the matrix at location (r, 1).

for r = 2 : N
    m_{r1} = a^{(1)}_{r1} / a^{(1)}_{11};
    a_{r1} = m_{r1};
    for c = 2 : N + 1
        a^{(2)}_{rc} = a^{(1)}_{rc} − m_{r1} · a^{(1)}_{1c};
    end
end


In general, step p + 1 eliminates x_p in rows p + 1 through N and stores the multipliers in column p:

for r = p + 1 : N
    m_{rp} = a^{(p)}_{rp} / a^{(p)}_{pp};
    a_{rp} = m_{rp};
    for c = p + 1 : N + 1
        a^{(p+1)}_{rc} = a^{(p)}_{rc} − m_{rp} · a^{(p)}_{pc};
    end
end
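The loops above translate directly into code. The sketch below (plain Python with 0-based indices, and my own function name) overwrites a single array so that the multipliers sit below the diagonal and U sits on and above it:

```python
def lu_in_place(a):
    """Factor a square matrix in place; assumes no row interchanges
    are needed, i.e. every pivot a[p][p] is nonzero."""
    n = len(a)
    for p in range(n - 1):
        for r in range(p + 1, n):
            m = a[r][p] / a[p][p]   # multiplier m_rp
            a[r][p] = m             # store it where the zero would go
            for c in range(p + 1, n):
                a[r][c] -= m * a[p][c]
    return a

A = [[4.0, 3.0, -1.0], [-2.0, -4.0, 5.0], [1.0, 2.0, 6.0]]
lu_in_place(A)
# Subdiagonal entries are the multipliers of Example 3.21:
# A[1][0] = -0.5, A[2][0] = 0.25, A[2][1] = -0.5
```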

The final result after x_{N−1} has been eliminated from row N is

$$\begin{bmatrix} a^{(1)}_{11} & a^{(1)}_{12} & a^{(1)}_{13} & \cdots & a^{(1)}_{1N} & a^{(1)}_{1\,N+1} \\ m_{21} & a^{(2)}_{22} & a^{(2)}_{23} & \cdots & a^{(2)}_{2N} & a^{(2)}_{2\,N+1} \\ m_{31} & m_{32} & a^{(3)}_{33} & \cdots & a^{(3)}_{3N} & a^{(3)}_{3\,N+1} \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ m_{N1} & m_{N2} & m_{N3} & \cdots & a^{(N)}_{NN} & a^{(N)}_{N\,N+1} \end{bmatrix}.$$

The upper-triangular process is now complete. Notice that one array is used to store the elements of both L and U. The 1's of L are not stored, nor are the 0's of L and U that lie above and below the diagonal, respectively. Only the essential coefficients needed to reconstruct L and U are stored!
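To make the packed storage concrete, a small helper (hypothetical, written here for illustration) unpacks the single array into explicit L and U factors; the numbers are the packed result of Example 3.21:

```python
def unpack(a):
    """Split the packed array into L (unit diagonal) and U."""
    n = len(a)
    L = [[a[r][c] if c < r else (1.0 if c == r else 0.0)
          for c in range(n)] for r in range(n)]
    U = [[a[r][c] if c >= r else 0.0 for c in range(n)]
         for r in range(n)]
    return L, U

# Packed array after eliminating Example 3.21's matrix:
packed = [[4.0, 3.0, -1.0], [-0.5, -2.5, 4.5], [0.25, -0.5, 8.5]]
L, U = unpack(packed)
```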

We must now verify that the product LU = A. Suppose that D = LU and consider the case r ≤ c. Then d_{rc} is

$$(9)\qquad d_{rc} = m_{r1}\,a^{(1)}_{1c} + m_{r2}\,a^{(2)}_{2c} + \cdots + m_{r\,r-1}\,a^{(r-1)}_{r-1\,c} + a^{(r)}_{rc}.$$

Using the replacement equations in steps 1 through p + 1 = r, we obtain the following substitutions:

$$(10)\qquad \begin{aligned} m_{r1}\,a^{(1)}_{1c} &= a^{(1)}_{rc} - a^{(2)}_{rc},\\ m_{r2}\,a^{(2)}_{2c} &= a^{(2)}_{rc} - a^{(3)}_{rc},\\ &\;\;\vdots\\ m_{r\,r-1}\,a^{(r-1)}_{r-1\,c} &= a^{(r-1)}_{rc} - a^{(r)}_{rc}. \end{aligned}$$

When the substitutions in (10) are used in (9), the sum telescopes and the result is

$$d_{rc} = a^{(1)}_{rc} - a^{(2)}_{rc} + a^{(2)}_{rc} - a^{(3)}_{rc} + \cdots + a^{(r-1)}_{rc} - a^{(r)}_{rc} + a^{(r)}_{rc} = a^{(1)}_{rc}.$$

The other case, r > c, is proved similarly. •

Computational Complexity

The process for triangularizing is the same for both the Gaussian elimination and triangular factorization methods. We can count the operations if we look at the first N columns of the augmented matrix in Theorem 3.10. The outer loop of step p + 1 requires N − p = N − (p + 1) + 1 divisions to compute the multipliers m_{rp}. Inside the loops, but for the first N columns only, a total of (N − p)(N − p) multiplications and the same number of subtractions are required to compute the new row elements a^{(p+1)}_{rc}. This process is carried out for p = 1, 2, ..., N − 1. Thus the triangular factorization portion of A = LU requires

$$(11)\qquad \sum_{p=1}^{N-1}(N-p)(N-p+1) = \frac{N^3 - N}{3} \quad \text{multiplications and divisions}$$

and

$$\sum_{p=1}^{N-1}(N-p)(N-p) = \frac{2N^3 - 3N^2 + N}{6} \quad \text{subtractions.}$$

To establish (11), we use the summation formulas

$$\sum_{k=1}^{M}k = \frac{M(M+1)}{2} \quad \text{and} \quad \sum_{k=1}^{M}k^2 = \frac{M(M+1)(2M+1)}{6}.$$

Using the change of variables k = N − p, we rewrite (11) as

$$\sum_{p=1}^{N-1}(N-p)(N-p+1) = \sum_{p=1}^{N-1}(N-p) + \sum_{p=1}^{N-1}(N-p)^2 = \sum_{k=1}^{N-1}k + \sum_{k=1}^{N-1}k^2 = \frac{(N-1)N}{2} + \frac{(N-1)N(2N-1)}{6} = \frac{N^3 - N}{3}.$$
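The closed forms above are easy to spot-check numerically; the short sketch below compares the raw sums with the formulas for several values of N:

```python
# Check the closed forms in (11) against the raw sums.
def mult_count(N):
    """Multiplications and divisions in the factorization step."""
    return sum((N - p) * (N - p + 1) for p in range(1, N))

def sub_count(N):
    """Subtractions in the factorization step."""
    return sum((N - p) ** 2 for p in range(1, N))

ok = all(mult_count(N) == (N**3 - N) // 3 and
         sub_count(N) == (2 * N**3 - 3 * N**2 + N) // 6
         for N in range(2, 50))
print(ok)  # True
```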

Once the triangular factorization A = LU has been obtained, the solution to the lower-triangular system LY = B will require 0 + 1 + · · · + (N − 1) = (N² − N)/2 multiplications and subtractions; no divisions are required because the diagonal elements of L are 1's. Then the solution of the upper-triangular system UX = Y requires 1 + 2 + · · · + N = (N² + N)/2 multiplications and divisions and (N² − N)/2 subtractions. Therefore, finding the solution to LUX = B requires

N² multiplications and divisions, and N² − N subtractions.

We see that the bulk of the calculations lies in the triangularization portion of the solution. If the linear system is to be solved many times, with the same coefficient