



These lecture notes cover the process of obtaining the triangular factorization of a matrix using Gaussian elimination and how to use that factorization to solve linear systems. They include a detailed example and a theorem on factoring a matrix into a lower-triangular matrix and an upper-triangular matrix.
SEC. 3.5 TRIANGULAR FACTORIZATION
We now discuss how to obtain the triangular factorization. If row interchanges are not necessary when using Gaussian elimination, the multipliers $m_{ij}$ are the subdiagonal entries in $L$.
Example 3.21. Use Gaussian elimination to construct the triangular factorization of the matrix $A$.

The matrix $L$ will be constructed from an identity matrix placed at the left. For each row operation used to construct the upper-triangular matrix, the multipliers $m_{ij}$ will be put in their proper places at the left. Start with the factored form $A = IA$, with the identity matrix placed at the left.

Row 1 is used to eliminate the elements of $A$ in column 1 below $a_{11}$. The multiples $m_{21} = -0.5$ and $m_{31} = 0.25$ of row 1 are subtracted from rows 2 and 3, respectively. These multipliers are put in the matrix at the left, and the result is a product whose second factor has zeros in column 1 below $a_{11}$.

Row 2 is used to eliminate the elements in column 2 below the diagonal of the second factor of $A$ in the above product. The multiple $m_{32} = -0.5$ of the second row is subtracted from row 3, the multiplier is entered in the matrix at the left, and we have the desired triangular factorization of $A$.
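This construction can be sketched in MATLAB. The 3-by-3 matrix A used below is an assumed example, chosen only so that its multipliers agree with the values $m_{21} = -0.5$, $m_{31} = 0.25$, and $m_{32} = -0.5$ quoted in the example; the loop records each multiplier in L while the row operations reduce A to U.

% Sketch: construct L and U by recording the elimination multipliers.
A = [4 3 -1; -2 -4 5; 1 2 6];   % assumed matrix, consistent with the multipliers above
U = A;                          % U will become the upper-triangular factor
L = eye(3);                     % start from the identity matrix placed at the left
for p = 1:2
    for r = p+1:3
        L(r,p) = U(r,p) / U(p,p);            % multiplier m_rp
        U(r,:) = U(r,:) - L(r,p) * U(p,:);   % subtract m_rp times row p
    end
end
disp(L), disp(U), disp(L*U - A)              % L*U reproduces A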
Theorem 3.10 (Direct Factorization A = LU: No Row Interchanges). Suppose that Gaussian elimination, without row interchanges, can be performed successfully to solve the general linear system AX = B. Then the matrix A can be factored as the product of a lower-triangular matrix L and an upper-triangular matrix U:

$$A = LU.$$

Furthermore, L can be constructed to have 1's on its diagonal and U will have nonzero diagonal elements. After finding L and U, the solution X is computed in two steps:

1. Solve $LY = B$ for $Y$ using forward substitution.
2. Solve $UX = Y$ for $X$ using back substitution.
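The two substitution steps can be written out directly. The following is a minimal MATLAB-style sketch, assuming L (unit lower triangular), U (upper triangular with nonzero diagonal), and the column vector B are already defined in the workspace; Y and X are the intermediate and final solutions.

% Two-step solve of L*U*X = B, assuming L, U, and B are already defined.
N = length(B);
Y = zeros(N, 1);                              % forward substitution: L*Y = B
for r = 1:N
    Y(r) = B(r) - L(r, 1:r-1) * Y(1:r-1);     % no division needed: L(r,r) = 1
end
X = zeros(N, 1);                              % back substitution: U*X = Y
for r = N:-1:1
    X(r) = (Y(r) - U(r, r+1:N) * X(r+1:N)) / U(r, r);
end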
Proof. We will show that, when the Gaussian elimination process is followed and
B is stored in column N + 1 of the augmented matrix, the result after the upper-
triangularization step is the equivalent upper-triangular system UX = Y. The matrices
L, U, B, and Y will have the form

$$
L = \begin{bmatrix}
1 & 0 & 0 & \cdots & 0 \\
m_{21} & 1 & 0 & \cdots & 0 \\
m_{31} & m_{32} & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & & \vdots \\
m_{N1} & m_{N2} & m_{N3} & \cdots & 1
\end{bmatrix},
\qquad
B = \begin{bmatrix}
a^{(1)}_{1\,N+1} \\ a^{(1)}_{2\,N+1} \\ a^{(1)}_{3\,N+1} \\ \vdots \\ a^{(1)}_{N\,N+1}
\end{bmatrix},
$$

$$
U = \begin{bmatrix}
a^{(1)}_{11} & a^{(1)}_{12} & a^{(1)}_{13} & \cdots & a^{(1)}_{1N} \\
0 & a^{(2)}_{22} & a^{(2)}_{23} & \cdots & a^{(2)}_{2N} \\
0 & 0 & a^{(3)}_{33} & \cdots & a^{(3)}_{3N} \\
\vdots & \vdots & \vdots & & \vdots \\
0 & 0 & 0 & \cdots & a^{(N)}_{NN}
\end{bmatrix},
\qquad
Y = \begin{bmatrix}
a^{(1)}_{1\,N+1} \\ a^{(2)}_{2\,N+1} \\ a^{(3)}_{3\,N+1} \\ \vdots \\ a^{(N)}_{N\,N+1}
\end{bmatrix}.
$$
Step 1. Store the coefficients in the augmented matrix. The superscript on $a^{(1)}_{rc}$ means that this is the first time that a number is stored in location $(r, c)$:
$$
\left[\begin{array}{ccccc|c}
a^{(1)}_{11} & a^{(1)}_{12} & a^{(1)}_{13} & \cdots & a^{(1)}_{1N} & a^{(1)}_{1\,N+1} \\
a^{(1)}_{21} & a^{(1)}_{22} & a^{(1)}_{23} & \cdots & a^{(1)}_{2N} & a^{(1)}_{2\,N+1} \\
a^{(1)}_{31} & a^{(1)}_{32} & a^{(1)}_{33} & \cdots & a^{(1)}_{3N} & a^{(1)}_{3\,N+1} \\
\vdots & \vdots & \vdots & & \vdots & \vdots \\
a^{(1)}_{N1} & a^{(1)}_{N2} & a^{(1)}_{N3} & \cdots & a^{(1)}_{NN} & a^{(1)}_{N\,N+1}
\end{array}\right]
$$
Step 2. Eliminate $x_1$ in rows 2 through $N$ and store the multiplier $m_{r1}$, used to eliminate $x_1$ in row $r$, in the matrix at location $(r, 1)$.

for r = 2:N
    m_{r1} = a^{(1)}_{r1} / a^{(1)}_{11};
    a_{r1} = m_{r1};
    for c = 2:N+1
        a^{(2)}_{rc} = a^{(1)}_{rc} - m_{r1} * a^{(1)}_{1c};
    end
end
In general, step $p + 1$ eliminates $x_p$ in rows $p + 1$ through $N$ and stores the multiplier $m_{rp}$, used to eliminate $x_p$ in row $r$, in the matrix at location $(r, p)$:

for r = p+1:N
    m_{rp} = a^{(p)}_{rp} / a^{(p)}_{pp};
    a_{rp} = m_{rp};
    for c = p+1:N+1
        a^{(p+1)}_{rc} = a^{(p)}_{rc} - m_{rp} * a^{(p)}_{pc};
    end
end
The final result after $x_{N-1}$ has been eliminated from row $N$ is

$$
\left[\begin{array}{ccccc|c}
a^{(1)}_{11} & a^{(1)}_{12} & a^{(1)}_{13} & \cdots & a^{(1)}_{1N} & a^{(1)}_{1\,N+1} \\
m_{21} & a^{(2)}_{22} & a^{(2)}_{23} & \cdots & a^{(2)}_{2N} & a^{(2)}_{2\,N+1} \\
m_{31} & m_{32} & a^{(3)}_{33} & \cdots & a^{(3)}_{3N} & a^{(3)}_{3\,N+1} \\
\vdots & \vdots & \vdots & & \vdots & \vdots \\
m_{N1} & m_{N2} & m_{N3} & \cdots & a^{(N)}_{NN} & a^{(N)}_{N\,N+1}
\end{array}\right]
$$
The upper-triangular process is now complete. Notice that one array is used to store
the elements of both L and U. The 1’s of L are not stored, nor are the 0’s of L and
U that lie above and below the diagonal, respectively. Only the essential coefficients
needed to reconstruct L and U are stored!
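A MATLAB sketch of this compact storage scheme follows; the array name Aug and the right-hand side B are assumptions, and the loop is the elimination described in the steps above, with each multiplier written into the position that the eliminated entry vacates.

% Compact storage of L and U in one array (sketch); assumes A is N-by-N,
% needs no row interchanges, and B is N-by-1.
N = size(A, 1);
Aug = [A, B];                          % Step 1: store B in column N+1
for p = 1:N-1
    for r = p+1:N
        m = Aug(r,p) / Aug(p,p);       % multiplier m_rp
        Aug(r,p) = m;                  % store m_rp in location (r,p)
        Aug(r,p+1:N+1) = Aug(r,p+1:N+1) - m * Aug(p,p+1:N+1);
    end
end
L = tril(Aug(:,1:N), -1) + eye(N);     % multipliers plus unit diagonal
U = triu(Aug(:,1:N));                  % upper-triangular factor
Y = Aug(:,N+1);                        % transformed right-hand side

The 1's of L and the zeros above and below the two triangles are regenerated by eye, tril, and triu rather than stored, matching the remark above.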
We must now verify that the product $LU = A$. Suppose that $D = LU$ and consider the case when $r \le c$. Then $d_{rc}$ is

(9) $\quad d_{rc} = m_{r1}\,a^{(1)}_{1c} + m_{r2}\,a^{(2)}_{2c} + \cdots + m_{r\,r-1}\,a^{(r-1)}_{r-1\,c} + a^{(r)}_{rc}.$
Using the replacement equations in steps 1 through $p + 1 = r$, we obtain the following substitutions:

(10)
$\quad m_{r1}\,a^{(1)}_{1c} = a^{(1)}_{rc} - a^{(2)}_{rc},$
$\quad m_{r2}\,a^{(2)}_{2c} = a^{(2)}_{rc} - a^{(3)}_{rc},$
$\quad\;\;\vdots$
$\quad m_{r\,r-1}\,a^{(r-1)}_{r-1\,c} = a^{(r-1)}_{rc} - a^{(r)}_{rc}.$
When the substitutions in (10) are used in (9), the result is

$d_{rc} = a^{(1)}_{rc} - a^{(2)}_{rc} + a^{(2)}_{rc} - a^{(3)}_{rc} + \cdots + a^{(r-1)}_{rc} - a^{(r)}_{rc} + a^{(r)}_{rc} = a^{(1)}_{rc}.$
The proof of the other case, $r > c$, is similar. •
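For instance, with $r = 3$ and any $c \ge 3$, equations (9) and (10) give

$d_{3c} = m_{31}a^{(1)}_{1c} + m_{32}a^{(2)}_{2c} + a^{(3)}_{3c} = \bigl(a^{(1)}_{3c} - a^{(2)}_{3c}\bigr) + \bigl(a^{(2)}_{3c} - a^{(3)}_{3c}\bigr) + a^{(3)}_{3c} = a^{(1)}_{3c},$

so the sum telescopes and row 3 of $LU$ agrees with row 3 of $A$ in every column $c \ge 3$.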
The process for triangularizing is the same for both the Gaussian elimination and triangular factorization methods. We can count the operations if we look at the first $N$ columns of the augmented matrix in Theorem 3.10. The outer loop of step $p + 1$ requires $N - p = N - (p + 1) + 1$ divisions to compute the multipliers $m_{rp}$. Inside the loops, but for the first $N$ columns only, a total of $(N - p)(N - p)$ multiplications and the same number of subtractions are required to compute the new row elements $a^{(p+1)}_{rc}$.
This process is carried out for $p = 1, 2, \ldots, N - 1$. Thus the triangular factorization portion of $A = LU$ requires

(11) $\quad \displaystyle\sum_{p=1}^{N-1} (N-p)(N-p+1) = \frac{N^3 - N}{3}$ multiplications and divisions

and

$\quad \displaystyle\sum_{p=1}^{N-1} (N-p)(N-p) = \frac{2N^3 - 3N^2 + N}{6}$ subtractions.
To establish (11), we use the summation formulas

$\displaystyle\sum_{k=1}^{M} k = \frac{M(M+1)}{2} \qquad \text{and} \qquad \sum_{k=1}^{M} k^2 = \frac{M(M+1)(2M+1)}{6}.$
Using the change of variables $k = N - p$, we rewrite (11) as

$\displaystyle\sum_{p=1}^{N-1} (N-p)(N-p+1) = \sum_{p=1}^{N-1} (N-p) + \sum_{p=1}^{N-1} (N-p)^2$
$\displaystyle\qquad = \sum_{k=1}^{N-1} k + \sum_{k=1}^{N-1} k^2 = \frac{(N-1)N}{2} + \frac{(N-1)N(2N-1)}{6} = \frac{N^3 - N}{3}.$
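For example, when $N = 3$ the left-hand side of (11) is $\sum_{p=1}^{2}(3-p)(3-p+1) = 2\cdot 3 + 1\cdot 2 = 8$, which agrees with $(3^3 - 3)/3 = 8$; likewise the subtraction count is $2^2 + 1^2 = 5 = (2\cdot 27 - 3\cdot 9 + 3)/6$.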
Once the triangular factorization $A = LU$ has been obtained, the solution to the lower-triangular system $LY = B$ will require $0 + 1 + \cdots + (N - 1) = (N^2 - N)/2$ multiplications and subtractions; no divisions are required because the diagonal elements of $L$ are 1's. Then the solution of the upper-triangular system $UX = Y$ requires $1 + 2 + \cdots + N = (N^2 + N)/2$ multiplications and divisions and $(N^2 - N)/2$ subtractions. Therefore, finding the solution to $LUX = B$ requires $N^2$ multiplications and divisions, and $N^2 - N$ subtractions.
We see that the bulk of the calculations lies in the triangularization portion of the solution. If the linear system is to be solved many times with the same coefficient matrix $A$ but different right-hand sides $B$, it is not necessary to triangularize $A$ each time: the factors $L$ and $U$ can be reused, and only the forward- and back-substitution steps need to be repeated.
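A short MATLAB sketch of this reuse pattern, assuming L and U have already been computed for A (for example by the compact-storage loop above) and that the several right-hand sides are stored as the columns of a matrix Bs (an assumed name):

% Reuse one factorization A = L*U for several right-hand sides (sketch).
X = zeros(size(Bs));
for j = 1:size(Bs, 2)
    Yj = L \ Bs(:, j);     % forward substitution for Y
    X(:, j) = U \ Yj;      % back substitution for X
end

The triangularization cost of roughly $N^3/3$ multiplications is paid once; each additional right-hand side costs only about $N^2$ further multiplications and divisions.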