NUMERICAL METHODS IN CIVIL ENGINEERING
LECTURE NOTES
Janusz ORKISZ
2007-09-09
l.888 Numerical Methods in Civil Engineering I
Introduction, errors in numerical analysis. Solution of nonlinear algebraic equations.
Solution of large systems of linear algebraic equations by direct and iterative methods.
Introduction to matrix eigenvalue problems. Examples are drawn from structural mechanics.
Prep. Admission to Graduate School of Engineering.
l.889 Numerical Methods in Civil Engineering II
Continuation of l.888. Approximation of functions: interpolation and least squares
curve fitting; orthogonal polynomials. Numerical differentiation and integration. Solution of
ordinary and partial differential equations, and integral equations; discrete methods of
solution of initial and boundary-value problems. Examples are drawn from structural
mechanics, geotechnical engineering, hydrology and hydraulics.
Prep. l.888, Numerical Methods in Civil Engineering I.
Table of contents
1. Introduction
1.1. Numerical method
1.2. Errors in numerical computation
1.3. Significant digits
1.4. Number representation
1.5. Error bounds
1.6. Convergence
1.7. Stability
2. Solution of non-linear algebraic equations
2.1. Introduction
2.2. The method of simple iterations
2.2.1. Algorithm
2.2.2. Convergence theorems
2.2.3. Iterative solution criteria
2.2.4. Acceleration of convergence by the relaxation technique
2.3. Newton-Raphson method
2.3.1. Algorithm
2.3.2. Convergence criteria
2.3.3. Relaxation approach to the Newton-Raphson method
2.3.4. Modification for multiple roots
2.4. The secant method
2.5. Regula falsi
2.6. General remarks
3. Vector and matrix norm
3.1. Vector norm
3.2. Matrix norm
4. Systems of nonlinear equations
4.1. The method of simple iterations
4.2. Newton-Raphson method
5. Solution of simultaneous linear algebraic equations (SLAE)
5.1. Introduction
5.2. Gaussian elimination
5.3. Matrix factorization LU
5.4. Choleski elimination method
5.5. Iterative methods
5.6. Matrix factorization LU by the Gaussian Elimination
5.7. Matrix inversion
5.7.1. Inversion of a square matrix using Gaussian elimination
5.7.2. Inversion of the lower triangular matrix
5.8. Overdetermined simultaneous linear equations
6. The algebraic eigenvalue problem
6.1. Introduction
6.2. Classification of numerical solution methods
6.3. Theorems
6.4. The power method
6.4.1. Concept of the method and its convergence
6.4.2. Procedure using the Rayleigh quotient
6.4.3. Shift of the eigenspectrum
6.4.4. Application of a shift to acceleration of convergence to λ_max = λ_1
6.4.5. Application of a shift to acceleration of convergence to λ_min
6.5. Inverse iteration method
6.5.1. The basic algorithm
6.5.2. Use of inversion and shift in order to find the eigenvalue closest to a given one
6.6. The generalized eigenvalue problem
6.7. The Jacobi method
6.7.1. Conditions imposed on transformation
7. Ill-conditioned systems of simultaneous linear equations
7.1. Introduction
7.2. Solution approach
8. Approximation
8.1. Introduction
8.2. Interpolation in 1D space
8.3. Lagrangian interpolation (1D approximation)
8.4. Inverse Lagrangian interpolation
8.5. Chebyshev polynomials
8.6. Hermite Interpolation
8.7. Interpolation by spline functions
8.7.1. Introduction
8.7.2. Definition
8.7.3. Extra conditions
8.8. The Best approximation
8.9. Least squares approach
8.10. Inner Product
8.11. The generation of orthogonal functions by GRAM - SCHMIDT process
8.11.1. Orthonormalization
8.11.2. Weighted orthogonalization
8.11.3. Weighted orthonormalization
8.12. Approximation in a 2D domain
8.12.1. Lagrangian approximation over rectangular domain
9. Numerical differentiation
9.1. By means of approximation and differentiation
9.2. Generation of numerical derivatives by undetermined coefficients method
10. Numerical integration
10.1. Introduction
10.2. Newton-Cotes formulas
10.2.1. Composite rules
10.3. Gaussian quadrature
10.3.1. Composite Gauss-Legendre integration
10.3.2. Composite Gauss-Legendre integration
10.3.3. Summary of the Gaussian integration
10.3.4. Special topics
11. Numerical solution of ordinary differential equations
11.1. Introduction
11.2. Classification
11.3. Numerical approach
11.4. The Euler Method
11.5. Runge-Kutta method
11.6. Multistep formulas
11.6.1. Open (explicit) formulas (Adams-Bashforth)
11.6.2. Closed (implicit) formulas (Adams-Moulton)
11.6.3. Predictor-corrector method
12. Boundary value problems
12.1. Finite difference solution approach
13. On solution of boundary value problems for partial differential
equations by the finite difference approach (FDM)
13.1. Formulation
13.2. Classification of the second order problems
13.3. Solution approach for elliptic equations by FDM
14. Parabolic equations
15. Hyperbolic equations
16. MFDM
16.1. MWLS Approximation
1. INTRODUCTION
1.1. NUMERICAL METHOD
- any method that uses only the four basic arithmetic operations: + , - , × , ÷
- theory and art.
Derivation of an iterative formula for x = sqrt(a):

    x^2 = a ,   a >= 0
    x = a/x ,   x != 0
    x + x = x + a/x
    x = (1/2) ( x + a/x )

- numerical method (iteration scheme):

    x_n = (1/2) ( x_{n-1} + a/x_{n-1} )
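As an illustration, a minimal Python sketch of this square-root iteration; the
function name, the starting guess x = a and the tolerance are illustrative choices,
not part of the original notes:

    # Iteration x_n = (x_{n-1} + a/x_{n-1})/2, using only +, -, *, / operations.
    def heron_sqrt(a, tol=1e-12, max_iter=100):
        assert a > 0
        x = a                                   # starting guess
        for _ in range(max_iter):
            x_new = 0.5 * (x + a / x)
            if abs(x_new - x) <= tol * abs(x_new):   # relative convergence test
                return x_new
            x = x_new
        return x

    print(heron_sqrt(2.0))   # 1.4142135623730951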
1.2. ERRORS IN NUMERICAL COMPUTATION
Types of errors :
Inevitable error
(i) Error arising from the inadequacy of the mathematical model
Example :
Pendulum

[Figure: pendulum - rod of length l, angle φ, weight mg, reaction R, inertia force ma]

    a = l d²φ/dt²   -  acceleration

    d²φ/dt² + α (dφ/dt)^n + (g/l) sin φ = 0   -  nonlinear model including large
                                                 displacements and friction

    d²φ/dt² + (g/l) φ = 0   -  simplified model: small displacements,
                               linearized equation and no friction
(ii) Error (noise) in the input data

    l ,  g ,  φ(0) ,  (dφ/dt)|_{t=0} , ..........
Error of a solution method

Example:

    dφ/dt = f( t, φ(t) )

[Figure: exact solution φ(t) versus the Euler method polygon in the (t, φ) plane]
Numerical errors

(iii) Errors due to series truncation

Example

The temperature u(x,t) in a thin bar:

    u(x,t) = Σ_{n=1}^∞ C_n exp( −n²π²t/l² ) sin( nπx/l ) ≈ Σ_{n=1}^{10} C_n exp( −n²π²t/l² ) sin( nπx/l )
(iv) Round-off error

Example

    x = 2/3 ≈ 0.667
THE OTHER CLASSIFICATION OF ERRORS

(i) The absolute error
    Let x denote the exact value and x̃ the approximate value:
        δ = | x − x̃ |

(ii) The relative error
        ε = | x − x̃ | / | x |
PRESENTATION OF RESULTS

    x_expected = x̃ ± δ = x̃ ( 1 ± ε )

Example

    x_expected = 2.53 ± 0.10 ≈ 2.53 ( 1 ± 0.04 ) = 2.53 ± 4%
1.3. SIGNIFICANT DIGITS

Number of digits starting from the first non-zero digit on the left.

Example
    Number      Significant digits        Number     Significant digits
    2345000             7                 5                  1
    2.345000            7                 5.0                2
    0.023450            5                 5.000              4
    0.02345             4

Example: subtraction may destroy significant digits
    Number           Significant digits
     2.3485302               8
    -2.3485280               8
     0.0000022               2
1.4. NUMBER REPRESENTATION

    FIXED POINT (4 decimal digits)      FLOATING POINT (5 significant digits)
    324.2500          : 1000            3.2425 · 10^2      : 10^3
       .3242          : 100             3.2425 · 10^-1     : 10^2
       .0032          : 10              3.2425 · 10^-3     : 10
       .0003                            3.2425 · 10^-4
1.5. ERROR BOUNDS

(i) Summation and subtraction

    Given:    a ± Δa ,   b ± Δb
    Searched: x = a + b = ( a ± Δa ) + ( b ± Δb )

    error evaluation
        Δx = | x − a − b | <= Δa + Δb

(ii) Multiplication and division

    x = ab/(cf) ,    ln x = ln a + ln b − ln c − ln f

    dx/x = da/a + db/b − dc/c − df/f

    error evaluation
        Δx <= | x | ( Δa/|a| + Δb/|b| + Δc/|c| + Δf/|f| )
1.6. CONVERGENCE

Example

    x_n = (1/2) ( x_{n-1} + a/x_{n-1} ) ,    lim_{n→∞} x_n = ?

let x̄ = sqrt(a) and

    ε_n = ( x_n − x̄ ) / x̄    →    x_n = x̄ ( 1 + ε_n )

then

    x̄ ( 1 + ε_n ) = (1/2) [ x̄ ( 1 + ε_{n-1} ) + a / ( x̄ ( 1 + ε_{n-1} ) ) ]

    1 + ε_n = (1/2) [ ( 1 + ε_{n-1} ) + 1/( 1 + ε_{n-1} ) ] = (1/2) [ 2 + ε_{n-1}² / ( 1 + ε_{n-1} ) ]

hence

    ε_n = (1/2) ε_{n-1}² / ( 1 + ε_{n-1} )

for

    x_0 = a ,   ε_0 > 0    →    ε_{n-1} > 0 ,   ε_{n-1} / ( 1 + ε_{n-1} ) < 1

one obtains

    ε_n = (1/2) ε_{n-1} · ε_{n-1}/( 1 + ε_{n-1} ) < (1/2) ε_{n-1}

    ε_n < (1/2) ε_{n-1}    →    iteration is convergent

    lim_{n→∞} ε_n = 0    →    lim_{n→∞} x_n = sqrt(a)

In numerical calculations we deal with a finite number N of iterations. It denotes the first
term that satisfies an admissible error B requirement

    δ_n < B   for   n >= N ,   where

    δ_n = | x_n − x_{n-1} | / | x_n | = | x̄(1+ε_n) − x̄(1+ε_{n-1}) | / | x̄(1+ε_n) | = | ε_n − ε_{n-1} | / | 1 + ε_n |
1.7. STABILITY

Solution is stable if it remains bounded despite truncation and round-off errors.

Let each step be perturbed by a relative round-off error γ_n:

    x̃_n = (1/2) ( x̃_{n-1} + a/x̃_{n-1} ) ( 1 + γ_n )

then, with x̃_n = x̄ ( 1 + ε_n ),

    1 + ε_n = ( 1 + γ_n ) [ 1 + (1/2) ε_{n-1}² / ( 1 + ε_{n-1} ) ]    →    ε_n ≈ (1/2) ε_{n-1}² / ( 1 + ε_{n-1} ) + γ_n

    lim_{n→∞} ε_n = γ_∞   -  the precision of the final result corresponds to the precision
                             of the last step of calculations, i.e.

    x̃ ≈ x̄ ( 1 + γ_n )

Example

Unstable calculations:
- time integration process
- ill-conditioned simultaneous algebraic equations
2. SOLUTION OF NON-LINEAR ALGEBRAIC EQUATIONS
2.1. INTRODUCTION
- sources of algebraic equations
- multiple roots
- start from a sketch
- iteration methods

equation to be solved:   y(x) = 0    →    x = ...
2.2. THE METHOD OF SIMPLE ITERATIONS
2.2.1. Algorithm

Algorithm                     Example:  a = 2 ,  x_0 = 2
Let
    x = f(x)                  x_n = (1/2) ( x_{n-1} + 2/x_{n-1} )

    x_1 = f(x_0)              x_1 = (1/2) ( 2 + 2/2 ) = 3/2 = 1.5000
    x_2 = f(x_1)              x_2 = (1/2) ( 3/2 + 2·(2/3) ) = 17/12 = 1.4167
    ..................        ..................
    x_n = f(x_{n-1})
    ..................
Geometrical interpretation

Example:

    x² − 4x + 2.3 = 0    →    x = f(x)

Algorithm

    (i)  x = sqrt( 4x − 2.3 )       x_n = sqrt( 4 x_{n-1} − 2.3 )
    (ii) x = ( x² + 2.3 ) / 4       x_n = ( x_{n-1}² + 2.3 ) / 4

Let x_0 = 0.6

    (i)  x_1 = sqrt( 2.4 − 2.3 ) = 0.316       (ii) x_1 = ( 0.6² + 2.3 )/4 = 0.665
         x_2 = sqrt( 1.264 − 2.3 )                  x_2 = ( 0.665² + 2.3 )/4 = 0.686
         cannot be performed                        ................
         (negative argument)                        x_6 = ( 0.696² + 2.3 )/4 = 0.696

Solution (ii) converged within three digits:

    | x_6 − x_5 | / x_6 = | 0.696 − 0.696 | / 0.696 = 0
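A minimal Python sketch of the method of simple iterations with the break-off
criterion of Section 2.2.3 below; the tolerance value is an illustrative choice:

    # Fixed-point iteration x_n = f(x_{n-1}) with relative break-off test.
    def simple_iteration(f, x0, B=1e-6, max_iter=100):
        x = x0
        for n in range(1, max_iter + 1):
            x_new = f(x)
            if abs(x_new - x) / abs(x_new) < B:   # delta_n < B
                return x_new, n
            x = x_new
        raise RuntimeError("no convergence")

    # Scheme (ii) for x^2 - 4x + 2.3 = 0 from the example above:
    root, n = simple_iteration(lambda x: (x * x + 2.3) / 4.0, x0=0.6)
    print(root, n)   # converges to ~0.696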
2.2.2. Convergence theorems

Theorem 1
If
    | f(x_1) − f(x_2) | <= L | x_1 − x_2 |   with   0 < L < 1
for x_1, x_2 ∈ [a, b], then the equation x = f(x) has at most one root in [a, b].

Theorem 2
If f(x) satisfies the conditions of Theorem 1, then the iterative method
    x_n = f(x_{n-1})
converges to the unique solution x ∈ [a, b] of x = f(x) for any x_0 ∈ [a, b].

Geometrical interpretation

2.2.3. Iterative solution criteria

Convergence
    δ_n = | x_n − x_{n-1} | / | x_n | < B

Residuum
    r_n = | f(x_{n-1}) − x_{n-1} | / | f(x_{n-1}) | = | x_n − x_{n-1} | / | x_n | = δ_n < B

Notice: both criteria are the same for the simple iterations method.
2.2.4. Acceleration of convergence by the relaxation technique

    x = f(x)

    α x + x = α x + f(x)    →    x = α/(1+α) x + 1/(1+α) f(x) ≡ g(x)

The best situation is obtained if g'(x) ≈ 0:

    g'(x) = α/(1+α) + f'(x)/(1+α)

let
    g'(x*) = 0    →    α = − f'(x*)

then
    g(x) = 1/( 1 − f'(x*) ) f(x) − f'(x*)/( 1 − f'(x*) ) x

Example:

    x² = a > 0 ,    x = f(x) = a/x ,    f'(x) = −a/x² = −1  at  x* = sqrt(a)

then
    g(x) = 1/( 1−(−1) ) · a/x − (−1)/( 1−(−1) ) · x = (1/2) ( x + a/x )

hence
    x_n = (1/2) ( x_{n-1} + a/x_{n-1} )
2.3. NEWTON-RAPHSON METHOD

2.3.1. Algorithm

    F(x) = 0

    F(x + h) = F(x) + h dF/dx + (1/2) h² d²F/dx² + ... = F(x) + F'(x) h + R ≈ F(x) + F'(x) h = 0

    F(x) + F'(x) h = 0    →    h = − F(x) / F'(x)

    x_n = x_{n-1} + h = x_{n-1} − F(x_{n-1}) / F'(x_{n-1})

Geometrical interpretation

2.3.2. Convergence criteria

Solution convergence
    δ_n = | x_n − x_{n-1} | / | x_n | < B_1

Residuum
    r_n = | F(x_n) | / | F(x_0) | < B_2 ,    F(x_0) != 0
Example

[Figure: y = x² − 2; Newton iterates x_0, x_1, x_2 approach the root x*]

    x² = 2    →    F(x) = x² − 2 = 0 ,    F'(x) = 2x

    x_n = x_{n-1} − ( x_{n-1}² − 2 ) / ( 2 x_{n-1} )

    x_0 = 2
    x_1 = 2 − ( 2² − 2 )/( 2·2 ) = 3/2 = 1.500000
    x_2 = 3/2 − ( 9/4 − 2 )/( 2·3/2 ) = 17/12 = 1.416667
    x_3 = 577/408 = 1.414216
    ........

Convergence

    δ_1 = | 3/2 − 2 | / (3/2) = 0.333333          r_1 = | (3/2)² − 2 | / | 2² − 2 | = 0.125000
    δ_2 = | 17/12 − 3/2 | / (17/12) = 0.058824    r_2 = | (17/12)² − 2 | / | 2² − 2 | = 0.003472
    δ_3 = | 577/408 − 17/12 | / (577/408) = 0.001733
                                                  r_3 = | (577/408)² − 2 | / | 2² − 2 | = 0.000003

[Figure: log10(δ) and log10(r) versus log10(number of iterations) - solution and
residuum convergence of the Newton-Raphson method]
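A Python sketch of the Newton-Raphson algorithm with both break-off tests of
Section 2.3.2; the tolerance values B1, B2 are illustrative:

    def newton_raphson(F, dF, x0, B1=1e-10, B2=1e-12, max_iter=50):
        x, F0 = x0, F(x0)
        for n in range(1, max_iter + 1):
            x_new = x - F(x) / dF(x)
            delta = abs(x_new - x) / abs(x_new)    # solution convergence
            r = abs(F(x_new)) / abs(F0)            # relative residuum
            x = x_new
            if delta < B1 and r < B2:
                return x, n
        return x, max_iter

    # F(x) = x^2 - 2, F'(x) = 2x, x0 = 2, as in the example above:
    root, n = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, 2.0)
    print(root, n)   # 1.4142135623730951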
2.3.3. Relaxation approach to the Newton-Raphson method

    F(x) = 0

    α x + F(x) = α x    →    x = x + (1/α) F(x) ≡ g(x)

    g'(x) = 1 + (1/α) F'(x)

    g'(x*) = 0    →    α = − F'(x*)

    x = x − F(x)/F'(x*)    →    x_n = x_{n-1} − F(x_{n-1}) / F'(x_{n-1})
2.3.4. Modification for multiple roots

Let x = c be a root of F(x) of multiplicity greater than one. Then one may introduce

    u(x) = F(x) / F'(x) ,    u'(x) = 1 − F(x) F''(x) / [ F'(x) ]²

Instead of F(x), apply the Newton-Raphson method to u(x):

    x_n = x_{n-1} − u(x_{n-1}) / u'(x_{n-1})

Example

[Figure: graph of F(x) for 0 <= x <= 8, with a multiple root near x = 4.3]

    F(x)   = x⁴ − 8.6 x³ − 35.51 x² + 464.4 x − 998.46 = 0
    F'(x)  = 4 x³ − 25.8 x² − 71.02 x + 464.4
    F''(x) = 12 x² − 51.6 x − 71.02

Let
    x_0 = 4.0

    F(4.0) = −3.42 ;    F'(4.0) = 23.52 ;    F''(4.0) = −85.42

    u(4.0)  = F(4.0)/F'(4.0) = −3.42/23.52 = −0.145408

    u'(4.0) = 1.0 − F(4.0) F''(4.0) / [ F'(4.0) ]² = 1.0 − (−3.42)(−85.42)/(23.52)² = 0.471906

    x_1 = x_0 − u(4.0)/u'(4.0) = 4.0 − (−0.145408)/0.471906 = 4.308129
    x_2 = 4.308129 − 0.008128 = 4.300001
    x_3 = 4.300000

conventional N-R method:
    x_19 = 4.300000
    ..........
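A Python sketch of this multiple-root modification (Newton-Raphson applied to
u = F/F'); the tolerance is an illustrative choice:

    def newton_multiple_root(F, dF, ddF, x0, tol=1e-10, max_iter=50):
        x = x0
        for n in range(1, max_iter + 1):
            u  = F(x) / dF(x)
            du = 1.0 - F(x) * ddF(x) / dF(x) ** 2
            x_new = x - u / du
            if abs(x_new - x) <= tol * abs(x_new):
                return x_new, n
            x = x_new
        return x, max_iter

    F   = lambda x: x**4 - 8.6*x**3 - 35.51*x**2 + 464.4*x - 998.46
    dF  = lambda x: 4*x**3 - 25.8*x**2 - 71.02*x + 464.4
    ddF = lambda x: 12*x**2 - 51.6*x - 71.02
    print(newton_multiple_root(F, dF, ddF, 4.0))   # converges to 4.3 in a few steps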
2.4. THE SECANT METHOD

In the Newton-Raphson formula the derivative is replaced by a difference quotient:

    x_n = x_{n-1} − F_{n-1}/F'_{n-1} ≈ x_{n-1} − F_{n-1} ( x_{n-1} − x_{n-2} ) / ( F_{n-1} − F_{n-2} )

    x_n = x_{n-1} − F_{n-1} / ( F_{n-1} − F_{n-2} ) · ( x_{n-1} − x_{n-2} )

The starting points should satisfy the inequality

    F(x_0) F(x_1) < 0

Geometrical interpretation

[Figure: secant iterates x_0, x_1, ..., x_5 converging to the root x*]
Example

[Figure: F(x) = x² − 2 with secant iterates x_0 = 0, x_1 = 2, x_2 = 1, x_3 = 4/3, x_4 = 14/9]

Algorithm

    x² = 2    →    F(x) ≡ x² − 2 = 0

Let
    x_0 = 0 ,    F(0) = −2
and
    x_1 = 2 ,    F(2) = 4 − 2 = 2
then
    x_2 = 2 − 2/( 2 − (−2) ) · ( 2 − 0 ) = 1 ,                  F(1) = −1

    x_3 = 1 − (−1)/( −1 − 2 ) · ( 1 − 2 ) = 4/3 = 1.333333 ,    F(4/3) = −2/9 = −0.222222

    x_4 = 14/9 = 1.555556 ,                                     F(14/9) = 34/81 = 0.419753

    x_5 = 14/9 − (34/81)/( 34/81 − (−2/9) ) · ( 14/9 − 4/3 ) = 55/39 = 1.410256

    F(55/39) = −17/1521 = −0.011177

    x_T = sqrt(2) ≈ 1.414214   -  true solution
Convergence

    δ_1 = | 2 − 0 | / | 2 | = 1                      r_1 = | 2 | / | −2 | = 1
    δ_2 = | 1 − 2 | / | 1 | = 1                      r_2 = | −1 | / | −2 | = 0.500000
    δ_3 = | 4/3 − 1 | / (4/3) = 1/4 = 0.250000       r_3 = | −2/9 | / | −2 | = 1/9 = 0.111111
    δ_4 = | 14/9 − 4/3 | / (14/9) = 1/7 = 0.142857   r_4 = | 34/81 | / | −2 | = 17/81 = 0.209877
    δ_5 = | 55/39 − 14/9 | / (55/39) = 17/165 = 0.103030
                                                     r_5 = | −17/1521 | / | −2 | = 17/3042 = 0.005588

[Figure: log10(δ) and log10(r) versus log10(number of iterations) - solution and
residuum convergence of the secant method]
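A Python sketch of the secant method; starting points chosen so that F(x0) F(x1) < 0,
as recommended above, and the tolerance is illustrative:

    def secant(F, x0, x1, tol=1e-10, max_iter=50):
        F0, F1 = F(x0), F(x1)
        for n in range(2, max_iter + 2):
            x2 = x1 - F1 / (F1 - F0) * (x1 - x0)
            if abs(x2 - x1) <= tol * abs(x2):
                return x2, n
            x0, F0 = x1, F1            # drop the oldest point
            x1, F1 = x2, F(x2)
        return x1, max_iter

    root, n = secant(lambda x: x * x - 2.0, 0.0, 2.0)
    print(root, n)   # 1.4142135623730951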
2.5. REGULA FALSI

Fix one starting point, e.g. x = x_0, in the secant method.
Then x_{n-2}, F_{n-2} in the secant method are replaced by x_0, F_0:

    x_n = x_{n-1} − F_{n-1} / ( F_{n-1} − F_0 ) · ( x_{n-1} − x_0 ) ,    F(x_0) F(x_1) < 0

Geometrical interpretation

Example

    x² = 2    →    F(x) ≡ x² − 2 = 0

Let
    x_0 = 2 ,    F(2) = +2
and
    x_1 = 0 ,    F(0) = −2
then
    x_2 = 0 − (−2)/( −2 − 2 ) · ( 0 − 2 ) = 1 ,                 F(1) = −1

    x_3 = 1 − (−1)/( −1 − 2 ) · ( 1 − 2 ) = 4/3 = 1.333333 ,    F(4/3) = −2/9

    x_4 = 4/3 − (−2/9)/( −2/9 − 2 ) · ( 4/3 − 2 ) = 7/5 = 1.400000 ,    F(7/5) = −1/25

    x_5 = 7/5 − (−1/25)/( −1/25 − 2 ) · ( 7/5 − 2 ) = 24/17 = 1.411765

    x_T = sqrt(2) ≈ 1.414214   -  true solution
Convergence

    δ_1 = | 0 − 2 | / | 0 |   -  does not exist     r_1 = | −2 | / | 2 | = 1
    δ_2 = | 1 − 0 | / | 1 | = 1                     r_2 = | −1 | / | −2 | = 0.500000
    δ_3 = | 4/3 − 1 | / (4/3) = 1/4 = 0.250000      r_3 = | −2/9 | / | −2 | = 1/9 = 0.111111
    δ_4 = | 7/5 − 4/3 | / (7/5) = 1/21 = 0.047619   r_4 = | −1/25 | / | −2 | = 1/50 = 0.020000
    δ_5 = | 24/17 − 7/5 | / (24/17) = 1/120 = 0.008333
                                                    r_5 = | −2/289 | / | −2 | = 1/289 = 0.003460

[Figure: log10(δ) and log10(r) versus log10(number of iterations) - solution and
residuum convergence of the regula falsi method]
Remarks

The regula falsi algorithm is more stable but slower than the secant method.
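A Python sketch of regula falsi: one end point of the bracket (here x0) is kept fixed
throughout; tolerance and iteration limit are illustrative:

    def regula_falsi(F, x0, x1, tol=1e-10, max_iter=200):
        F0 = F(x0)
        assert F0 * F(x1) < 0, "starting points must bracket the root"
        x, Fx = x1, F(x1)
        for n in range(1, max_iter + 1):
            x_new = x - Fx / (Fx - F0) * (x - x0)   # secant through (x0, F0)
            if abs(x_new - x) <= tol * abs(x_new):
                return x_new, n
            x, Fx = x_new, F(x_new)
        return x, max_iter

    root, n = regula_falsi(lambda x: x * x - 2.0, 2.0, 0.0)
    print(root, n)   # slower than the secant method, but the bracket is safe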
2.6. GENERAL REMARKS

A rough preliminary evaluation of zeros (roots) is suggested.

Traps

    Newton-Raphson   -  DIVERGENT (wrong starting point)
    Secant method    -  DIVERGENT
    Regula falsi     -  CONVERGENT
Rough evaluation of solution methods

    REGULA FALSI     -  the slowest but the safest
    SECANT METHOD    -  faster but less safe
    NEWTON-RAPHSON   -  the fastest, but evaluation of the function derivative is necessary
3. VECTOR AND MATRIX NORM
The generalization of the modulus of a scalar to a vector-valued quantity is called
a vector norm; its generalization to a matrix-valued quantity is called a matrix norm.
3.1. VECTOR NORM

A vector norm ||x|| of the vector x ∈ V, where V is a linear N-dimensional vector space
and α is a scalar, satisfies the following conditions:

(i)   ||x|| >= 0  ∀ x ∈ V ,  and  ||x|| = 0  if  x = 0
(ii)  ||αx|| = |α| ||x||  ∀ scalars α and ∀ x ∈ V
(iii) ||x + y|| <= ||x|| + ||y||  ∀ x, y ∈ V

Examples

(1) ||x||_1 = [ Σ_{i=1}^N x_i² ]^{1/2}        p = 2    Euclidean norm
(2) ||x||_2 = max_i | x_i |                   p = ∞    maximum norm
(3) ||x||_3 = [ Σ_{i=1}^N |x_i|^p ]^{1/p}     p >= 1

Examples

    x = { 2, 3, −6 }

    ||x||_1 = ( 2² + 3² + 6² )^{1/2} = 7      p = 2
    ||x||_2 = | −6 | = 6                      p = ∞
    ||x||_3 = |2| + |3| + |−6| = 11           p = 1
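A Python sketch of the three vector norms above for x = {2, 3, −6}:

    def norm_p(x, p):
        return sum(abs(v) ** p for v in x) ** (1.0 / p)

    def norm_max(x):
        return max(abs(v) for v in x)

    x = [2.0, 3.0, -6.0]
    print(norm_p(x, 2))   # 7.0   (Euclidean, p = 2)
    print(norm_max(x))    # 6.0   (maximum, p = infinity)
    print(norm_p(x, 1))   # 11.0  (p = 1)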
3.2. MATRIX NORM

A matrix norm of the (N×N) matrix A must satisfy the following conditions:

(i)   ||A|| >= 0  and  ||A|| = 0  if  A = 0
(ii)  ||αA|| = |α| ||A||  ∀ scalar α
(iii) ||A + B|| <= ||A|| + ||B||
(iv)  ||AB|| <= ||A|| ||B||

where A and B have to be of the same dimension.

Examples

    ||A||_1 = [ Σ_{i=1}^N Σ_{j=1}^N a_ij² ]^{1/2}   or   ||A||_1 = (1/N) [ Σ_{i=1}^N Σ_{j=1}^N a_ij² ]^{1/2}   -  average value

    ||A||_2 = max_i Σ_{j=1}^N | a_ij |   or   ||A||_2 = (1/N) max_i Σ_{j=1}^N | a_ij |   -  maximum value

Example

    A = [ 1 2 3
          4 5 6
          7 8 9 ]

    ||A||_1 = (1/3) [ 1² + 2² + 3² + 4² + 5² + 6² + 7² + 8² + 9² ]^{1/2} = 5.627314

    ||A||_2 = (1/3) max { 1+2+3 , 4+5+6 , 7+8+9 } = (1/3) max { 6, 15, 24 } = 8
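A Python sketch of the two averaged matrix norms used in the example above:

    import math

    def matrix_norm_avg_frobenius(A):
        n = len(A)
        return math.sqrt(sum(a * a for row in A for a in row)) / n

    def matrix_norm_avg_maxrow(A):
        n = len(A)
        return max(sum(abs(a) for a in row) for row in A) / n

    A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    print(matrix_norm_avg_frobenius(A))   # 5.627314...
    print(matrix_norm_avg_maxrow(A))      # 8.0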
4. SYSTEMS OF NONLINEAR EQUATIONS
Denotations

    x = { x_1, x_2, x_3, ..., x_N }
    F(x) = { F_1(x), ..., F_N(x) }
    F(x) = 0

Example

    F_1(x, y) ≡ y² − 2x = 0
    F_2(x, y) ≡ x² + y² − 8 = 0
4.1. THE METHOD OF SIMPLE ITERATIONS

Algorithm

    x_n = f( x_{n-1} ) ,    f = { f_1(x), ..., f_N(x) } ,    x = { x_1, ..., x_N }

Example:  x = { x, y }

    x = (1/2) y² ≡ f_1(x)                 f(x) = { (1/2) y² , sqrt( 8 − x² ) }
    y = sqrt( 8 − x² ) ≡ f_2(x)

or

    x = y² − x ≡ f_1(x)                   →    x = f(x)
    y = x² + y² + y − 8 ≡ f_2(x)
Convergence criterion

    δ_n = || x_n − x_{n-1} || / || x_n || ,    δ_n <= δ_adm

δ_adm  -  admissible error

Theorem
Let Ω denote the region a_i <= x_i <= b_i , i = 1, 2, ..., N in the Euclidean N-dimensional
space. Let f satisfy the conditions
 -  f is defined and continuous on Ω
 -  || J_f(x) || <= L < 1
 -  for each x ∈ Ω, f(x) also lies in Ω
Then for any x_0 in Ω the sequence of iterations x_n = f( x_{n-1} ) is convergent to the
unique solution x.

Jacobian matrix

    J = [ ∂f_1/∂x_1   ∂f_1/∂x_2   ...   ∂f_1/∂x_N
          ...                                ...
          ∂f_N/∂x_1   ...         ...   ∂f_N/∂x_N ]

Example

    f_1(x) = y² − x                    J = [ −1      2y
    f_2(x) = x² + y² + y − 8                 2x    2y+1 ]
4.2. NEWTON-RAPHSON METHOD

    F(x) = 0

    F( x + h ) = F(x) + (∂F/∂x) h + (1/2) (∂²F/∂x²) h² + ...

    F( x + h ) ≈ F(x) + (∂F/∂x) h ≡ F(x) + J(x) h = 0    →    h = −J⁻¹ F

    x_n = x_{n-1} + h_{n-1} = x_{n-1} − J⁻¹_{n-1} F_{n-1}

Instead of inverting J, a linear system is solved:

    J_{n-1} x_n = J_{n-1} x_{n-1} − F_{n-1} ≡ b_{n-1}

    J_{n-1} x_n = b_{n-1}    →    x_n   -  solution of simultaneous linear algebraic
                                          equations on each iteration step
    x = lim_{n→∞} x_n
A relaxation factor may also be introduced, scaling the correction term F_{n-1} in
J_{n-1} x_n = J_{n-1} x_{n-1} − F_{n-1}.
Example

    y² − 2x = 0                     F(x) = { y² − 2x , x² + y² − 8 } ,    x = { x , y }
    x² + y² − 8 = 0

    J = [ ∂F_1/∂x   ∂F_1/∂y     =   [ −2    2y
          ∂F_2/∂x   ∂F_2/∂y ]         2x    2y ]

Algorithm

    [ −2   2y         { x }       [ −2   2y         { x }           { y² − 2x     }
      2x   2y ]_{n-1} { y }_n  =    2x   2y ]_{n-1} { y }_{n-1}  −  { x² + y² − 8 }_{n-1}

Let
    x_0 = { 0 , 2 sqrt(2) } = { 0 , 2.8284 }

    [ −2   5.65685    { x_1 }      [ −2   5.65685    { 0      }     { 8 }
       0   5.65685 ]  { y_1 }  =     0   5.65685 ]   { 2.8284 }  −  { 0 }

    x_1 = { 4.0000 , 2.8284 }
Error estimation (after the first step of iteration)

Estimated relative solution error    δ_n = || x_n − x_{n-1} || / || x_n ||

    δ_1 = || x_1 − x_0 || / || x_1 ||

    x_1 − x_0 = { 4.0000 − 0.0000 , 2.8284 − 2.8284 } = { 4.0000 , 0.0000 }

Euclidean norm
    δ_1^E = [ (1/2)( 4.0000² + 0.0000² ) ]^{1/2} / [ (1/2)( 4.0000² + 2.8284² ) ]^{1/2}
          = 2.8284 / 3.4641 = 0.8165

Maximum norm
    δ_1^M = sup{ 4.0000 , 0.0000 } / sup{ 4.0000 , 2.8284 } = 4.0000 / 4.0000 = 1.0000

Relative residual error    r_n = || F_n || / || F_0 ||

    r_1 = || F_1 || / || F_0 ||

Euclidean norm    || F ||_E = [ (1/n) Σ_{j=1}^n F_j(x)² ]^{1/2}
    F_0 = { 8.0000 , 0.0000 }
    F_1 = { 0.0000 , 16.0000 }

    || F_0 ||_E = [ (1/2)( 8.0000² + 0.0000² ) ]^{1/2} = 5.6568
    || F_1 ||_E = [ (1/2)( 0.0000² + 16.0000² ) ]^{1/2} = 11.3137

    r_1^E = 11.3137 / 5.6568 = 2.0000

Maximum norm    || F ||_M = sup_i | F_i |

    || F_0 ||_M = sup{ 8.0000 , 0.0000 } = 8.0000
    || F_1 ||_M = sup{ 0.0000 , 16.0000 } = 16.0000

    r_1^M = 16.0000 / 8.0000 = 2.0000

Break-off test
Assume admissible errors for convergence B_C and residuum B_R; check
    δ_1^E = 0.81649658 > B_C = 10⁻⁶
    δ_1^M = 1.00000000 > B_C = 10⁻⁶
    r_1^E = 2.00000000 > B_R = 10⁻⁸
    r_1^M = 2.00000000 > B_R = 10⁻⁸
Second step of iteration

    [ −2.0000   5.6568    { x_2 }      [ −2.0000   5.6568    { 4.0000 }     {  0.0000 }
       8.0000   5.6568 ]  { y_2 }  =     8.0000   5.6568 ]   { 2.8284 }  −  { 16.0000 }

    x_2 = { 2.4000 , 2.2627 }

Error estimation (after the second step of iteration)

Estimated relative solution error
    δ_2 = || x_2 − x_1 || / || x_2 ||
    x_2 − x_1 = { 2.4000 − 4.0000 , 2.2627 − 2.8284 } = { −1.6000 , −0.5657 }

Euclidean norm
    δ_2^E = [ (1/2)( 1.6000² + 0.5657² ) ]^{1/2} / [ (1/2)( 2.4000² + 2.2627² ) ]^{1/2}
          = 1.2000 / 2.3324 = 0.5145

Maximum norm
    δ_2^M = sup{ 1.6000 , 0.5657 } / sup{ 2.4000 , 2.2627 } = 1.6000 / 2.4000 = 0.6667

Relative residual error
    r_2 = || F_2 || / || F_0 ||

Euclidean norm
    F_2 = { 0.3200 , 2.8800 }

    || F_0 ||_E = 5.6568
    || F_2 ||_E = [ (1/2)( 0.3200² + 2.8800² ) ]^{1/2} = 2.0490

    r_2^E = 2.0490 / 5.6568 = 0.3622

Maximum norm
    || F_0 ||_M = 8.0000
    || F_2 ||_M = sup{ 0.3200 , 2.8800 } = 2.8800

    r_2^M = 2.8800 / 8.0000 = 0.3600

Break-off test
    δ_2^E = 0.51449576 > B_C = 10⁻⁶
    δ_2^M = 0.66666667 > B_C = 10⁻⁶
    r_2^E = 0.36221541 > B_R = 10⁻⁸
    r_2^M = 0.36000000 > B_R = 10⁻⁸
Third step of iteration

    [ −2    4.5255    { x_3 }      [ −2    4.5255    { 2.4000 }     { 0.3200 }     {  5.1200 }
      4.8   4.5255 ]  { y_3 }  =     4.8   4.5255 ]  { 2.2627 }  −  { 2.8800 }  =  { 18.8800 }

    x_3 = { 2.0235 , 2.0257 }

Error estimation (after the third step of iteration)

Estimated relative solution error
    δ_3 = || x_3 − x_2 || / || x_3 ||
    x_3 − x_2 = { 2.0235 − 2.4000 , 2.0257 − 2.2627 } = { −0.3765 , −0.2371 }

Euclidean norm
    δ_3^E = [ (1/2)( 0.3765² + 0.2371² ) ]^{1/2} / [ (1/2)( 2.0235² + 2.0257² ) ]^{1/2}
          = 0.3146 / 2.0246 = 0.1554

Maximum norm
    δ_3^M = sup{ 0.3765 , 0.2371 } / sup{ 2.0235 , 2.0257 } = 0.3765 / 2.0257 = 0.1859

Relative residual error
    r_3 = || F_3 || / || F_0 ||

    F_3 = { 0.0562 , 0.1979 }

    || F_0 ||_E = 5.6568
    || F_3 ||_E = [ (1/2)( 0.0562² + 0.1979² ) ]^{1/2} = 0.1455

    r_3^E = 0.1455 / 5.6568 = 0.0257
Maximum norm
    || F_0 ||_M = 8.0000
    || F_3 ||_M = sup{ 0.0562 , 0.1979 } = 0.1979

    r_3^M = 0.1979 / 8.0000 = 0.0247

Break-off test
    δ_3^E = 0.15538736 > B_C = 10⁻⁶
    δ_3^M = 0.18585147 > B_C = 10⁻⁶
    r_3^E = 0.02572098 > B_R = 10⁻⁸
    r_3^M = 0.02474265 > B_R = 10⁻⁸
Fourth step of iteration

    [ −2       4.0513    { x_4 }      [ −2       4.0513    { 2.0235 }     { 0.0562 }     {  4.1033 }
      4.0471   4.0513 ]  { y_4 }  =     4.0471   4.0513 ]  { 2.0257 }  −  { 0.1979 }  =  { 16.1979 }

    x_4 = { 2.0001 , 2.0002 }

Error estimation (after the fourth step of iteration)

Estimated solution error
    δ_4 = || x_4 − x_3 || / || x_4 ||
    x_4 − x_3 = { 2.0001 − 2.0235 , 2.0002 − 2.0257 } = { −0.0234 , −0.0255 }

Euclidean norm
    δ_4^E = [ (1/2)( 0.0234² + 0.0255² ) ]^{1/2} / [ (1/2)( 2.0001² + 2.0002² ) ]^{1/2}
          = 0.0245 / 2.0002 = 0.0122

Maximum norm
    δ_4^M = sup{ 0.0234 , 0.0255 } / sup{ 2.0001 , 2.0002 } = 0.0255 / 2.0002 = 0.0127
Relative residual error
    r_4 = || F_4 || / || F_0 ||

    F_4 = { 0.0007 , 0.0012 }

    || F_0 ||_E = 5.6568
    || F_4 ||_E = [ (1/2)( 0.0007² + 0.0012² ) ]^{1/2} = 0.0010

    r_4^E = 0.0010 / 5.6568 = 0.0002

Maximum norm
    || F_0 ||_M = 8.0000
    || F_4 ||_M = sup{ 0.0007 , 0.0012 } = 0.0012

    r_4^M = 0.0012 / 8.0000 = 0.0002

Break-off test
    δ_4^E = 0.01223018 > B_C = 10⁻⁶
    δ_4^M = 0.01272134 > B_C = 10⁻⁶
    r_4^E = 0.00017009 > B_R = 10⁻⁸
    r_4^M = 0.00014600 > B_R = 10⁻⁸
Aitken acceleration process

Assume that the error decreases at a constant rate α_n = α:

    x̄ − x_n = α ( x̄ − x_{n-1} )
    x̄ − x_{n-1} = α ( x̄ − x_{n-2} )

Eliminating α:

    ( x̄ − x_n )( x̄ − x_{n-2} ) = ( x̄ − x_{n-1} )²

    x̄ = ( x_{n-2} x_n − x_{n-1}² ) / ( x_n − 2 x_{n-1} + x_{n-2} )
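A Python sketch of this extrapolation, applied componentwise to the last three
iterates as in the example below:

    def aitken(x_nm2, x_nm1, x_n):
        """x = (x_{n-2} x_n - x_{n-1}^2) / (x_n - 2 x_{n-1} + x_{n-2})."""
        return (x_nm2 * x_n - x_nm1 ** 2) / (x_n - 2.0 * x_nm1 + x_nm2)

    # Values x_2, x_3, x_4 and y_2, y_3, y_4 from the table above:
    print(aitken(2.4000, 2.0235, 2.0001))   # ~1.9985
    print(aitken(2.2627, 2.0257, 2.0002))   # ~1.9972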
Example

    x_4^NEW = ( x_2 x_4^OLD − x_3² ) / ( x_4^OLD − 2 x_3 + x_2 )
            = ( 2.4000·2.0001 − 2.0235² ) / ( 2.0001 − 2·2.0235 + 2.4000 ) = 1.9985

    y_4^NEW = ( y_2 y_4^OLD − y_3² ) / ( y_4^OLD − 2 y_3 + y_2 )
            = ( 2.2627·2.0002 − 2.0257² ) / ( 2.0002 − 2·2.0257 + 2.2627 ) = 1.9972

Hence, continuing the N-R iteration:

    [ −2       3.9943    { x_5 }      [ −2       3.9943    { 1.9985 }     { −0.0085 }     {  3.9886 }
      3.9971   3.9943 ]  { y_5 }  =     3.9971   3.9943 ]  { 1.9972 }  −  { −0.0173 }  =  { 15.9828 }

    x_5 = { 2.0000 , 2.0000 }
Error estimation (after five steps of iteration)

Estimated solution error
    δ_5 = || x_5 − x_4 || / || x_5 ||
    x_5 − x_4 = { 2.0000 − 1.9985 , 2.0000 − 1.9972 } = { 0.0015 , 0.0028 }

Euclidean norm
    δ_5^E = [ (1/2)( 0.0015² + 0.0028² ) ]^{1/2} / [ (1/2)( 2.0000² + 2.0000² ) ]^{1/2}
          = 0.0023 / 2.0000 = 0.0011

Maximum norm
    δ_5^M = sup{ 0.0015 , 0.0028 } / sup{ 2.0000 , 2.0000 } = 0.0028 / 2.0000 = 0.0014

Relative residual error
    r_5 = || F_5 || / || F_0 ||

    F_5 = { 0.00001 , 0.00001 }

    || F_0 ||_E = 5.6568
    || F_5 ||_E = [ (1/2)( 0.00001² + 0.00001² ) ]^{1/2} = 0.00001

    r_5^E = 0.00001 / 5.6568 = 0.000002

Maximum norm
    || F_0 ||_M = 8.0000
    || F_5 ||_M = sup{ 0.00001 , 0.00001 } = 0.00001

    r_5^M = 0.00001 / 8.0000 = 0.000002
Break-off test
    δ_5^E = 0.00113413 > B_C = 10⁻⁶
    δ_5^M = 0.00142690 > B_C = 10⁻⁶
    r_5^E = 0.00000164 > B_R = 10⁻⁸
    r_5^M = 0.00000129 > B_R = 10⁻⁸
SOLUTION SUMMARY

Standard case  -  no acceleration

Iter.      x              y              δ^E            δ^M            r^E            r^M
 1    4.0000000000   2.8284271247   0.8164965809   1.0000000000   2.0000000000   2.0000000000
 2    2.4000000000   2.2627416998   0.5144957554   0.6666666667   0.3622154055   0.3600000000
 3    2.0235294118   2.0256529555   0.1553873552   0.1858514743   0.0257209770   0.0247426471
 4    2.0000915541   2.0002076324   0.0122301810   0.0127213409   0.0001700889   0.0001495997
 5    2.0000000014   2.0000000115   0.0000802250   0.0001038105   0.0000000084   0.0000000064
 6    2.0000000000   2.0000000000   0.0000000041   0.0000000057   0.0000000000   0.0000000000

Aitken acceleration included from the fourth iteration

Iter.      x              y              δ^E            δ^M            r^E            r^M
 1    4.0000000000   2.8284271247   0.8164965809   1.0000000000   2.0000000000   2.0000000000
 2    2.4000000000   2.2627416998   0.5144957554   0.6666666667   0.3622154055   0.3600000000
 3    2.0235294118   2.0256529555   0.1553873552   0.1858514743   0.0257209770   0.0247426471
 4    1.9985355138   1.9971484092   0.0134178544   0.0142627172   0.0024025701   0.0021567540
 5    2.0000003576   2.0000022149   0.0011341275   0.0014269013   0.0000016404   0.0000012862
 6    2.0000000000   2.0000000000   0.0000007932   0.0000011074   0.0000000000   0.0000000000
 7    2.0000000000   2.0000000000   0.0000000000   0.0000000000   0.0000000000   0.0000000000

( δ^E, δ^M  -  relative solution error in the Euclidean / maximum norm;
  r^E, r^M  -  relative residual error in the Euclidean / maximum norm )
Relative error estimation

[Figure: relative error estimates versus number of iterations, and the same data in the
log-log scale - residual error and estimated solution error, each in the maximum and
Euclidean norms]
5. SOLUTION OF SIMULTANEOUS LINEAR ALGEBRAIC EQUATIONS (SLAE)
5.1. INTRODUCTION

- Sources of SLAE
- Features

    A x = b ,    A - (n×n) ,   x, b - (n×1)

    mostly A is:
        symmetric            Aᵀ = A
        positive definite    xᵀ A x > 0  ∀ x ∈ Rⁿ   (energy > 0)
        banded (or sparse)
        n >> 1
Solution methods

    = elimination:   Gauss-Jordan   ( det A != 0  -  non-singular )
                     Cholesky       ( Aᵀ = A ,  xᵀ A x > 0 , as above )
    = iterative:     Jacobi
                     Gauss-Seidel
    = combined (iteration and elimination)
    = special methods: frontal solution, methods for sparse matrices
5.2. GAUSSIAN ELIMINATION

Example

    [  6  2  2   4 ]  { x_1 }     {  1 }
    [ -1  2  2  -3 ]  { x_2 }  =  { -1 }
    [  0  1  1   4 ]  { x_3 }     {  2 }
    [  1  0  2   3 ]  { x_4 }     {  1 }
Assume the table    [ A | b ]  →  ...  →  [ I | x ]

    [ 6  2    2    4   |  1          [ 6  2    2    4   |  1
     -1  2    2   -3   | -1    →       0  7/3  7/3 -7/3 | -5/6
      0  1    1    4   |  2            0  1    1    4   |  2
      1  0    2    3   |  1 ]          0 -1/3  5/3  7/3 |  5/6 ]

    [ 6  2    2    4   |  1                 [ 6  2    2    4   |  1
      0  7/3  7/3 -7/3 | -5/6      →          0  7/3  7/3 -7/3 | -5/6
      0  0    0    5   | 33/14   partial      0  0    2    2   |  5/7
      0  0    2    2   |  5/7 ]  pivoting:    0  0    0    5   | 33/14 ]
                                 interchange of rows 3 and 4

There are several ways to proceed now.

(i) Eliminate also above the pivots, then scale the rows:

    [ 6  2    2   0 | -31/35      [ 6  2   0  0 | -23/35      [ 6  0   0  0 | -39/35
      0  7/3  7/3 0 |  4/15    →    0  7/3 0  0 |  8/15    →    0  7/3 0  0 |  8/15
      0  0    2   0 | -8/35         0  0   2  0 | -8/35         0  0   2  0 | -8/35
      0  0    0   5 |  33/14 ]      0  0   0  5 |  33/14 ]      0  0   0  5 |  33/14 ]

    [ 1 0 0 0 | -13/70
      0 1 0 0 |  8/35       -  final solution
      0 0 1 0 | -4/35
      0 0 0 1 |  33/70 ]

(ii) Scale the pivots and back-substitute row by row:

    x_4 = 33/70
    x_3 = ( 5/7 − 2 x_4 ) / 2 = −4/35
    x_2 = ( −5/6 − (7/3) x_3 + (7/3) x_4 ) / (7/3) = 8/35
    x_1 = ( 1 − 2 x_2 − 2 x_3 − 4 x_4 ) / 6 = −13/70    -  final solution
General algorithm

    A x = b    ⇔    Σ_{j=1}^n a_ij x_j = b_i ,    i = 1, 2, ..., n

where A ≡ [ a_ij ] is (n×n).

I. Steps forward (without pivoting)

    m_ik = a_ik^(k-1) / a_kk^(k-1) ,    a_ij^(0) = a_ij ,    b_i^(0) = b_i

    a_ij^(k) = a_ij^(k-1) − m_ik a_kj^(k-1)
    b_i^(k)  = b_i^(k-1)  − m_ik b_k^(k-1)

    k = 1, 2, ..., n−1 ;   j = k+1, ..., n ;   i = k+1, ..., n

II. Steps back

    x_i = [ b_i^(n-1) − Σ_{j=i+1}^n a_ij^(n-1) x_j ] / a_ii^(n-1) ,    i = n, n−1, ..., 2, 1

Number of operations:

    (1/3) N³ + N² + O(N)   -  for the Gauss procedure (full, non-banded matrix)
    N⁴ + O(N³)             -  for Cramer's formulas

Multiple right-hand sides:

    [ A | b_1 ... b_k ]    →    [ I | x_1 ... x_k ]
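A Python sketch of Gaussian elimination with partial pivoting and back substitution,
reproducing the worked example above (uses numpy):

    import numpy as np

    def gauss_solve(A, b):
        A = np.asarray(A, dtype=float).copy()
        b = np.asarray(b, dtype=float).copy()
        n = len(b)
        for k in range(n - 1):                        # steps forward
            p = k + np.argmax(np.abs(A[k:, k]))       # partial pivoting
            if p != k:
                A[[k, p]] = A[[p, k]]
                b[[k, p]] = b[[p, k]]
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
        x = np.zeros(n)                               # steps back
        for i in range(n - 1, -1, -1):
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    A = [[6, 2, 2, 4], [-1, 2, 2, -3], [0, 1, 1, 4], [1, 0, 2, 3]]
    b = [1, -1, 2, 1]
    print(gauss_solve(A, b))   # [-13/70, 8/35, -4/35, 33/70]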
5.3. MATRIX FACTORIZATION LU

Simultaneous equations in matrix notation:

    A x = b ,    det A != 0

Matrix factorization

    A = L U
    L  -  lower triangular matrix
    U  -  upper triangular matrix

Given A = LU:

    L U x = b    →    L y = b   (y - step forward) ,    U x = y   (x - step back)

Gauss elimination method:

    I.  obtain   y = L⁻¹ b
    II. solve    x = U⁻¹ y
5.4. CHOLESKI ELIMINATION METHOD

Assumptions

    Aᵀ = A                                -  symmetric matrix
    xᵀ A x > 0 ,  det A != 0              -  nonsingular and positive definite matrix
    a_ij = 0  for  |i − j| > m ,  m <= n  -  banded matrix

Definition
A matrix is said to be strictly diagonally dominant if

    | a_ii | > Σ_{j=1, j≠i}^n | a_ij | ,    i = 1, 2, ..., n

Theorem
If a real matrix A is symmetric, strictly diagonally dominant, and has positive diagonal
elements, then A is positive definite.

Matrix factorization

    A = L Lᵀ ,    U = Lᵀ

    A x = b    →    L y = b    (y - step forward)
                    Lᵀ x = y   (x - step back)

Remark: here U ≡ Lᵀ.

Solution algorithm
Initial step: Choleski factorization of the matrix

    l_jj = [ a_jj − Σ_{k=1}^{j-1} l_jk² ]^{1/2}          -  diagonal elements

    l_ij = [ a_ij − Σ_{k=1}^{j-1} l_ik l_jk ] / l_jj     -  off-diagonal elements

where  j = 1, 2, ..., n ;   i = j+1, ..., n

I. Step forward

    y_i = [ b_i − Σ_{j=1}^{i-1} l_ij y_j ] / l_ii ,    i = 1, 2, ..., n

II. Step back  -  similar as in the Gauss-Jordan algorithm

    x_i = [ y_i − Σ_{j=i+1}^n l_ji x_j ] / l_ii ,    i = n, ..., 2, 1
Example

Cholesky factorization of the given matrix

    A = [ a_11 a_12 a_13      [  4 -2  0      [ l_11   0    0      [ l_11 l_21 l_31
          a_21 a_22 a_23   =    -2  5 -2   =    l_21 l_22   0    ·     0  l_22 l_32
          a_31 a_32 a_33 ]       0 -2  5 ]      l_31 l_32 l_33 ]       0    0  l_33 ]

Column 1:
    l_11² = a_11             →   l_11 = sqrt(a_11) = sqrt(4) = 2
    l_11 l_21 = a_21         →   l_21 = a_21 / l_11 = −2/2 = −1
    l_11 l_31 = a_31         →   l_31 = a_31 / l_11 = 0/2 = 0

Column 2:
    l_21² + l_22² = a_22             →   l_22 = sqrt( a_22 − l_21² ) = sqrt( 5 − (−1)² ) = 2
    l_31 l_21 + l_32 l_22 = a_32     →   l_32 = ( a_32 − l_31 l_21 ) / l_22 = [ −2 − 0·(−1) ]/2 = −1

Column 3:
    l_31² + l_32² + l_33² = a_33     →   l_33 = sqrt( a_33 − l_31² − l_32² ) = sqrt( 5 − 0² − (−1)² ) = 2

Final result:

    A = [  4 -2  0      [  2  0  0     [ 2 -1  0
          -2  5 -2   =    -1  2  0   ·   0  2 -1
           0 -2  5 ]       0 -1  2 ]     0  0  2 ]
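A Python sketch of the column-by-column Choleski factorization A = L Lᵀ, following
the formulas for l_jj and l_ij above:

    import math

    def cholesky(A):
        n = len(A)
        L = [[0.0] * n for _ in range(n)]
        for j in range(n):
            s = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
            L[j][j] = math.sqrt(s)                    # diagonal element
            for i in range(j + 1, n):                 # off-diagonal elements
                L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
        return L

    A = [[4, -2, 0], [-2, 5, -2], [0, -2, 5]]
    for row in cholesky(A):
        print(row)   # [2,0,0], [-1,2,0], [0,-1,2]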
5.5. ITERATIVE METHODS

Example

    20 x_1 +  2 x_2 −   x_3 = 25         x_1 = −(1/10) x_2 + (1/20) x_3 + 5/4
     2 x_1 + 13 x_2 − 2 x_3 = 30    →    x_2 = −(2/13) x_1 + (2/13) x_3 + 30/13
       x_1 +    x_2 +   x_3 = 2          x_3 = −x_1 − x_2 + 2

The method of simple iterations may be applied, using one of the following algorithms:

JACOBI ITERATION SCHEME                              GAUSS-SEIDEL ITERATION SCHEME

x_1^(n) = −(1/10) x_2^(n-1) + (1/20) x_3^(n-1) + 5/4     x_1^(n) = −(1/10) x_2^(n-1) + (1/20) x_3^(n-1) + 5/4
x_2^(n) = −(2/13) x_1^(n-1) + (2/13) x_3^(n-1) + 30/13   x_2^(n) = −(2/13) x_1^(n)   + (2/13) x_3^(n-1) + 30/13
x_3^(n) = −x_1^(n-1) − x_2^(n-1) + 2                     x_3^(n) = −x_1^(n) − x_2^(n) + 2

Let x_1^(0) = x_2^(0) = x_3^(0) = 0

Jacobi:
    x_1^(1) = −(1/10)·0 + (1/20)·0 + 5/4 = 1.250000
    x_2^(1) = −(2/13)·0 + (2/13)·0 + 30/13 = 2.307692
    x_3^(1) = −0 − 0 + 2 = 2.000000

    x_1^(2) = −(1/10)·2.307692 + (1/20)·2.000000 + 5/4 = 1.119231
    x_2^(2) = −(2/13)·1.250000 + (2/13)·2.000000 + 30/13 = 2.423077
    x_3^(2) = −1.250000 − 2.307692 + 2 = −1.557692

Gauss-Seidel:
    x_1^(1) = −(1/10)·0 + (1/20)·0 + 5/4 = 1.250000
    x_2^(1) = −(2/13)·1.250000 + (2/13)·0 + 30/13 = 2.115385
    x_3^(1) = −1.250000 − 2.115385 + 2 = −1.365385

    x_1^(2) = −(1/10)·2.115385 − (1/20)·1.365385 + 5/4 = 0.970192
    x_2^(2) = −(2/13)·0.970192 − (2/13)·1.365385 + 30/13 = 1.948373
    x_3^(2) = −0.970192 − 1.948373 + 2 = −0.918565

            JACOBI                                   GAUSS-SEIDEL
  n    x_1^(n)     x_2^(n)     x_3^(n)        x_1^(n)     x_2^(n)     x_3^(n)
  3   0.929808    1.895858   -1.542308       1.009234    2.011108   -1.020342
  4   0.983299    1.927367   -0.825666       0.997872    1.997198   -0.995070
  5   1.015980    2.029390   -0.910666       1.000527    2.000677   -1.001204
 10   0.999906    2.000106   -1.002296       0.999999    1.999999   -0.999999
 11   0.999875    1.999661   -1.000013       1.000000    2.000000   -1.000000
General algorithm

Matrix notation

Matrix decomposition:

    A = L̃ + D̃ + Ũ
    ( L̃ - strictly lower triangular part, D̃ - diagonal, Ũ - strictly upper triangular part )

Simultaneous algebraic equations to be analyzed:

    A x = b    →    L̃ x + D̃ x + Ũ x = b

Iteration algorithms

    Jacobi:         x^(n) = −D̃⁻¹ ( L̃ + Ũ ) x^(n-1) + D̃⁻¹ b

    Gauss-Seidel:   x^(n) = −( L̃ + D̃ )⁻¹ Ũ x^(n-1) + ( L̃ + D̃ )⁻¹ b

Remark: inversion of the whole matrix ( L̃ + D̃ ) is not required.

Index notation

    A = {a_ij} ,  b = {b_i} ,  x = {x_i} ;    i, j = 1, 2, ..., n

Simultaneous algebraic equations to be analyzed:

    Σ_{j=1}^n a_ij x_j = b_i

Iteration algorithms

    Jacobi:         x_i^(n) = [ − Σ_{j=1, j≠i}^n a_ij x_j^(n-1) + b_i ] / a_ii

    Gauss-Seidel:   x_i^(n) = [ − Σ_{j=1}^{i-1} a_ij x_j^(n) − Σ_{j=i+1}^n a_ij x_j^(n-1) + b_i ] / a_ii

    i = 1, 2, ..., n
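A Python sketch of both schemes in index notation; the break-off test below uses the
relative solution error in the maximum norm with an illustrative tolerance:

    def jacobi_step(A, b, x):
        n = len(b)
        return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                for i in range(n)]

    def gauss_seidel_step(A, b, x):
        n = len(b)
        x = x[:]                 # new values overwrite old ones as they appear
        for i in range(n):
            x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
        return x

    A = [[20, 2, -1], [2, 13, -2], [1, 1, 1]]
    b = [25, 30, 2]
    print(jacobi_step(A, b, [0.0, 0.0, 0.0]))   # first Jacobi iterate: [1.25, 2.307..., 2.0]

    x = [0.0, 0.0, 0.0]
    for n in range(1, 100):
        x_new = gauss_seidel_step(A, b, x)
        if max(abs(u - v) for u, v in zip(x_new, x)) / max(abs(u) for u in x_new) < 1e-6:
            break
        x = x_new
    print(x_new, n)   # ~[1, 2, -1]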
Theorem
When A is a positive definite matrix, the Jacobi and Gauss-Seidel methods are
convergent. (This is a sufficient but not a necessary condition.)
Relaxation technique

    x_i^(n) = x_i^(n-1) + ω [ x̂_i^(n) − x_i^(n-1) ] = ( 1 − ω ) x_i^(n-1) + ω x̂_i^(n)

    x̂_i^(n)   -  direct Gauss-Seidel result, n-th iteration
    x_i^(n)   -  relaxed solution, n-th iteration
    ω > 0     -  relaxation parameter

Variable relaxation parameter ω^(n-1)

Residuum

    r̂^(n-1) = x̂^(n) − x^(n-1) ,    Δr̂^(n-1) = r̂^(n) − r̂^(n-1)

let

    r^(n) = r̂^(n-1) + ω^(n-1) ( r̂^(n) − r̂^(n-1) ) = r̂^(n-1) + ω^(n-1) Δr̂^(n-1)

and

    I = ( r^(n) )ᵀ r^(n) = ( r̂^(n-1) )ᵀ r̂^(n-1) + 2 ω^(n-1) ( r̂^(n-1) )ᵀ Δr̂^(n-1)
        + ( ω^(n-1) )² ( Δr̂^(n-1) )ᵀ Δr̂^(n-1)

hence, using the condition

    min_{ω^(n-1)} I :    dI/dω^(n-1) = 2 ( r̂^(n-1) )ᵀ Δr̂^(n-1) + 2 ω^(n-1) ( Δr̂^(n-1) )ᵀ Δr̂^(n-1) = 0

find the optimal relaxation coefficient

    ω^(n-1) = − ( Δr̂^(n-1) )ᵀ r̂^(n-1) / ( Δr̂^(n-1) )ᵀ Δr̂^(n-1) = 1 − ( Δr̂^(n-1) )ᵀ r̂^(n) / ( Δr̂^(n-1) )ᵀ Δr̂^(n-1)

hence

    x_i^(n) = x̂_i^(n-1) + ω^(n-1) r̂_i^(n-1)

Example continuation: relaxation (using Gauss-Seidel)

Let ω = 0.8 = const
    x_1^(2) = 1.250000 + 0.8·( 0.970192 − 1.250000 ) = 1.026154
    x_2^(2) = 2.115385 + 0.8·( 1.948373 − 2.115385 ) = 1.981775
    x_3^(2) = −1.365385 + 0.8·( −0.918565 + 1.365385 ) = −1.007929

Further iterations: Gauss-Seidel followed by relaxation

    G.S. (3)        RELAX (3)       G.S. (4)        RELAX (4)
    1.001426        1.006372        1.000401        1.001595
    1.998561   →    1.995204   →    1.999695   →    1.998798
   -0.999987       -1.001575       -1.000097       -1.000393
Example continuation: relaxation with variable ω (using Gauss-Seidel)

Gauss-Seidel iterations 1-5, with relaxation applied after iterations 3 and 5:

            1           2           3        Relax(3)       4           5        Relax(5)
 x_1     1.250000    0.970192    1.009234    1.002145    0.999935    1.000045    1.000032
 x_2     2.115385    1.948373    2.011108    1.999716    1.999724    2.000046    2.000007
 x_3    -1.365385   -0.918565   -1.020342   -1.001861   -0.999659   -1.000090   -1.000038

 r̃_1                -0.279808    0.039042               -0.002210    0.000109
 r̃_2                -0.167012    0.062735                0.000008    0.000322
 r̃_3                 0.446820   -0.101777                0.002202   -0.000431

 Δr̃_1                0.318850                            0.002319
 Δr̃_2                0.229747                            0.000314
 Δr̃_3               -0.548597                           -0.002633

 ω                   0.818412                            0.879922
Error estimation after the first step of iteration

Estimated relative solution error
    δ_1 = || x_1 − x_0 || / || x_1 ||
    x_1 − x_0 = { 1.250000 − 0 , 2.115385 − 0 , −1.365385 − 0 }
              = { 1.250000 , 2.115385 , −1.365385 }

Euclidean norm
    δ_1^E = [ (1/3)( 1.250000² + 2.115385² + 1.365385² ) ]^{1/2}
          / [ (1/3)( 1.250000² + 2.115385² + 1.365385² ) ]^{1/2} = 1.622922 / 1.622922 = 1.000000

Maximum norm
    δ_1^M = sup{ 1.250000 , 2.115385 , 1.365385 } / sup{ 1.250000 , 2.115385 , 1.365385 }
          = 2.115385 / 2.115385 = 1.000000

Relative residual error    r_n = || F_n || / || F_0 || ,    r_1 = || F_1 || / || F_0 ||

Euclidean norm    || F ||_E = [ (1/n) Σ_{j=1}^n F_j(x)² ]^{1/2}

    F_1 = { −5.596155 , −2.730775 , 0.000000 }

    || F_0 ||_E = [ (1/3)( 25² + 30² + 2² ) ]^{1/2} = 22.575798
    || F_1 ||_E = [ (1/3)( 5.596155² + 2.730775² + 0² ) ]^{1/2} = 3.595093

    r_1^E = 3.595093 / 22.575798 = 0.159245

Maximum norm    || F ||_M = sup_i | F_i |

    || F_0 ||_M = sup{ 25 , 30 , 2 } = 30
    || F_1 ||_M = sup{ 5.596155 , 2.730775 , 0.000000 } = 5.596155

    r_1^M = 5.596155 / 30 = 0.186539

Break-off test
Assume admissible errors for convergence B_C and residuum B_R; check
    δ_1^E = 1.000000 > B_C = 10⁻⁶
    δ_1^M = 1.000000 > B_C = 10⁻⁶
    r_1^E = 0.159245 > B_R = 10⁻⁸
    r_1^M = 0.186539 > B_R = 10⁻⁸
After the second step of iteration

Estimated relative solution error
    δ_2 = || x_2 − x_1 || / || x_2 ||
    x_2 − x_1 = { 0.970192 − 1.250000 , 1.948373 − 2.115385 , −0.918565 + 1.365385 }
              = { −0.279808 , −0.167012 , 0.446820 }

Euclidean norm
    δ_2^E = [ (1/3)( 0.279808² + 0.167012² + 0.446820² ) ]^{1/2}
          / [ (1/3)( 0.970192² + 1.948373² + 0.918565² ) ]^{1/2} = 0.319288 / 1.363934 = 0.234088

Maximum norm
    δ_2^M = sup{ 0.279808 , 0.167012 , 0.446820 } / sup{ 0.970192 , 1.948373 , 0.918565 }
          = 0.446820 / 1.948373 = 0.229330

Relative residual error
    r_2 = || F_2 || / || F_0 ||

Euclidean norm
    F_2 = { 0.780849 , 0.893637 , 0.000000 }

    || F_0 ||_E = 22.575798
    || F_2 ||_E = [ (1/3)( 0.780849² + 0.893637² + 0² ) ]^{1/2} = 0.685155

    r_2^E = 0.685155 / 22.575798 = 0.030349

Maximum norm
    || F_0 ||_M = 30
    || F_2 ||_M = sup{ 0.780849 , 0.893637 , 0.000000 } = 0.893637

    r_2^M = 0.893637 / 30 = 0.029788

Break-off test
    δ_2^E = 0.234088 > B_C = 10⁻⁶
    δ_2^M = 0.229330 > B_C = 10⁻⁶
    r_2^E = 0.030349 > B_R = 10⁻⁸
    r_2^M = 0.029788 > B_R = 10⁻⁸
After the third step of iteration

Estimated relative solution error
    δ_3 = || x_3 − x_2 || / || x_3 ||
    x_3 − x_2 = { 1.009234 − 0.970192 , 2.011108 − 1.948373 , −1.020342 + 0.918565 }
              = { 0.039042 , 0.062735 , −0.101777 }

Euclidean norm
    δ_3^E = [ (1/3)( 0.039042² + 0.062735² + 0.101777² ) ]^{1/2}
          / [ (1/3)( 1.009234² + 2.011108² + 1.020342² ) ]^{1/2} = 0.072614 / 1.426442 = 0.050906

Maximum norm
    δ_3^M = sup{ 0.039042 , 0.062735 , 0.101777 } / sup{ 1.009234 , 2.011108 , 1.020342 }
          = 0.101777 / 2.011108 = 0.050608

Relative residual error
    r_3 = || F_3 || / || F_0 ||

Euclidean norm
    F_3 = { −0.227238 , −0.203556 , 0.000000 }

    || F_0 ||_E = 22.575798
    || F_3 ||_E = [ (1/3)( 0.227238² + 0.203556² + 0² ) ]^{1/2} = 0.176140

    r_3^E = 0.176140 / 22.575798 = 0.007802

Maximum norm
    || F_0 ||_M = 30
    || F_3 ||_M = sup{ 0.227238 , 0.203556 , 0.000000 } = 0.227238

    r_3^M = 0.227238 / 30 = 0.007575

Break-off test
    δ_3^E = 0.050906 > B_C = 10⁻⁶
    δ_3^M = 0.050608 > B_C = 10⁻⁶
    r_3^E = 0.007802 > B_R = 10⁻⁸
    r_3^M = 0.007575 > B_R = 10⁻⁸
After the third step of iteration and relaxation

Estimated relative solution error
    δ_3' = || x_3' − x_2 || / || x_3' ||
    x_3' − x_2 = { 1.002145 − 0.970192 , 1.999716 − 1.948373 , −1.001861 + 0.918565 }
               = { 0.031953 , 0.051343 , −0.083296 }

Euclidean norm
    δ_3'^E = [ (1/3)( 0.031953² + 0.051343² + 0.083296² ) ]^{1/2}
           / [ (1/3)( 1.002145² + 1.999716² + 1.001861² ) ]^{1/2} = 0.059429 / 1.415025 = 0.041998

Maximum norm
    δ_3'^M = sup{ 0.031953 , 0.051343 , 0.083296 } / sup{ 1.002145 , 1.999716 , 1.001861 }
           = 0.083296 / 1.999716 = 0.041654

Relative residual error
    r_3' = || F_3' || / || F_0 ||

Euclidean norm
    F_3' = { 0.044190 , 0.004317 , 0.000000 }

    || F_0 ||_E = 22.575798
    || F_3' ||_E = [ (1/3)( 0.044190² + 0.004317² + 0² ) ]^{1/2} = 0.025635

    r_3'^E = 0.025635 / 22.575798 = 0.001135

Maximum norm
    || F_0 ||_M = 30
    || F_3' ||_M = sup{ 0.044190 , 0.004317 , 0.000000 } = 0.044190

    r_3'^M = 0.044190 / 30 = 0.001473

Break-off test
    δ_3'^E = 0.041998 > B_C = 10⁻⁶
    δ_3'^M = 0.041654 > B_C = 10⁻⁶
    r_3'^E = 0.001135 > B_R = 10⁻⁸
    r_3'^M = 0.001473 > B_R = 10⁻⁸
After the fourth step of iteration

Estimated relative solution error
    δ_4 = || x_4 − x_3' || / || x_4 ||
    x_4 − x_3' = { 0.999935 − 1.002145 , 1.999724 − 1.999716 , −0.999659 + 1.001861 }
               = { −0.002210 , 0.000008 , 0.002202 }

Euclidean norm
    δ_4^E = [ (1/3)( 0.002210² + 0.000008² + 0.002202² ) ]^{1/2}
          / [ (1/3)( 0.999935² + 1.999724² + 0.999659² ) ]^{1/2} = 0.001801 / 1.413988 = 0.001274

Maximum norm
    δ_4^M = sup{ 0.002210 , 0.000008 , 0.002202 } / sup{ 0.999935 , 1.999724 , 0.999659 }
          = 0.002210 / 1.999724 = 0.001105

Relative residual error
    r_4 = || F_4 || / || F_0 ||

Euclidean norm
    F_4 = { −0.002186 , −0.004403 , 0.000000 }

    || F_0 ||_E = 22.575798
    || F_4 ||_E = [ (1/3)( 0.002186² + 0.004403² + 0² ) ]^{1/2} = 0.002838

    r_4^E = 0.002838 / 22.575798 = 0.000126

Maximum norm
    || F_0 ||_M = 30
    || F_4 ||_M = sup{ 0.002186 , 0.004403 , 0.000000 } = 0.004403

    r_4^M = 0.004403 / 30 = 0.000147

Break-off test
    δ_4^E = 0.001274 > B_C = 10⁻⁶
    δ_4^M = 0.001105 > B_C = 10⁻⁶
    r_4^E = 0.000126 > B_R = 10⁻⁸
    r_4^M = 0.000147 > B_R = 10⁻⁸
After the fifth step of iteration

Estimated relative solution error
    δ_5 = || x_5 − x_4 || / || x_5 ||
    x_5 − x_4 = { 1.000045 − 0.999935 , 2.000046 − 1.999724 , −1.000090 + 0.999659 }
              = { 0.000109 , 0.000322 , −0.000431 }

Euclidean norm
    δ_5^E = [ (1/3)( 0.000109² + 0.000322² + 0.000431² ) ]^{1/2}
          / [ (1/3)( 1.000045² + 2.000046² + 1.000090² ) ]^{1/2} = 0.000317 / 1.414267 = 0.000224

Maximum norm
    δ_5^M = sup{ 0.000109 , 0.000322 , 0.000431 } / sup{ 1.000045 , 2.000046 , 1.000090 }
          = 0.000431 / 2.000046 = 0.000215

Relative residual error
    r_5 = || F_5 || / || F_0 ||

Euclidean norm
    F_5 = { 0.001075 , 0.000862 , 0.000000 }

    || F_0 ||_E = 22.575798
    || F_5 ||_E = [ (1/3)( 0.001075² + 0.000862² + 0² ) ]^{1/2} = 0.000796

    r_5^E = 0.000796 / 22.575798 = 0.000035

Maximum norm
    || F_0 ||_M = 30
    || F_5 ||_M = sup{ 0.001075 , 0.000862 , 0.000000 } = 0.001075

    r_5^M = 0.001075 / 30 = 0.000036

Break-off test
    δ_5^E = 0.000224 > B_C = 10⁻⁶
    δ_5^M = 0.000215 > B_C = 10⁻⁶
    r_5^E = 0.000035 > B_R = 10⁻⁸
    r_5^M = 0.000036 > B_R = 10⁻⁸
After the fifth step of iteration and relaxation

Estimated relative solution error
    δ_5' = || x_5' − x_4 || / || x_5' ||
    x_5' − x_4 = { 1.000032 − 0.999935 , 2.000007 − 1.999724 , −1.000038 + 0.999659 }
               = { 0.000097 , 0.000283 , −0.000379 }

Euclidean norm
    δ_5'^E = [ (1/3)( 0.000097² + 0.000283² + 0.000379² ) ]^{1/2}
           / [ (1/3)( 1.000032² + 2.000007² + 1.000038² ) ]^{1/2} = 0.000279 / 1.414233 = 0.000197

Maximum norm
    δ_5'^M = sup{ 0.000097 , 0.000283 , 0.000379 } / sup{ 1.000032 , 2.000007 , 1.000038 }
           = 0.000379 / 2.000007 = 0.000190

Relative residual error
    r_5' = || F_5' || / || F_0 ||

Euclidean norm
    F_5' = { 0.000683 , 0.000230 , 0.000000 }

    || F_0 ||_E = 22.575798
    || F_5' ||_E = [ (1/3)( 0.000683² + 0.000230² + 0² ) ]^{1/2} = 0.000416

    r_5'^E = 0.000416 / 22.575798 = 0.000018

Maximum norm
    || F_0 ||_M = 30
    || F_5' ||_M = sup{ 0.000683 , 0.000230 , 0.000000 } = 0.000683

    r_5'^M = 0.000683 / 30 = 0.000023

Break-off test
    δ_5'^E = 0.000197 > B_C = 10⁻⁶
    δ_5'^M = 0.000190 > B_C = 10⁻⁶
    r_5'^E = 0.000018 > B_R = 10⁻⁸
    r_5'^M = 0.000023 > B_R = 10⁻⁸
Relative error estimation

[Figure: logarithm of the error magnitude versus logarithm of the iteration number -
solution convergence and residual error, each in the maximum and Euclidean norms]
5.6. MATRIX FACTORIZATION LU BY THE GAUSSIAN ELIMINATION

    A = L U

The LU factorization of a matrix A may be done by the Gaussian elimination approach. The
main difference between the Gauss solution of SLAE and the LU factorization is that in the
latter case we have to store the multipliers {m_ij}. Applications:
 -  solution of problems with multiple right-hand sides
 -  matrix inversion

Example

    A = [ 1  1  2 -4
          2 -1  3  1
          3  1 -1  2
          1 -1 -1  1 ]

Eliminate, storing the multipliers m_ij in place of the zeroed entries:

    [ 1      1      2     -4         [ 1      1        2      -4          [ 1     1      2       -4
      m21=2 -3     -1      9    →      2     -3       -1       9     →      2    -3     -1        9
      m31=3 -2     -7     14           3     m32=2/3 -19/3     8            3    2/3   -19/3      8
      m41=1 -2     -3      5 ]         1     m42=2/3  -7/3    -1 ]          1    2/3   m43=7/19 -75/19 ]

Then

    [ 1  1  2 -4       [ 1   0    0    0      [ 1  1    2     -4
      2 -1  3  1    =    2   1    0    0   ·    0 -3   -1      9
      3  1 -1  2         3  2/3   1    0        0  0  -19/3    8
      1 -1 -1  1 ]       1  2/3  7/19  1 ]      0  0    0    -75/19 ]

         A          =         L            ·          U

Generally

    [ a_11 a_12 ... a_1n       [ 1    0    ... 0       [ u_11 u_12 ... u_1n
      a_21 a_22 ... a_2n    =    m_21 1    ... 0    ·    0    u_22 ... u_2n
      ...               ...      ...                     ...
      a_n1 a_n2 ... a_nn ]       m_n1 m_n2 ... 1 ]       0    0    ... u_nn ]
5.7. MATRIX INVERSION

5.7.1. Inversion of a square matrix using Gaussian elimination

    [ A | I ]    →    [ I | A⁻¹ ]

Example

    A = [ 2 1 1
          1 2 1
          1 1 2 ]
    [ 2 1 1 | 1 0 0        [ 2  1    1   |  1    0  0        [ 2  1    1   |  1     0    0
      1 2 1 | 0 1 0    →     0  3/2  1/2 | -1/2  1  0    →     0  3/2  1/2 | -1/2   1    0
      1 1 2 | 0 0 1 ]        0  1/2  3/2 | -1/2  0  1 ]        0  0    4/3 | -1/3  -1/3  1 ]

    [ 2  1    1   |  1     0    0          [ 2  1   0 |  5/4   1/4  -3/4        [ 2  0   0 |  3/2  -1/2  -1/2
 →    0  3/2  1/2 | -1/2   1    0     →      0  3/2 0 | -3/8   9/8  -3/8    →     0  3/2 0 | -3/8   9/8  -3/8
      0  0    1   | -1/4  -1/4  3/4 ]        0  0   1 | -1/4  -1/4   3/4 ]        0  0   1 | -1/4  -1/4   3/4 ]

    [ 1 0 0 |  3/4  -1/4  -1/4
 →    0 1 0 | -1/4   3/4  -1/4        →    A⁻¹ = (1/4) [  3 -1 -1
      0 0 1 | -1/4  -1/4   3/4 ]                         -1  3 -1
                                                         -1 -1  3 ]
Algorithm

    A C = I ,    A = [a_ij] ,  C = [c_ij] ,  starting from C^(0) = I

I. Step forward

    m_ik = a_ik^(k-1) / a_kk^(k-1)
    a_ij^(k) = a_ij^(k-1) − m_ik a_kj^(k-1) ,    k = 1, 2, ..., n−1 ;  i = k+1, ..., n ;  j = k+1, ..., n
    c_ij^(k) = c_ij^(k-1) − m_ik c_kj^(k-1) ,    j = 1, 2, ..., n

II. Step back (for k = n, n−1, ..., 2: normalize the pivot row and zero the entries above it)

    c_kj ← c_kj / a_kk ,                          j = 1, 2, ..., n
    c_ij ← c_ij − a_ik c_kj ,    i = k−1, ..., 1 ;  j = 1, 2, ..., n
5.7.2. Inversion of the lower triangular matrix

Example

    L = [ 1 0 0           C = L⁻¹ = [ c_11  0    0
          2 4 0 ,                     c_21 c_22  0      ,    L C = I
          3 5 6 ]                     c_31 c_32 c_33 ]

    [ 1 0 0    [ c_11  0    0        [ 1 0 0
      2 4 0  ·   c_21 c_22  0     =    0 1 0
      3 5 6 ]    c_31 c_32 c_33 ]      0 0 1 ]

    c_11·1 = 1                       →   c_11 = 1
    2 c_11 + 4 c_21 = 0              →   c_21 = −1/2
    3 c_11 + 5 c_21 + 6 c_31 = 0     →   c_31 = −1/12
    4 c_22 = 1                       →   c_22 = 1/4
    5 c_22 + 6 c_32 = 0              →   c_32 = −5/24
    6 c_33 = 1                       →   c_33 = 1/6

    C = [   1      0     0
          −1/2    1/4    0
          −1/12  −5/24  1/6 ]

Algorithm

    c_ii = 1 / l_ii ,    i = 1, 2, ..., n

    c_ij = − (1/l_ii) Σ_{k=j}^{i-1} l_ik c_kj ,    j = 1, 2, ..., i−1
5.8. OVERDETERMINED SIMULTANEOUS LINEAR EQUATIONS

[Figure: the three lines x + y = 2, x − y = 0, x − 2y = −2 and their pairwise
intersection points (1, 1), (2/3, 4/3), (6/7, 9/7); no common point exists]

    x + y = 2
    x − y = 0
    x − 2y = −2

THE LEAST SQUARES METHOD

Use of the Euclidean error norm.

Let B = ( x + y − 2 )² + ( x − y )² + ( x − 2y + 2 )²

    ∂B/∂x = 2( x + y − 2 ) + 2( x − y ) + 2( x − 2y + 2 ) = 0    →     3x − 2y = 0
    ∂B/∂y = 2( x + y − 2 ) − 2( x − y ) − 2·2( x − 2y + 2 ) = 0  →    −2x + 6y = 6

solution:   x = 6/7 ,  y = 9/7 ;    B = (1/7)² + (3/7)² + (2/7)² = 2/7
General approach

Index notation

    Σ_{j=1}^m a_ij x_j = b_i ,    i = 1, 2, ..., n ;   j = 1, 2, ..., m ;   m < n

where   m  -  number of unknowns
        n  -  number of equations

In the above example m = 2, n = 3.

    B = Σ_{i=1}^n ( Σ_{j=1}^m a_ij x_j − b_i )²
    ∂B/∂x_k = 2 Σ_{i=1}^n a_ik ( Σ_{j=1}^m a_ij x_j − b_i ) = 0    →

    Σ_{i=1}^n Σ_{j=1}^m a_ik a_ij x_j = Σ_{i=1}^n a_ik b_i ,    k = 1, ..., m

Matrix notation

    A x = b ,    A - (n×m) ,  x - (m×1) ,  b - (n×1)

    B = ( A x − b )ᵀ ( A x − b )

    ∂B/∂x = 2 Aᵀ ( A x − b ) = 0    →    AᵀA x = Aᵀ b
    ( AᵀA  -  (m×m) ,   Aᵀb  -  (m×1) )
Example

Once more the same example as before, but posed now in the matrix notation

    A = [ 1  1         b = {  2          x = { x
          1 -1 ,              0 ,              y }
          1 -2 ]             -2 }

    AᵀA = [ 1  1  1    [ 1  1      =  [  3 -2
            1 -1 -2 ]    1 -1           -2  6 ]
                         1 -2 ]

    Aᵀb = [ 1  1  1    {  2      =  { 0
            1 -1 -2 ]     0             6 }
                         -2 }

Pseudo-solution by means of the least squares method (LSM):

    [  3 -2    { x      =  { 0       →     x = { 6/7
      -2  6 ]    y }         6 }                  9/7 }
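A Python sketch of this pseudo-solution via the normal equations AᵀA x = Aᵀb (uses
numpy):

    import numpy as np

    A = np.array([[1.0, 1.0], [1.0, -1.0], [1.0, -2.0]])
    b = np.array([2.0, 0.0, -2.0])

    x = np.linalg.solve(A.T @ A, A.T @ b)
    print(x)                          # [6/7, 9/7] = [0.857..., 1.285...]
    print(np.sum((A @ x - b) ** 2))   # minimal B = 2/7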
A weighted LSM may also be considered.

Matrix notation

    B = ( A x − b )ᵀ W ( A x − b )
hence
    Aᵀ W A x = Aᵀ W b
where
    W = diag( w_1, w_2, ..., w_n )

Index notation

    B = Σ_{i=1}^n ( Σ_{j=1}^m a_ij x_j − b_i )² w_i

    ∂B/∂x_k = 0 ,  k = 1, ..., m    →    Σ_{i=1}^n Σ_{j=1}^m a_ik w_i a_ij x_j = Σ_{i=1}^n a_ik w_i b_i
5.9. UNDERDETERMINED SLAE  -  MINIMUM LENGTH METHOD

CASE: FEWER EQUATIONS THAN UNKNOWNS

[Figure: the constraint line y = (1/3)(5 − 2x) in the (x, y) plane; the minimum length
solution minimizes x² + y², i.e. it is the point of the line closest to the origin]
SOLUTION APPROACH: MINIMUM LENGTH METHOD (MLM)

Introductory example

Find
    min_{x,y} ρ² ,    ρ² = x² + y²
when
    2x + 3y = 5

(i) Elimination

    y = (1/3)( 5 − 2x )
hence
    ρ² = x² + (1/9)( 5 − 2x )²
find
    min_x ρ² :    dρ²/dx = 2x − (4/9)( 5 − 2x ) = 0    →    x = 10/13 ,  y = 15/13
(ii) Lagrange multipliers approach

    I = ( x² + y² ) − λ ( 2x + 3y − 5 )

    ∂I/∂x = 2x − 2λ = 0                    x = 10/13
    ∂I/∂y = 2y − 3λ = 0              →     y = 15/13
    ∂I/∂λ = −( 2x + 3y − 5 ) = 0           λ = 10/13
GENERAL LINEAR CASE

Find
    min_{x_1,...,x_n} ρ² ,    ρ² = Σ_{i=1}^n x_i²
when
    A x = b ,    A - (m×n) ,   m < n   -  linear constraints

SOLUTION BY THE ELIMINATION APPROACH

Partition

    A = [ A_m | A_(n-m) ] ,    x = { x_I , x_II }

    A_m - (m×m), x_I - (m×1)            -  eliminated unknowns
    A_(n-m) - (m×(n−m)), x_II - ((n−m)×1)  -  remaining unknowns

hence

    A x = A_m x_I + A_(n-m) x_II = b

    x_I = A_m⁻¹ ( b − A_(n-m) x_II )    -  eliminated unknowns    (*)

and

    ρ² = Σ_{i=1}^m x_i² + Σ_{i=m+1}^n x_i² = ρ²( x_II )

Finally we find the solution of the minimization problem min ρ²( x_II ) in two steps:

- step 1: use of the optimality conditions

    ∂ρ²/∂x_k = 0 ,    k = m+1, m+2, ..., n

  hence we obtain the remaining unknowns x_II

- step 2: use of the elimination formulas (*); they provide the eliminated unknowns x_I
Example

Given the underdetermined SLAE

    2x + 3y − z = 4
    −x + 4y − 2z = −4

Solution by the minimum length approach

Find
    min_{x,y,z} ρ² ,    ρ² = x² + y² + z²
when
    [ 2  3 -1    { x          {  4
     -1  4 -2 ]    y      =     -4 }
                   z }

Hence
    A_3 = [ 2 3 | -1       A_2 = [ 2 3        A_1 = [ -1
           -1 4 | -2 ] ,          -1 4 ] ,           -2 ]
and
    x = { x y | z } ,    b = { 4 , −4 }

Solution process

    A_2⁻¹ = (1/11) [ 4 -3
                     1  2 ]

    { x      = A_2⁻¹ ( { 4      − [ -1    [z] )  =  (1/11) ( { 28     + { -2    [z] )
      y }               -4 }       -2 ]                        -4 }       5 }

elimination and minimization:

    { x        { 28/11        { -2/11
      y    =    -4/11     +      5/11    [z]
      z }         0 }           11/11 }

    ρ² = x² + y² + z² = (1/11²) [ ( 28 − 2z )² + ( −4 + 5z )² + ( 11z )² ]

    dρ²/dz = (2/121) [ −2( 28 − 2z ) + 5( −4 + 5z ) + 121 z ] = 0    →    z = 1.52/3

    { x        { 28/11        { -2/11                      { 7.36
      y    =    -4/11     +      5/11   · (1.52/3)  =  (1/3)  -0.40
      z }         0 }            1 }                         1.52 }

Finally

    { x y z } = (1/3) { 7.36 , −0.40 , 1.52 } = { 2.453333 , −0.133333 , 0.506667 }
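A Python sketch of the minimum length solution; for a full-row-rank A it can also be
written as x = Aᵀ(A Aᵀ)⁻¹ b, which is equivalent to the Lagrange multiplier approach
above (uses numpy; this closed form is an assumption stated here, not taken from the
notes):

    import numpy as np

    A = np.array([[2.0, 3.0, -1.0], [-1.0, 4.0, -2.0]])
    b = np.array([4.0, -4.0])

    x = A.T @ np.linalg.solve(A @ A.T, b)   # min ||x||^2 subject to A x = b
    print(x)   # [2.453..., -0.133..., 0.506...]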
6. THE ALGEBRAIC EIGENVALUE PROBLEM
6.1. INTRODUCTION

Applications in mechanics: principal stresses, principal strains, dynamics, buckling, ...

Formulation

    A x = λ x ,    where A is a real (n×n) matrix, x ∈ Rⁿ

Find a non-trivial solution:    det( A − λ I ) = 0    →    λ

    A x_j = λ_j x_j ,    j = 1, 2, 3, ..., n

where
    λ_1, ..., λ_n   -  eigenvalues
    x_1, ..., x_n   -  eigenvectors
Example

    2x + y = λx          [ 2−λ    1     { x      { 0
    x + 2y = λy             1    2−λ ]    y }  =   0 }

eigenvalue evaluation

    det( A − λI ) = ( 2−λ )² − 1 = λ² − 4λ + 3 = 0    →    λ_1 = 3 ,  λ_2 = 1

Let λ_1 = 3:
    2x_1 + y_1 = 3x_1        −x_1 + y_1 = 0
    x_1 + 2y_1 = 3y_1         x_1 − y_1 = 0
    with the normalization x_1² + y_1² = 1:    x_1 = 1/sqrt(2) ,  y_1 = 1/sqrt(2)

Let λ_2 = 1:
    2x_2 + y_2 = x_2          x_2 + y_2 = 0
    x_2 + 2y_2 = y_2          x_2 + y_2 = 0
    with the normalization x_2² + y_2² = 1:    x_2 = −1/sqrt(2) ,  y_2 = 1/sqrt(2)

Hence, with x = { x, y },

    x_1 = (1/sqrt(2)) [ 1       x_2 = (1/sqrt(2)) [ -1
                        1 ] ,                        1 ]

[Figure: the eigenvectors x_1 and x_2 in the (x, y) plane]
6.2. CLASSIFICATION OF NUMERICAL SOLUTION METHODS

- methods oriented to evaluation of all eigenvalues and eigenvectors (e.g. the Jacobi method)
- methods oriented to evaluation of a selected group of eigenvalues and eigenvectors
- methods oriented to evaluation of a single eigenvalue and eigenvector (usually
  an extremal one, e.g. the power method, the inverse iteration method)
6.3. THEOREMS

Theorem 1   If A has all its eigenvalues distinct, then there exists a complete set of
linearly independent eigenvectors, unique up to a multiplicative constant.

Theorem 2 (Cayley-Hamilton theorem)
The matrix A satisfies its own characteristic equation: if p(λ) = det( A − λI ), then
p(A) = 0.

Theorem 3   If g(x) is a polynomial in x and λ is an eigenvalue of a matrix A, then g(λ)
is an eigenvalue of the matrix g(A).

Example
Let
    C = 3A² − 2A + 4I
then
    λ_C = 3λ² − 2λ + 4

Theorem 4   The eigenvalues (but not the eigenvectors) are preserved under a similarity
transformation.

Definition 1   The similarity transformation of the matrix A is R⁻¹AR, where R is a
non-singular matrix; by Theorem 4 it does not change the eigenvalues λ.
Chapter 6 3/38 2008-01-23
Let Ax = x
x = Ry y= R-1x , det R `" 0
R-1 ARy = Ry
R-1ARy = y
Thus eigenvalues for A and R-1AR matrices are the same
Theorem 5 (Gerschgorin s theorem).
Let A be a given n n matrix and let Ci, i = 1,2,...,n be the discs with
centers aii and radii
n
Ri = aik
"
k =1
k `"i
Let D denote the union of the discs Ci . Then all the eigenvalues of A lie within D.
Conclusion:
min e" min aii - Ri , max d" max aii + Ri
( ) ( )
i i
Example
a11 = -2
Ą#- 2 1 3 R1 = 1 + 3 = 4
ń#
ó#
a22 =4
A = R2 = -1 + 2 = 3
ó#-1 4 2Ą#
Ą#
ó# - 2 3Ś# a33 =3 R3 = 3 + - 2 = 5
Ą#
3
Ł#






ż#-2 + 4
ż#-2 - 4
#
#
max < max 4 + 3 = 8
#
min > min 4 - 3 = - 6
#
#
#
3 + 5
3 - 5 #
#
Remarks
- The theorem is useful for a rough evaluation of the eigenvalue spectrum.
- The theorem holds also for complex matrices.
- The quality of the Gerschgorin evaluation depends on how dominant the diagonal
terms of the matrix A are. The evaluation is exact for diagonal matrices.
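A quick sketch of the Gerschgorin bounds for the example matrix (numpy assumed):

    import numpy as np

    A = np.array([[-2.0,  1.0, 3.0],
                  [-1.0,  4.0, 2.0],
                  [ 3.0, -2.0, 3.0]])
    centers = np.diag(A)
    radii = np.abs(A).sum(axis=1) - np.abs(centers)    # R_i = sum_{k != i} |a_ik|
    print(min(centers - radii), max(centers + radii))  # -6.0  8.0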
Theorem 6 (Sturm sequence property). The number of agreements in sign of successive
numbers of the sequence p_r(λ̂), for any given λ̂, in a symmetric tridiagonal matrix T, is
equal to the number of eigenvalues of this matrix which are strictly greater than λ̂.
Here p_r(λ̂) are the principal minors of the matrix T - λ̂I, where

    T = [b1   c1                  ]          T - λI = [b1-λ   c1                     ]
        [c1   b2   c2             ]                   [c1     b2-λ   c2              ]
        [     c2   b3   c3        ]  ,                [       c2     b3-λ   c3       ]
        [        ...  ...   c_n-1 ]                   [            ...  ...    c_n-1 ]
        [             c_n-1  bn   ]                   [              c_n-1    bn-λ   ]

and
    p0(λ) = 1
    p1(λ) = b1 - λ
    pk(λ) = (bk - λ) p_{k-1}(λ) - c_{k-1}² p_{k-2}(λ) ,   k = 2, 3, ..., n

Remark:
If p_j(λ̂) = 0 then we record the sign opposite to the sign of p_{j-1}(λ̂).
Example
Determine intervals containing at most one eigenvalue of the matrix

    T = [ 2 -1            ]
        [-1  2 -1         ]
        [   -1  2 -1      ]
        [      -1  2 -1   ]
        [         -1  2   ]

Solution
T - λI is the same tridiagonal matrix with each diagonal term 2 replaced by (2 - λ).
hence
    p0(λ) = 1
    p1(λ) = 2 - λ
    p2(λ) = (2-λ)² - (-1)²·1 = (2-λ)² - 1 = λ² - 4λ + 3
    p3(λ) = (2-λ)(λ² - 4λ + 3) - (2-λ) = (2-λ)(λ² - 4λ + 2)
    p4(λ) = (2-λ)²(λ² - 4λ + 2) - (λ² - 4λ + 3)
    p5(λ) = (2-λ)[(2-λ)²(λ² - 4λ + 2) - 2(λ² - 4λ + 2) - 1]
          = (2-λ){[(2-λ)² - 2](λ² - 4λ + 2) - 1} ≡ det(T - λI)

We now select values of λ̂ and record, for each value, the sign of the polynomials p_j(λ̂).
For p_j(λ̂) = 0 we record the sign opposite to that of p_{j-1}(λ̂).
Table 1. Signs of pk(λ̂)

    k \ λ̂ :  0    0.5   1.0   1.5   2.0   2.5   3.0   3.5   4.0
    0        +     +     +     +     +     +     +     +     +
    1        +     +     +     +     -     -     -     -     -
    2        +     +     -     -     -     -     -     +     +
    3        +     +     -     -     +     +     +     -     -
    4        +     -     -     +     +     +     -     -     +
    5        +     -     +     +     -     -     +     +     -
    number of eigenvalues > λ̂ :
             5     4     3     3     2     2     1     1     0
Examining the final row of Table 1 we find a single eigenvalue in each of the intervals
[0, 0.5], [0.5, 1.0], [1.5, 2.0], [2.5, 3.0], [3.5, 4.0]. Moreover, a conclusion can be drawn
from the first column of signs in Table 1: the matrix T has 5 positive eigenvalues, i.e. it is
positive definite. The same conclusion may be drawn from Gerschgorin's theorem:
    λ_min ≥ 2 - (|-1| + |-1|) = 0
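The sign counting of Theorem 6 is easy to mechanize; a sketch (numpy assumed; the
function name is mine) reproducing the last row of Table 1:

    import numpy as np

    # b: diagonal b1..bn, c: off-diagonal c1..c_{n-1};
    # returns the number of eigenvalues of T strictly greater than lam.
    def count_above(b, c, lam):
        p = [1.0, b[0] - lam]                                  # p0, p1
        for k in range(2, len(b) + 1):
            p.append((b[k-1] - lam) * p[k-1] - c[k-2]**2 * p[k-2])
        agreements, prev_sign = 0, 1.0                         # sign of p0 is +
        for k in range(1, len(p)):
            s = -prev_sign if p[k] == 0.0 else np.sign(p[k])   # zero: flip previous sign
            if s == prev_sign:
                agreements += 1
            prev_sign = s
        return agreements

    b, c = [2.0] * 5, [-1.0] * 4
    print([count_above(b, c, t) for t in (0, .5, 1, 1.5, 2, 2.5, 3, 3.5, 4)])
    # [5, 4, 3, 3, 2, 2, 1, 1, 0]  - as in Table 1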
Theorem 7  A positive definite matrix has all eigenvalues positive. A symmetric positive
definite n×n matrix has all n linearly independent eigenvectors.
Example
Positive definite matrix

    A = [1 1 0]
        [0 1 1]     (λ - 1)³ = 0  →  λ = 1 ,   one eigenvector {1, 0, 0}
        [0 0 1]

Symmetric positive definite matrix

    B = [1 0 0]
        [0 1 0]     (λ - 1)³ = 0  →  λ = 1 ,   three eigenvectors {1,0,0}, {0,1,0}, {0,0,1}
        [0 0 1]
Definition 2 (Rayleigh quotient)

    Ax = λx ,   λ = xᵀAx / xᵀx

    λ_min ≤ λ ≤ λ_max   for arbitrary x ∈ Ω

Remarks:
If x is an eigenvector of the matrix A, the Rayleigh quotient (RQ) λ constitutes the
corresponding exact eigenvalue of this matrix.
However, for a vector x that is only an approximation of an eigenvector, the RQ
provides an estimate of the relevant eigenvalue.
Theorem 8  An orthogonal transformation preserves matrix symmetry as well as its
eigenvalues.
Given
    Ax = λx
let
    x = Qy ,  where Q is an orthogonal matrix:  QᵀQ = I ,  Q⁻¹ = Qᵀ
Then
    AQy = λQy
    QᵀAQy = λy
Remark:
The orthogonal transformation of a matrix is a particular case of the similarity
transformation; therefore the statement of this theorem is obvious.
6.4. THE POWER METHOD
6.4.1. Concept of the method and its convergence
This is a method to find the unique dominant eigenvalue λ1, |λ1| = max(λ_max, -λ_min).
Let
    A u_j = λ_j u_j ,   j = 1, 2, ..., n
where u_j are the eigenvectors and λ_j the eigenvalues of the n×n matrix A. Let
    |λ1| > |λ2| ≥ |λ3| ≥ ... ≥ |λn|
Let
    x0 = Σ_{j=1}^n α_j u_j ,   where the α_j and u_j are not known.
Let
    x1 = A x0
    x2 = A x1 = A A x0 = A² x0
    ...............
    x_{k+1} = A x_k ,   k = 1, 2, ...

    x_{k+1} = A^{k+1} x0 = A^{k+1} Σ_j α_j u_j = Σ_j α_j A^{k+1} u_j = Σ_j α_j λ_j^{k+1} u_j

    x_{k+1} = λ1^{k+1} [ Σ_{j=1}^n α_j (λ_j/λ1)^{k+1} u_j ]

Notice:
The effect of multiplying the vector x by the matrix A is equivalent to multiplication by λ.
Let x_s(k+1) be the s-th component of the vector x_{k+1}. Then

    x_s(k+1)     λ1^{k+1} [ α1 u_s1 + Σ_{j=2}^n α_j (λ_j/λ1)^{k+1} u_sj ]
    -------- =   --------------------------------------------------------  =  λ1 + O((λ2/λ1)^k)
    x_s(k)       λ1^k     [ α1 u_s1 + Σ_{j=2}^n α_j (λ_j/λ1)^k     u_sj ]

Thus the magnitude of λ2/λ1 decides the convergence rate.
Finally
    lim_{k→∞} x_s(k+1)/x_s(k) = λ1
Remark:
If the dominant eigenvalue has multiplicity r, say, we get

    x_{k+1} = λ1^{k+1} [ Σ_{j=1}^r α_j u_j + Σ_{j=r+1}^n α_j (λ_j/λ1)^{k+1} u_j ]

and
    x_s(k+1)/x_s(k) = λ1 + O((λ_{r+1}/λ1)^k)

The dominant eigenvalue λ1 is found, but x_{k+1} converges to a linear combination of the
first r eigenvectors.
For real symmetric matrices the Rayleigh quotient provides a means of accelerating
the convergence over the rate of the x_s(k+1)/x_s(k) ratio.
6.4.2. Procedure using the Rayleigh quotient

POWER METHOD

GIVEN PROBLEM           Ax = λx
RAYLEIGH QUOTIENT       λ = xᵀAx / xᵀx

0. ASSUMPTION           x0
1. NORMALIZATION        v_k = x_k / (x_kᵀ x_k)^(1/2)
2. POWER STEP           x_{k+1} = A v_k
3. RAYLEIGH QUOTIENT    λ_{k+1} = v_kᵀ A v_k / (v_kᵀ v_k) = v_kᵀ A v_k = v_kᵀ x_{k+1}
4. ERROR ESTIMATION     ε_{k+1}(λ) = |λ_{k+1} - λ_k| / |λ_{k+1}| ,   ε_{k+1}(v) = ||v_{k+1} - v_k||
5. BREAK-OFF TEST       ε_{k+1}(λ) < B_λ ?   ε_{k+1}(v) < B_v ?   if No - go to 1, if Yes - go to 6.
6. FINAL RESULTS        λ_max ≈ λ_{k+1} ,   x_max ≈ x_{k+1}
Example

    A = [4    1/2  0  ]
        [1/2  4    1/2]        det(A - λI) = 0
        [0    1/2  4  ]

    λ1 = 4 + 1/√2 ,  λ2 = 4 ,  λ3 = 4 - 1/√2     - exact eigenvalues
    λ1 = 4.7071067 ,  λ2 = 4 ,  λ3 = 3.2928933

    v1 = {1/2, 1/√2, 1/2} ,   v2 = {1/√2, 0, -1/√2} ,   v3 = {1/2, -1/√2, 1/2}

EIGENSPECTRUM EVALUATION BY THE GERSCHGORIN THEOREM

    λ_min > 4 - 1/2 - 1/2 = 3 ,    λ_max < 4 + 1/2 + 1/2 = 5

The matrix is symmetric and positive definite, with 3 different eigenvalues.

POWER METHOD SOLUTION PROCESS
Assume
    x0 = {1, 1, 1}
hence
    v0 = x0 / (x0ᵀx0)^(1/2) = {1/√3, 1/√3, 1/√3}
    x1 = A v0 = (1/√3) {9/2, 5, 9/2}

    λ1 = v0ᵀ x1 = (1/3)(9/2 + 5 + 9/2) = 14/3 = 4.666667

    v1 = x1/(x1ᵀx1)^(1/2) = {9/2, 5, 9/2} / [(9/2)² + 5² + (9/2)²]^(1/2) = {0.556022, 0.617802, 0.556022}

    x2 = A v1 = {2.532988, 3.027230, 2.532988}

    λ2 = v1ᵀ x2 = 4.687023

    v2 = x2/(x2ᵀx2)^(1/2) = {0.540082, 0.645464, 0.540082}

Error estimation
    ε2(λ) = |4.687023 - 4.666667| / 4.687023 = 0.004342
    v2 - v1 = {-0.015940, 0.027662, -0.015940}
    ε2(v) = ||v2 - v1|| = 0.035684

Notice the greater accuracy of λ2 than of v2, i.e. ε2(λ) < ε2(v).

    v3 = {0.528458, 0.664428, 0.528458} ,   λ3 = 4.697206

Error estimation
    ε3(λ) = |4.69721 - 4.68702| / 4.69721 = 0.002169 ,   ε3(v) = 0.016438
    .................
    v11 = {0.501681, 0.704721, 0.501681} ,   λ11 = 4.707074
    v25 = {0.500056, 0.707028, 0.500056}     - result exact within 3-4 digits
    λ25 = 4.707107                           - result exact within 7 digits
    v∞ = {0.500000, 0.707107, 0.500000}      - result exact
    λ∞ = 4.707107
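The whole iteration above takes only a few lines of code; a sketch of steps 0-6 (numpy
assumed; the function name and tolerance are mine):

    import numpy as np

    def power_method(A, x0, tol=1e-8, max_iter=200):
        x = np.asarray(x0, dtype=float)
        lam_old = 0.0
        for k in range(max_iter):
            v = x / np.linalg.norm(x)                  # 1. normalization
            x = A @ v                                  # 2. power step
            lam = v @ x                                # 3. Rayleigh quotient v'Av
            if abs(lam - lam_old) <= tol * abs(lam):   # 4.-5. break-off test
                break
            lam_old = lam
        return lam, x / np.linalg.norm(x)              # 6. final results

    A = np.array([[4.0, 0.5, 0.0],
                  [0.5, 4.0, 0.5],
                  [0.0, 0.5, 4.0]])
    lam, v = power_method(A, [1.0, 1.0, 1.0])
    print(lam, v)   # ~ 4.707107, {0.5, 0.707107, 0.5}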
6.4.3. Shift of the eigenspectrum
Given the eigenvalue problem
    Ax = λx
let
    λ = μ + l ,   so that   Ax = μx + lx   →   (A - lI)x = μx
The shifted matrix A - lI has the eigenvalues μ = λ - l and the same eigenvectors x as A.
6.4.4. Application of the shift to acceleration of convergence to λ_max = λ1
The optimal shift, speeding up the convergence rate while evaluating λ1, is
    l_opt = (λ2 + λn)/2
(sketch: the eigenspectrum λn, ..., λ2, λ1 on an axis, shifted by l_opt)
Example

    l_opt = (λ2 + λ3)/2 = (1/2)(4 + 4 - 1/√2) = 4 - 1/(2√2) = 3.646447
- the optimal shift for the λ1 evaluation
Chapter 6 12/38 2008-01-23
Ą#ń#
11
#ś#
Ą# 1 ń#
4 - 4 + 0
1 0
ó#ś#ź# Ą#
ó# Ą#
2
2
ó## 2 2 # Ą#
ó# Ą#
ó#Ą#
11 1
#ś# 1 1
ó# Ą#
A - loptI = 4 - 4 + = 1 1
ó#Ą#
ś#ź#
ó# Ą#
22 2
2 2 2
# #
ó#Ą#
ó# Ą#
ó#
11
#ś#Ą# ó# 0 1 1 Ą#
ó# 04 - 4 +
ó# Ą#
ś#ź#Ą# Ł#
2
2 Ś#
ó#Ś#
2 2 Ą#
# #
Ł#
Let
x0 = 1, 1, 1
{ }
1 1 1
ż# #
v0 = , ,
# Ź#
3 3 3
# #
Ą# 1 ń# Ą# 1 ń#
1 0
ó#Ą# ó#1+ Ą#
22
ó#Ą# ó# Ą#
1 0.492799
Ą# ń#Ą# ń#
1 ó# 1 Ą# 11 ó# 1 Ą#
ó#1Ą#ó#0.781474Ą#
x1 = 1 1 =
=
ó#Ą# ó#2 + Ą#
ó# Ą#ó# Ą#
2
23 2 3 2
ó#Ą# ó# Ą#
ó#
Ł#1Ą#ó# Ś#
Ś#Ł#0.492799Ą#
11
ó#Ą# ó# Ą#
0 1
ó#Ą# ó#1+ Ą#
22
Ł#Ś# Ł# Ś#
Ą# 1 ń#
ó#1+ Ą#
2
ó# Ą#
11 ó# 1 Ą# 1 # 3 ś#
1 = 1, 1, 1 = 4 + = 1.020220
[]
ś#ź#
36
2 3ó#2 + 2 Ą# 2
# #
ó# Ą#
1
ó# Ą#
ó#1+ Ą#
2
Ł# Ś#
(1) = 1.020220 + 3.646447 = 4.666667
v1 = 0.470635, 0.746326, 0.470635
{ }
x2 = 0.539558, 0.734501, 0.539558
{ }
2 = 1.056047 (2) = 4.702493
v2 = 0.509439, 0.693501, 0.509439
{ }
Error estimation
4.702493 - 4.666667
(
2) == 0.007619
4.702493
(
2v) = v2 - v1 = 0.076171
v8 = 0.500073, 0.707173, 0.500073 - result exact within 3-4 digits
{ }
(8) = 4.707107 - result exact within 7 digits
Remark:
Due to shift number of iterations was reduced from 25 to 8.
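In code the shift is a one-line change; reusing power_method and A from the sketch
above:

    l_opt = 3.646447                                  # (lambda2 + lambda3)/2
    mu, v = power_method(A - l_opt * np.eye(3), [1.0, 1.0, 1.0])
    print(mu + l_opt)                                 # ~ 4.707107, in fewer iterations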
6.4.5. Application of a shift to acceleration of convergence to λ_min
When l = λ_max the solution procedure converges to λ_min.
The optimal convergence to λ_min is obtained when
    l_opt = (λ_{n-1} + λ_max)/2
(sketch: the spectra p(λ), p(λ - l_opt) and p(λ - λ_max) on the λ axis)
Examples
(i) Let
    l = λ1 = 4 + 1/√2 = 4.707107 ,   λ = μ + 4.707107

    A - λ1 I = [-1/√2   1/2     0   ]
               [1/2    -1/√2    1/2 ]
               [0       1/2    -1/√2]
Let
    x0 = {1, 1, 1} ,   v0 = x0/(x0ᵀx0)^(1/2) = {1/√3, 1/√3, 1/√3}

    x1 = (A - λ1 I) v0 = (1/(2√3)) {1 - √2, 2 - √2, 1 - √2} = {-0.119573, 0.169102, -0.119573}

    μ1 = v0ᵀ x1 = (1/6)(4 - 3√2) = -0.040440
    λ(1) = μ1 + l = -0.040440 + 4.707107 = 4.666667

    v1 = x1/(x1ᵀx1)^(1/2) = {-0.500000, 0.707107, -0.500000} ≈ {-1/2, 1/√2, -1/2}

    x2 = (A - λ1 I) v1 = {1/√2, -1, 1/√2}

    μ2 = v1ᵀ x2 = -√2
    λ(2) = μ2 + l = -√2 + 4 + 1/√2 = 4 - 1/√2 ≡ λ3

Here x2, μ2 and λ(2) are the exact results.
(ii) Let
    l_opt = (λ1 + λ2)/2 = (1/2)(4 + 1/√2 + 4) = 4 + 1/(2√2) = 4.353553

    A - l_opt I = [-1/(2√2)   1/2        0       ]
                  [1/2       -1/(2√2)    1/2     ]
                  [0          1/2       -1/(2√2) ]
Let
    x0 = {1, 1, 1} ,   v0 = {1/√3, 1/√3, 1/√3}

    x1 = (A - l_opt I) v0 = (1/(2√3)) {1 - 1/√2, 2 - 1/√2, 1 - 1/√2} = {0.084551, 0.373226, 0.084551}

    μ1 = v0ᵀ x1 = 0.313113
    λ(1) = μ1 + l_opt = 0.313113 + 4.353554 = 4.666667

    v1 = x1/(x1ᵀx1)^(1/2) = {0.215739, 0.952320, 0.215739}
    ....................
6.5. INVERSE ITERATION METHOD
6.5.1. The basic algorithm

    Ax = λx   →   A⁻¹Ax = λA⁻¹x ,   where A is a non-singular matrix

    A⁻¹x = (1/λ) x

Let
    1/λ = μ ,   μ_max = 1/λ_c
hence
    A⁻¹x = μx
Notice:
Here λ_c means the eigenvalue of A closest to zero.

INVERSE ITERATION METHOD

0. ASSUMPTION           x0
1. NORMALIZATION        v_k = x_k / (x_kᵀ x_k)^(1/2)
2. POWER STEP           x_{k+1} = A⁻¹ v_k , i.e. the solution of the simultaneous linear
                        algebraic equations A x_{k+1} = v_k :
                           A = LU         - LU decomposition (done only once)
                           L y_k = v_k    →  y_k      - step forward
                           U x_{k+1} = y_k →  x_{k+1} - step back
3. RAYLEIGH QUOTIENT    μ_{k+1} = v_kᵀ A⁻¹ v_k / (v_kᵀ v_k) = v_kᵀ x_{k+1} ,   λ(k+1) = μ_{k+1}⁻¹
4. ERROR ESTIMATION     ε_{k+1}(λ) = |μ_{k+1} - μ_k| / |μ_{k+1}| ,   ε_{k+1}(v) = ||v_{k+1} - v_k||
5. BREAK-OFF TEST       ε_{k+1}(λ) < γ_λ ?   ε_{k+1}(v) < γ_v ?   if No - go to 1, if Yes - go to 6.
6. FINAL RESULTS        λ_c ≈ μ_{k+1}⁻¹ ,   x_c ≈ x_{k+1}
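A sketch of the algorithm (scipy assumed for the LU factorization; the function name is
mine). The matrix is factorized once, and each power step then costs only one forward
and one back substitution:

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def inverse_iteration(A, x0, tol=1e-8, max_iter=200):
        lu, piv = lu_factor(A)                 # A = LU, done only once
        x = np.asarray(x0, dtype=float)
        mu_old = 0.0
        for k in range(max_iter):
            v = x / np.linalg.norm(x)          # 1. normalization
            x = lu_solve((lu, piv), v)         # 2. power step: solve A x_{k+1} = v_k
            mu = v @ x                         # 3. Rayleigh quotient for A^{-1}
            if abs(mu - mu_old) <= tol * abs(mu):
                break
            mu_old = mu
        return 1.0 / mu, x / np.linalg.norm(x)

    A = np.array([[4.0, 0.5, 0.0],
                  [0.5, 4.0, 0.5],
                  [0.0, 0.5, 4.0]])
    lam, v = inverse_iteration(A, [1.0, 1.0, 1.0])
    print(lam, v)   # ~ 3.292893 (eigenvalue closest to zero), v ~ ±{0.5, -0.707107, 0.5}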
Example
Let

    A = [4    1/2  0  ]
        [1/2  4    1/2]
        [0    1/2  4  ]

Matrix decomposition

    A = LLᵀ ,   L = [2    0      0        ]
                    [1/4  √63/4  0        ]
                    [0    2/√63  2√(62/63)]

Let
    x0 = {1, 1, 1} ,   v0 = x0/(x0ᵀx0)^(1/2) = {1/√3, 1/√3, 1/√3}
Solve A x1 = v0 :
    x1 = {7, 6, 7} / (31√3) = {0.130369, 0.111745, 0.130369}
    μ1 = v0ᵀ x1 = 20/93 = 0.215054 ,   λ(1) = μ1⁻¹ = 4.649995
    v1 = x1/(x1ᵀx1)^(1/2) = {7, 6, 7}/√134 = {0.604708, 0.518321, 0.604708}

Solve A x2 = v1 :
    x2 = {50, 34, 50} / (31√134) = {0.139334, 0.094747, 0.139334}
    μ2 = v1ᵀ x2 = 904/4154 = 0.217622 ,   λ(2) = μ2⁻¹ = 4.595133
    v2 = x2/(x2ᵀx2)^(1/2) = {25, 17, 25}/(9√19) = {0.637266, 0.433341, 0.637266}

Solve A x3 = v2 :
    x3 = {0.150477, 0.070716, 0.150477}
    ...........................................
    μ_final = 0.303684 ,   λ_final = μ_final⁻¹ = 3.292893 = λ3
    v_final = {-0.500000, 0.707107, -0.500000}

Remark:
Convergence is initially slow because of the unfortunate choice of x0.
6.5.2. Use of the inverse and a shift in order to find the eigenvalue λ_c closest to a
given one
Let
    λ = μ + l
CONCEPT
The same as in the case of the inverse iteration method, but:
- A is replaced now by A - lI
- λ = μ⁻¹ + l
(sketch: the spectra of the inverse, power, and inverse + shift transformations of p(λ))
Example

    A = [4    1/2  0  ]
        [1/2  4    1/2]
        [0    1/2  4  ]
Let
    l = 3.75
Thus
    Ã = A - 3.75 I = [1/4  1/2  0  ]
                     [1/2  1/4  1/2]
                     [0    1/2  1/4]
Error estimation
    ε_{k+1}(λ) = |μ_{k+1} - μ_k| / |μ_{k+1}| ,   ε_{k+1}(v) = ||v_{k+1} - v_k||
Let us first assume a symmetric starting vector
    x0 = {1, 1, 1} ,   v0 = x0/(x0ᵀx0)^(1/2) = (1/√3) {1, 1, 1}

Solve Ã x1 = v0 :
    x1 = (4/(7√3)) {1, 3, 1} = {0.329914, 0.989743, 0.329914}
    v1 = x1/(x1ᵀx1)^(1/2) = {1, 3, 1}/√11 = {0.301511, 0.904534, 0.301511}
    μ1 = v0ᵀ x1 = 20/21 = 0.952381 ,   λ(1) = μ1⁻¹ + l = 21/20 + 3.75 = 4.800000
error estimation
    ε1(λ) = 1.00 ,   ε1(v) = 2.54·10⁻¹

Solve Ã x2 = v1 :
    x2 = (4/(7√11)) {5, 1, 5} = {0.861461, 0.172292, 0.861461}
    v2 = x2/(x2ᵀx2)^(1/2) = {5, 1, 5}/√51 = {0.700140, 0.140028, 0.700140}
    μ2 = v1ᵀ x2 = 52/77 = 0.675325 ,   λ(2) = μ2⁻¹ + l = 77/52 + 3.75 = 5.230769
error estimation
    ε2(λ) = 8.24·10⁻² ,   ε2(v) = 5.48·10⁻¹
Solve Ã x3 = v2 :
    x3 = (4/(7√51)) {-3, 19, -3} = {-0.240048, 1.520304, -0.240048}
    v3 = x3/(x3ᵀx3)^(1/2) = {-3, 19, -3}/√379 = {-0.154100, 0.975964, -0.154100}
    μ3 = v2ᵀ x3 = -44/357 = -0.123249 ,   λ(3) = μ3⁻¹ + l = -357/44 + 3.75 = -4.363636
error estimation
    ε3(λ) = 2.20 ,   ε3(v) = 6.57·10⁻¹

Solve Ã x4 = v3 :
    x4 = (4/(7√379)) {41, -31, 41} = {1.203444, -0.909922, 1.203444}
    v4 = x4/(x4ᵀx4)^(1/2) = {41, -31, 41}/√4323 = {0.623579, -0.471486, 0.623579}
    μ4 = v3ᵀ x4 = -1.258952 ,   λ(4) = μ4⁻¹ + l = -1/1.258952 + 3.75 = 2.955689
error estimation
    ε4(λ) = 2.48 ,   ε4(v) = 4.81·10⁻¹
Solve Ã x5 = v4 :
    x5 = (4/(7√4323)) {-103, 195, -103} = {-0.895172, 1.694743, -0.895172}
    v5 = x5/(x5ᵀx5)^(1/2) = {-103, 195, -103}/√59243 = {-0.423174, 0.801154, -0.423174}
    μ5 = v4ᵀ x5 = -1.915469 ,   λ(5) = μ5⁻¹ + l = -1/1.915469 + 3.75 = 3.227935
error estimation
    ε5(λ) = 8.43·10⁻² ,   ε5(v) = 2.51·10⁻¹

Solve Ã x6 = v5 :
    x6 = (4/(7√59243)) {493, -607, 493} = {1.157418, -1.425057, 1.157418}
    v6 = x6/(x6ᵀx6)^(1/2) = {493, -607, 493}/√854547 = {0.533309, -0.656630, 0.533309}
    μ6 = v5ᵀ x6 = -2.121268 ,   λ(6) = μ6⁻¹ + l = -1/2.121268 + 3.75 = 3.278584
error estimation
    ε6(λ) = 1.54·10⁻² ,   ε6(v) = 1.23·10⁻¹
...........................................
Solve Ã x10 = v9 = {-0.496219, 0.712414, -0.496219} :
    x10 = {1.097741, -1.541308, 1.097741}
    v10 = {0.501796, -0.704558, 0.501796}
    μ10 = v9ᵀ x10 = -2.187631 ,   λ(10) = μ10⁻¹ + l = -1/2.187631 + 3.75 = 3.292884
error estimation
    ε10(λ) = 3.94·10⁻⁵ ,   ε10(v) = 6.43·10⁻³
...........................................
Solve Ã x28 = v27 = {-0.500000, 0.707107, -0.500000} :
    v28 = {0.500000, -0.707107, 0.500000}
    μ28 = v27ᵀ x28 = -2.187673 ,   λ(28) = μ28⁻¹ + l = -1/2.187673 + 3.75 = 3.292893
error estimation
    ε28(λ) = 1.35·10⁻¹⁶ ,   ε28(v) = 1.35·10⁻⁸

Solve Ã x29 = v28 :
    v29 = {-0.500000, 0.707107, -0.500000}
    μ29 = v28ᵀ x29 = -2.187673 ,   λ(29) = μ29⁻¹ + l = 3.292893
error estimation
    ε29(λ) = 0 ,   ε29(v) = 1.58·10⁻⁸
Solve Ã x30 = v29 :
    v30 = {0.500000, -0.707107, 0.500000}
    μ30 = v29ᵀ x30 = -2.187673 ,   λ(30) = -1/2.187673 + 3.75 = 3.292893
error estimation
    ε30(λ) = 1.35·10⁻¹⁶ ,   ε30(v) = 2.75·10⁻⁸
...........................................
Solve Ã x50 = v49 = {-0.500514, 0.707107, -0.500514} :
    x50 = {1.085564, -1.546912, 1.102099}
    v50 = x50/(x50ᵀx50)^(1/2) = {0.496214, -0.707097, 0.503772}
    μ50 = v49ᵀ x50 = -2.187662 ,   λ(50) = μ50⁻¹ + l = -1/2.187662 + 3.75 = 3.292882
error estimation
    ε50(λ) = 2.35·10⁻⁶ ,   ε50(v) = 4.77·10⁻³
...........................................
Solve Ã x58 = v57 = {-0.712203, 0.664212, -0.227135} :
    v58 = {0.023209, -0.588084, 0.808467}
    μ58 = v57ᵀ x58 = -1.459722 ,   λ(58) = μ58⁻¹ + 3.75 = 3.064938
error estimation
    ε58(λ) = 5.62·10⁻² ,   ε58(v) = 5.22·10⁻¹
Solve Ã x59 = v58 :
    v59 = {-0.863853, 0.448093, 0.230153}
    μ59 = v58ᵀ x59 = -0.279919 ,   λ(59) = μ59⁻¹ + 3.75 = 0.177535
error estimation
    ε59(λ) = 1.63·10¹ ,   ε59(v) = 5.95·10⁻¹

Solve Ã x60 = v59 :
    v60 = {-0.440870, -0.289111, 0.849734}
    μ60 = v59ᵀ x60 = 1.515181 ,   λ(60) = μ60⁻¹ + 3.75 = 4.409987
error estimation
    ε60(λ) = 9.60·10⁻¹ ,   ε60(v) = 4.43·10⁻¹
...........................................
Solve Ã x70 = v69 = {-0.708086, 0.001387, 0.707643} :
    v70 = {-0.706570, -0.000759, 0.706125}
    μ70 = v69ᵀ x70 = 3.999885 ,   λ(70) = μ70⁻¹ + l = 1/3.999885 + 3.75 = 4.000001
error estimation
    ε70(λ) = 8.72·10⁻⁷ ,   ε70(v) = 1.29·10⁻³
...........................................
Finally
Solve Ã x89 = v88 = {-0.707107, 0.000000, 0.707107} :
    v89 = {-0.707107, 0.000000, 0.707107}
    μ89 = μ_final = 4.000000 ,   λ_final = μ_final⁻¹ + l = 0.25 + 3.75 = 4.000000 = λ_second
    v89 = v_final = {-0.707107, 0.000000, 0.707107} = v_second
error estimation
    ε89(λ) = 0 ,   ε89(v) = 1.35·10⁻⁸

Remarks
- for the error threshold 10⁻⁷ only 86 iterations are sufficient, because
    ε86(λ) = 3.55·10⁻¹⁵ ,   ε86(v) = 8.27·10⁻⁸
- notice: after 25 iterations we also have a small estimated error,
    ε25(λ) = 9.17·10⁻¹⁵ ,   ε25(v) = 9.86·10⁻⁸
  however, the results
    λ25 = 3.292893 ≈ λ_third ,   v25 = {-0.500000, 0.707107, -0.500000} ≈ v_third
  correspond to a false solution, namely to the third eigenvalue λ_third and the
  eigenvector v_third.
(figures: logarithm of the eigenvalue error and logarithm of the eigenvector error versus
the number of iterations (0-100), both with sharp jumps at iterations 28 and 59; and λ(k)
versus the number of iterations, passing near a = 3.292893 before settling at a = 4.000000,
with a = 4.707107 also marked)
(ii) Let us now use a non-symmetric starting vector
    x0 = {1, 1, -1} ,   v0 = x0/(x0ᵀx0)^(1/2) = (1/√3) {1, 1, -1}

Solve Ã x1 = v0 :
    x1 = (4/(7√3)) {9, -1, -5}
    v1 = x1/(x1ᵀx1)^(1/2) = {9, -1, -5}/√107
    μ1 = v0ᵀ x1 = 52/21 = 2.476191 ,   λ(1) = μ1⁻¹ + l = 21/52 + 3.75 = 4.153846

Solve Ã x2 = v1 :
    x2 = (4/(7√107)) {45, 9, -53}
    μ2 = v1ᵀ x2 = 3.530040 ,   λ(2) = μ2⁻¹ + l = 1/3.530040 + 3.75 = 4.033283
    .....................................................
Finally
After 39 iterations we obtain, as before,
    λ_final = μ_final⁻¹ + l = 1/4 + 3.75 = 4.000000 = λ2
    v = {0.707107, 0.000000, -0.707107} ≈ {1/√2, 0, -1/√2}
The error level is 10⁻¹⁰ then.

Remark
Due to the different starting vector, convergence in case (ii) proved to be much faster than
in case (i).
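In code the combination is again a one-line change; reusing inverse_iteration from the
sketch in section 6.5.1 (a starting vector with a nonzero component along the sought
eigenvector, such as {1, 1, -1}, avoids the slow case (i) above):

    l = 3.75
    lam_shift, v = inverse_iteration(A - l * np.eye(3), [1.0, 1.0, -1.0])
    print(lam_shift + l)    # ~ 4.000000 - the eigenvalue of A closest to l = 3.75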
6.6. THE GENERALIZED EIGENVALUE PROBLEM

    Ax = λBx

e.g. from FEM, where A = K, B = M;
K - stiffness matrix, M - inertia matrix.
If
    B = Bᵀ and xᵀBx > 0    (B symmetric, positive definite)
then
    B = LLᵀ ,   Lᵀx = y   →   x = L⁻ᵀy
    Ax = λLLᵀx   →   AL⁻ᵀy = λLy
    L⁻¹AL⁻ᵀy = λy
i.e.
    Ãy = λy ,   where Ã = L⁻¹AL⁻ᵀ
Remark
The generalized eigenvalue problem was transformed into the standard one with the
same eigenvalues λ preserved.
SOLUTION ALGORITHM BASED ON THE POWER METHOD
(search for the largest eigenvalue λ_max)

0. ASSUMPTION           y0 ;   B = LLᵀ - matrix decomposition (done only once)
1. NORMALIZATION        u_n = y_n / (y_nᵀ y_n)^(1/2)
2. POWER STEP           y_{n+1} = L⁻¹AL⁻ᵀ u_n . How it is done:
                           Lᵀ x_n = u_n       →   x_n      (step back)
                           L y_{n+1} = A x_n  →   y_{n+1}  (step forward)
3. RAYLEIGH QUOTIENT    λ_n = u_nᵀ (L⁻¹AL⁻ᵀ) u_n / (u_nᵀ u_n) = u_nᵀ y_{n+1} = x_nᵀ A x_n
                        (here u_nᵀ u_n = 1)
4. ERROR ESTIMATION     ε_n(λ) = |λ_n - λ_{n-1}| / |λ_n| ,   ε_n(u) = ||u_n - u_{n-1}||
5. BREAK-OFF TEST       ε_n(λ) < B_λ ?   ε_n(u) < B_u ?   (B_λ, B_u - the imposed required
                        precision of the error evaluations); if No - go to 1, if Yes - go to 6.
6. RESULTS              λ_max ≈ λ_n ,   x1 ≈ x_n
Example

    Ax = λBx :
    [7 4 3] [x1]        [8 1 3] [x1]
    [4 8 2] [x2]  =  λ  [1 6 4] [x2]
    [3 2 6] [x3]        [3 4 4] [x3]

(i) DIRECT APPROACH
1. Decompose B = LLᵀ

    [8 1 3]   [2.828427  0         0       ] [2.828427  0.353553  1.060660]
    [1 6 4] = [0.353553  2.423840  0       ] [0         2.423840  1.495561]
    [3 4 4]   [1.060660  1.495561  0.798935] [0         0         0.798935]

2. Find the inverse matrices L⁻¹, L⁻ᵀ

    L⁻¹ = [ 0.353553   0          0       ]      L⁻ᵀ = [0.353553  -0.051571  -0.372836]
          [-0.051571   0.412568   0       ] ,          [0          0.412568  -0.772303]
          [-0.372836  -0.772303   1.251663]            [0          0          1.251663]

3. Find Ã = L⁻¹AL⁻ᵀ

    Ã = [ 0.874995   0.455828   -0.687333]
        [ 0.455828   1.210105   -2.031250]
        [-0.687333  -2.031250   10.781520]

4. Find the eigenvalues λ and eigenvectors u of Ã by any standard method

    λ1 = 11.251110 ,   λ2 = 1.116762 ,   λ3 = 0.498742

    u1 = {-0.073535, -0.200948, 0.976838} ,
    u2 = { 0.717447,  0.669696, 0.191774} ,
    u3 = {-0.692722,  0.714931, 0.094923}

5. Find the eigenvectors of the original problem, x = L⁻ᵀu :

    [0.353553  -0.051571  -0.372836] [-0.073535   0.717447  -0.692722]
    [0          0.412568  -0.772303] [-0.200948   0.669696   0.714931] =
    [0          0          1.251663] [ 0.976838   0.191774   0.094923]

      [-0.379836  0.147619  -0.317174]                     [-0.248290  0.476833  -0.775220]
    = [-0.837319  0.128188   0.221647]  → (normalized) →   [-0.547337  0.414068   0.541741]
      [ 1.222672  0.240036   0.118812]                     [ 0.799233  0.775357   0.290393]

In the last matrix all eigenvectors x are normalized.
(ii) MORE EFFICIENT APPROACH
2a. Solve simultaneous equations instead of inverting L:

Step back:   Lᵀ x_n = u_n   →   x_n

    [2.828427  0.353553  1.060660] [x1]   [u1]
    [0         2.423840  1.495561] [x2] = [u2]
    [0         0         0.798935] [x3]   [u3]

Step forward:   L y_{n+1} = A x_n   →   y_{n+1}

    [2.828427  0         0       ] [y1]   [7 4 3] [x1]
    [0.353553  2.423840  0       ] [y2] = [4 8 2] [x2]
    [1.060660  1.495561  0.798935] [y3]   [3 2 6] [x3]

Let
    y1 = {1, 1, 1}
    u1 = y1/(y1ᵀy1)^(1/2) = {0.577350, 0.577350, 0.577350}

    Lᵀ x1 = u1   →   x1 = {-0.040907, -0.207693, 0.722648}
    A x1 = {1.050824, -0.379873, 3.797781}
    λ(1) = x1ᵀ A x1 = 2.780369

    L y2 = A x1   →   y2 = {0.371522, -0.210916, 4.655135}
    u2 = y2/(y2ᵀy2)^(1/2) = {0.079475, -0.045119, 0.995815}

    Lᵀ x2 = u2   →   x2 = {-0.340850, -0.787685, 1.246425}
    A x2 = {-1.797415, -5.172031, 4.880630}
    λ(2) = x2ᵀ A x2 = 10.769220
    ..................
Finally
    Lᵀ x = u   →   x = {-0.379836, -0.837319, 1.222672}
    A x = {-2.340110, -5.772553, 4.521886}
    λ_max = xᵀAx = 11.251110
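Both approaches can be written compactly with library routines; a sketch of the reduction
and solution (scipy assumed; triangular solves replace the explicit inverses of step 2):

    import numpy as np
    from scipy.linalg import cholesky, solve_triangular

    A = np.array([[7.0, 4.0, 3.0],
                  [4.0, 8.0, 2.0],
                  [3.0, 2.0, 6.0]])
    B = np.array([[8.0, 1.0, 3.0],
                  [1.0, 6.0, 4.0],
                  [3.0, 4.0, 4.0]])
    L = cholesky(B, lower=True)                         # B = L L^T
    T = solve_triangular(L, A, lower=True)              # L^{-1} A
    A_tilde = solve_triangular(L, T.T, lower=True).T    # L^{-1} A L^{-T}
    lam, Y = np.linalg.eigh(A_tilde)
    X = solve_triangular(L, Y, lower=True, trans='T')   # x = L^{-T} y
    print(lam)   # ~ [0.498742, 1.116762, 11.251110]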
6.7. THE JACOBI METHOD
The Jacobi method for finding all of the eigenvalues and eigenvectors of a symmetric,
positive definite matrix A is based on consecutive simple orthogonal transformations that
convert the matrix A into diagonal form:

    Ax = λx ,   QᵀAQ = D (diagonal) ,   A = [a_ij]

Let U be a plane rotation matrix in the (p, q) plane: U equals the identity matrix except
for four entries,

    u_pp = c ,   u_pq = s ,   u_qp = -s ,   u_qq = c

and let

    A' = UᵀAU ,   A' = [a'_ij]

Three zones of A' are distinguished: zone I - the entries a'_pp, a'_pq, a'_qp, a'_qq;
zone II - the remaining entries of the p-th and q-th rows and columns; zone III - all
the other entries.
After the transformation UᵀAU we have:

zone III:   a'_ij = a_ij
except:
zone I (the pp, pq, qp, qq terms):
    a'_pp = c²a_pp + s²a_qq - 2cs a_pq
    a'_qq = c²a_qq + s²a_pp + 2cs a_pq
    a'_pq = a'_qp = (c² - s²) a_pq + cs (a_pp - a_qq)
zone II:
  p-th and q-th rows (j ≠ p and j ≠ q):
    a'_pj = c a_pj - s a_qj
    a'_qj = s a_pj + c a_qj
  p-th and q-th columns (i ≠ p and i ≠ q):
    a'_ip = c a_ip - s a_iq
    a'_iq = s a_ip + c a_iq
6.7.1. Conditions imposed on the transformation
1. Orthogonality:
    U⁻¹ = Uᵀ ,  because UᵀU = I   →   c² + s² = 1
2. Annihilation of the a'_pq term:
    a'_pq = a'_qp = 0
In order to satisfy the first condition we assume
    c = cos φ ,   s = sin φ
hence
    (cos²φ - sin²φ) a_pq + sin φ cos φ (a_pp - a_qq) = 0
Since
    cos²φ - sin²φ = cos 2φ ,   sin φ cos φ = (1/2) sin 2φ
the second condition
    a'_pq = a_pq cos 2φ + (1/2)(a_pp - a_qq) sin 2φ = 0
yields
    tan 2φ = -2a_pq / (a_pp - a_qq)
Remarks:
- An appropriately chosen rotation of the coordinate system by the angle φ provides
annihilation of a'_pq.
- In fact we need cos φ and sin φ rather than φ itself. They may be found from the
following formulas:
    α = (1/2)(a_pp - a_qq)
    β = (α² + a_pq²)^(1/2)
    c = cos φ = [1/2 + |α|/(2β)]^(1/2)
    s = sin φ = -a_pq sgn(α) / (2βc) ,   where sgn(α) = { 1 for α > 0 ; -1 for α < 0 ; 0 for α = 0 }
SOLUTION PROCEDURE
We have to annihilate all off-diagonal terms; however, subsequent annihilation steps
usually destroy the previously annihilated ones. Fortunately, the process converges to the
final solution. The final matrix, though usually not strictly diagonal, has off-diagonal terms
as small as required when compared with the diagonal ones.
Thus the solution procedure is continued until a break-off test is satisfied. The test is based
either on the average or on the maximum error norms defined below:
Average norms

    v_a = [ (1/(n² - n)) Σ_{i=1}^n Σ_{j=1, j≠i}^n a_ij² ]^(1/2)   - the average norm of the off-diagonal elements

    w_a = [ (1/n) Σ_{i=1}^n a_ii² ]^(1/2)                          - the average norm of the diagonal elements

Maximum norms

    v_m = max_{i,j, i≠j} |a_ij|   - the maximum norm of the off-diagonal elements
    w_m = max_i |a_ii|            - the maximum norm of the diagonal elements

BREAK-OFF TEST
    v ≤ ε w
where v, w are equal either to v_a, w_a or to v_m, w_m, and ε is a given threshold,
e.g. ε = 10⁻⁶.
Finally

    Ukᵀ ... U2ᵀ U1ᵀ A U1 U2 ... Uk ≈ QᵀAQ = D ,   Q = U1 U2 ... Uk

Hence the matrix decomposition is
    A = Q⁻ᵀDQ⁻¹ = QDQᵀ
Q - orthogonal matrix; its columns are eigenvectors of the matrix A
D - diagonal matrix composed of eigenvalues of the matrix A
If it is desired to compute the eigenvectors along with the eigenvalues, this can be
accomplished by initializing the matrix Q as I, and then modifying Q along with the
modifications of A in the following way:
    p-th column:   q'_ip = c q_ip - s q_iq
    q-th column:   q'_iq = s q_ip + c q_iq
All other columns remain unchanged, i.e. q'_ij = q_ij.
Columns of the final Q are the eigenvectors of the original matrix A, while its eigenvalues
are given by the diagonal terms of the matrix D.
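A compact sketch of the whole procedure built from the rotation formulas above (numpy
assumed; the function name is mine; for clarity each rotation forms the full matrix U,
although the zone formulas allow O(n) updates):

    import numpy as np

    def jacobi_eigen(A, eps=1e-6, max_sweeps=50):
        A = A.astype(float).copy()
        n = A.shape[0]
        Q = np.eye(n)
        for _ in range(max_sweeps):
            off = np.sqrt(np.sum(A**2) - np.sum(np.diag(A)**2))
            if off < eps * np.max(np.abs(np.diag(A))):       # break-off test
                break
            for p in range(n - 1):
                for q in range(p + 1, n):
                    if A[p, q] == 0.0:
                        continue
                    alpha = 0.5 * (A[p, p] - A[q, q])
                    beta = np.hypot(alpha, A[p, q])
                    if alpha == 0.0:
                        c = s = 1.0 / np.sqrt(2.0)           # 45-degree rotation
                    else:
                        c = np.sqrt(0.5 + abs(alpha) / (2.0 * beta))
                        s = -A[p, q] * np.sign(alpha) / (2.0 * beta * c)
                    U = np.eye(n)
                    U[p, p] = U[q, q] = c
                    U[p, q], U[q, p] = s, -s
                    A = U.T @ A @ U                          # annihilates A[p, q]
                    Q = Q @ U
        return np.diag(A), Q

    A = np.array([[4.0, 2.0,  3.0, 7.0],
                  [2.0, 8.0,  5.0, 1.0],
                  [3.0, 5.0, 12.0, 9.0],
                  [7.0, 1.0,  9.0, 7.0]])
    lam, Q = jacobi_eigen(A)
    print(np.sort(lam))   # ~ [-3.233881, 3.739112, 7.450091, 23.04466]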
Example

    A = [4  2   3  7]
        [2  8   5  1]
        [3  5  12  9]
        [7  1   9  7]

The required precision is given by ε = 10⁻⁶.

SOLUTION PROCEDURE
Average norm of the off-diagonal terms of the matrix A:

    v_a⁰ = [ (2/(4² - 4))(2² + 3² + 7² + 5² + 1² + 9²) ]^(1/2) = 5.307228

In order to find a reasonable off-diagonal element of the matrix A we seek the first
element greater than v_a⁰. Thus we have
    a41 = a14 = 7 > v_a⁰ = 5.307228   →   p = 1 , q = 4
and the element a41 = a14 = 7 will be annihilated. We find then:
    α = (1/2)(a11 - a44) = (1/2)(4 - 7) = -1.500000
    β = (a14² + α²)^(1/2) = (7² + 1.5²)^(1/2) = 7.158911
    c = [1/2 + |α|/(2β)]^(1/2) = [1/2 + 1.5/(2·7.158911)]^(1/2) = 0.777666
    s = -a14 sgn(α)/(2βc) = (-7)(-1)/(2·7.158911·0.777666) = 0.628678

Now we find
    a'11 = c²a11 + s²a44 - 2cs a14 = (0.777666)²·4 + (0.628678)²·7 - 2(0.777666)(0.628678)·7 = -1.658911
    a'44 = c²a44 + s²a11 + 2cs a14 = (0.777666)²·7 + (0.628678)²·4 + 2(0.777666)(0.628678)·7 = 12.658910
    a'12 = a'21 = c a12 - s a42 = 0.777666·2 - 0.628678·1 = 0.926655
    a'13 = a'31 = c a13 - s a43 = 0.777666·3 - 0.628678·9 = -3.325099
    a'42 = a'24 = s a12 + c a42 = 0.628678·2 + 0.777666·1 = 2.035021
    a'43 = a'34 = s a13 + c a43 = 0.628678·3 + 0.777666·9 = 8.885027

    A' = [-1.658911   0.926655  -3.325099    0        ]
         [ 0.926655   8          5           2.035021 ]
         [-3.325099   5         12           8.885027 ]
         [ 0          2.035021   8.885027   12.658910 ]
Search for the Q matrix: initially Q = I; after the first iteration
    q'11 = c q11 - s q14 = 0.777666·1 - 0.628678·0 = 0.777666
    q'21 = c q21 - s q24 = 0 ,   q'31 = c q31 - s q34 = 0
    q'41 = c q41 - s q44 = -0.628678
    q'14 = s q11 + c q14 = 0.628678
    q'24 = s q21 + c q24 = 0 ,   q'34 = s q31 + c q34 = 0
    q'44 = s q41 + c q44 = 0.777666

    Q = [ 0.777666  0  0  0.628678]
        [ 0         1  0  0       ]
        [ 0         0  1  0       ]
        [-0.628678  0  0  0.777666]
BREAK-OFF TESTS
Let the required precision of the solution be ε = 10⁻⁶.
(i) For the initial matrix
Average norms
    w_a⁰ = [(1/4)(4² + 8² + 12² + 7²)]^(1/2) = 8.261356                       - diagonal
    v_a⁰ = [(2/(4² - 4))(2² + 3² + 7² + 5² + 1² + 9²)]^(1/2) = 5.307228       - off-diagonal
    v_a⁰/w_a⁰ = 5.307228/8.261356 = 0.642416 > ε = 10⁻⁶
Maximum norms
    w_m⁰ = max_i |a_ii| = 12        - diagonal terms
    v_m⁰ = max_{i≠j} |a_ij| = 9     - off-diagonal terms
    v_m⁰/w_m⁰ = 9/12 = 0.75 > ε = 10⁻⁶
(ii) After the first iteration
Average norms
    w_a¹ = [(1/4)((-1.658911)² + 8² + 12² + 12.65891²)]^(1/2) = 9.63068
    v_a¹ = [(2/(4² - 4))(0.926655² + (-3.325099)² + 0² + 5² + 2.035021² + 8.885027²)]^(1/2) = 4.472136
Maximum norms
    w_m¹ = 12.65891   - diagonal terms
    v_m¹ = 8.885027   - off-diagonal terms

    v_a¹/w_a¹ = 4.4721357/9.63068 = 0.464363 > ε ,   v_m¹/w_m¹ = 8.885027/12.65891 = 0.701879 > ε
Finally, after all iterations, the following matrices are obtained:

    A' = [-3.233881     4.89·10⁻⁹    -6.06·10⁻¹⁵    1.72·10⁻⁵  ]
         [ 4.89·10⁻⁹    3.739112      0            -3.78·10⁻¹⁰ ]
         [-6.06·10⁻¹⁵   0            23.04466       5.14·10⁻⁶  ]
         [ 1.72·10⁻⁵   -3.78·10⁻¹⁰    5.14·10⁻⁶     7.450091   ]

    Q = [ 0.580781   0.678728   0.345658    0.287300]
        [-0.203742   0.374957   0.311701   -0.848963]
        [ 0.365143  -0.617460   0.688355   -0.107607]
        [-0.698465   0.132200   0.556353    0.430280]
Precision of the final solution
Average norms
    w_a = [(1/4)((-3.233881)² + 3.739112² + 23.04466² + 7.450091²)]^(1/2) = 12.359199
    v_a = [(2/(4² - 4))((4.89·10⁻⁹)² + (-6.06·10⁻¹⁵)² + (1.72·10⁻⁵)² + 0²
           + (-3.78·10⁻¹⁰)² + (5.14·10⁻⁶)²)]^(1/2) = 7.33·10⁻⁶
Break-off test
    v_a/w_a = 7.33·10⁻⁶/12.36 = 5.93·10⁻⁷ < ε = 10⁻⁶
Maximum norms
    w_m = 23.04466 ,   v_m = 1.72·10⁻⁵
Break-off test
    v_m/w_m = 1.72·10⁻⁵/23.04466 = 7.46·10⁻⁷ < ε = 10⁻⁶
7. ILL-CONDITIONED SYSTEMS OF SIMULTANEOUS LINEAR EQUATIONS
7.1. INTRODUCTION
Example
    I:    2x1 + 3x2 = 5
          2x1 + 3.1x2 = 5.1        →   x1 = x2 = 1
    II:   2x1 + 3x2 = 5
          1.999x1 + 3x2 = 4.99     →   x1 = 10 ,  x2 = -5

ILL-CONDITIONED SLAE
(figure: the lines 2x1 + 3x2 = 5, 2x1 + 3.1x2 = 5.1 and 1.999x1 + 3x2 = 4.99 are nearly
parallel; small changes of the coefficients move the intersection from (1, 1) to (10, -5))
7.2. SOLUTION APPROACH
Questions:
1. What is the phenomenon of ill-conditioning?
2. How may ill-conditioning be measured?
3. What may be done in order to overcome the ill-conditioning problem?
AD 1. THE ILL-CONDITIONING PHENOMENON
Thus a very small change in the coefficients gives rise to a large change in the solution.
Such behavior characterizes what is called an ill-conditioned system. Ill-conditioning causes
problems, since we cannot carry out computations with infinite precision.
AD 2. HOW TO MEASURE ILL-CONDITIONING
(i) The first approach
Let
    Ax = b ,     xT ≡ A⁻¹b
    Axc = b + r ,   r ≡ Axc - b
    xT - xc = -A⁻¹r ≡ -e
where
    xT - the true solution
    xc - the computed solution
    r  - the residuum; due to round-off errors r may differ from zero
    e  - the solution error
From the properties of a norm
    ||By|| ≤ ||B|| ||y||
Also, since y = B⁻¹By, we have
    ||y|| ≤ ||B⁻¹|| ||By||   →   ||y|| / ||B⁻¹|| ≤ ||By|| ≤ ||B|| ||y||
Hence we may write
    ||r|| / ||A|| ≤ ||A⁻¹r|| = ||e|| ≤ ||A⁻¹|| ||r||
    ||b|| / ||A|| ≤ ||A⁻¹b|| = ||xT|| ≤ ||A⁻¹|| ||b||   →   1/(||A⁻¹|| ||b||) ≤ 1/||xT|| ≤ ||A|| / ||b||
since
    A⁻¹b = xT and A⁻¹r = e
Multiplying both inequalities we get

    (1/(||A|| ||A⁻¹||)) ||r||/||b||   ≤   ||e||/||xT||   ≤   ||A|| ||A⁻¹|| ||r||/||b||
Assuming
    k = ||A|| ||A⁻¹||   - the conditioning number
we find the following evaluation

    (1/k) ||r||/||b||   ≤   ||e||/||xT||   ≤   k ||r||/||b||

Here ||e||/||xT|| is the relative error of the solution, and ||r||/||b|| is the relative residuum.
Thus the quality of the computed solution xc depends on the value of
    k = ||A|| ||A⁻¹|| ,   k ≥ 1
called the condition number (cond A). Using the spectral norm of a matrix
    ||A||_S = { λ_max(A*A) }^(1/2)
induced by the Euclidean vector norm
    ||x|| = ( Σ_{i=1}^N x_i² )^(1/2)
where
    ρ(A) = max_k |λ_k|
denotes the spectral radius, we get
    k = ||A||_S ||A⁻¹||_S = |λ|_max / |λ|_min
Here k ≥ 1 (the conditioning number) is the measure of the system conditioning
(ill- or well-).
(ii) The second approach
Consider the effects of rounding errors in the coefficient matrix A.
Let
    Ac = A + AE
where
    Ac - the computed matrix, A - the exact matrix, AE - the matrix of perturbations
Let
    Ac xc = b   →   xc
Then
    xT = A⁻¹b = A⁻¹Ac xc = A⁻¹(A + AE) xc   →   xT - xc = A⁻¹AE xc
Hence
    ||xT - xc|| ≤ ||A⁻¹|| ||AE|| ||xc|| = k (||AE||/||A||) ||xc|| = cond(A) (||AE||/||A||) ||xc||
where
    cond(A) ≡ k = ||A|| ||A⁻¹||
from which we have the following evaluation

    ||xT - xc|| / ||xc||   ≤   k ||AE|| / ||A||

Thus the computed solution can vary from the theoretical solution, as a result of round-off
errors, by an amount directly proportional to the conditioning number k.

On computational precision
    q ≈ p - γ log k
p - introduced precision
q - obtained precision
γ - norm-dependent coefficient (for the spectral norm γ = 1)
Examples
(i)
    A1 = [2  3  ]        k = λ1/λ2 = 5.060478/0.039522 = 128.04
         [2  3.1] ,
    log k = 2.107   ≈  2 digits lost
(ii)
    A2 = [2      3]      k ≈ 4.9994/0.0006 = 8331.33
         [1.999  3] ,
    log k = 3.9207  ≈  4 digits lost

How important are "small perturbations"? Compare the given and the inverted matrices:

    A1 = [2  3  ] ,  A1⁻¹ = [ 15.5  -15] ;     A2 = [2      3] ,  A2⁻¹ = [ 1000      -1000   ]
         [2  3.1]           [-10     10]            [1.999  3]           [-666 1/3    666 2/3]
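A sketch checking both examples (numpy assumed; np.linalg.cond uses singular values,
while the notes use the eigenvalue ratio - the two nearly coincide here):

    import numpy as np

    for A in (np.array([[2.0, 3.0], [2.0,   3.1]]),
              np.array([[2.0, 3.0], [1.999, 3.0]])):
        lam = np.abs(np.linalg.eigvals(A))
        k = lam.max() / lam.min()            # |lambda|max / |lambda|min
        print(k, np.log10(k))                # ~ (128.04, 2.1) and (8331.3, 3.9)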
AD 3. HOW TO DEAL WITH ILL-CONDITIONED SYSTEMS
1. apply double precision
2. apply the iterative residual approach (see the sketch after this list):
    r(1) = Ax(1) - b   - residuum
   Let
    δ(1) = xT - x(1)   - solution error
    Aδ(1) = AxT - Ax(1) = b - Ax(1) = -r(1)   →   δ(1)
   The error vector δ(1) is itself the solution of the given system but with a new
   right-hand side -r(1). After solution we get from there
    x(2) = x(1) + δ(1) ,   r(2) = Ax(2) - b ,  etc.
    x(k+1) = x(k) + δ(k) ,   r(k+1) = Ax(k+1) - b
   until
    ||x(k+1) - x(k)|| / ||x(k+1)|| < B
3. change the method (e.g. the method of discretization)
4. use a regularization method, e.g. the Tikhonov method
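A sketch of point 2 (numpy assumed; in practice the residuum r is best accumulated in
higher precision than the solve itself):

    import numpy as np

    def residual_correction(A, b, tol=1e-12, max_iter=20):
        x = np.linalg.solve(A, b)
        for k in range(max_iter):
            r = A @ x - b                         # residuum
            delta = np.linalg.solve(A, -r)        # A delta = -r
            x = x + delta
            if np.linalg.norm(delta) < tol * np.linalg.norm(x):
                break
        return x

    A = np.array([[2.0, 3.0], [1.999, 3.0]])
    b = np.array([5.0, 4.99])
    print(residual_correction(A, b))              # ~ [10., -5.]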
Problems
Which of the following matrices gives rise to an ill-conditioned system? Estimate the
loss of accuracy.

    A1 = [1 -1  2]        [1]          A2 = [1  2  3]        [0]
         [4  0  3] , b1 = [1] ;             [4  5  6] , b2 = [2]
         [1  1  0]        [1]               [7  8  8]        [1]

    A3 = [1  0  0]        [2]          A4 = [ 0    1    1  ]        [2]
         [0  1  0] , b3 = [1] ;             [ 1/2  1/3  1/4] , b4 = [3]
         [1  1  1]        [1]               [-1   -1   -1  ]        [4]
7.3. THE TIKHONOV SOLUTION APPROACH
Given an ill-conditioned system of simultaneous algebraic equations

    Σ_{j=1}^n a_ij x_j = b_i ,   i = 1, 2, ..., n

find an approximate solution closest to the origin of the Cartesian coordinate system
x1, ..., xn:

    find  min_{x_i} I ,    I = Σ_{i=1}^n ( Σ_{j=1}^n a_ij x_j - b_i )²  +  α Σ_{i=1}^n x_i²

THE RESULT DEPENDS ON α

Example
    2x1 + 3x2 = 5
    1.999x1 + 3x2 = 4.99

    I = (2x1 + 3x2 - 5)² + (1.999x1 + 3x2 - 4.99)² + α(x1² + x2²)

    ∂I/∂x1 = 0 ,   ∂I/∂x2 = 0
    x1 = (0.00009 + 19.975α) / (0.000009 + 25.996α + α²)
    x2 = (-0.000045 + 29.97α) / (0.000009 + 25.996α + α²)

FAMILY OF SOLUTIONS

    α          x1         x2         conditioning number k = |λ|max/|λ|min
    0          10         -5         7.51·10⁷
    0.1        0.7655     1.1484     260.96
    1          0.7399     1.1102     27.00
    10         0.5549     0.8326     3.60
    100        0.1585     0.2379     1.26
    10⁶        2·10⁻⁵     3·10⁻⁵     1.00
    ∞          0          0          1.00
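The whole family is produced by the normal equations of the functional I,
(AᵀA + αI)x = Aᵀb; a sketch (numpy assumed):

    import numpy as np

    A = np.array([[2.0, 3.0], [1.999, 3.0]])
    b = np.array([5.0, 4.99])
    for alpha in (0.0, 0.1, 1.0, 10.0, 100.0):
        x = np.linalg.solve(A.T @ A + alpha * np.eye(2), A.T @ b)
        print(alpha, x)    # alpha = 0.1 gives x ~ {0.7655, 1.1484}, as in the table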
"""


Wyszukiwarka

Podobne podstrony:
technical english for civil engineers construction?sics
Numerical analysis in the shock synthesis of EuBa$ 2$Cu$ 3$O$ y$
numerical methods
Methodology in language learning
using design patterns in game engines
Methods in EnzymologyF3 09 Quantitation of Protein
Method Man 2 Tears In A Bucket
Superficial Fascia in the Hip Adductor Muscle Group KT method
method numeric Matlab 2
Simulation of Convective Detonation Waves in a Porous Medium by the Lattice Gas Method
Method Man It s in the Game
Determination of monomers in polymers by SPME method

więcej podobnych podstron