IMiR NM4 Review of matrix algebra new


NUMERICAL METHODS AND PROGRAMMING

REVIEW OF MATRIX ALGEBRA.
METHODS OF SOLVING SETS OF LINEAR EQUATIONS.

Joanna Iwaniec

A = [ a11  a12  ...  a1n
      a21  a22  ...  a2n
       :    :         :
      am1  am2  ...  amn ]     (an m × n matrix)

Real / complex / square matrix, equal matrices


Matrix summation, matrix product

Elements of the matrix product C = AB, where A is m × n and B is n × p,
are defined as follows:

cij = ai1 b1j + ai2 b2j + ... + ain bnj ,   i = 1, ..., m ;  j = 1, ..., p

Column vector, set of linear equations


Set of linear equations, linear dependence


Identity matrix


Basis for Rⁿ, solutions of linear system of equations

Inverse / nonsingular / singular matrices

A square matrix A (∈ Rn×n) is nonsingular if Ax = b has a unique
solution for all values of b. (Note that the definition only applies to
square matrices.)


Inverse / nonsingular / singular matrices

Other equivalent definitions of nonsingular

matrices

A is nonsingular if and only if Aᵀ is nonsingular.

A is nonsingular if and only if det(A) ≠ 0, where det(A) denotes

the determinant of A.

A is nonsingular if and only if it has an inverse (see below).

Nonsingular matrices are also called invertible matrices.

A is nonsingular if and only if Ax = 0 implies x = 0. (In other words,

there exists no nonzero x that satisfies Ax = 0.)


Useful properties of matrix inverses

If a matrix has an inverse, it is unique: if BA = I and CA = I, then
B = C = A⁻¹.

The inverse A⁻¹ satisfies AA⁻¹ = I. This is an example where the
matrix product of two matrices commutes: AA⁻¹ = A⁻¹A = I. (Recall
that for general matrices, AB ≠ BA.)

Inverse of the transpose: if A is square and invertible, then

(Aᵀ)⁻¹ = (A⁻¹)ᵀ

Because the order of transpose and inverse does not matter, we usually
write the inverse of the transpose as A⁻ᵀ.

Inverse of product: if A and B are square and invertible, then

(AB)⁻¹ = B⁻¹A⁻¹

Determinant of matrix (det(A), |A|)

How to calculate |A|?

Next slide


Inverse of matrix A

Cramer’s Rule


‘Special’ matrices

Lower triangular matrices

A matrix A (∈ Rn×n) is lower triangular if aij = 0 for j > i.

A lower triangular matrix is called unit lower triangular if the diagonal
elements are equal to one.

A lower triangular matrix is nonsingular if and only if aii ≠ 0 for all i.

Upper triangular matrices

A matrix A (∈ Rn×n) is upper triangular if Aᵀ is lower triangular, i.e., if
aij = 0 for j < i.

Matrix norms

The norm of a matrix serves the same purpose as the norm of a vector – it is
a measure of the size or magnitude of the matrix.

As for vectors, many possible definitions exist. E.g., in analogy with the
Euclidean norm of a vector x, we might define the norm of an m × n matrix A
as the square root of the sum of the squares of the elements of A:

||A||F = ( Σi Σj aij² )^(1/2)

This is called the Frobenius norm of A.
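This definition can be sketched in a few lines of pure Python (the helper name is illustrative, not from the slides):

```python
import math

def frobenius_norm(A):
    """Square root of the sum of squares of all elements of A."""
    return math.sqrt(sum(a * a for row in A for a in row))

A = [[1.0, 2.0], [3.0, 4.0]]
print(frobenius_norm(A))   # sqrt(1 + 4 + 9 + 16) = sqrt(30)
```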


Other Useful Norms

For Vectors

– the p-norm:   ||x||p = ( Σ i=1..n |xi|^p )^(1/p)

– ||x||1 = Σ i=1..n |xi|   – sum of the absolute values of the elements

– ||x||∞ = max i |xi|   – maximum magnitude norm

For Matrices

– ||A||1 = max j  Σ i=1..n |aij|   – column sum norm

– ||A||∞ = max i  Σ j=1..n |aij|   – row sum norm
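These vector and matrix norms can be sketched in pure Python (illustrative helper names, not from the slides):

```python
def vec_norm_1(x):
    # sum of the absolute values of the elements
    return sum(abs(v) for v in x)

def vec_norm_inf(x):
    # maximum magnitude norm
    return max(abs(v) for v in x)

def mat_norm_1(A):
    # column sum norm: max over columns of the column's absolute sum
    return max(sum(abs(A[i][j]) for i in range(len(A))) for j in range(len(A[0])))

def mat_norm_inf(A):
    # row sum norm: max over rows of the row's absolute sum
    return max(sum(abs(v) for v in row) for row in A)

x = [1.0, -2.0, 3.0]
A = [[1.0, -7.0], [2.0, 3.0]]
print(vec_norm_1(x), vec_norm_inf(x))    # 6.0 3.0
print(mat_norm_1(A), mat_norm_inf(A))    # 10.0 8.0
```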

Matrix norms - continued

The norm of a square or rectangular matrix A (∈ Rm×n), denoted ||A||,
is defined as:

Properties of matrix norms


Condition number of a matrix

Properties of matrix condition number

Matrix A is:

well-conditioned if its condition number is small,

ill-conditioned if its condition number is large.


EXAMPLE:

Equations that are not properly scaled may give rise to large condition
numbers. Consider

A = [    1      1          A⁻¹ = [ 0.5   0.0005
      1000  -1000 ]               0.5  -0.0005 ]

k(A) = ||A|| ||A⁻¹|| = 2000 × 0.5005 = 1001   (large)

EQUILIBRATION:

Scale each row of A by its largest element:

A = [ 1   1          A⁻¹ = [ 0.5   0.5
      1  -1 ]                0.5  -0.5 ]

k(A) = ||A|| ||A⁻¹|| = 2(1) = 2

Great improvement in the condition number!
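A quick numerical check of this example in pure Python. The off-diagonal signs were lost in the slide export, so the matrix is assumed here to be [[1, 1], [1000, -1000]], and the row-sum (∞) norm is used:

```python
def inv2(A):
    # inverse of a 2x2 matrix via the adjugate formula
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def norm_inf(A):
    # row sum norm
    return max(sum(abs(v) for v in row) for row in A)

A = [[1.0, 1.0], [1000.0, -1000.0]]
k = norm_inf(A) * norm_inf(inv2(A))
print(k)            # 2000 * 0.5005 = 1001.0

A_scaled = [[1.0, 1.0], [1.0, -1.0]]   # each row divided by its largest element
k_scaled = norm_inf(A_scaled) * norm_inf(inv2(A_scaled))
print(k_scaled)     # 2.0
```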

Sets of linear equations – methods for solution finding

– back substitution method (for triangular systems of linear equations)

– Cramer's method; Gaussian elimination; LU, LLᵀ, LDLᵀ, QR
  matrix decomposition methods – fast, but sometimes the obtained
  solution is not accurate

– iterative methods

– methods operating on sparse matrices – small computational effort

– block methods

Solving sets of linear equations - introduction

Selection of a solution estimation method should depend on:

– form of the matrix A,
– type of problem that is represented by the considered set of
  linear equations,
– accuracy of the obtained solution,
– computational effort.

Back substitution


Triangular systems of equations – back substitution

u11 x1 + u12 x2 + ... + u1,n-1 xn-1 + u1n xn = f1
         u22 x2 + ... + u2,n-1 xn-1 + u2n xn = f2
                  ...
                 un-1,n-1 xn-1 + un-1,n xn = fn-1
                                      unn xn = fn

Example

[Worked example: a small triangular system – the last equation gives
x3 directly (I), then x2 and x1 follow by substitution (II).]

Triangular systems of equations – back substitution

xn = bn / ann

xi = ( bi − ai,i+1 xi+1 − ... − ain xn ) / aii ,   i = n−1, n−2, ..., 1

Number of multiplications and divisions:  n(n+1)/2

Number of summations:  n(n−1)/2
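The recurrence above can be sketched in pure Python (illustrative, for an upper-triangular coefficient matrix):

```python
def back_substitute(U, b):
    """Solve U x = b where U is upper triangular with nonzero diagonal."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # subtract the already-known components, then divide by the pivot
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

U = [[2.0, 1.0, 1.0],
     [0.0, 3.0, 1.0],
     [0.0, 0.0, 4.0]]
b = [6.0, 8.0, 8.0]
print(back_substitute(U, b))   # [1.0, 2.0, 2.0]
```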


Cramer’s method

a11 x1 + a12 x2 = b1
a21 x1 + a22 x2 = b2

x1 = ( b1 a22 − b2 a12 ) / ( a11 a22 − a21 a12 )

x2 = ( b2 a11 − b1 a21 ) / ( a11 a22 − a21 a12 )
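The 2 × 2 Cramer formulas can be written directly (illustrative sketch):

```python
def cramer_2x2(a11, a12, a21, a22, b1, b2):
    # determinant of the coefficient matrix
    det = a11 * a22 - a21 * a12
    x1 = (b1 * a22 - b2 * a12) / det
    x2 = (b2 * a11 - b1 * a21) / det
    return x1, x2

# 2 x1 + 1 x2 = 5 ;  1 x1 + 3 x2 = 10  ->  x1 = 1, x2 = 3
print(cramer_2x2(2, 1, 1, 3, 5, 10))
```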


Gaussian elimination

Using row operations, the matrix is reduced to an upper triangular
matrix and the solutions obtained by back substitution.

Partial Pivot Method

Step 1:
We look for the largest element in absolute value (called the pivot) in
the first column and interchange this row with the first row.

Step 2:
Multiply the I row by an appropriate number and subtract this from the
II row so that the first element of the second row becomes zero.
Do this for the remaining rows so that all the elements in the I column
except the top most element are zero.


Now the matrix looks like:

[ x x x x
  0 x x x
  0 x x x
  0 x x x ]

Now look for the pivot in the second column below the first row and
interchange this row with the second row. Multiply the II row by an
appropriate number and subtract this from the remaining rows (below the
second row) so that the matrix looks like:

[ x x x x
  0 x x x
  0 0 x x
  0 0 x x ]

Now look for the pivot in the III column below the second row and
interchange this row with the third row. Proceed till the matrix looks
like:

[ x x x x
  0 x x x
  0 0 x x
  0 0 0 x ]

The solution is obtained by back substitution.


Gauss Elimination – example 1

[Worked 3 × 3 example: the augmented matrix is formed, rows are
interchanged to bring the largest pivot to the top, multiples of the
pivot row are subtracted from the rows below, and the resulting upper
triangular system is solved by back substitution for x1, x2, x3.]


Gaussian elimination – example 2

[Worked 3 × 3 example:]

Use the 1st equation to eliminate x1 from the 2nd and 3rd equations.

Use the 2nd equation to eliminate x2 from the 3rd equation.

The remaining single equation gives x3; x2 and x1 then follow by back
substitution.

Gauss Elimination (Pseudo Code)

1. Input matrix A of order n × n and the n × 1 vector B, and form the
   augmented matrix [A|B].

2. For j = 1 to n do
   { (a) Compute the pivot index p (j ≤ p ≤ n) such that
         |Apj| = max j≤i≤n |Aij|
     (b) If Apj = 0, print “singular matrix” and exit.
     (c) If p > j, interchange rows p and j.
     (d) For each i > j, subtract Aij/Ajj times row j from row i. }

3. Now the augmented matrix is [C|D] where C is upper triangular.

4. Solve by back substitution:
   For j = n down to 1 compute
       xj = ( Dj − Σ i=j+1..n Cji xi ) / Cjj
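The pseudocode above can be sketched in pure Python (illustrative translation of steps 1–4):

```python
def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting, then back substitution."""
    n = len(b)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for j in range(n):
        # (a) pivot index: row with the largest |A[i][j]| for i >= j
        p = max(range(j, n), key=lambda i: abs(A[i][j]))
        if A[p][j] == 0:
            raise ValueError("singular matrix")          # (b)
        if p != j:                                       # (c)
            A[j], A[p] = A[p], A[j]
            b[j], b[p] = b[p], b[j]
        for i in range(j + 1, n):                        # (d)
            m = A[i][j] / A[j][j]
            for k in range(j, n):
                A[i][k] -= m * A[j][k]
            b[i] -= m * b[j]
    # back substitution on the upper-triangular result
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][k] * x[k] for k in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

print(gauss_solve([[1.0, 2.0], [3.0, 4.0]], [5.0, 11.0]))  # ≈ [1.0, 2.0]
```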


Matrix Determinant with the use of Gauss Elimination

Gauss Elimination procedure uses two types of row operations.

1. Interchange of rows for pivoting.
2. Add −(Akj/Ajj) times row j to row k.

The first operation changes the sign of the determinant while the
second has no effect on the determinant.

To obtain the matrix determinant, follow Gauss elimination and obtain
the upper triangular form U, keeping count r of the interchanges of
rows. Then:

det A = (−1)^r U11 U22 ... Unn

Matrix Determinant (Pseudo Code)

1. Set r = 0.

2. For j = 1 to n do
   { (a) Compute the pivot index p (j ≤ p ≤ n) such that
         |Apj| = max j≤i≤n |Aij|
     (b) If Apj = 0, print “Singular Matrix” and exit.
     (c) If p > j, interchange rows p and j, and set r = r + 1.
     (d) For each i > j, subtract Aij/Ajj times row j from row i. }

3. det = (−1)^r A11 A22 ... Ann


Determinant of a Matrix

Some properties of the determinant:

1. det(Aᵀ) = det(A)
2. det(AB) = det(A) det(B)
3. det(U) = U11 U22 U33 ... Unn  for triangular U

Matrix Determinant - Example

A = [  1  4  0        r = 0
       0  2  6
      -1  0  1 ]

[  1  4  0            r = 0    (row 3 + row 1)
   0  2  6
   0  4  1 ]

[  1  4  0            r = 1    (rows 2 and 3 interchanged for pivoting)
   0  4  1
   0  2  6 ]

[  1  4  0            r = 1    (row 3 − (2/4) row 2)
   0  4  1
   0  0  5.5 ]

det = (−1)¹ (1)(4)(5.5) = −22
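The count-the-swaps procedure, sketched in pure Python, reproduces the example's result (the A31 sign, lost in the slide export, is assumed to be −1 so that det A = −22):

```python
def det_gauss(A):
    """Determinant via elimination with partial pivoting;
    each row interchange flips the sign."""
    n = len(A)
    A = [row[:] for row in A]
    r = 0
    for j in range(n):
        p = max(range(j, n), key=lambda i: abs(A[i][j]))
        if A[p][j] == 0:
            return 0.0
        if p != j:
            A[j], A[p] = A[p], A[j]
            r += 1
        for i in range(j + 1, n):
            m = A[i][j] / A[j][j]
            for k in range(j, n):
                A[i][k] -= m * A[j][k]
    d = (-1.0) ** r
    for i in range(n):
        d *= A[i][i]          # product of the diagonal of U
    return d

print(det_gauss([[1.0, 4.0, 0.0], [0.0, 2.0, 6.0], [-1.0, 0.0, 1.0]]))  # -22.0
```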


LU matrix decomposition methods

LU Decomposition

When Gauss elimination is used to solve a linear system of equations,
the row operations are applied simultaneously to the right side (RS) of
the considered system of equations.

If the system is solved again for a different RS vector, the row
operations must be repeated.

The LU decomposition eliminates the need to repeat the row operations
each time a new RS vector is used.


LU Decomposition

The matrix A (Ax = b) is written as a product of an upper triangular
matrix U and a lower triangular matrix L.

Diagonal elements of the upper triangular matrix are chosen to be 1
to get a consistent system of equations.

n = 3:

[ A11 A12 A13     [ L11   0    0      [ 1  U12  U13
  A21 A22 A23   =   L21  L22   0    ×   0   1   U23
  A31 A32 A33 ]     L31  L32  L33 ]     0   0    1  ]

9 equations for 9 unknowns: L11, L21, L22, L31, L32, L33, U12, U13, U23.

Multiplying out L·U and equating with A:

A11 = L11        A12 = L11 U12             A13 = L11 U13
A21 = L21        A22 = L21 U12 + L22       A23 = L21 U13 + L22 U23
A31 = L31        A32 = L31 U12 + L32       A33 = L31 U13 + L32 U23 + L33

First column gives   (L11, L21, L31) = (A11, A21, A31)

First row gives      (U11, U12, U13) = (1, A12/L11, A13/L11)

Second column gives  (L12, L22, L32) = (0, A22 − L21 U12, A32 − L31 U12)

Second row gives     (U21, U22, U23) = (0, 1, (A23 − L21 U13)/L22)

Third column gives   (L13, L23, L33) = (0, 0, A33 − L31 U13 − L32 U23)


LU Decomposition

Solving these equations in succession gives the elements of L:

L11 = A11        L21 = A21        L31 = A31

L22 = A22 − L21 U12
L32 = A32 − L31 U12
L33 = A33 − L31 U13 − L32 U23

and the elements of U:

U12 = A12 / L11
U13 = A13 / L11
U23 = ( A23 − L21 U13 ) / L22

U = [ 1  U12  U13
      0   1   U23
      0   0    1  ]

LU Decomposition (Pseudo Code)

Li1 = Ai1 ,               i = 1, 2, ..., n
U1j = A1j / L11 ,         j = 2, 3, ..., n

for j = 2, 3, ..., n−1

    Lij = Aij − Σ k=1..j−1 Lik Ukj ,              i = j, j+1, ..., n

    Ujk = ( Ajk − Σ i=1..j−1 Lji Uik ) / Ljj ,    k = j+1, ..., n

Lnn = Ann − Σ k=1..n−1 Lnk Ukn
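The recurrence (Crout-style: general L, unit-diagonal U, no pivoting) can be sketched in pure Python:

```python
def crout_lu(A):
    """Crout LU: A = L U with U having a unit diagonal (no pivoting)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        U[i][i] = 1.0
    for j in range(n):
        # column j of L
        for i in range(j, n):
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        # row j of U
        for k in range(j + 1, n):
            U[j][k] = (A[j][k] - sum(L[j][i] * U[i][k] for i in range(j))) / L[j][j]
    return L, U

L, U = crout_lu([[2.0, 5.0, 1.0], [1.0, 3.0, 1.0], [3.0, 4.0, 2.0]])
# L = [[2, 0, 0], [1, 0.5, 0], [3, -3.5, 4]]
# U = [[1, 2.5, 0.5], [0, 1, 1], [0, 0, 1]]
```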


LU Decomposition (Example)

A = [ 2 5 1
      1 3 1
      3 4 2 ]        [L] = ?   [U] = ?

First column:  (L11, L21, L31) = (A11, A21, A31) = (2, 1, 3)

First row:     (U12, U13) = (A12/L11, A13/L11) = (5/2, 1/2)

Second column: L22 = A22 − L21 U12 = 3 − (1)(5/2) = 1/2
               L32 = A32 − L31 U12 = 4 − (3)(5/2) = −7/2

Second row:    U23 = (A23 − L21 U13)/L22 = (1 − (1)(1/2))/(1/2) = 1

Third column:  L33 = A33 − L31 U13 − L32 U23
                   = 2 − (3)(1/2) − (−7/2)(1) = 4

L = [ 2    0   0          U = [ 1  5/2  1/2
      1   1/2  0                0   1    1
      3  −7/2  4 ]              0   0    1  ]

The L and U factors can be packed into a single array:

Q = [ L11 U12 U13 ... U1n
      L21 L22 U23 ... U2n
       :   :   :       :
      Ln1 Ln2 Ln3 ... Lnn ]

The values of U are stored in the zero space of L.

After each element of A is employed, it is not needed again, so Q
can be stored in place of A.


After LU decomposition of the matrix A, the system AX = B is solved
in 2 steps: (1) forward substitution, (2) backward substitution.

AX = LUX = B        Put UX = Y

LY = B:

[ L11   0    0      [ Y1     [ B1
  L21  L22   0    ×   Y2   =   B2
  L31  L32  L33 ]     Y3 ]     B3 ]

Forward Substitution:  Y1 = B1 / L11
                       Y2 = ( B2 − L21 Y1 ) / L22
                       Y3 = ( B3 − L31 Y1 − L32 Y2 ) / L33

UX = Y:

[ 1  U12  U13      [ X1     [ Y1
  0   1   U23    ×   X2   =   Y2
  0   0    1  ]      X3 ]     Y3 ]

Back Substitution:  X3 = Y3
                    X2 = Y2 − U23 X3
                    X1 = Y1 − U12 X2 − U13 X3

As in Gauss elimination, LU decomposition must employ pivoting to
avoid division by zero and to minimize round-off errors. The pivoting
is done immediately after computing each column.
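The two-step solve can be sketched in pure Python; the L and U below are assumed to be the factors of the worked example A = [2 5 1; 1 3 1; 3 4 2]:

```python
def lu_solve(L, U, b):
    """Solve L U x = b: forward substitution for L y = b, then U x = y."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # U has a unit diagonal in this decomposition, so no division is needed
        x[i] = y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))
    return x

L = [[2.0, 0.0, 0.0], [1.0, 0.5, 0.0], [3.0, -3.5, 4.0]]
U = [[1.0, 2.5, 0.5], [0.0, 1.0, 1.0], [0.0, 0.0, 1.0]]
x1col = lu_solve(L, U, [1.0, 0.0, 0.0])
print(x1col)   # [0.5, 0.25, -1.25]
```

Solving with the unit vectors (1,0,0), (0,1,0), (0,0,1) as right sides gives the columns of A⁻¹, which is how the inverse example on the next slide proceeds.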


Matrix Inverse Using the LU Decomposition

LU decomposition can be used to obtain the inverse of

the original coefficient matrix.

Each column j of the inverse is determined by using a unit vector
(with 1 in the j-th row) as the RHS vector.

Matrix inverse using LU decomposition - an example

A = [ 2 5 1        L = [ 2    0   0        U = [ 1  5/2  1/2
      1 3 1              1   1/2  0              0   1    1
      3 4 2 ]            3  −7/2  4 ]            0   0    1  ]

First column of the inverse of A is given by LUx = (1, 0, 0)ᵀ.

Forward substitution (L d = e1):
    d1 = 1 / 2 = 0.5
    d2 = ( 0 − (1)(0.5) ) / (1/2) = −1
    d3 = ( 0 − (3)(0.5) − (−7/2)(−1) ) / 4 = −1.25

Back substitution (U x = d):
    x3 = −1.25
    x2 = −1 − (1)(−1.25) = 0.25
    x1 = 0.5 − (5/2)(0.25) − (1/2)(−1.25) = 0.5

First column of A⁻¹: (0.5, 0.25, −1.25)ᵀ.

The second and third columns are obtained by taking the RS vector as
(0, 1, 0) and (0, 0, 1).


Solution of set of linear equations by LU

Decomposition

where: A presumed to be square, non-singular.

Suppose we can decompose A into A = LU, where:

L – lower triangular with diagonal elements equal to 1,
U – upper triangular.


Procedure of Factorization of A

LLᵀ Banachiewicz / Cholesky decomposition

For a symmetric, positive definite matrix, the Banachiewicz / Cholesky
decomposition into triangular matrices can be defined:

A = LLᵀ

where: L – lower triangular matrix with positive diagonal elements
(not necessarily equal to 1).

We have to compute in succession:

lii = ( aii − Σ k=1..i−1 lik² )^(1/2) ,          i = 1, 2, ..., n

lji = ( aji − Σ k=1..i−1 ljk lik ) / lii ,       j = i+1, ..., n


LLᵀ Banachiewicz / Cholesky decomposition - example

A = [ 4 2 2
      2 5 3
      2 3 6 ]

i = 1:   l11 = √4 = 2 ,   l21 = 2/2 = 1 ,   l31 = 2/2 = 1

i = 2:   l22 = √(5 − 1²) = 2 ,   l32 = (3 − 1·1)/2 = 1

i = 3:   l33 = √(6 − 1² − 1²) = 2

[ 4 2 2     [ 2 0 0     [ 2 1 1
  2 5 3   =   1 2 0   ×   0 2 1
  2 3 6 ]     1 1 2 ]     0 0 2 ]
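The two Cholesky recurrences can be sketched in pure Python (illustrative; assumes a symmetric positive definite input):

```python
import math

def cholesky(A):
    """A = L Lᵀ for symmetric positive definite A; L is lower triangular."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # diagonal element: square root of what remains of a_ii
        L[i][i] = math.sqrt(A[i][i] - sum(L[i][k] ** 2 for k in range(i)))
        # elements below the diagonal in column i
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * L[i][k] for k in range(i))) / L[i][i]
    return L

L = cholesky([[4.0, 2.0, 2.0], [2.0, 5.0, 3.0], [2.0, 3.0, 6.0]])
print(L)   # [[2, 0, 0], [1, 2, 0], [1, 1, 2]]
```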

Iterative methods for solution of linear systems

– Jacobi iteration
– Gauss–Seidel iteration


Jacobi Iteration

AX = B,   n = 3:

a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3

Preprocessing:

1. Arrange the equations so that the diagonal terms of the coefficient
matrix are not zero.

2. Make row transformations if necessary to make the diagonal
elements as large as possible.

Rewrite the equations as

x1 = 1/a11 ( b1 − a12 x2 − a13 x3 )
x2 = 1/a22 ( b2 − a21 x1 − a23 x3 )
x3 = 1/a33 ( b3 − a31 x1 − a32 x2 )

Choose an initial approximation vector (x1, x2, x3).
If no prior information is available then (x1, x2, x3) = (0, 0, 0)
will do.

Iterate:

xi(k+1) = ( bi − Σ j=1..n, j≠i  aij xj(k) ) / aii ,   i = 1, ..., n

Stop when

| xi(k+1) − xi(k) | / | xi(k+1) | × 100 < ε    for all i

Jacobi Pseudo Code
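A minimal Python sketch of the Jacobi iteration (an absolute-difference stopping test is used here instead of the slides' percentage test, for brevity):

```python
def jacobi(A, b, tol=1e-10, max_iter=500):
    """Jacobi iteration: every component of the new estimate is computed
    from the previous estimate only."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

# diagonally dominant system, so convergence is guaranteed
A = [[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [0.0, 1.0, 3.0]]
b = [6.0, 8.0, 4.0]
sol = jacobi(A, b)
print(sol)   # ≈ [1.0, 1.0, 1.0]
```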


Diagonally Dominant Matrices

| aii | > Σ j=1..n, j≠i | aij |

i.e., the absolute value of the diagonal element of each row is larger
than the sum of the absolute values of the other elements in the row.

Diagonal dominance is a sufficient but not necessary condition for
the convergence of the Jacobi or Gauss–Seidel iteration.

The Gauss-Seidel iteration

In the Jacobi iteration all the components of the new estimate of the
vector

x(k+1) = ( x1(k+1), x2(k+1), x3(k+1), ..., xn(k+1) )

are computed from the current estimate

x(k) = ( x1(k), x2(k), x3(k), ..., xn(k) ).

However, when xi(k+1) is computed, the updated estimates
x1(k+1), x2(k+1), ..., xi−1(k+1) are already available.

The Gauss-Seidel iteration takes advantage of the latest information
available when updating an estimate.


Jacobi – First Iteration:

x1(1) = ( b1 − a12 x2(0) − a13 x3(0) ) / a11
x2(1) = ( b2 − a21 x1(0) − a23 x3(0) ) / a22
x3(1) = ( b3 − a31 x1(0) − a32 x2(0) ) / a33

Gauss–Seidel – First Iteration (updated values used as soon as available):

x1(1) = ( b1 − a12 x2(0) − a13 x3(0) ) / a11
x2(1) = ( b2 − a21 x1(1) − a23 x3(0) ) / a22
x3(1) = ( b3 − a31 x1(1) − a32 x2(1) ) / a33

Jacobi – Second Iteration:

x1(2) = ( b1 − a12 x2(1) − a13 x3(1) ) / a11
x2(2) = ( b2 − a21 x1(1) − a23 x3(1) ) / a22
x3(2) = ( b3 − a31 x1(1) − a32 x2(1) ) / a33

Gauss–Seidel – Second Iteration:

x1(2) = ( b1 − a12 x2(1) − a13 x3(1) ) / a11
x2(2) = ( b2 − a21 x1(2) − a23 x3(1) ) / a22
x3(2) = ( b3 − a31 x1(2) − a32 x2(2) ) / a33
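The Gauss–Seidel sweep differs from Jacobi only in using the freshly updated components; a minimal sketch:

```python
def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    """Gauss-Seidel: each component immediately uses the components
    already updated in the current sweep."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        delta = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (b[i] - s) / A[i][i]
            delta = max(delta, abs(new - x[i]))
            x[i] = new          # overwrite in place: used by later rows
        if delta < tol:
            break
    return x

A = [[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [0.0, 1.0, 3.0]]
b = [6.0, 8.0, 4.0]
gs = gauss_seidel(A, b)
print(gs)   # ≈ [1.0, 1.0, 1.0]
```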

Matrix algebra in MATLAB


Essentials

Syntax & Logic Help

>>

help <function_name>

help fzero

>>

lookfor <keyword>

lookfor bessel

www.mathworks.com

Support (Documentation) & Forum Sections

Useful Tips

>>

clear (variable)

Clears All Memory or Specified Variable

>>

clc

Clears Command Window Screen

Philosophy of Data Storage

Numerical Values

(Ex. 5, 3.14159, 2+i)

Integers

Floating (Single & Double)

Complex

Character Strings

(Ex. ‘asdf jkl;’)

Structures
Cells
Boolean

(True / False)


Elementary Operators

Assignment (=)

>> X = 1;

>> Y = 3.14159

Addition (+)

>> Z = X + Y

Z = 4.14159

Subtraction (-)

>> Z = X – Y

Z = -2.14159

Multiplication (*)

>> Z = 2 * Y

Z = 6.28318

Division (/)

>> Z = 1 / 4

Z = 0.25

Power (^)

>> Z = 5 ^ 3

Z = 125

Vector Operators – Scalar Operations

Assignment

Column

>> X = [ 1 ; 2 ; 3 ];

Row

>> Y = [ 1 , 2 , 3 ];
>> Y = [ 1 2 3 ];

Unique Commands

– linspace( Initial , Final , # Points )

>> Z = linspace( 5 , 20 , 4 )

Z = [ 5 , 10 , 15 , 20 ];

– logspace( Initial , Final , # Points )
– Initial:Step:Final

>> Z = 1:-0.25:0
>> Z = [ 1 , 0.75 , 0.5 , 0.25 , 0 ]

Index

>> Z = X(3)

Z = 3

>> Z = X(4)

–OR–

>> Z = X(0)

ERROR !!

>> Z = X(1:2)

Z = [ 1 ; 2 ]

>> Z = X([ 1 3 ])

Z = [ 1 ; 3 ]


Vector Operators – Scalar Operations

Addition (+)

>> Z = X + 2

Z = [ 3 ; 4 ; 5 ]

Subtraction (-)

Multiplication (*)

>> Z = 2 * X

Z = [ 2 ; 4 ; 6 ]

Division (/)

>> Z = X / 2

Z = [ 0.5 ; 1 ; 1.5 ]

Summation

>> X = [ 1 , 2 , 3 , 4 ];
>> Z = sum( X );

Z = 10

Product

>> Y = [ 1 ; 2 ; 3 ; 4 ];
>> Z = prod( Y )

Z = 24

( Σ i=1..n xi   and   Π i=1..n xi )

Vector Operators

Inner Product

Euclidean Norm

>> X = [ 1 , 2 , 3 ];
>> Y = [ 1 ; 2 ; 3 ];
>> Z = X * Y

Z = 14

x · y = [ x1 ... xn ] [ y1 ; ... ; yn ] = Σ i=1..n xi yi

||x|| = √(x · x) = ( Σ i=1..n xi² )^(1/2)

>> X = [ 1 , 2 , 3 ];
>> Z = X * X’;

Z = 14

Cross Product

>> X = [ 1 , 2 , 3 ];
>> Y = [ 3 , 2 , 1 ];
>> Z = cross( X , Y )

x⃗ × y⃗ = [ x2 y3 − x3 y2 ,  x3 y1 − x1 y3 ,  x1 y2 − x2 y1 ]


Vector Operators – Element Wise

Addition (+)

>> X = [ 1 , 2 , 3 ];

>> Y = [ 2 , 4 , 6 ];

>> Z = X + Y

Z = [ 3 , 6 , 9 ];

Subtraction (-)

Multiplication (.*)

>> X = [ 1 , 2 , 3 ];
>> Y = [ 2 , 4 , 6 ];
>> Z = X.*Y

Z = [ 2 , 8 , 18 ];

Division (./)
Power (.^)

>> Z = X.^2

Z = [ 1 , 4 , 9 ];

Numerical Integration

Algorithms: Trapezoidal Rule, Simpson’s Rule, etc.

Data ( y = x^2 ):  ( -6, 36 ), ( -3, 9 ), ( 0, 0 ), ( 3, 9 ), ( 6, 36 )

Code:

>> X = [ -6 , -3 , 0 , 3 , 6 ];
>> Y = [ 36 , 9 , 0 , 9 , 36 ];
>> Z = sum( trapz( X , Y ) );

Z = 162

>> Z = quad( inline( 'x.^2' ) , -6 , 6 )

Z = 144

[Plot of the sampled data points of y = x^2 over −6 ≤ x ≤ 6.]


Statistics Functions

Mean

>> X = rand(10);
>> Y = mean( X )

    x̄ = ( Σ i=1..n xi ) / n

Median

>> Y = median( X )

Maximum

>> Y = max( X )

Minimum

>> Y = min( X )

Standard Deviation

>> Y = std( X )

    σ = ( Σ i=1..n ( xi − x̄ )² / n )^(1/2)

Sort

>> Y = sort( X )

Find

>> Y = X( find( X > 0.5 ) )

Matrix Operators – Scalar Operations

Assignment

Assignment

>> X = [ 1 , 2 ; 3 , 4 ];

Concatenation

Concatenation

>> X = 1:3;
>> Y = [ X ; 2*X ]
Y = [ 1 , 2 , 3 ; 2 , 4 , 6 ]

Index

Index

>> Z = X(1,2)

–OR–

>> Z = X(3)

Z = 3

>> Z = X(2,3)

–OR–

>> Z = X(5)

ERROR !!

Unique Functions

Identity Matrix

eye( Rows , Columns )

>> X = eye(2,2)

X = [ 1 , 0 ; 0 , 1 ]

Zeros & Ones Matrix

zeros( Rows , Columns )

ones( Rows , Columns )

Random # ( 0 – 1 )

rand( Rows , Columns )

>> X = rand( 2 , 2 )

X = [ 0.5234 , 0.9246 ;

0.2862 , 0.7378 ]


Matrix Operators – Scalar / Element Wise

Scalar Operations

– Addition (+)

>> Z = X + 2

Z = [3 , 4 ; 5 , 6]

– Subtraction (-)

– Multiplication (*)

>> Z = 2 * X

Z = [ 2 , 4 ; 6 , 8 ]

– Division (/)

Element Wise Operations

– Addition (+)

>> X = [ 1 , 2 ; 3 , 4 ];

>> Y = [ 4 , 3 ; 2 , 1 ];

>> Z = X+Y

Z = [ 5 , 5 ; 5 , 5 ]

– Subtraction (-)

– Multiplication

>> Z = X.*Y

Z = [ 4 , 6 ; 6 , 4 ]

– Division

Matrix Operations - A x = b

b = ?

>> A = [ 1 , 2 ; 3 , 4 ];

>> x = [ 1 ; 2 ];

>> b = A*x

b = [ 5 ; 11 ];

x = ?

>> A = [ 1 , 2 ; 3 , 4 ];

>> b = [ 5 ; 11 ];

>> x = A\b

x = [ 1 ; 2 ];

Written out in full:

[ A1,1  A1,2  ...  A1,n     [ x1       [ b1
  A2,1  A2,2  ...  A2,n       x2         b2
   :     :          :     ×    :    =     :
  An,1  An,2  ...  An,n ]     xn ]       bn ]

Algorithms

Gaussian

Elimination

LU Factorization

Etc.


Matrix Operations

Inverse

>> X = [ 1 2 3 ; 2 3 1 ; 3 1 2];
>> Y = inv(X);
>> Z = X*Y

Z = [ 1 0 0 ; 0 1 0 ; 0 0 1 ]

Determinant

>> Z = det( X )

Z = -18

– EXAMPLE (2 x 2)

det(X) = x1,1 x2,2 − x1,2 x2,1

– EXAMPLE (3 x 3)

det(X) = x1,1 ( x2,2 x3,3 − x2,3 x3,2 )
       − x1,2 ( x2,1 x3,3 − x2,3 x3,1 )
       + x1,3 ( x2,1 x3,2 − x2,2 x3,1 )

Eigenstates

Ax = αx

>> [ V , D ] = eig( X )

Eigenvalues

D = [ -1.73 , 1.73 , 6.00 ]

Eigenvectors

V = [  0.79   0.21  0.58 ;
      -0.21  -0.79  0.58 ;
      -0.58   0.58  0.58 ]

Linear Regression

Data …

– ( 2, 3 )

– ( 4, 7 )

– ( 5, 10 )

– ( 6, 10 )

– ( 7, 12 )

QR Factorization

>> X = [ 2 ; 4 ; 5 ; 6 ; 7 ] ;

>> Y = [ 3 ; 7 ; 10 ; 10 ; 12 ];

>> A = [ X , ones(size(X)) ];

>> COEF = A \ Y

COEF = [ 1.7838 , -0.1622 ]

Polyfit

>> COEF = polyfit( X , Y , 1 )

COEF = [ 1.7838 , -0.1622 ]


Concept of Transform

Instead of Nonlinear Regression …

Linearize the Equation

Exponentials

Y = a*e^(b*X)

• ln(Y) = ln(a) + b*X

Power Law

• Y = a*X^b …

• ln(Y) = ln(a) + b*ln(X)

Character Strings

Assignment

>> X = ‘HELLO’;
>> Y = 2.7183;
>> Z = num2str( Y )

Z = ‘2.7183’

Concatenation

>> X = strcat( ‘HELLO’ , ‘ WORLD’ )

-OR-

>> X = [‘HELLO’ ‘WORLD’]

X = ‘HELLOWORLD’

Comparison

>> X = ‘HI’

>> Y = ‘HE’

>> Z = strcmp( X, Y )

Z = 0


Structures

Assignment ( Variable.Property = XYZ )

>> Person(1).Name = ‘BOB’

>> Person(1).Address = ’99 Sunset Blvd’

>> Person(1).Phone = 3334444

>> Person(2).Name = ‘JANE’

>> Person(2).Address = ‘1750 Kirby Dr’

… etc.

Cells

Assignment

>> X = { 1:4 , ‘HI’ ; [ 1 , 0 ; 0 , 1 ] , 3.1459 };

Index

>> Z = X{1,1}

-OR-

>> Z = X{1}

Z = [ 1 2 3 4 ]

>> Z = X{1,2}(1)

Z = ‘H’

>> Z = X{2,1}(1,2)

Z = 0


Nonlinear Root Finding

FZERO

( y = exp(1/x) * sin(x) / x^2 )

>> FUN = inline( 'exp(1./x).*sin(x)./x.^2' );

>> X0 = 3;

>> X = fzero( FUN , X0 )

X = 3.1416

Algorithms

– Bisection Method

– Newton’s Method

ROOTS – Polynomials

( Y = -2X^3 + 4.7*X + 0.25 )

>> X = [ -2 , 0 , 4.7 , 0.25 ];

>> Z = roots( X );

Z = [1.56 -1.51 -0.05]

[Plot of y = e^x sin(x) for 1 ≤ x ≤ 10.]

Summary of MATLAB functions



Lecture prepared on the basis of:

http://ocw.mit.edu/OcwWeb/Mechanical-Engineering/2-
29Spring-2003/LectureNotes/ index.htm

www.owlnet.rice.edu/~ceng303/lectures/matlab/Matlab_
1.ppt

Prof. S. Boyd: Solving general linear equations using
Matlab,

www.stanford.edu/class/ee263/sle_matlab.pdf

Matlab 6.5 help

Thank you for your attention!

