Linear Algebra

This section covers solving linear systems, the LU and singular value decompositions, eigenvalues, Toeplitz matrices, and the internal epsilon.

Solving Linear Systems

There are many built-in functions for linear algebra. The matrix product has already been discussed. The operation

    >A\b

takes an NxN matrix A and an Nx1 vector b and returns the vector x such that A.x=b. If in

    >A\B

B is an NxM matrix, the systems A.x=B[,i] are solved simultaneously for all columns i. An error is issued if the determinant of A turns out to be too small relative to the internal epsilon.
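
For example, one can solve a small system and display the residual, which should be close to zero (a minimal sketch, assuming the usual [a,b;c,d] notation for matrix input):

    >A=[2,1;1,3]; b=[1;2];
    >x=A\b;
    >A.x-b,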

There is also a more precise version, which uses residual iteration. This usually yields very good results but is, of course, slower:

    >xlgs(A,b)

You may pass an additional parameter specifying the maximal number of iterations.
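
For example, assuming this limit is simply passed as a third argument:

    >xlgs(A,b,50)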

    >inv(A)

computes the inverse matrix of A. It is a utility function defined as

    >A\id(cols(A))

There are also more primitive functions, like

    >lu(A)

for NxM matrices A. This function is probably only useful to mathematically inclined users, but we explain it here in detail. It returns multiple values (see Multiple Assignments), which you can assign to variables with

    >{Res,ri,ci,det}=lu(A)

If you use only

    >Res=lu(A)

all other output is lost. To explain the output of lu, lets start with Res. Res is a NxM matrix containing the LU-decomposition of A; i.e., L.U=A with a lower triangle matrix L and an upper triangle matrix U. L has ones in the diagonal, which are omitted so that L and U can be stored in Res. det is of course the determinant of A. ri contains the indices of the rows of Res, since during the algorithm the rows may have been swept. ci is not important, if A is nonsingular. If A is singular, however, Res contains the result of the Gauss algorithm, and ci contains 1 and 0 such that the columns with 1 form a basis for the columns of A.

As an example,

    >A=random(3,3);
    >{LU,r,c,d}=lu(A);
    >LU1=LU[r];
    >L=band(LU1,-2,-1)+id(3); R=band(LU1,0,2);
    >B=L.R,

will yield the matrix A[r]. To get A itself, one must compute the inverse permutation r1:

    >{rs,r1}=sort(r); B[r1]

will be A.

Once we have an LU-decomposition of A, we can use it to solve linear systems A.x=b quickly. This is equivalent to A[r].x=b[r], and LU[r] is a decomposition of A[r], so x can be computed with

    >{LU,r}=lu(A);
    >lusolve(LU[r],b[r])
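
A complete sketch for a random system, displaying the residual norm (which should be close to zero):

    >A=random(3,3); b=random(3,1);
    >{LU,r}=lu(A);
    >x=lusolve(LU[r],b[r]);
    >norm(A.x-b),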

There is also a more exact version

    >xlusolve(A,b)

which can be used if A is already in LU form; e.g., it works for upper triangular matrices A:

    >A=random(10,10); A=band(A,0,10); b=random(10,1);
    >xlusolve(A,b)

This function may be used for exact evaluation of arithmetic expressions.

lu is used by several functions in UTIL. E.g.,

    >kernel(A)

is a basis of the kernel of A; i.e., of the vectors x with A.x=0.
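
For example, for the singular matrix below the kernel is spanned by the multiples of (-2,1)', so the result should be a multiple of that vector (a minimal sketch):

    >A=[1,2;2,4];
    >kernel(A)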

    >image(A)

is a basis of the image of A; i.e., of the vectors A.x. You may add an additional value parameter eps=... to kernel and image, which replaces the internal epsilon in these functions. These functions normalize the matrix with

    >norm(A)

which returns the maximal row sum of abs(A).
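
For example, the following should return 7, since |3|+|4|=7 is the largest row sum:

    >norm([1,-2;3,4])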

There is an implementation of the singular value decomposition. The basic function is

    >{U,w,V}=svd(A)

As you see, it returns three values. A must be an mxn real matrix; then U will be an mxn matrix, w a 1xn vector, and V an nxn matrix. The columns of U and V are orthogonal. We have A=U.W.V', where W is the nxn diagonal matrix having w in its diagonal; i.e.,

    >A=U.diag(size(V),0,w).V'
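
A small sketch checking this identity for a random matrix; the displayed norm should be close to zero:

    >A=random(4,3);
    >{U,w,V}=svd(A);
    >norm(A-U.diag(size(V),0,w).V'),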

This decomposition can be used in many circumstances. The file SVD.E (loaded at system start) contains the applications svdkernel and svdimage, which compute orthogonal bases of the kernel and image of A. Moreover, svddet computes the determinant of A, and svdcondition a condition number.

    >fit(A,b)
    >svdsolve(A,b)

finds a solution of A.x=b for singular matrices A (even non-square ones) by minimizing the norm of A.x-b. The function svdsolve is more stable and should be preferred. By the way, U, w, and V can be used to compute the solution of A.x=b with

    >x=V.diag(size(V),0,1/w).U'.b

if w contains no zeros. This is similar to the procedure used with the lu function above. By the way,

    >svdsolve(A,id(cols(A)))

will compute the so-called pseudo-inverse of A. As noted above, svddet might be more stable than det.
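
As an example, consider an overdetermined least squares problem: fitting a line through the points (1,1), (2,2), and (3,2). Both calls below should return approximately the intercept 2/3 and the slope 1/2 (a minimal sketch):

    >A=[1,1;1,2;1,3]; b=[1;2;2];
    >fit(A,b)
    >svdsolve(A,b)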

Eigenvalues

The primitive function for computing eigenvalues is

    >charpoly(A)

which computes the characteristic polynomial of A. It is used by

    >eigenvalues(A)

to compute the eigenvalues of A. Then

    >eigenspace(A,l)

computes a basis of the eigenspace of the eigenvalue l. This function uses kernel and will fail when the eigenvalues are not exact enough.

    >{l,x}=xeigenvalue(A,l)

will improve the eigenvalue l, which must be a simple eigenvalue. It returns the improved value l and an eigenvector. You can provide an extra parameter, which must be an approximation of the eigenvector.

    >{l,X}=eigen(A)

returns the eigenvalues of A in l and the eigenvectors in X. There is an improved but slower version eigen1, which will succeed more often than eigen. There is also the svdeigen routine, which uses the singular value decomposition to determine the kernel.
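
A short sketch with a symmetric matrix whose eigenvalues are 1 and 3:

    >A=[2,1;1,2];
    >{l,X}=eigen(A);
    >l,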

    >jacobi(a)

will use Jacobi's method to compute the eigenvalues of a symmetric real nxn matrix a.
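
For example, with the same matrix as above, jacobi should return the eigenvalues 1 and 3:

    >jacobi([2,1;1,2])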

A special feature is the generation of Toeplitz matrices with toeplitz. The parameter of this function is a 1x(2n-1) vector r, and the output is an nxn matrix R with R[i,j]=r[n-i+j]; i.e., the last row of R agrees with the first n elements of r, and each row above is the row below it shifted one to the left. You can solve a linear system with a Toeplitz matrix using

    >toeplitzsolve(r,b)

where b is an nxq matrix. The result x satisfies toeplitz(r).x=b.
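
For example, with n=3 the vector r=[1,2,3,2,1] generates the symmetric Toeplitz matrix with rows (3,2,1), (2,3,2), (1,2,3). A minimal sketch, displaying the residual norm:

    >r=[1,2,3,2,1]; b=[1;2;3];
    >x=toeplitzsolve(r,b);
    >norm(toeplitz(r).x-b),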

The Internal Epsilon

    >epsilon()

returns the internal epsilon, which is used by many functions and by the operator ~=. This operator compares two values and returns 1 if their absolute difference is smaller than epsilon. The epsilon can be changed with the statement

    >setepsilon(value)
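
For example, the first comparison below should return 1 and the second 0, since the differences are epsilon/2 and 2*epsilon, respectively:

    >1 ~= 1+epsilon()/2
    >1 ~= 1+2*epsilon()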