Chapter 22: Vectors and Matrices

In this chapter we look at built-in functions which support computation with vectors and matrices.

22.1 The Dot Product Conjunction

Recall the composition of verbs, from Chapter 08. A sum-of-products verb can be composed from sum and product with the @: conjunction.

   P =: 2 3 4
   Q =: 1 0 2
   P * Q
2 0 8
   +/ P * Q
10
   P (+/ @: *) Q
10

There is a conjunction . (dot, called "Dot Product"). It can be used instead of @: to compute the sum-of-products of two lists.

   P
2 3 4
   Q
1 0 2
   P (+/ @: *) Q
10
   P (+/ . *) Q
10

Evidently, the . conjunction is a form of composition, a variation of @: or @. We will see below that it is more convenient for working with vectors and matrices.

22.2 Scalar Product of Vectors

Recall that P is a list of 3 numbers. If we interpret these numbers as coordinates of a point in 3-dimensional space, then P can be regarded as defining a vector, a line-segment with length and direction, from the origin at 0 0 0 to the point P. We can refer to the vector P.

With P and Q interpreted as vectors, then the expression P (+/ . *) Q gives what is called the "scalar product" of P and Q. Other names for the same thing are "dot product", or "inner product", or "matrix product", depending on context. In this chapter let us stick to the neutral term "dot product", for which we define a function dot:

   dot =: +/ . *
   P
2 3 4
   Q
1 0 2
   P dot Q
10

A textbook definition of the scalar product of vectors P and Q may appear in the form:

      (magnitude P) * (magnitude Q) * (cos alpha)

where the magnitude (or length) of a vector is the square root of the sum of the squares of its components, and alpha is the smallest non-negative angle between P and Q. To show the equivalence of this form with P dot Q, we can define utility-verbs ma for magnitude-of-a-vector and ca for cos-of-angle-between-vectors.

   ma  =: %: @: (+/ @: *:)
   ca  =: 4 : '(-/ *: b,(ma x.-y.), c) % (2*(b=.ma x.)*(c=. ma y.))'

The verb ca is a rearrangement of the cosine rule, which states that (*: ma P-Q) equals ((*: ma P) + (*: ma Q)) - (2 * (ma P) * (ma Q) * cos alpha). We expect the magnitude of the vector 3 4 to be 5, and we expect the angle between P and itself to be zero, and thus its cosine to be 1.

   ma 3 4
5
   P ca P
1

Then we see that the dot verb is equivalent to the textbook form above:

   P
2 3 4
   Q
1 0 2
   P dot Q
10
   (ma P)*(ma Q)*(P ca Q)
10
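
To see the ingredients of the textbook form separately (values shown here at J's default print-precision):

   ma P
5.38516
   ma Q
2.23607
   P ca Q
0.830455

and indeed 5.38516 * 2.23607 * 0.830455 is 10, to within the precision shown.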

22.3 Matrix Product

The verb we called dot is "matrix product" for vectors and matrices.

   M =: 3 4 ,: 2 3
   M
3 4
2 3
   V =: 3 5
   V dot M
19 27
   M dot V
29 21
   M dot M
17 24
12 17

There is a precondition which must be met if we are to compute Z =: A dot B. The precondition is that the last dimension of A must match the first dimension of B.

   A =: 2 3 5 $ 1
   B =: 5 4   $ 2

   $ A
2 3 5
   $ B
5 4
   Z =: A dot B
   Z
10 10 10 10
10 10 10 10
10 10 10 10

10 10 10 10
10 10 10 10
10 10 10 10
   $ Z
2 3 4

The example shows that the last-and-first dimensions disappear from the result: the shape of Z is the leading dimensions of A followed by the trailing dimensions of B.
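
We can check this shape rule directly (}: drops the last item of a list, }. drops the first):

   ((}: $ A) , (}. $ B)) -: $ Z
1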

If the two dimensions do not match then an error is signalled:

   $ B
5 4
   $ A
2 3 5
   B dot A
error

22.4 Generalisation

The "Dot Product" conjunction forms the dot-product verb with (+/ . *). Other verbs can be formed on the pattern (u.v).

For example, consider a relationship between people: person i is a child of person j, represented by a square boolean matrix which is true at row i, column j. Using the verbs +. (logical-or) and *. (logical-and), we can compute a grandchild relationship with the verb (+./ . *.).

   g   =: +. / . *.

Taking the "child" relationship to be the matrix C:

   C =: 4 4 $ 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0

Then the grandchild relationship is, so to speak, the child relationship squared.

   C
0 0 0 0
1 0 0 0
1 0 0 0
0 1 0 0
   G =: C g C
   G
0 0 0 0
0 0 0 0
0 0 0 0
1 0 0 0

We can see from C that person 3 is a child of person 1, and person 1 is a child of person 0. Hence, as we see in G, person 3 is a grandchild of person 0.
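
Applying g once more gives the great-grandchild relationship. With this particular C there are no great-grandchildren, so the result is all zeros:

   C g G
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 0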

22.5 Symbolic Arithmetic

As arguments to the "Dot Product" conjunction we could supply verbs to perform symbolic arithmetic. Thus we might symbolically add the strings 'a' and 'b' to get the string 'a+b'. Here is a small collection of utility functions to do some limited symbolic arithmetic on strings.

   pa     =: ('('&,) @: (,&')')                NB. enclose a string in parentheses
   cp     =: [ ` pa @. (+./ @: ('+-*' & e.))   NB. parenthesize only if + - or * present
   symbol =: (1 : (':';'< (cp > x.), u., (cp > y.)')) " 0 0   NB. join boxed strings with symbol u.
   
   splus  =: '+' symbol 
   sminus =: '-' symbol 
   sprod  =: '*' symbol 
   
   a =: <'a'
   b =: <'b'
   c =: <'c'

   a
+-+
|a|
+-+
   b
+-+
|b|
+-+
   c
+-+
|c|
+-+
   a splus b
+---+
|a+b|
+---+
   a sprod b splus c
+-------+
|a*(b+c)|
+-------+

As a variant of the symbolic product, we could elide the multiplication symbol to give an effect more like conventional notation:

   sprodc =: '' symbol 

   a sprod b
+---+
|a*b|
+---+
   a sprodc b
+--+
|ab|
+--+

Now for the dot verb, which we recall is (+/ . *), a symbolic version is:

   sdot =: splus / . sprodc

To illustrate:

   S =: 3 2 $ < "0 'abcdef'
   T =: 2 3 $ < "0 'pqrstu'

   S
+-+-+
|a|b|
+-+-+
|c|d|
+-+-+
|e|f|
+-+-+
   T
+-+-+-+
|p|q|r|
+-+-+-+
|s|t|u|
+-+-+-+
   S sdot T
+-----+-----+-----+
|ap+bs|aq+bt|ar+bu|
+-----+-----+-----+
|cp+ds|cq+dt|cr+du|
+-----+-----+-----+
|ep+fs|eq+ft|er+fu|
+-----+-----+-----+

22.5.1 The Dot Product Conjunction Revisited

Recall from Chapter 07 that a dyadic verb v has a left and right rank. Here are some utility functions to extract the ranks from a given verb.

   RANKS   =: 1 : 'x. b. 0'          NB. ranks (monadic, left, right) of verb x.
   LRANK   =: 1 : '1 { (x. RANKS)'   NB. left rank only

   + RANKS
0 0 0
   + LRANK
0

The general scheme for dyadic verbs of the form (u.v) is:

       u.v  means u @ (v " ((1+L), _))   where L = (v LRANK)

or equivalently,

      u.v   means (u @: v) " (1+L, _)

and so we see how (.) and (@:) differ. Here is an example:

   L  =: + LRANK
   LR =: 1+L , _  

   M
3 4
2 3
   M < . + M
+---+---+
|6 7|5 6|
|6 7|5 6|
+---+---+
   M < @: + M
+---+
|6 8|
|4 6|
+---+
   LR
1 _
   M (< @: +)" LR M
+---+---+
|6 7|5 6|
|6 7|5 6|
+---+---+
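
We can confirm that the two expressions give identical results:

   (M < . + M) -: M (< @: +)" LR M
1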

22.6 Determinant

22.6.1 Minors

The "minors" of a matrix, with respect to the first column, are obtained by deleting the first column and then deleting each row in turn. The following function is taken from the Dictionary.

   mi  =: }."1@ (1&([\.))     NB. 1&([\.) omits each row in turn; }."1 drops the first column

For example:

   S =: 3 3 $ <"0 'abcdefghi'
   S
+-+-+-+
|a|b|c|
+-+-+-+
|d|e|f|
+-+-+-+
|g|h|i|
+-+-+-+
   mi S
+-+-+
|e|f|
+-+-+
|h|i|
+-+-+

+-+-+
|b|c|
+-+-+
|h|i|
+-+-+

+-+-+
|b|c|
+-+-+
|e|f|
+-+-+

22.6.2 Determinant

The monadic verb (- / . *) computes the determinant of a matrix.

   det =: - / . *

   M
3 4
2 3
   det M
1
   (3*3)-(2*4)
1

For a square matrix, the determinant is unchanged by transposing, but not so for a non-square matrix.

   N =: 3 2 $ 2 1 0 3 4 5 

   $ M
2 2
   det M
1
   det |: M
1
   $ N
3 2
   det N
_12
   det |: N
0

Symbolically:

   sdet =: sminus / . sprodc

   S
+-+-+-+
|a|b|c|
+-+-+-+
|d|e|f|
+-+-+-+
|g|h|i|
+-+-+-+
   sdet S
+----------------------------------+
|(a(ei-hf))-((d(bi-hc))-(g(bf-ec)))|
+----------------------------------+

The determinant of a matrix is the alternating sum (-/) of the first column multiplied by the determinants (computed recursively) of the corresponding minors. The following function dex is a version of determinant showing the recursion explicitly:

   fc  =: {. " 1      NB. first column
   
   dex =: 3 : 0
if.    2 > {: $ y.                   NB. fewer than two columns:
do.    -/ , y.                       NB.   alternating sum of the items
else.  -/ (fc y.) * (dex"_1 mi y.)   NB. first column times determinants of minors
end.
)
   
   N =: 3 3 $ 2 1 0 3 4 5 6 2 3

   N
2 1 0
3 4 5
6 2 3
   mi N
4 5
2 3

1 0
2 3

1 0
4 5
   dex"_1 mi N
2 3 5
   dex N
25
   det N
25

22.6.3 Singular Matrices

A matrix is said to be singular if its rows (or columns) are not linearly independent; for example, if one row (or column) can be obtained from another by multiplying by a constant. A singular matrix has a zero determinant. In the following example A is a (symbolic) singular matrix, with m the constant multiplier.

   A =: 2 2 $ 'a';'b';'ma';'mb'
   A
+--+--+
|a |b |
+--+--+
|ma|mb|
+--+--+
   sdet A
+-------+
|amb-mab|
+-------+

We see that the resulting term (amb-mab) must be zero for all a, b and m.
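
Substituting numbers, say a=2, b=3 and m=5, confirms this numerically:

   det 2 2 $ 2 3 10 15
0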

22.7 Matrix Divide

The built-in verb %. (percent dot) is called "Matrix Divide". It can be used to find solutions to systems of simultaneous linear equations. For example, consider the equations written conventionally as:

          3x + 4y = 11
          2x + 3y =  8

Rewriting as a matrix equation, we have, informally,

          M dot (x,y) = V

where M is the matrix of coefficients and V is the vector of right-hand-side values:

   M =: 3 4 ,: 2 3
   M
3 4
2 3
   V =: 11 8

The vector of unknowns (x,y) can be found by dividing vector V by matrix M.

   xy =: V %. M
   xy
1 2
   M dot xy
11 8

There are preconditions which must be satisfied in order to compute Z =: A %. B. Consider the following example in two dimensions:

   A =: 3 2 $ 3 5 10 14 18 28
   B =: 3 2 $ 1 0 2 4 5 3

   A
 3  5
10 14
18 28
   B
1 0
2 4
5 3
   Z =: A %. B
   Z
3 5
1 1
   B dot Z
 3  5
10 14
18 28

If we write:

   'r s' =: $ B
   't u' =: $ Z

then, since we know that A equals B dot Z, the dimensions of A must be (r,u), by the precondition for dot mentioned above. Hence the first dimension of A must equal the first dimension of B.
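
We can confirm this relation between the shapes:

   ($ A) -: (0 { $ B) , (1 { $ Z)
1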

If this condition is not met, an error is signalled:

   E =: |: B
   E
1 2 5
0 4 3
   $ A
3 2
   $ E
2 3
   A %. E
error

The second precondition for Z =: A %. B is that B must be non-singular, that is, the determinant of B must be non-zero; otherwise an error is signalled. For example:

   F =: 3 2 $ 1
   F
1 1
1 1
1 1
   det F
0
   A %. F
error
   B
1 0
2 4
5 3
   det B
_13
   A %. B
3 5
1 1

22.7.1 Identity Matrix

A (non-singular) matrix M divided by itself yields an "identity matrix", I say, such that (M dot I) = M.

   M
3 4
2 3
   I =: M %. M
   I
1 0
0 1
   M dot I
3 4
2 3

The identity matrix is always square even if the original matrix is not.

   A
 3  5
10 14
18 28
   K =: A %. A
   K
          1 _4.44089e_14
1.95399e_14            1
   A dot K
 3  5
10 14
18 28

In the last example we see off-diagonal terms of the order of 1e_14, very close to zero but not exactly zero. We can repeat the computation, first converting A to extended precision with the built-in verb x: (see Chapter 19).

   A =: x: A
   K =: A %. A
   K
1 0
0 1
   A dot K
 3  5
10 14
18 28

22.8 Matrix Inverse

The monadic verb %. computes the inverse of a matrix. That is, %. M is equivalent to I %. M for a suitable identity matrix I:

   M
3 4
2 3
   I =: M %. M
   I
1 0
0 1
   I %. M
 3 _4
_2  3
   %. M
 3 _4
_2  3

For a vector V, the inverse W has the reciprocal magnitude and the same direction. Thus the product of the magnitudes is 1, and the cosine of the angle between them is 1.

   V
11 8
   W =: %. V
   W
0.0594595 0.0432432
   (ma V) * (ma W)
1
   V ca W
1
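
As the displayed values suggest, W is V divided by the sum of the squares of its components:

   W -: V % +/ *: V
1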

This brings us to the end of Chapter 22.




Copyright © Roger Stokes 2000. This material may be freely reproduced, provided that this copyright notice and provision is also reproduced.

last updated 19Feb00