INVERSES OF BOOLEAN MATRICES
by D. E. RUTHERFORD (Received 5 April 1962) 1. This note is concerned with square matrices, denoted by capital letters, whose elements belong to a Boolean algebra with null element 0 and all element 1. Such matrices, which have important applications in the theory of electric circuits, can be compounded in the three following ways. A A B = C o aunbij = Cy, AvB = C o aijKjbi] = cij, AB = C <> U (aijribjk) = cik. =
3
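By way of illustration, the three compositions may be realised over the two-element Boolean algebra {0, 1}, in which ∩ is conjunction and ∪ is disjunction; the following Python sketch assumes this specialisation, and its function names are illustrative only.

```python
# The three compositions of Boolean matrices, specialised to the
# two-element Boolean algebra {0, 1}: here ∩ is "and", ∪ is "or".

def meet(A, B):
    # (A ∧ B)_ij = a_ij ∩ b_ij
    return [[a & b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def join(A, B):
    # (A ∨ B)_ij = a_ij ∪ b_ij
    return [[a | b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def bmul(A, B):
    # (AB)_ik = ∪_j (a_ij ∩ b_jk)
    n = len(A)
    return [[max(A[i][j] & B[j][k] for j in range(n)) for k in range(n)]
            for i in range(n)]

A = [[1, 0], [1, 1]]
B = [[0, 1], [1, 0]]
print(meet(A, B))  # [[0, 0], [1, 0]]
print(join(A, B))  # [[1, 1], [1, 1]]
print(bmul(A, B))  # [[0, 1], [1, 1]]
```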
More than a quarter of a century ago, J. H. M. Wedderburn [1] showed that the equations (1), (2), (3), (4) of the present paper were necessary and sufficient conditions for the existence of a Boolean matrix X satisfying the matrix equations
$$AX = XA = I,$$
in which $I$ is the Boolean matrix $[\delta_{ij}]$. Such a matrix $X$, if it exists, is therefore in the Boolean sense a two-sided inverse of $A$. More recently R. D. Luce [2] has shown that a Boolean matrix $A$ possesses a two-sided inverse if and only if $A$ is an orthogonal matrix in the sense that $AA^T = I$, where $A^T$ denotes the transpose of $A$, and that, in this case, $A^T$ is a two-sided inverse. The question examined in the following pages is whether a matrix $A$ might possess one-sided inverses or whether $AX = I$ implies $XA = I$. It will be shown that the latter alternative is the correct one and that consequently one half of Wedderburn's conditions are superfluous. Thus (1) and (2), or alternatively (3) and (4), are necessary and sufficient conditions that the matrix $A$ should possess an inverse.

We define the determinant of a Boolean matrix $A$ to be

$$|A| = \bigcup_\sigma a_{1\sigma_1} \cap a_{2\sigma_2} \cap \cdots \cap a_{n\sigma_n},$$

where the union is taken over all permutations $\sigma$ of $1, \ldots, n$. It might be objected that this more closely resembles the permanent $\sum_\sigma a_{1\sigma_1} \cdots a_{n\sigma_n}$ than the determinant $\sum_\sigma (\operatorname{sign}\sigma)\, a_{1\sigma_1} \cdots a_{n\sigma_n}$ of elementary algebra, but, since the concept of sign is entirely absent in Boolean algebra, such a distinction would be illusory. The Boolean determinant as here defined has many properties reminiscent of elementary algebra. For instance, expansion by a row or by a column is permissible. We shall not enlarge on this here, but we mention that such determinants have important practical applications in the theory of switching circuits.

Quite a different definition is given by Wedderburn. If we denote the complement of $a_{ik}$ by $a'_{ik}$ and write

$$\bar a_{ij} = a_{ij} \cap \Bigl(\bigcap_{k \neq j} a'_{ik}\Bigr),$$
then Wedderburn calls $|\bar A|$ the determinant of $A$, where $\bar A$ is the matrix composed of the elements $\bar a_{ij}$. This definition has the desirable properties that $|\overline{AB}| = |\bar A| \cap |\bar B|$ and that $|\bar A| = 0$ if two columns of $A$ are identical, but it does not permit expansion by a row or column of $A$ according to the familiar formula. It is stated by Wedderburn that a necessary and sufficient condition that $A$ should possess an inverse is that $|\bar A| = 1$.
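By way of illustration, both determinants may be computed by brute force over the two-element Boolean algebra {0, 1}; the following sketch assumes this specialisation, and the function names are illustrative only.

```python
from itertools import permutations

def boolean_det(A):
    # Rutherford's determinant: |A| = ∪_σ (a_{1σ1} ∩ ... ∩ a_{nσn})
    n = len(A)
    return max(min(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

def wedderburn_bar(A):
    # Wedderburn's matrix Ā, with ā_ij = a_ij ∩ (∩_{k≠j} a'_ik)
    n = len(A)
    return [[A[i][j] & all(1 - A[i][k] for k in range(n) if k != j)
             for j in range(n)] for i in range(n)]

# A permutation matrix P has |P| = |P̄| = 1; adding a superfluous 1
# to a row of P leaves |Q| = 1 but makes Wedderburn's |Q̄| = 0.
P = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
Q = [[1, 1, 0], [0, 0, 1], [1, 0, 0]]
print(boolean_det(P), boolean_det(wedderburn_bar(P)))  # 1 1
print(boolean_det(Q), boolean_det(wedderburn_bar(Q)))  # 1 0
```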
2. Let

$$A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix}$$

be a square Boolean matrix of order $n$ whose elements satisfy the relations

$$\bigcup_j a_{ij} = 1 \quad (i = 1, \ldots, n), \tag{1}$$

$$a_{ij} \cap a_{kj} = 0 \quad (i \neq k). \tag{2}$$

From (1) we have

$$\bigcap_i \Bigl(\bigcup_j a_{ij}\Bigr) = 1.$$
We may express the left member of this equation as a union of intersections, a typical intersection being of the form

$$a_{1j_1} \cap a_{2j_2} \cap \cdots \cap a_{nj_n}.$$

However, any such term will vanish in virtue of (2) unless the $j_1, \ldots, j_n$ are distinct. The remaining terms comprise all the terms and no others of the Boolean determinant of $A$. Consequently

$$|A| = 1.$$

Since every term of $|A|$ contains exactly one element from the $j$th column, $|A| \leq \bigcup_i a_{ij}$, and therefore

$$\bigcup_i a_{ij} = 1 \quad (j = 1, \ldots, n). \tag{3}$$

Also

$$a_{ij} \cap \Bigl(\bigcup_{k \neq i} a_{kj}\Bigr) = 0$$

from (2). This equation taken in conjunction with (3) demonstrates that the complement of $a_{ij}$ is

$$a'_{ij} = \bigcup_{k \neq i} a_{kj}.$$
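This complement formula may be confirmed exhaustively over {0, 1} for matrices of order 3; the following check is illustrative only and assumes that specialisation.

```python
from itertools import product

n = 3
for bits in product([0, 1], repeat=n * n):
    A = [list(bits[i * n:(i + 1) * n]) for i in range(n)]
    row_sums_one = all(max(row) == 1 for row in A)        # condition (1)
    cols_disjoint = all((A[i][j] & A[k][j]) == 0          # condition (2)
                        for j in range(n) for i in range(n)
                        for k in range(n) if i != k)
    if row_sums_one and cols_disjoint:
        # a'_ij = ∪_{k≠i} a_kj : the complement of each entry is the
        # union of the remaining entries of its column.
        assert all(1 - A[i][j] == max(A[k][j] for k in range(n) if k != i)
                   for i in range(n) for j in range(n))
```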
By taking complements of both sides of (1) we deduce that

$$\bigcap_j \Bigl(\bigcup_{k \neq i} a_{kj}\Bigr) = 0.$$
We can now write the expression on the left as a union of intersections, each of which must vanish and each of which is of the form

$$a_{k_1 1} \cap a_{k_2 2} \cap \cdots \cap a_{k_n n},$$

in which the $k_1, \ldots, k_n$ cannot all be distinct, since none of them can take some particular value $i$. As $i$ is arbitrary, any selection of $k_1, \ldots, k_n$ is indeed possible provided we stipulate that at least two of them have the same value. Then the union of all such terms which have $k_1 = k_2 = k$ must vanish, and the repeated use of the distributive law yields

$$a_{k1} \cap a_{k2} \cap \Bigl(\bigcup_{k_3} a_{k_3 3}\Bigr) \cap \cdots \cap \Bigl(\bigcup_{k_n} a_{k_n n}\Bigr) = 0,$$

which, in view of (3), yields $a_{k1} \cap a_{k2} = 0$. In the same manner we can prove that

$$a_{ki} \cap a_{kj} = 0 \quad (i \neq j). \tag{4}$$
It follows that (3) and (4) are consequences of (1) and (2). Conversely, (1) and (2) are consequences of (3) and (4), as may be seen by repeating the foregoing argument using the transposed matrix $A^T$ in place of $A$. We have therefore established the following result:
THEOREM. The relations (1) and (2) imply and are implied by the relations (3) and (4).
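Over the two-element Boolean algebra the theorem admits an exhaustive confirmation for small orders; the following illustrative sketch assumes that specialisation.

```python
from itertools import product

def conditions_1_and_2(A):
    # (1): ∪_j a_ij = 1 for every i;  (2): a_ij ∩ a_kj = 0 for i ≠ k
    n = len(A)
    return (all(max(row) == 1 for row in A)
            and all((A[i][j] & A[k][j]) == 0
                    for j in range(n) for i in range(n)
                    for k in range(n) if i != k))

def transpose(A):
    return [list(col) for col in zip(*A)]

n = 3
for bits in product([0, 1], repeat=n * n):
    A = [list(bits[i * n:(i + 1) * n]) for i in range(n)]
    # (3) and (4) are exactly (1) and (2) for the transposed matrix.
    assert conditions_1_and_2(A) == conditions_1_and_2(transpose(A))
```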
3. Suppose now that

$$BC^T = I, \tag{5}$$

where $B$, $C$ are square Boolean matrices each of order $n$. For notational convenience it is desirable to write the second factor on the left as $C^T$ rather than $C$. The matrix equation (5) implies the following relations between the matrix elements:

$$\bigcup_j (b_{ij} \cap c_{ij}) = 1 \quad (i = 1, \ldots, n); \tag{6}$$

$$\bigcup_j (b_{ij} \cap c_{kj}) = 0 \quad (i \neq k). \tag{7}$$
If we write $a_{ij} = b_{ij} \cap c_{ij}$, then evidently

$$a_{ij} \cap a_{kj} \leq b_{ij} \cap c_{kj} = 0 \quad (i \neq k), \tag{8}$$

the right member vanishing because each term of the union in (7) must vanish separately. It is now clear from (6) and (8) that the matrix $A$, which can be defined as $B \wedge C$, satisfies (1) and (2) and must also satisfy (3) and (4). Furthermore, since $a_{ij} \leq b_{ij}$ and $a_{ij} \leq c_{ij}$, we can obtain from (3) the relations
$$\bigcup_i b_{ij} = 1, \qquad \bigcup_i c_{ij} = 1 \quad (j = 1, \ldots, n).$$
4. We conclude with a few remarks showing the relationship of the foregoing to Wedderburn's remark concerning $|\bar A|$. According to our definition, $\bar a_{ij} \leq a_{ij}$. From the isotone property it is evident that

$$|\bar A| \leq |A| = \bigcup_i (a_{ij} \cap A_{ij}) \leq \bigcup_i a_{ij},$$

in which $A_{ij}$ is the cofactor of $a_{ij}$ in $|A|$. Thus $|\bar A| = 1$ implies (3). A similar argument shows that $|\bar A| = 1$ implies that $\bigcup_j \bar a_{ij} = 1$. Now $\bar a_{ij} \leq a'_{ik}$ for all values of $j$ except $k$, whence

$$1 = \bigcup_j \bar a_{ij} \leq \bar a_{ik} \cup a'_{ik} \leq (a_{ik} \cap a'_{il}) \cup a'_{ik} = (a_{ik} \cap a_{il})',$$

as $\bar a_{ik} \leq a_{ik} \cap a'_{il}$ for any $l$ not equal to $k$. Thus $a_{ik} \cap a_{il} = 0$ if $k \neq l$, which is (4). We have now proved that (3) and (4), and therefore (1) and (2), are consequences of $|\bar A| = 1$. Conversely, (1) and (4) show that $a'_{ij} = \bigcup_{k \neq j} a_{ik}$, whence $a_{ij} = \bigcap_{k \neq j} a'_{ik}$, showing that $\bar A = A$.
On the other hand we proved earlier that (1) and (2) imply that $|A| = 1$. It follows that, if (1), (2) and (4) are valid, then $|\bar A| = |A| = 1$. Thus $|\bar A| = 1$ is a necessary and sufficient condition that $A$ should possess an inverse.
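This criterion may be confirmed exhaustively over {0, 1}, combining Luce's orthogonality test quoted in Section 1 with the two determinant definitions; the following illustrative sketch repeats the requisite helper functions so that it is self-contained.

```python
from itertools import permutations, product

def bmul(A, B):
    n = len(A)
    return [[max(A[i][j] & B[j][k] for j in range(n)) for k in range(n)]
            for i in range(n)]

def boolean_det(A):
    n = len(A)
    return max(min(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

def wedderburn_bar(A):
    n = len(A)
    return [[A[i][j] & all(1 - A[i][k] for k in range(n) if k != j)
             for j in range(n)] for i in range(n)]

n = 3
I = [[int(i == k) for k in range(n)] for i in range(n)]
for bits in product([0, 1], repeat=n * n):
    A = [list(bits[i * n:(i + 1) * n]) for i in range(n)]
    T = [list(col) for col in zip(*A)]
    invertible = bmul(A, T) == I   # Luce: A is invertible iff A A^T = I
    assert invertible == (boolean_det(wedderburn_bar(A)) == 1)
```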
REFERENCES
1. J. H. M. Wedderburn, Boolean linear associative algebra, Ann. of Math. 35 (1934), 185-194.
2. R. D. Luce, A note on Boolean matrix theory, Proc. Amer. Math. Soc. 3 (1952), 382-388.
ST SALVATOR'S COLLEGE,
ST ANDREWS