
Characterizations of Sign Patterns of Inverse-Positive Matrices

Miroslav Fiedler*
Czechoslovak Academy of Sciences
Praha 1, Žitná 25, Czechoslovakia
and
Auburn University
Auburn, Alabama

and

Robert Grone
Auburn University
Auburn, Alabama

Submitted by Richard A. Brualdi

ABSTRACT

A result of Johnson, Leighton, and Robinson characterizing sign patterns of real
matrices with nonzero entries whose inverses are (entrywise) positive is generalized.
The restriction to matrices with nonzero entries is removed, and five additional
equivalent conditions are established. One of them, using a graph-theoretical
approach, expedites a simple criterion for recognition of such sign patterns.

1. INTRODUCTION

For any c ∈ R, the field of real numbers, we define the sign of c via

s(c) =  1 if c > 0,
        0 if c = 0,
       −1 if c < 0,

and for any real matrix A = (a_{ij}) we will define the sign pattern s(A) of A by

s(A) = (s(a_{ij})).

Two real matrices A, B for which s(A) = s(B) will be called sign-equal.
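For illustration, the matrices

A = (  2   0  −3 )        B = (  7   0  −1 )
    ( −1   5   0 ),           ( −2   1   0 )

are sign-equal, with common sign pattern

s(A) = s(B) = (  1   0  −1 )
              ( −1   1   0 ).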

*Work of the first author partially supported under NSF grant number MCS 7912494.

LINEAR ALGEBRA AND ITS APPLICATIONS 40:237-245 (1981)

© Elsevier North Holland, Inc., 1981

In [4], Johnson, Leighton, and Robinson investigated the set of all sign patterns
without zero entries of square matrices whose inverse is positive (entrywise). They
proved a remarkable theorem: a matrix A without zero entries is sign-equal
to an inverse-positive matrix iff A cannot be expressed in the form

A = P ( A_{11}  A_{12} )
      ( A_{21}  A_{22} ) Q,

where P, Q are permutation matrices and A_{12} > 0, A_{21} < 0, while at least one
of these blocks is nonvoid (A_{11}, A_{22} being not necessarily square).
It is the purpose of the present paper to strengthen this theorem by
allowing A to have zero entries as well as by adding five more equivalent
characterizations of these sign patterns. One of them provides a graph-theoretic
characterization which affords an answer to a question raised in [4] about
efficiently recognizing these sign patterns.

2. NOTATION AND PRELIMINARIES

All matrices in this paper will be real. If A is a matrix, we denote by A^T
the transpose of A, and by A^+ the matrix obtained by replacing all negative
entries of A with zeros.
We shall denote by e the column vector of all ones, and by J the n×n
matrix all of whose entries equal one. A matrix D is doubly stochastic if it is
(entrywise) nonnegative and satisfies De = D^T e = e.
An n×n matrix A is said to be irreducible if A cannot be expressed as

P^T A P = ( A_{11}    0    )
          ( A_{21}  A_{22} ),

where A_{11}, A_{22} are square of order at least one and P is an n×n permutation
matrix. In other words, A is irreducible iff for no nonvoid proper subset
M ⊂ {1, 2, ..., n} do we have a_{ij} = 0 whenever i ∈ M and j ∉ M.
Another characterization of irreducibility of A uses the graph-theoretic
approach. The directed graph G(A) of A = (a_{ij}) is the graph with the set of
vertices {1, 2, ..., n} and the set of edges {(i, j) | a_{ij} ≠ 0}. A directed graph is
strongly connected if it contains a directed path from any vertex to any other
vertex. The following is well known [1]:

(2.1) A matrix A is irreducible iff its directed graph G(A) is strongly
connected.
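For example,

A = ( 0  1  0 )
    ( 0  0  1 )
    ( 1  0  0 )

is irreducible, since G(A) is the single directed cycle 1 → 2 → 3 → 1, while

B = ( 1  1 )
    ( 0  1 )

is reducible (take M = {2}); correspondingly, G(B) contains no directed path from
vertex 2 to vertex 1.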

In the sequel, we shall need the following notion. An n×n (n ≥ 3) square
matrix C = (c_{ij}) is called cyclic if there exist mutually distinct indices
i_1, i_2, ..., i_t ∈ {1, ..., n}, t ≥ 3, such that

c_{i_1 i_2} = c_{i_2 i_3} = ... = c_{i_{t−1} i_t} = c_{i_t i_1} = 1,

and c_{ij} = 0 in all other cases. In other words, C is cyclic iff it is a 0-1 matrix
whose directed graph consists of one cycle and maybe some isolated vertices.
For such a cyclic matrix C, Z(C) will denote the matrix Z(C) = (d_{ij}),
d_{ii} = 1 for i = i_k, k = 1, ..., t, and d_{ij} = 0 in all other cases.
In the following proposition, |A| will denote, for a matrix A = (a_{ij}), the
matrix |A| = (|a_{ij}|).

(2.2) Let A = (a_{ij}) be an irreducible n×n matrix with a_{ii} = 0, i = 1, ..., n.
Then there exist cyclic matrices C_1, ..., C_m such that

s( ∑_{i=1}^{m} C_i ) = s(|A|).

Proof. In a strongly connected graph, every edge is contained in a cycle.
Thus it suffices to choose, for each edge of the directed graph G(A), a cycle
containing it, and to assign to each such cycle a cyclic matrix C_i. ∎
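To illustrate (2.2), let

A = ( 0  2  0  0 )
    ( 0  0 −1  3 )
    ( 0  0  0  1 )
    ( 5  0  0  0 ).

Then A is irreducible with zero diagonal, every edge of G(A) lies on one of the
cycles 1 → 2 → 3 → 4 → 1 and 1 → 2 → 4 → 1, and the corresponding cyclic
matrices C_1 (with ones in positions (1,2), (2,3), (3,4), (4,1)) and C_2 (with ones
in positions (1,2), (2,4), (4,1)) satisfy s(C_1 + C_2) = s(|A|).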

An n×n matrix A is said to be fully indecomposable if PA is irreducible
for all permutation matrices P. In other words, A is fully indecomposable iff A
cannot be expressed in the form

A = P ( A_{11}    0    )
      ( A_{21}  A_{22} ) Q,

where A_{11}, A_{22} are square of order at least one and P, Q are permutation
matrices.
The following is well known [5]:

(2.3) Let A be a fully indecomposable matrix. Then there exists a
permutation matrix P such that PA has all diagonal entries different from
zero.

Finally, we recall the notion of an M-matrix. A square matrix A is an
M-matrix if it can be expressed as

A = kI − B,

where B ≥ 0 and k ≥ ρ(B), the spectral radius (defined as the maximum
modulus of an eigenvalue) of B. We shall need the following characterization
[3]:

(2.4) A square matrix A with all off-diagonal entries nonpositive is an
M-matrix iff for some positive vector u, Au ≥ 0.
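For example,

A = (  2  −1 )
    ( −1   2 )

is an M-matrix: A = 3I − B with B = ( 1 1 ; 1 1 ) ≥ 0 and ρ(B) = 2 ≤ 3;
equivalently, its off-diagonal entries are nonpositive and Au = (1, 1)^T ≥ 0 for the
positive vector u = e, so (2.4) applies. Taking k = ρ(B) = 2 instead gives the
singular M-matrix ( 1 −1 ; −1 1 ); singular M-matrices of this kind appear in the
proof of the theorem below.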

3. RESULTS

We shall now prove a theorem which comprises several characterizations
of sign patterns of inverse-positive matrices. It is clear that the assumption
of full indecomposability of S is not restrictive: if A^{-1} > 0, then A is
necessarily fully indecomposable, since a decomposition of A of the form
excluded above would force a zero block in A^{-1}.

THEOREM. Let S be a fully indecomposable n×n matrix with entries in
{0, 1, −1}. Then the following are equivalent:

(i) there exists a matrix A such that s(A) = S and A^{-1} > 0;
(ii) S cannot be expressed in the form

S = P ( S_{11}  S_{12} )
      ( S_{21}  S_{22} ) Q

(S_{11} need not be square), where P, Q are permutation matrices and S_{12} ≥ 0,
S_{21} ≤ 0, at least one of these two blocks being nonvoid;
(iii) the matrix

(    0        S^+ )
( (−S^T)^+    0   )

is irreducible;
(iv) there exists a matrix A such that s(A) = S and Ae = A^T e = 0;
(v) there exists a matrix A such that s(A) = S, Ae = A^T e = 0, and the rank
of A is n − 1;
(vi) there exists a doubly stochastic matrix D such that s(D − (1/n)J) = S;
(vii) there exist doubly stochastic matrices D_1, D_2 such that s(D_1 − D_2) = S.

Proof. We shall prove that (i) ⇒ (ii) ⇒ (iii) ⇒ (iv) ⇒ (v) ⇒ (i) and
(iv) ⇒ (vi) ⇒ (vii) ⇒ (iv).
(i) ⇒ (ii): Let S satisfy (i) and not (ii), i.e.

S = P ( S_{11}  S_{12} )
      ( S_{21}  S_{22} ) Q,

where S_{12} ≥ 0, S_{21} ≤ 0, and at least one of these blocks is nonvoid. Let
P^T A Q^T, A being the matrix from (i), be partitioned conformally:

P^T A Q^T = ( A_{11}  A_{12} )
            ( A_{21}  A_{22} ).

Let the partitioning

B = (P^T A Q^T)^{-1} = ( B_{11}  B_{12} )
                       ( B_{21}  B_{22} )

be transposed from that of A. Then

A_{12} ≥ 0,   A_{21} ≤ 0,   B > 0.                                    (1)

Clearly, neither a block row nor a block column of P^T A Q^T can be void.
Premultiplying

A_{11} B_{12} + A_{12} B_{22} = 0

by B_{21}, postmultiplying

B_{21} A_{11} + B_{22} A_{21} = 0

by B_{12}, and subtracting, we obtain

B_{21} A_{12} B_{22} − B_{22} A_{21} B_{12} = 0.

By (1), A_{12} = 0 as well as A_{21} = 0. This, however, contradicts the assumption
that S is fully indecomposable.
(ii) ⇒ (iii): Suppose that the matrix

Z = (    0        S^+ )
    ( (−S^T)^+    0   )

is not irreducible. This means that all entries of Z in some nonvoid and proper
subset M of the set of rows of Z and in the complementary set of columns of
Z are equal to zero. Allowing the rows and columns of S to be permuted, we
can assume that the subset M is formed by the last k rows among the first n
rows of Z and by the last l rows of Z. Let

S = ( S_{11}  S_{12} )
    ( S_{21}  S_{22} )

be a partitioning of S with S_{22} being k×l. Since then

Z = (       0              0         S_{11}^+   S_{12}^+ )
    (       0              0         S_{21}^+   S_{22}^+ )
    ( (−S_{11}^T)^+   (−S_{21}^T)^+      0          0     )
    ( (−S_{12}^T)^+   (−S_{22}^T)^+      0          0     ),

it follows that S_{21}^+ = 0 and (−S_{12}^T)^+ = 0. In other words, S_{21} ≤ 0,
S_{12} ≥ 0. Moreover, S_{12} and S_{21} cannot both be void, since then S_{22} would
be either 0×0, and the number k + l of elements in M would be 0, or n×n, and
k + l would be 2n, a contradiction with the fact that M is a nonvoid and proper
subset of the set of rows of Z. Therefore, (ii) is not true.
(iii) ⇒ (iv): The matrix

Z = (    0        S^+ )
    ( (−S^T)^+    0   )

being irreducible and having zeros on the main diagonal, it follows from (2.2)
that there exist cyclic 2n×2n matrices C_1, ..., C_t such that

s( ∑_{i=1}^{t} C_i ) = s(|Z|).

If the C_i are partitioned conformally with Z,

C_i = (   0      C_{i1} )
      ( C_{i2}     0    ),        i = 1, ..., t,

then, by the definition of a cyclic matrix,

( C_{i1} − C_{i2}^T ) e = ( C_{i1} − C_{i2}^T )^T e = 0.

On the other hand,

s(S) = s(A)   for   A = ∑_{i=1}^{t} ( C_{i1} − C_{i2}^T ),

since no two symmetric entries in Z are both different from zero, and the
same is true of C_i, i = 1, ..., t. Thus the matrix A satisfies (iv).
(iv) ⇒ (v): The proof of this implication is due to A. Berman and
B. D. Saunders [2]. Let Ae = A^T e = 0, s(A) = S. By (2.3), we can suppose
without loss of generality that all diagonal entries of S, and thus of A, are
different from zero. Since even then S is irreducible, there exist, by (2.2),
cyclic matrices C_1, ..., C_t such that for the off-diagonal part H of A,

s( ∑_{i=1}^{t} C_i ) = s(|H|).

Let M be the matrix

M = ∑_{i=1}^{t} Z(C_i) − ∑_{i=1}^{t} C_i

in the notation of Section 2. Clearly,

Me = M^T e = 0,                                                        (2)

and for some ξ > 0,

s(A + εM) = s(A)

for all ε ∈ [0, ξ]. Let us show that there exists an ε_0 ∈ [0, ξ] for which the
matrix A + ε_0 M has rank n − 1. Since (A + εM)e = 0 for all ε, the rank of
A + εM is always at most n − 1. By (2) and (2.4), M is a singular M-matrix with
a simple eigenvalue 0. Consequently, M has rank n − 1. Let M_1 be an
(n − 1)×(n − 1) nonsingular submatrix of M, and A_1 the corresponding
submatrix of A. Since there are at most n − 1 values of ε for which det(A_1 +
εM_1) = 0, the rank of A + ε_0 M is n − 1 for some ε_0 ∈ [0, ξ].

(v) ⇒ (i): Let A be a matrix of rank n − 1 for which s(A) = S, Ae = A^T e = 0.
Therefore, the adjoint matrix of A satisfies

adj A = αJ ≠ 0.

Since we may permute rows as well as columns of A, we may assume that
a_{11} ≠ 0. If E_{11} denotes the matrix with a 1 in the (1,1) position and 0's
elsewhere, we see by expansion by minors that

det(A + εE_{11}) = εα,

and therefore, for ε ≠ 0,

(A + εE_{11})^{-1} = (1/(εα)) adj(A + εE_{11}).

Choosing ε sufficiently small and such that εα > 0, we obtain (A + εE_{11})^{-1} > 0
and s(A + εE_{11}) = s(A). Thus, the matrix A + εE_{11} satisfies (i).
(iv) ⇒ (vi): Let A satisfy s(A) = S, Ae = A^T e = 0. There exists an ε > 0 such
that the matrix

D = (1/n) J + εA

is nonnegative. Then D is doubly stochastic and

s( D − (1/n) J ) = s(A) = S.
(vi) ⇒ (vii): Trivial.
(vii) ⇒ (iv): If D_1, D_2 are doubly stochastic matrices satisfying s(D_1 − D_2)
= S, then A = D_1 − D_2 satisfies (iv).
The proof is complete. ∎
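As a simple illustration of the theorem, take n = 2 and

S = (  1  −1 )
    ( −1   1 ).

The matrix A = S itself satisfies Ae = A^T e = 0 and has rank 1 = n − 1, so (iv)
and (v) hold; D = (1/2)J + (1/4)A = ( 3/4 1/4 ; 1/4 3/4 ) is doubly stochastic with
s(D − (1/2)J) = S, giving (vi); and for ε > 0,

(A + εE_{11})^{-1} = (1/ε) ( 1    1    )
                           ( 1  1 + ε )  > 0,

so (i) holds as well. By contrast, the pattern ( 1 1 ; −1 1 ) is already of the form
excluded in (ii) (with P = Q = I and 1×1 off-diagonal blocks), and hence is not
the sign pattern of any inverse-positive matrix.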

REMARK. The characterization (iii) being purely combinatorial, one can
use well-known procedures (e.g. graph-theoretical) for recognizing whether
the given sign pattern S is a sign pattern of an inverse-positive matrix or not.
This answers a question raised in [4].
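In concrete terms, by (iii) and (2.1) it suffices to test whether a digraph on 2n
vertices, with one arc for each nonzero entry of S, is strongly connected, which can
be done by two graph searches. The following is a minimal sketch of such a
procedure for a fully indecomposable pattern S, given as a list of rows with entries
in {0, 1, −1}; the function and variable names are illustrative only.

```python
def is_inverse_positive_pattern(S):
    """Test characterization (iii): strong connectivity of the digraph of
    the 2n x 2n matrix ( 0, S^+ ; (-S^T)^+, 0 )."""
    n = len(S)
    adj = [[] for _ in range(2 * n)]   # vertices 0..n-1: rows of S, n..2n-1: columns of S
    for i in range(n):
        for j in range(n):
            if S[i][j] == 1:           # positive entry: edge  row i -> column j
                adj[i].append(n + j)
            elif S[i][j] == -1:        # negative entry: edge  column j -> row i
                adj[n + j].append(i)

    def reaches_all(start, graph):
        # depth-first search; True iff every vertex is reachable from `start`
        seen, stack = {start}, [start]
        while stack:
            for w in graph[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == 2 * n

    reverse = [[] for _ in range(2 * n)]
    for u in range(2 * n):
        for w in adj[u]:
            reverse[w].append(u)
    # strongly connected iff every vertex is reachable from vertex 0
    # and vertex 0 is reachable from every vertex (search the reverse graph)
    return reaches_all(0, adj) and reaches_all(0, reverse)

# The two 2 x 2 patterns discussed above:
print(is_inverse_positive_pattern([[1, -1], [-1, 1]]))   # True
print(is_inverse_positive_pattern([[1, 1], [-1, 1]]))    # False
```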

REFERENCES

1  A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences,
   Academic Press, 1979.
2  A. Berman and B. D. Saunders, Matrices with zero line sums and maximal rank,
   Linear Algebra and Appl. 40:229-235 (1981).
3  M. Fiedler and V. Pták, On matrices with nonpositive off-diagonal elements and
   positive principal minors, Czechoslovak Math. J. 12:382-400 (1962).
4  C. R. Johnson, F. T. Leighton, and H. A. Robinson, Sign patterns of inverse-positive
   matrices, Linear Algebra and Appl. 24:75-83 (1979).
5  H. Schneider, The concepts of irreducibility and full indecomposability in the
   works of Frobenius, König and Markov, Linear Algebra and Appl. 18:139-162 (1977).
