Combinatorial Matrix Classes 1st Edition Richard A Brualdi



ENCYCLOPEDIA OF MATHEMATICS AND ITS APPLICATIONS
FOUNDED BY G.-C. ROTA
Editorial Board
P. Flajolet, M. Ismail, E. Lutwak
Volume 108
Combinatorial Matrix Classes

ENCYCLOPEDIA OF MATHEMATICS AND ITS APPLICATIONS
FOUNDING EDITOR G.-C. ROTA
Editorial Board
P. Flajolet, M. Ismail, E. Lutwak
40 N. White (ed.) Matroid Applications
41 S. Sakai Operator Algebras in Dynamical Systems
42 W. Hodges Basic Model Theory
43 H. Stahl and V. Totik General Orthogonal Polynomials
45 G. Da Prato and J. Zabczyk Stochastic Equations in Infinite Dimensions
46 A. Björner et al. Oriented Matroids
47 G. Edgar and L. Sucheston Stopping Times and Directed Processes
48 C. Sims Computation with Finitely Presented Groups
49 T. Palmer Banach Algebras and the General Theory of *-Algebras I
50 F. Borceux Handbook of Categorical Algebra I
51 F. Borceux Handbook of Categorical Algebra II
52 F. Borceux Handbook of Categorical Algebra III
53 V. F. Kolchin Random Graphs
54 A. Katok and B. Hasselblatt Introduction to the Modern Theory of Dynamical Systems
55 V. N. Sachkov Combinatorial Methods in Discrete Mathematics
56 V. N. Sachkov Probabilistic Methods in Discrete Mathematics
57 P. M. Cohn Skew Fields
58 R. Gardner Geometric Tomography
59 G. A. Baker, Jr., and P. Graves-Morris Padé Approximants, 2nd edn
60 J. Krajicek Bounded Arithmetic, Propositional Logic, and Complexity Theory
61 H. Groemer Geometric Applications of Fourier Series and Spherical Harmonics
62 H. O. Fattorini Infinite Dimensional Optimization and Control Theory
63 A. C. Thompson Minkowski Geometry
64 R. B. Bapat and T. E. S. Raghavan Nonnegative Matrices with Applications
65 K. Engel Sperner Theory
66 D. Cvetkovic, P. Rowlinson and S. Simic Eigenspaces of Graphs
67 F. Bergeron, G. Labelle and P. Leroux Combinatorial Species and Tree-Like Structures
68 R. Goodman and N. Wallach Representations and Invariants of the Classical Groups
69 T. Beth, D. Jungnickel, and H. Lenz Design Theory I, 2nd edn
70 A. Pietsch and J. Wenzel Orthonormal Systems for Banach Space Geometry
71 G. E. Andrews, R. Askey and R. Roy Special Functions
72 R. Ticciati Quantum Field Theory for Mathematicians
73 M. Stern Semimodular Lattices
74 I. Lasiecka and R. Triggiani Control Theory for Partial Differential Equations I
75 I. Lasiecka and R. Triggiani Control Theory for Partial Differential Equations II
76 A. A. Ivanov Geometry of Sporadic Groups I
77 A. Schinzel Polynomials with Special Regard to Reducibility
78 H. Lenz, T. Beth, and D. Jungnickel Design Theory II, 2nd edn
79 T. Palmer Banach Algebras and the General Theory of *-Algebras II
80 O. Stormark Lie's Structural Approach to PDE Systems
81 C. F. Dunkl and Y. Xu Orthogonal Polynomials of Several Variables
82 J. P. Mayberry The Foundations of Mathematics in the Theory of Sets
83 C. Foias et al. Navier–Stokes Equations and Turbulence
84 B. Polster and G. Steinke Geometries on Surfaces
85 R. B. Paris and D. Kaminski Asymptotics and Mellin–Barnes Integrals
86 R. McEliece The Theory of Information and Coding, 2nd edn
87 B. Magurn Algebraic Introduction to K-Theory
88 T. Mora Solving Polynomial Equation Systems I
89 K. Bichteler Stochastic Integration with Jumps
90 M. Lothaire Algebraic Combinatorics on Words
91 A. A. Ivanov and S. V. Shpectorov Geometry of Sporadic Groups II
92 P. McMullen and E. Schulte Abstract Regular Polytopes
93 G. Gierz et al. Continuous Lattices and Domains
94 S. Finch Mathematical Constants
95 Y. Jabri The Mountain Pass Theorem
96 G. Gasper and M. Rahman Basic Hypergeometric Series, 2nd edn
97 M. C. Pedicchio and W. Tholen (eds.) Categorical Foundations
98 M. Ismail Classical and Quantum Orthogonal Polynomials in One Variable
99 T. Mora Solving Polynomial Equation Systems II
100 E. Olivieri and M. E. Vares Large Deviations and Metastability
102 L. W. Beineke and R. J. Wilson (eds.) Topics in Algebraic Graph Theory
103 O. J. Staffans Well-Posed Linear Systems
105 M. Lothaire Applied Combinatorics on Words

ENCYCLOPEDIA OF MATHEMATICS AND ITS APPLICATIONS
Combinatorial Matrix Classes
RICHARD A. BRUALDI
University of Wisconsin, Madison

cambridge university press
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo
Cambridge University Press
The Edinburgh Building, Cambridge CB2 2RU, UK
Published in the United States of America by Cambridge University Press,
New York
www.cambridge.org
Information on this title: www.cambridge.org/9780521865654
© R. A. Brualdi 2006
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without
the written permission of Cambridge University Press.
First published 2006
Printed in the United Kingdom at the University Press, Cambridge
A catalog record for this publication is available from the British Library
ISBN-13 978-0-521-86565-4 hardback
ISBN-10 0-521-86565-4 hardback

Contents

Preface  page ix
1 Introduction  1
1.1 Fundamental Concepts  1
1.2 Combinatorial Parameters  5
1.3 Square Matrices  8
1.4 An Existence Theorem  12
1.5 An Existence Theorem for Symmetric Matrices  14
1.6 Majorization  15
1.7 Doubly Stochastic Matrices and Majorization  19
References  23
2 Basic Existence Theorems for Matrices with Prescribed Properties  25
2.1 The Gale–Ryser and Ford–Fulkerson Theorems  25
2.2 Tournament Matrices and Landau's Theorem  32
2.3 Symmetric Matrices  39
References  42
3 The Class A(R, S) of (0,1)-Matrices  45
3.1 A Special Matrix in A(R, S)  45
3.2 Interchanges  50
3.3 The Structure Matrix T(R, S)  57
3.4 Invariant Sets  62
3.5 Term Rank  69
3.6 Widths and Multiplicities  79
3.7 Trace  86
3.8 Chromatic Number  91
3.9 Discrepancy  97
3.10 Rank  100
3.11 Permanent  115
3.12 Determinant  121
References  129
4 More on the Class A(R, S) of (0,1)-Matrices  135
4.1 Cardinality of A(R, S) and the RSK Correspondence  135
4.2 Irreducible Matrices in A(R, S)  163
4.3 Fully Indecomposable Matrices in A(R, S)  169
4.4 A(R, S) and Z+(R, S) with Restricted Positions  174
4.5 The Bruhat Order on A(R, S)  190
4.6 The Integral Lattice L(R, S)  204
4.7 Appendix  210
References  215
5 The Class T(R) of Tournament Matrices  219
5.1 Algorithm for a Matrix in T(R)  219
5.2 Basic Properties of Tournament Matrices  222
5.3 Landau's Inequalities  227
5.4 A Special Matrix in T(R)  230
5.5 Interchanges  234
5.6 Upsets in Tournaments  238
5.7 Extreme Values of υ̃(R) and ῡ(R)  247
5.8 Cardinality of T(R)  256
5.9 The Class T(R; 2) of 2-Tournament Matrices  267
5.10 The Class A(R, ∗)_0 of (0,1)-Matrices  274
References  282
6 Interchange Graphs  285
6.1 Diameter of Interchange Graphs G(R, S)  285
6.2 Connectivity of Interchange Graphs  291
6.3 Other Properties of Interchange Graphs  295
6.4 The ∆-Interchange Graph G_∆(R)  300
6.5 Random Generation of Matrices in A(R, S) and T(R)  305
References  308
7 Classes of Symmetric Integral Matrices  311
7.1 Symmetric Interchanges  311
7.2 Algorithms for Symmetric Matrices  314
7.3 The Class A(R)_0  322
7.4 The Class A(R)  331
References  334
8 Convex Polytopes of Matrices  337
8.1 Transportation Polytopes  337
8.2 Symmetric Transportation Polytopes  348
8.3 Term Rank and Permanent  356
8.4 Faces of Transportation Polytopes  365
References  376
9 Doubly Stochastic Matrices  379
9.1 Random Functions  379
9.2 Basic Properties  380
9.3 Faces of the Assignment Polytope  385
9.4 Graph of the Assignment Polytope  403
9.5 Majorization Polytopes  417
9.6 A Special Subpolytope of Ω_n  426
9.7 The Even Subpolytope of Ω_n  433
9.8 Doubly Substochastic and Superstochastic Matrices  448
9.9 Symmetric Assignment Polytope  453
9.10 Doubly Stochastic Automorphisms  457
9.11 Diagonal Equivalence  464
9.12 Applications of Doubly Stochastic Matrices  471
9.13 Permanent of Doubly Stochastic Matrices  482
9.14 Additional Related Results  495
References  500
Master Bibliography  511
Index  536

Preface
In the preface of the book Combinatorial Matrix Theory¹ (CMT) I discussed
my plan to write a second volume entitled Combinatorial Matrix
Classes. Here 15 years later (including 6, to my mind, wonderful years
as Department of Mathematics Chair at UW-Madison), and to my great
relief, is the finished product. What I proposed as topics to be covered in
a second volume were, in retrospect, much too ambitious. Indeed, after
some distance from the first volume, it now seems like a plan for a book
series rather than for a second volume. I decided to concentrate on topics
that I was most familiar with and that have been a source of much research
inspiration for me. Having made this decision, there was more than enough
basic material to be covered. Most of the material in the book has never
appeared in book form, and as a result, I hope that it will be useful to both
current researchers and aspirant researchers in the field. I have tried to be
as complete as possible with those matrix classes that I have treated, and
thus I also hope that the book will be a useful reference book.
I started the serious writing of this book in the summer of 2000 and
continued, while on sabbatical, through the following semester. I made
good progress during those six months. Thereafter, with my many teaching,
research, editorial, and other professional and university responsibilities, I
managed to work on the book only sporadically. But after 5 years, I was
able to complete it or, if one considers the topics mentioned in the preface
of CMT, one might say I simply stopped writing. But that is not the way
I feel. I think, and I hope others will agree, that the collection of matrix
classes developed in the book fit together nicely and indeed form a coherent
whole with no glaring omissions. Except for a few references to CMT, the
book is self-contained.
My primary inspiration for combinatorial matrix classes has come from
two important contributors, Herb Ryser and Ray Fulkerson. In a real sense,
with their seminal and early research, they are the “fathers” of the sub-
ject. Herb Ryser was my thesis advisor and I first learned about the class
A(R, S), which occupies a very prominent place in this book, in the fall of
1962 when I was a graduate student at Syracuse University (New York).
¹ Authored by Richard A. Brualdi and Herbert J. Ryser and published by Cambridge University Press in 1991.

In addition, some very famous mathematicians have made seminal con-
tributions that have directly or indirectly impacted the study of matrix
classes. With the great risk of offending someone, let me mention only
Claude Berge, Garrett Birkhoff, David Gale, Alan Hoffman, D. König, Victor
Klee, Donald Knuth, H.G. Landau, Leon Mirsky, and Bill Tutte. To
these people, and all others who have contributed, I bow my head and say
a heartfelt thank-you for your inspiration.
As I write this preface in the summer of 2005, I have just finished my
40th year as a member of the Department of Mathematics of the University
of Wisconsin in Madison. I have been fortunate in my career to be a member
of a very congenial department that, by virtue of its faculty and staff,
provides such a wonderful atmosphere in which to work, and that takes
teaching, research, and service all very seriously. It has also been my good
fortune to have collaborated with my graduate students, and postdoctoral
fellows, over the years, many of whom have contributed to one or more of
the matrix classes treated in this book. I am indebted to Geir Dahl who
read a good portion of this book and provided me with valuable comments.
My biggest source of support these last 10 years has been my wife Mona.
Her encouragement and love have been so important to me.
Richard A. Brualdi
Madison, Wisconsin

1
Introduction
In this chapter we introduce some concepts and theorems that are impor-
tant for the rest of this book. Much, but not all, of this material can be
found in the book [4]. In general, we have included proofs of theorems only
when they do not appear in [4]. The proof of Theorem 1.7.1 is an excep-
tion, since we give here a much different proof. We have not included all
the basic terminology that we make use of (e.g. graph-theoretic terminol-
ogy), expecting the reader either to be familiar with such terminology or
to consult [4] or other standard references.
1.1 Fundamental Concepts
Let

A = [a_ij]  (i = 1, 2, ..., m; j = 1, 2, ..., n)

be a matrix of m rows and n columns. We say that A is of size m by n, and we also refer to A as an m by n matrix. If m = n, then A is a square matrix of order n. The elements of the matrix A are always real numbers and usually are nonnegative real numbers. In fact, the elements are sometimes restricted to be nonnegative integers, and often they are restricted to be either 0 or 1. The matrix A is composed of m row vectors α_1, α_2, ..., α_m and n column vectors β_1, β_2, ..., β_n, and we write

A = [ α_1
      α_2
      ...
      α_m ] = [ β_1 β_2 ... β_n ].

It is sometimes convenient to refer to either a row or column of the matrix A as a line of A. We use the notation A^T for the transpose of the matrix A. If A = A^T, then A is a square matrix and is symmetric.

A zero matrix is always designated by O, a matrix with every entry equal to 1 by J, and an identity matrix by I. In order to emphasize the size of these matrices we sometimes include subscripts. Thus J_{m,n} denotes the m by n matrix of all 1's, and this is shortened to J_n if m = n. The notations O_{m,n}, O_n, and I_n have similar meanings.

A submatrix of A is specified by choosing a subset of the row index set of A and a subset of the column index set of A. Let I ⊆ {1, 2, ..., m} and J ⊆ {1, 2, ..., n}. Let Ī = {1, 2, ..., m} \ I denote the complement of I in {1, 2, ..., m}, and let J̄ = {1, 2, ..., n} \ J denote the complement of J in {1, 2, ..., n}. Then we use the following notations to denote submatrices of A:

A[I, J] = [a_ij : i ∈ I, j ∈ J],
A(I, J] = [a_ij : i ∈ Ī, j ∈ J],
A[I, J) = [a_ij : i ∈ I, j ∈ J̄],
A(I, J) = [a_ij : i ∈ Ī, j ∈ J̄],
A[I, ·] = A[I, {1, 2, ..., n}],
A[·, J] = A[{1, 2, ..., m}, J],
A(I, ·] = A[Ī, {1, 2, ..., n}], and
A[·, J) = A[{1, 2, ..., m}, J̄].

These submatrices are allowed to be empty. If I = {i} and J = {j}, then we abbreviate A(I, J) by A(i, j).

We have the following partitioned forms of A:

A = [ A[I, J]   A[I, J)
      A(I, J]   A(I, J) ],    A = [ A[I, ·]
                                    A(I, ·] ],

and

A = [ A[·, J]   A[·, J) ].
The n! permutation matrices of order n are obtained from I_n by arbitrary permutations of its rows (or of its columns). Let π = (π_1, π_2, ..., π_n) be a permutation of {1, 2, ..., n}. Then π corresponds to the permutation matrix P_π = [p_ij] of order n in which p_{iπ_i} = 1 (i = 1, 2, ..., n) and all other p_ij = 0. The permutation matrix corresponding to the inverse π^{−1} of π is P_π^T. It thus follows that P_π^{−1} = P_π^T, and thus an arbitrary permutation matrix P of order n satisfies the matrix equation

P P^T = P^T P = I_n.

Let A be a square matrix of order n. Then the matrix P A P^T is similar to A. If we let Q be the permutation matrix P^T, then P A P^T = Q^T A Q. The row vectors of the matrix P_π A are α_{π_1}, α_{π_2}, ..., α_{π_n}. The column vectors of A P_π are β_{π'_1}, β_{π'_2}, ..., β_{π'_n} where π^{−1} = (π'_1, π'_2, ..., π'_n). The column vectors of A P_π^T are β_{π_1}, β_{π_2}, ..., β_{π_n}. Thus if P is a permutation matrix, the matrix P A P^T is obtained from A by simultaneous permutations of its rows and columns. More generally, if A is an m by n matrix and P and Q are permutation matrices of orders m and n, respectively, then the matrix PAQ is a matrix obtained from A by arbitrary permutations of its rows and columns.
Let A = [a_ij] be a matrix of size m by n. The pattern (or nonzero pattern) of A is the set

P(A) = {(i, j) : a_ij ≠ 0, i = 1, 2, ..., m, j = 1, 2, ..., n}

of positions of A containing a nonzero element.

With the m by n matrix A = [a_ij] we associate a combinatorial configuration that depends only on the pattern of A. Let X = {x_1, x_2, ..., x_n} be a nonempty set of n elements. We call X an n-set. Let

X_i = {x_j : a_ij ≠ 0, j = 1, 2, ..., n}  (i = 1, 2, ..., m).

The collection of m not necessarily distinct subsets X_1, X_2, ..., X_m of the n-set X is the configuration associated with A. If P and Q are permutation matrices of orders m and n, respectively, then the configuration associated with PAQ is obtained from the configuration associated with A by relabeling the elements of X and reordering the sets X_1, X_2, ..., X_m. Conversely, given a nonempty configuration X_1, X_2, ..., X_m of m subsets of the nonempty n-set X = {x_1, x_2, ..., x_n}, we associate an m by n matrix A = [a_ij] of 0's and 1's, where a_ij = 1 if and only if x_j ∈ X_i (i = 1, 2, ..., m; j = 1, 2, ..., n).

The configuration associated with the m by n matrix A = [a_ij] furnishes a particular way to represent the structure of the nonzeros of A. We may view a configuration as a hypergraph [1] with vertex set X and hyperedges X_1, X_2, ..., X_m. This hypergraph may have repeated edges, that is, two or more hyperedges may be composed of the same set of vertices. The edge–vertex incidence matrix of a hypergraph H with vertex set X = {x_1, x_2, ..., x_n} and edges X_1, X_2, ..., X_m is the m by n matrix A = [a_ij] of 0's and 1's in which a_ij = 1 if and only if x_j is a vertex of edge X_i (i = 1, 2, ..., m; j = 1, 2, ..., n). Notice that the hypergraph (configuration) associated with A is the original hypergraph H. If A has exactly two 1's in each row, then A is the edge–vertex incidence matrix of a multigraph, where a pair of distinct vertices may be joined by more than one edge. If no two rows of A are identical, then this multigraph is a graph.

Another way to represent the structure of the nonzeros of a matrix is by a bipartite graph. Let U = {u_1, u_2, ..., u_m} and W = {w_1, w_2, ..., w_n} be sets of cardinality m and n, respectively, such that U ∩ W = ∅. The bipartite graph associated with A is the graph BG(A) with vertex set V = U ∪ W whose edges are all the pairs {u_i, w_j} for which a_ij ≠ 0. The pair {U, W} is the bipartition of BG(A).
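As a concrete illustration of these constructions, the following minimal Python sketch (the function names are ours, purely illustrative) builds the pattern P(A) and the edge list of BG(A) from a matrix given as a list of rows.

```python
# Minimal sketch: pattern and bipartite graph of a matrix A (0-based indices).
def pattern(A):
    """Return the set of positions (i, j) with a_ij != 0."""
    return {(i, j) for i, row in enumerate(A) for j, a in enumerate(row) if a != 0}

def bipartite_graph(A):
    """Return the edges {u_i, w_j} of BG(A) as pairs ('u', i), ('w', j)."""
    return [(('u', i), ('w', j)) for (i, j) in sorted(pattern(A))]

A = [[1, 0, 2],
     [0, 3, 0]]
print(pattern(A))          # {(0, 0), (0, 2), (1, 1)}
print(bipartite_graph(A))  # three edges joining row vertices to column vertices
```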
Now assume that A is a nonnegative integral matrix, that is, the elements of A are nonnegative integers. We may then associate with A a bipartite multigraph BMG(A) with the same vertex set V bipartitioned as above into U and W. In BMG(A) there are a_ij edges of the form {u_i, w_j} (i = 1, 2, ..., m; j = 1, 2, ..., n). Notice that if A is a (0,1)-matrix, that is, each entry is either a 0 or a 1, then the bipartite multigraph BMG(A) is a bipartite graph and coincides with the bipartite graph BG(A). Conversely, let BMG be a bipartite multigraph with bipartitioned vertex set V = {U, W} where U and W are as above. The bipartite adjacency matrix of BMG, abbreviated bi-adjacency matrix, is the m by n matrix A = [a_ij] where a_ij equals the number of edges of the form {u_i, w_j} (the multiplicity of (u_i, w_j)) (i = 1, 2, ..., m; j = 1, 2, ..., n). Notice that BMG(A) is the original bipartite multigraph BMG.

An m by n matrix A is called decomposable provided there exist nonnegative integers p and q with 0 < p + q < m + n and permutation matrices P and Q such that PAQ is a direct sum A_1 ⊕ A_2 where A_1 is of size p by q. The conditions on p and q imply that the matrices A_1 and A_2 may be vacuous¹ but each of them contains either a row or a column. The matrix A is indecomposable provided it is not decomposable. The bipartite graph BG(A) is connected if and only if A is indecomposable.

Assume that the matrix A = [a_ij] is square of order n. We may represent its nonzero structure by a digraph D(A). The vertex set of D(A) is taken to be an n-set V = {v_1, v_2, ..., v_n}. There is an arc (v_i, v_j) from v_i to v_j if and only if a_ij ≠ 0 (i, j = 1, 2, ..., n). Notice that a nonzero diagonal entry of A determines an arc of D(A) from a vertex to itself (a directed loop or di-loop). If A is, in addition, a nonnegative integral matrix, then we associate with A a general digraph GD(A) with vertex set V where there are a_ij arcs of the form (v_i, v_j) (i, j = 1, 2, ..., n). If A is a (0,1)-matrix, then GD(A) is a digraph and coincides with D(A). Conversely, let GD be a general digraph with vertex set V. The adjacency matrix of GD is the nonnegative integral matrix A = [a_ij] of order n where a_ij equals the number of arcs of the form (v_i, v_j) (the multiplicity of (v_i, v_j)) (i, j = 1, 2, ..., n). Notice that GD(A) is the original general digraph GD.
¹ If p + q = 1, then A_1 has either a row but no columns or a column but no rows. A similar conclusion holds for A_2 if p + q = m + n − 1.

Now assume that the matrix A = [a_ij] not only is square but is also symmetric. Then we may represent its nonzero structure by the graph G(A). The vertex set of G(A) is an n-set V = {v_1, v_2, ..., v_n}. There is an edge joining v_i and v_j if and only if a_ij ≠ 0 (i, j = 1, 2, ..., n). A nonzero diagonal entry of A determines an edge joining a vertex to itself, that is, a loop. The graph G(A) can be obtained from the bipartite graph BG(A) by identifying the vertices u_i and w_i and calling the resulting vertex v_i (i = 1, 2, ..., n). If A is, in addition, a nonnegative integral matrix, then we associate with A a general graph GG(A) with vertex set V where there are a_ij edges of the form {v_i, v_j} (i, j = 1, 2, ..., n). If A is a (0,1)-matrix, then GG(A) is a graph and coincides with G(A). Conversely, let GG be a general graph with vertex set V. The adjacency matrix of GG is the nonnegative integral symmetric matrix A = [a_ij] of order n where a_ij equals the number of edges of the form {v_i, v_j} (the multiplicity of {v_i, v_j}) (i, j = 1, 2, ..., n). Notice that GG(A) is the original general graph GG. A general graph with no loops is called a multigraph.

The symmetric matrix A of order n is symmetrically decomposable provided there exists a permutation matrix P such that PAP^T = A_1 ⊕ A_2 where A_1 and A_2 are both matrices of order at least 1; if A is not symmetrically decomposable, then A is symmetrically indecomposable. The matrix A is symmetrically indecomposable if and only if its graph G(A) is connected.

Finally, we remark that if a multigraph MG is bipartite with vertex bipartition {U, W} and A is the adjacency matrix of MG, then there are permutation matrices P and Q such that

PAQ = [ O    C
        C^T  O ]

where C is the bi-adjacency matrix of MG (with respect to the bipartition {U, W}).²

² If G is connected, then the bipartition is unique.
We shall make use of elementary concepts and results from the theory
of graphs and digraphs. We refer to [4], or books on graphs and digraphs,
such as [17], [18], [2], [1], for more information.
1.2 Combinatorial Parameters
In this section we introduce several combinatorial parameters associated
with matrices and review some of their basic properties. In general, by a
combinatorial property or parameter of a matrix we mean a property or
parameter which is invariant under arbitrary permutations of the rows and
columns of the matrix. More information about some of these parameters
can be found in [4].
Let A = [a_ij] be an m by n matrix. The term rank of A is the maximal number ρ = ρ(A) of nonzero elements of A with no two of these elements on a line. The covering number of A is the minimal number κ = κ(A) of lines of A that contain (that is, cover) all the nonzero elements of A. Both ρ and κ are combinatorial parameters. The fundamental minimax theorem of König (see [4]) asserts the equality of these two parameters.

Theorem 1.2.1  ρ(A) = κ(A).

A set of nonzero elements of A with no two on a line corresponds in the bipartite graph BG(A) to a set of edges no two of which have a common vertex, that is, pairwise vertex-disjoint edges or a matching. Thus Theorem 1.2.1 asserts that in a bipartite graph, the maximal number of edges in a matching equals the minimal number of vertices in a subset of the vertex set that meets all edges.
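To make Theorem 1.2.1 concrete, here is a short Python sketch (our own illustration, not taken from the book) that computes the term rank ρ(A) as the size of a maximum matching in BG(A), using the standard augmenting-path method.

```python
def term_rank(A):
    """Maximum number of nonzeros of A, no two on a line (Kuhn's augmenting paths)."""
    m, n = len(A), len(A[0])
    match_col = [-1] * n          # match_col[j] = row matched to column j, or -1

    def augment(i, seen):
        for j in range(n):
            if A[i][j] != 0 and not seen[j]:
                seen[j] = True
                if match_col[j] == -1 or augment(match_col[j], seen):
                    match_col[j] = i
                    return True
        return False

    return sum(augment(i, [False] * n) for i in range(m))

A = [[1, 1, 0],
     [1, 0, 0],
     [1, 0, 0]]
print(term_rank(A))   # 2: the last two rows have 1's only in the first column
```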
Assume that m ≤ n. The permanent of A is defined by

per(A) = Σ a_{1i_1} a_{2i_2} ··· a_{mi_m},

where the summation extends over all sequences i_1, i_2, ..., i_m of distinct integers with 1 ≤ i_k ≤ n (k = 1, 2, ..., m). Thus per(A) equals the sum of all possible products of m elements of A with the property that the elements in each of the products occur on different lines. The permanent of A is invariant under arbitrary permutations of rows and columns of A, that is,

per(PAQ) = per(A), if P and Q are permutation matrices.

If A is a nonnegative matrix, then per(A) > 0 if and only if ρ(A) = m. Thus by Theorem 1.2.1, per(A) = 0 if and only if there are permutation matrices P and Q such that

PAQ = [ A_1   O_{k,l}
        A_21  A_2    ]

for some positive integers k and l with k + l = n + 1. In the case of a square matrix, the permanent function is the same as the determinant function apart from a factor ±1 preceding each of the products in the defining summation. Unlike the determinant, the permanent is, in general, altered by the addition of a multiple of one row to another, and the multiplicative law for the determinant, det(AB) = det(A) det(B), does not hold for the permanent. However, the Laplace expansion of the permanent by a row or column does hold:

per(A) = Σ_{j=1}^{n} a_ij per(A(i, j))  (i = 1, 2, ..., m);
per(A) = Σ_{i=1}^{m} a_ij per(A(i, j))  (j = 1, 2, ..., n).
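The Laplace expansion translates directly into a short recursive routine; the Python sketch below (illustrative only, exponential time) computes per(A) for an m by n matrix with m ≤ n by expanding along the first row.

```python
def per(A):
    """Permanent of an m-by-n matrix (m <= n) via Laplace expansion along row 1."""
    if not A:                       # no rows left: empty product contributes 1
        return 1
    m, n = len(A), len(A[0])
    assert m <= n
    total = 0
    for j in range(n):
        if A[0][j] != 0:
            minor = [row[:j] + row[j + 1:] for row in A[1:]]   # A(1, j)
            total += A[0][j] * per(minor)
    return total

print(per([[1, 1], [1, 1]]))           # 2
print(per([[1, 2, 3], [4, 5, 6]]))     # 58 = 1*5+1*6 + 2*4+2*6 + 3*4+3*5
```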

We now define the widths and heights of a matrix. In order to simplify the language, we restrict ourselves to (0,1)-matrices. Let A = [a_ij] be a (0,1)-matrix of size m by n with r_i 1's in row i (i = 1, 2, ..., m). We call R = (r_1, r_2, ..., r_m) the row sum vector of A. Let α be an integer with 0 ≤ α ≤ r_i, i = 1, 2, ..., m. Consider a subset J ⊆ {1, 2, ..., n} such that each row sum of the m by |J| submatrix

E = A[·, J]

of A is at least equal to α. Then the columns of E determine an α-set of representatives of A. This terminology comes from the fact that in the configuration of subsets X_1, X_2, ..., X_m of X = {x_1, x_2, ..., x_n} associated with A (see Section 1.1), the set Z = {x_j : j ∈ J} satisfies

|Z ∩ X_i| ≥ α  (i = 1, 2, ..., m).

The α-width of A equals the minimal number ε_α = ε_α(A) of columns of A that form an α-set of representatives of A. Clearly, ε_α ≥ α, but we also have

0 = ε_0 < ε_1 < ··· < ε_r    (1.1)

where r is the minimal row sum of A. The widths of A are invariant under row and column permutations.

Let E = A[·, J] be a submatrix of A having at least α 1's in each row and suppose that |J| = ε_α. Then E is a minimal α-width submatrix of A. Let F be the submatrix of E composed of all rows of E that contain exactly α 1's. Then F cannot be an empty matrix. Moreover, F cannot have a zero column, because otherwise we could delete the corresponding column of E and obtain an m by ε_α − 1 submatrix of A with at least α 1's in each row, contradicting the minimality of ε_α. The matrix F is called a critical α-submatrix of A. Each critical α-submatrix of A contains the same number ε_α of columns, but the number of rows need not be the same. The minimal number δ_α = δ_α(A) of rows in a critical α-submatrix of A is called the α-multiplicity of A. We observe that δ_α ≥ 1 and that multiplicities of A are invariant under row and column permutations. Since a critical α-submatrix cannot contain zero columns, we have δ_1 ≥ ε_1.
Let the matrix A have column sum vector S = (s_1, s_2, ..., s_n), and let β be an integer with 0 ≤ β ≤ s_j (1 ≤ j ≤ n). By interchanging rows with columns in the above definition, we may define the β-height of A to be the minimal number t of rows of A such that the corresponding t by n submatrix of A has at least β 1's in each column. Since the β-height of A equals the β-width of A^T, one may restrict attention to widths.
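Since the α-width is a minimization over column subsets, it can be computed by brute force for small matrices; the Python sketch below (our illustration only) does exactly that.

```python
from itertools import combinations

def alpha_width(A, alpha):
    """Smallest number of columns of the (0,1)-matrix A forming an alpha-set of representatives."""
    m, n = len(A), len(A[0])
    for size in range(n + 1):
        for J in combinations(range(n), size):
            if all(sum(A[i][j] for j in J) >= alpha for i in range(m)):
                return size
    return None   # no alpha-set of representatives (alpha exceeds some row sum)

A = [[1, 1, 0, 1],
     [0, 1, 1, 1],
     [1, 0, 1, 1]]
print(alpha_width(A, 1), alpha_width(A, 2))   # 1 and 3; the last column alone is a 1-set
```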
We conclude this section by introducing a parameter that comes from the theory of hypergraphs [1]. Let A be a (0,1)-matrix of size m by n. A (weak) t-coloring of A is a partition of its set of column indices into t sets I_1, I_2, ..., I_t in such a way that if row i contains more than one 1, then {j : a_ij = 1} has a nonempty intersection with at least two of the sets I_1, I_2, ..., I_t (i = 1, 2, ..., m). The sets I_1, I_2, ..., I_t are called the color classes of the t-coloring. The (weak) chromatic number γ(A) of A is the smallest integer t for which A has a t-coloring [3]. In hypergraph terminology the chromatic number is the smallest number of colors in a coloring of the vertices with the property that no edge with more than one vertex is monochromatic. The strong chromatic number of a hypergraph is the smallest number of colors in a coloring of its vertices with the property that no edge contains two vertices of the same color. The strong chromatic number γ_s(A) of A equals the strong chromatic number of the hypergraph associated with A. If A is the edge–vertex incidence matrix of a graph G, then both the weak and strong chromatic numbers of A equal the chromatic number of G.
1.3 Square Matrices
We first consider a canonical form of a square matrix under simultaneous
permutations of its rows and columns.
Let A = [a_ij] be a square matrix of order n. Then A is called reducible provided there exists a permutation matrix P such that PAP^T has the form

[ A_1  A_12
  O    A_2  ]

where A_1 and A_2 are square matrices of order at least 1. The matrix A is irreducible provided that it is not reducible. A matrix of order 1 is always irreducible. Irreducibility of matrices has an equivalent formulation in terms of digraphs. Let D be a digraph. Then D is strongly connected (or strong) provided that for each ordered pair of distinct vertices x, y there is a directed path from x to y. A proof of the following theorem can be found in [4].

Theorem 1.3.1  Let A be a square matrix of order n. Then A is irreducible if and only if the digraph D(A) is strongly connected.
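Theorem 1.3.1 gives a practical test for irreducibility: build D(A) and check strong connectivity. A minimal Python sketch (ours, for illustration) uses two reachability searches from the first vertex.

```python
def is_irreducible(A):
    """A square matrix is irreducible iff its digraph D(A) is strongly connected."""
    n = len(A)

    def reachable(arc):
        seen, stack = {0}, [0]
        while stack:
            v = stack.pop()
            for w in range(n):
                if arc(v, w) and w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == n

    # strongly connected iff every vertex is reachable from v_1 and v_1 from every vertex
    return reachable(lambda v, w: A[v][w] != 0) and reachable(lambda v, w: A[w][v] != 0)

print(is_irreducible([[0, 1], [1, 0]]))   # True
print(is_irreducible([[1, 1], [0, 1]]))   # False: already block upper triangular
```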
If a digraph D is not strongly connected, its vertex set can be partitioned uniquely into nonempty sets each of which induces a maximal strong digraph, called a strong component of D. This leads to the following canonical form with respect to simultaneous row and column permutations [4].
Theorem 1.3.2  Let A be a square matrix of order n. Then there exist a permutation matrix P of order n and an integer t ≥ 1 such that

PAP^T = [ A_1  A_12  ...  A_1t
          O    A_2   ...  A_2t
          ...
          O    O     ...  A_t  ],    (1.2)

where A_1, A_2, ..., A_t are square, irreducible matrices. In (1.2), the matrices A_1, A_2, ..., A_t which occur as diagonal blocks are uniquely determined to within simultaneous permutations of their rows and columns, but their ordering in (1.2) is not necessarily unique.  □
Referring to Theorem 1.3.2, we see that the digraph D(A) is composed of strongly connected digraphs D(A_i) (i = 1, 2, ..., t) and some arcs which go from a vertex in D(A_i) to a vertex in D(A_j) where i < j. The matrix A is irreducible if and only if t = 1. The matrices A_1, A_2, ..., A_t in (1.2) are called the irreducible components of A. By Theorem 1.3.2 the irreducible components of A are uniquely determined to within simultaneous permutations of their rows and columns. The matrix A is irreducible if and only if it has exactly one irreducible component.

In the canonical form (1.2), A is given as a block upper triangular matrix, but by reordering the blocks so that the diagonal blocks occur in the order A_t, ..., A_2, A_1, we could equally well give it as a block lower triangular matrix. Thus we may interchange the use of lower block triangular and upper block triangular.

We now consider a canonical form under arbitrary permutations of rows and columns.
Again let A be a square matrix of order n. If n ≥ 2, then A is called partly decomposable provided there exist permutation matrices P and Q such that PAQ has the form

[ A_1  A_12
  O    A_2  ]

where A_1 and A_2 are square matrices of order at least 1. According to our definition of decomposable applied to square matrices, the matrix A would be decomposable provided in addition A_12 = O. A square matrix of order 1 is partly decomposable provided it is a zero matrix. The matrix A is fully indecomposable provided that it is not partly decomposable. The matrix A is fully indecomposable if and only if it does not have a nonempty zero submatrix O_{k,l} with k + l ≥ n. Each line of a fully indecomposable matrix of order n ≥ 2 contains at least two nonzero elements. The covering number κ(A) of a fully indecomposable matrix of order n equals n. Moreover, if n ≥ 2 and we delete a row and column of A, we obtain a matrix of order n − 1 which has covering number equal to n − 1. Hence from Theorem 1.2.1 we obtain the following result.
Theorem 1.3.3  Let A be a fully indecomposable matrix of order n. Then the term rank ρ(A) of A equals n. If n ≥ 2, then each submatrix of A of order n − 1 has term rank equal to n − 1.  □
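The characterization above (no nonempty k by l zero submatrix with k + l ≥ n) gives a direct, if exponential, test for full indecomposability; the Python sketch below (our illustration, suitable only for small n) applies it.

```python
from itertools import combinations

def is_fully_indecomposable(A):
    """True iff the square matrix A has no k-by-l zero submatrix with k + l >= n (k, l >= 1)."""
    n = len(A)
    if n == 1:
        return A[0][0] != 0
    for k in range(1, n):
        for rows in combinations(range(n), k):
            # columns that are zero in every chosen row
            zero_cols = [j for j in range(n) if all(A[i][j] == 0 for i in rows)]
            if zero_cols and k + len(zero_cols) >= n:
                return False
    return True

print(is_fully_indecomposable([[1, 1, 0], [0, 1, 1], [1, 0, 1]]))  # True
print(is_fully_indecomposable([[1, 1, 0], [0, 1, 1], [0, 1, 1]]))  # False: a 2-by-1 zero block
```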
A collection of n elements (or the positions of those elements) of the square matrix A of order n is called a diagonal provided no two of the elements belong to the same row or column; the diagonal is a nonzero diagonal provided none of its elements equals 0. Thus Theorem 1.3.3 asserts that a fully indecomposable matrix has a nonzero diagonal and, if n ≥ 2, each nonzero element belongs to a nonzero diagonal.
We have the following canonical form with respect to arbitrary row and
column permutations [4].
Theorem 1.3.4  Let A be a matrix of order n with term rank equal to n. Then there exist permutation matrices P and Q of order n and an integer t ≥ 1 such that

PAQ = [ B_1  B_12  ...  B_1t
        O    B_2   ...  B_2t
        ...
        O    O     ...  B_t  ],    (1.3)

where B_1, B_2, ..., B_t are square, fully indecomposable matrices. The matrices B_1, B_2, ..., B_t which occur as diagonal blocks in (1.3) are uniquely determined to within arbitrary permutations of their rows and columns, but their ordering in (1.3) is not necessarily unique.  □

The matrices B_1, B_2, ..., B_t in (1.3) are called the fully indecomposable components of A. By Theorem 1.3.4 the fully indecomposable components of A are uniquely determined to within arbitrary permutations of their rows and columns. The matrix A is fully indecomposable if and only if it has exactly one fully indecomposable component.

As with the canonical form (1.2), in the canonical form (1.3) A is given as a block upper triangular matrix, but by reordering the blocks so that the diagonal blocks occur in the order B_t, ..., B_2, B_1, we could equally well give it as a block lower triangular matrix.

A fully indecomposable matrix of order n ≥ 2 has an inductive structure which can be formulated in terms of (0,1)-matrices (Theorem 4.2.8 in [4]).

Theorem 1.3.5  Let A be a fully indecomposable (0,1)-matrix of order n ≥ 2. There exist permutation matrices P and Q and an integer m ≥ 2 such that

PAQ = [ A_1  O    O    ...  O        E_1
        E_2  A_2  O    ...  O        O
        O    E_3  A_3  ...  O        O
        ...
        O    O    O    ...  A_{m-1}  O
        O    O    O    ...  E_m      A_m ]    (1.4)

where each of the matrices A_1, A_2, ..., A_m is fully indecomposable and each of E_1, E_2, ..., E_m contains at least one 1.  □

Note that a matrix of the form (1.4) satisfying the conditions in the statement of the theorem is always fully indecomposable.

Let A = [a_ij] be a fully indecomposable matrix of order n. Then A is called nearly decomposable provided whenever a nonzero element of A is replaced with a zero, the resulting matrix is not fully indecomposable. Nearly decomposable matrices have an inductive structure which we formulate in terms of (0,1)-matrices [4].

Theorem 1.3.6  Let A be a nearly decomposable (0,1)-matrix of order n ≥ 2. Then there exist permutation matrices P and Q and an integer m with 1 ≤ m < n such that

PAQ = [ 1 0 0 ... 0 0 |
        1 1 0 ... 0 0 |  F_1
        0 1 1 ... 0 0 |
        . . .         |
        0 0 0 ... 1 0 |
        0 0 0 ... 1 1 |
        --------------+-----
              F_2     |  A_1 ]

where

(i) A_1 is a nearly decomposable matrix of order m,
(ii) the matrix F_1 contains exactly one 1 and this 1 is in its first row and last column,
(iii) the matrix F_2 contains exactly one 1 and this 1 is in its last row and last column,
(iv) if m ≠ 1, then m ≥ 3 and the element in the last row and last column of A_1 is a 0.

A simple induction shows that for n ≥ 3, a nearly decomposable (0,1)-matrix of order n has at most 3(n − 1) 1's [4].
The matrix A of order n is said to have total support provided each of its nonzero elements belongs to a nonzero diagonal. A zero matrix has total support as it satisfies the definition vacuously. If A ≠ O, then we have the following result [4].

Theorem 1.3.7  Let A be a square, nonzero matrix of order n. Then A has total support if and only if there are permutation matrices P and Q such that PAQ is a direct sum of fully indecomposable matrices. If A has total support, then A is fully indecomposable if and only if the bipartite graph BG(A) is connected.
In the canonical form (1.3), A has total support if and only if all the off-diagonal blocks B_ij (i < j) are zero matrices.

1.4 An Existence Theorem
Consider nonnegative integral vectors

R = (r_1, r_2, ..., r_m) and R' = (r'_1, r'_2, ..., r'_m),

and

S = (s_1, s_2, ..., s_n) and S' = (s'_1, s'_2, ..., s'_n),

where

r'_i ≤ r_i (i = 1, 2, ..., m) and s'_j ≤ s_j (j = 1, 2, ..., n).

Consider also an m by n nonnegative integral matrix

C = [c_ij]  (i = 1, 2, ..., m; j = 1, 2, ..., n).

With this notation we have the following matrix existence theorem of Mirsky [13, 14].

Theorem 1.4.1  There exists an m by n integral matrix A = [a_ij] whose elements and row and column sums satisfy

0 ≤ a_ij ≤ c_ij  (i = 1, 2, ..., m; j = 1, 2, ..., n),
r'_i ≤ Σ_{j=1}^{n} a_ij ≤ r_i  (i = 1, 2, ..., m),
s'_j ≤ Σ_{i=1}^{m} a_ij ≤ s_j  (j = 1, 2, ..., n)

if and only if for all I ⊆ {1, 2, ..., m} and J ⊆ {1, 2, ..., n},

Σ_{i∈I, j∈J} c_ij ≥ max { Σ_{i∈I} r'_i − Σ_{j∉J} s_j ,  Σ_{j∈J} s'_j − Σ_{i∉I} r_i }.    (1.5)

This theorem is derived by Mirsky using transversal theory, but it can also be regarded as a special case of more general supply–demand theorems in network flow theory [8, 4]. As a result, the theorem remains true if the integrality assumptions on R, R', S, and S' are dropped and the conclusion asserts the existence of a real nonnegative matrix. The integrality assumptions on other theorems in this and the next section can be dropped as well, and the resulting theorems remain valid for the existence of real matrices.

The following corollary is an important special case of Theorem 1.4.1.
Corollary 1.4.2  Let R = (r_1, r_2, ..., r_m) and S = (s_1, s_2, ..., s_n) be nonnegative integral vectors satisfying r_1 + r_2 + ··· + r_m = s_1 + s_2 + ··· + s_n. There exists an m by n nonnegative integral matrix A = [a_ij] whose elements and row and column sums satisfy

0 ≤ a_ij ≤ c_ij  (i = 1, 2, ..., m; j = 1, 2, ..., n),
Σ_{j=1}^{n} a_ij = r_i  (i = 1, 2, ..., m),
Σ_{i=1}^{m} a_ij = s_j  (j = 1, 2, ..., n)

if and only if for all I ⊆ {1, 2, ..., m} and J ⊆ {1, 2, ..., n},

Σ_{i∈I} Σ_{j∈J} c_ij ≥ Σ_{j∈J} s_j − Σ_{i∉I} r_i,    (1.6)

equivalently,

Σ_{i∈I} Σ_{j∈J} c_ij ≥ Σ_{i∈I} r_i − Σ_{j∉J} s_j.    (1.7)
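For small instances the condition (1.6) can be checked by brute force over all pairs of subsets I, J; the following Python sketch (ours, purely illustrative) does this.

```python
from itertools import chain, combinations

def subsets(xs):
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

def bounded_matrix_exists(R, S, C):
    """Check condition (1.6) of Corollary 1.4.2 for row sums R, column sums S, entry bounds C."""
    m, n = len(R), len(S)
    if sum(R) != sum(S):
        return False
    for I in subsets(range(m)):
        for J in subsets(range(n)):
            lhs = sum(C[i][j] for i in I for j in J)
            rhs = sum(S[j] for j in J) - sum(R[i] for i in range(m) if i not in I)
            if lhs < rhs:
                return False
    return True

C = [[1, 1], [1, 1]]
print(bounded_matrix_exists([2, 0], [1, 1], C))   # True: A = [[1,1],[0,0]] works
print(bounded_matrix_exists([2, 2], [3, 1], C))   # False: entry bound 1 caps each column sum at 2
```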

The following corollary is an important special case.
Corollary 1.4.3  Let R = (r_1, r_2, ..., r_m) and S = (s_1, s_2, ..., s_n) be nonnegative integral vectors satisfying r_1 + r_2 + ··· + r_m = s_1 + s_2 + ··· + s_n. Let p be a positive integer. There exists an m by n nonnegative integral matrix A = [a_ij] whose elements and row and column sums satisfy

0 ≤ a_ij ≤ p  (i = 1, 2, ..., m; j = 1, 2, ..., n),
Σ_{j=1}^{n} a_ij = r_i  (i = 1, 2, ..., m) and Σ_{i=1}^{m} a_ij = s_j  (j = 1, 2, ..., n)

if and only if for all I ⊆ {1, 2, ..., m} and J ⊆ {1, 2, ..., n},

p|I||J| ≥ Σ_{j∈J} s_j − Σ_{i∉I} r_i,    (1.8)

equivalently,

p|I||J| ≥ Σ_{i∈I} r_i − Σ_{j∉J} s_j.    (1.9)

If in this corollary we assume that r_1 ≥ r_2 ≥ ··· ≥ r_m and s_1 ≥ s_2 ≥ ··· ≥ s_n, then (1.9) is equivalent to

p k l + Σ_{i=k+1}^{m} r_i − Σ_{j=1}^{l} s_j ≥ 0  (k = 0, 1, ..., m; l = 0, 1, ..., n).    (1.10)

The special case of this corollary with p = 1 is due to Ford and Fulkerson [8] and will be given a short proof in Chapter 2.
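With p = 1 and nonincreasing R and S, the finitely many inequalities (1.10) are easy to check directly; the Python sketch below (our illustration) tests whether a (0,1)-matrix with the given row and column sum vectors exists.

```python
def zero_one_matrix_exists(R, S, p=1):
    """Check condition (1.10); R, S nonincreasing nonnegative integer vectors with equal sums."""
    if sum(R) != sum(S):
        return False
    m, n = len(R), len(S)
    return all(p * k * l + sum(R[k:]) - sum(S[:l]) >= 0
               for k in range(m + 1) for l in range(n + 1))

print(zero_one_matrix_exists([2, 2, 1], [3, 1, 1]))   # True, e.g. rows (1,1,0),(1,0,1),(1,0,0)
print(zero_one_matrix_exists([3, 1], [2, 2, 0, 0]))   # False: fails at k = 1, l = 2
```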

1.5 An Existence Theorem for Symmetric Matrices

Theorem 1.4.1 can be used to obtain existence theorems for symmetric integral matrices. The key for doing this is the following lemma of Fulkerson, Hoffman, and McAndrew [9] for matrices with zero trace and its extension to matrices with no restriction on the trace due to Brualdi and Ryser [4]. We observe that if A = [a_ij] is a symmetric integral matrix of order n each of whose main diagonal elements equals 0, then

Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij = 2 Σ_{1≤i<j≤n} a_ij,

an even number.

Lemma 1.5.1  Let R = (r_1, r_2, ..., r_n) be a nonnegative integral vector and let p be a positive integer. Assume that there is a nonnegative integral matrix B = [b_ij] of order n whose row and column sum vectors equal R and whose elements satisfy b_ij ≤ p (i, j = 1, 2, ..., n). Then there exists a symmetric, nonnegative integral matrix A = [a_ij] of order n whose row and column sum vectors equal R and whose elements satisfy a_ij ≤ p (i, j = 1, 2, ..., n). If r_1 + r_2 + ··· + r_n is an even integer and B has zero trace, then A can also be chosen to have zero trace.

Corollaries 1.4.2 and 1.4.3 and Lemma 1.5.1 immediately give the next theorem [4]. If A is a symmetric matrix of order n, then so is PAP^T for every permutation matrix P of order n. Thus the assumption that R is nonincreasing in the theorem is without loss of generality.

Theorem 1.5.2  Let R = (r_1, r_2, ..., r_n) be a nonincreasing, nonnegative integral vector and let p be a positive integer. There exists a symmetric, nonnegative integral matrix A = [a_ij] whose row and column sum vectors equal R and whose elements satisfy a_ij ≤ p (i, j = 1, 2, ..., n) if and only if

p k l − Σ_{j=1}^{l} r_j + Σ_{i=k+1}^{n} r_i ≥ 0  (k, l = 0, 1, ..., n).

(Note that the domain of k and l in the preceding inequality may be replaced with 0 ≤ l ≤ k ≤ n.)

Corollary 1.5.3  Let R = (r_1, r_2, ..., r_n) be a nonincreasing, nonnegative integral vector.

(i) There exists a symmetric (0,1)-matrix whose row sum vector equals R if and only if

k l − Σ_{j=1}^{l} r_j + Σ_{i=k+1}^{n} r_i ≥ 0  (k, l = 0, 1, ..., n).

(ii) There exists a symmetric (0,1)-matrix with zero trace whose row sum vector equals R if and only if

k l − min{k, l} − Σ_{j=1}^{l} r_j + Σ_{i=k+1}^{n} r_i ≥ 0  (k, l = 0, 1, ..., n).

For symmetric matrices with zero trace we also have the following theorem of Chungphaisan [6] that follows from Theorem 1.4.1 and Lemma 1.5.1. The equivalence of the two conditions in Theorem 1.5.4 is shown in [4].

Theorem 1.5.4  Let R = (r_1, r_2, ..., r_n) be a nonincreasing, nonnegative integral vector such that r_1 + r_2 + ··· + r_n is an even integer, and let p be a positive integer. There exists a symmetric, nonnegative integral matrix A = [a_ij] of order n with zero trace whose row sum vector equals R and whose elements satisfy a_ij ≤ p (i, j = 1, 2, ..., n) if and only if

Σ_{i=1}^{k} r_i ≤ Σ_{i=1}^{k} min{r_i, p(k − 1)} + Σ_{i=k+1}^{n} min{r_i, pk}  (k = 1, 2, ..., n),

equivalently,

p k(k − 1) ≥ Σ_{i=1}^{k} r_i − Σ_{i=k+1}^{n} min{r_i, pk}  (k = 1, 2, ..., n).

Theorem 1.5.4 contains the following theorem of Erdős and Gallai [7] as a special case.

Theorem 1.5.5  Let R = (r_1, r_2, ..., r_n) be a nonincreasing, nonnegative integral vector such that r_1 + r_2 + ··· + r_n is an even integer. There exists a symmetric (0,1)-matrix with zero trace whose row sum vector equals R if and only if

Σ_{i=1}^{k} r_i ≤ k(k − 1) + Σ_{i=k+1}^{n} min{k, r_i}  (k = 1, 2, ..., n).
The theorems of Chungphaisan and of Erdős and Gallai are usually stated in the graph-theoretic language of degree sequences. One can consult [16] for a survey on degree sequences.
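In that language, Theorem 1.5.5 is the Erdős–Gallai criterion for a sequence to be graphic; the short Python sketch below (ours, for illustration) checks it.

```python
def is_graphic(R):
    """Erdős–Gallai test for a nonincreasing, nonnegative integer sequence R."""
    if sum(R) % 2 != 0:
        return False
    n = len(R)
    return all(sum(R[:k]) <= k * (k - 1) + sum(min(k, r) for r in R[k:])
               for k in range(1, n + 1))

print(is_graphic([3, 3, 2, 2, 2]))   # True  (e.g. a 5-cycle with one chord)
print(is_graphic([4, 4, 4, 1, 1]))   # False (fails at k = 3)
```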
1.6 Majorization
Let X = (x_1, x_2, ..., x_n) and Y = (y_1, y_2, ..., y_n) be two n-vectors of real numbers which are assumed to be nonincreasing:

x_1 ≥ x_2 ≥ ··· ≥ x_n and y_1 ≥ y_2 ≥ ··· ≥ y_n.

Then X is majorized by Y, denoted X ⪯ Y, provided that their partial sums satisfy

Σ_{i=1}^{k} x_i ≤ Σ_{i=1}^{k} y_i  (1 ≤ k ≤ n)

with equality for k = n; equivalently,

Σ_{i=k+1}^{n} x_i ≥ Σ_{i=k+1}^{n} y_i  (0 ≤ k ≤ n − 1)

with equality for k = 0.
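The definition translates directly into a partial-sum check; the following Python sketch (our own, assuming the inputs are already nonincreasing as in the definition) tests X ⪯ Y.

```python
from itertools import accumulate

def is_majorized(X, Y):
    """True iff X is majorized by Y; both assumed nonincreasing and of the same size."""
    if sum(X) != sum(Y):
        return False
    return all(px <= py for px, py in zip(accumulate(X), accumulate(Y)))

print(is_majorized([2, 2, 2], [3, 2, 1]))   # True
print(is_majorized([3, 2, 1], [2, 2, 2]))   # False
```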
Let τ = Σ_{i=1}^{n} x_i = Σ_{i=1}^{n} y_i. If X and Y in addition have positive integral elements, then X and Y are partitions of τ. Majorization is a partial order on nonincreasing n-vectors, or nonincreasing vectors with the same sum τ. In the latter case, the n-vector (τ/n, τ/n, ..., τ/n) is the unique minimal element with respect to majorization, and the n-vector (τ, 0, ..., 0) is the unique maximal element.

If the vectors X and Y have different sizes, then we can still compare X and Y for majorization by appending 0's to one of the vectors so as to obtain vectors of the same size. Also the assumption that X and Y are nonincreasing vectors can be eliminated provided we first reorder the components of the vectors so as to be in nonincreasing order before we compute partial sums. Thus if X = (x_1, x_2, ..., x_n) and Y = (y_1, y_2, ..., y_n) are two n-vectors, and X^σ = (x_[1], x_[2], ..., x_[n]) and Y^σ = (y_[1], y_[2], ..., y_[n]) are obtained from X and Y, respectively, by rearranging components in nonincreasing order, then X is majorized by Y provided X^σ is majorized by Y^σ. Under these circumstances majorization is not a partial order since X ⪯ Y and Y ⪯ X imply only that X and Y are rearrangements of one another.

Now let R = (r_1, r_2, ..., r_m) be a nonnegative integral vector of size m, and let n be an integer with r_k ≤ n (k = 1, 2, ..., m). Then the conjugate of R is the nonnegative integral vector R* = (r*_1, r*_2, ..., r*_n) of size³ n where

r*_k = |{i : r_i ≥ k, i = 1, 2, ..., m}|  (k = 1, 2, ..., n).

The conjugate of R has the same sum as R and is always nonincreasing. Moreover, it does not depend on the order of the elements of R. We also note that

Σ_{i=k+1}^{n} r*_i = Σ_{i=1}^{m} (r_i − k)^+  (k ≥ 1),

where for a real number a, a^+ = max{0, a}.

³ There is a certain arbitrariness in the size of the conjugate R* in that it can be any integer which is at least as large as the maximum element of R. This only affects the number of trailing 0's in R*.

The conjugate of R can be pictured as follows. Consider an array of m rows which has r_i 1's in the initial positions of row i (i = 1, 2, ..., m). The conjugate R* of R is the vector of column sums of this array. For example, let R = (5, 4, 2, 1, 1) and n = 5. Then using the array

1 1 1 1 1
1 1 1 1
1 1
1
1

we see that R* = (5, 3, 2, 2, 1).
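The column-sum description gives a one-line computation of the conjugate; the sketch below (ours) reproduces the example above.

```python
def conjugate(R, n=None):
    """Conjugate vector R* of a nonnegative integral vector R (size n >= max(R))."""
    n = max(R) if n is None else n
    return [sum(1 for r in R if r >= k) for k in range(1, n + 1)]

print(conjugate([5, 4, 2, 1, 1]))   # [5, 3, 2, 2, 1]
```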
Conjugation is related to majorization as given in the following elementary result.

Theorem 1.6.1  Let X = (x_1, x_2, ..., x_n) and Y = (y_1, y_2, ..., y_n) be nonincreasing, nonnegative integral vectors with X ⪯ Y. Then Y* ⪯ X*.

Proof.  Because X ⪯ Y we have

Σ_{i=p+1}^{n} x_i ≥ Σ_{i=p+1}^{n} y_i  (p = 0, 1, ..., n − 1).

Let k be a positive integer, and let p be the largest index for which x_p > k. Then

Σ_{i=1}^{k} x*_i = Σ_{i=1}^{n} min{x_i, k} = pk + Σ_{i=p+1}^{n} x_i
               ≥ pk + Σ_{i=p+1}^{n} y_i
               ≥ Σ_{i=1}^{n} min{y_i, k} = Σ_{i=1}^{k} y*_i.

It now follows that Y* ⪯ X*.  □

The following result shows how a majorization between a pair of vectors implies a majorization between pairs of vectors derived from them [10].

Theorem 1.6.2  Let X = (x_1, x_2, ..., x_n) and Y = (y_1, y_2, ..., y_n) be nonincreasing, nonnegative integer vectors with X ⪯ Y. Let X' be obtained from X by reducing p positive elements in positions i_1, i_2, ..., i_p by 1. Also let Y' be obtained from Y by reducing p positive elements in positions j_1, j_2, ..., j_p by 1. If i_1 ≤ j_1, i_2 ≤ j_2, ..., i_p ≤ j_p, then X' ⪯ Y'.

Proof.  We prove the result by induction on p. First assume that p = 1 and set i_1 = i and j_1 = j. By hypothesis, i ≤ j. Let X' = (x'_1, x'_2, ..., x'_n) be obtained from X by replacing x_i with x_i − 1, and let Y' = (y'_1, y'_2, ..., y'_n) be obtained from Y by replacing y_j with y_j − 1. If X' and Y' are nonincreasing vectors, then X' ⪯ Y' follows from X ⪯ Y and i ≤ j.

If X' and Y' are not both nonincreasing, we proceed as follows. First we arrange X' and Y' to be nonincreasing by assuming that component i' of X has been reduced by 1 to get X' and that component j' of Y has been reduced by 1 to get Y'. Here i' ≥ i, j' ≥ j, and

x_i = x_{i+1} = ··· = x_{i'} > x_{i'+1},   y_j = y_{j+1} = ··· = y_{j'} > y_{j'+1},    (1.11)

where x_{i'+1} = 0 if i' = n and y_{j'+1} = 0 if j' = n. If i' ≤ j', then as above X' ⪯ Y' follows from X ⪯ Y. Suppose that i' > j'. Then we have

1 ≤ i ≤ j ≤ j' < i' ≤ n.    (1.12)

Assume to the contrary that X' ⪯ Y' does not hold. Then there is an integer k satisfying

j' ≤ k < i' and x_1 + x_2 + ··· + x_k = y_1 + y_2 + ··· + y_k.    (1.13)

From X ⪯ Y we obtain

x_1 + x_2 + ··· + x_{k−1} ≤ y_1 + y_2 + ··· + y_{k−1}, and
x_1 + x_2 + ··· + x_{k+1} ≤ y_1 + y_2 + ··· + y_{k+1}.    (1.14)

From (1.13) and (1.14) we obtain

x_k ≥ y_k and x_{k+1} ≤ y_{k+1},    (1.15)

and from (1.11) and (1.12),

x_k = x_{k+1}.    (1.16)

Now (1.15), (1.16), and y_k ≥ y_{k+1} imply

y_k = x_k = y_{k+1}.    (1.17)

If k = j', this contradicts (1.11). Otherwise j' < k and we have from (1.17) and (1.13)

x_1 + x_2 + ··· + x_{k−1} = y_1 + y_2 + ··· + y_{k−1}.    (1.18)

We may now repeat the argument with k replaced with k − 1. After a finite number of steps we obtain a contradiction of (1.11). This completes the proof for p = 1.

We now proceed by induction on p. Suppose that i_1 < i_2 < ··· < i_p and j_1 < j_2 < ··· < j_p. Let X'' and Y'' be obtained from X and Y by reducing components i_2, ..., i_p of X and components j_2, ..., j_p of Y by 1. By the induction assumption we have X'' ⪯ Y''. We may rearrange X'' and Y'' into nonincreasing order without disturbing the i_1 position of X'' or the j_1 position of Y''. Applying the argument used for p = 1 to these rearrangements shows that X' ⪯ Y'.  □
1.7 Doubly Stochastic Matrices and Majorization

A nonnegative matrix A = [a_ij] of order n is called doubly stochastic provided each of its line sums equals 1:

Σ_{j=1}^{n} a_ij = 1  (i = 1, 2, ..., n), and Σ_{i=1}^{n} a_ij = 1  (j = 1, 2, ..., n).    (1.19)

The equations (1.19) are equivalent to the matrix equations

A J_n = J_n A = J_n.

From this it follows that the product of two doubly stochastic matrices of order n is also a doubly stochastic matrix of order n, and that a convex combination

c A + (1 − c) B  (0 ≤ c ≤ 1)

of two doubly stochastic matrices A and B is also doubly stochastic.

The set (convex polytope), in real n²-dimensional space ℝ^{n²}, of doubly stochastic matrices of order n is denoted by Ω_n. The matrix (1/n)J_n, each of whose elements equals 1/n, is a doubly stochastic matrix in Ω_n. The integral doubly stochastic matrices of order n are precisely the permutation matrices of order n. Recall that an extreme point of a convex polytope Φ is a point x of Φ that cannot be expressed as a nontrivial convex combination of points of Φ: x = c y + (1 − c) z where 0 < c < 1 and y, z ∈ Φ with y ≠ z. The nonnegativity property of doubly stochastic matrices implies that if a doubly stochastic matrix A is expressed as a nontrivial convex combination of doubly stochastic matrices B and C, then the patterns of B and C are contained in the pattern of A. We also note that if A is a doubly stochastic matrix, and P and Q are permutation matrices, then PAQ is also a doubly stochastic matrix.

Each permutation matrix of order n is an extreme point of Ω_n. The theorem of Birkhoff [5] asserts that Ω_n has no other extreme points.

Theorem 1.7.1  The extreme points of Ω_n are precisely the permutation matrices of order n. If A is a doubly stochastic matrix of order n, then there exist permutation matrices P_1, P_2, ..., P_t and positive real numbers c_1, c_2, ..., c_t with c_1 + c_2 + ··· + c_t = 1 such that

A = c_1 P_1 + c_2 P_2 + ··· + c_t P_t.

Proof.  Since the permutation matrices are extreme points of Ω_n, the two assertions in the theorem are equivalent. Let A = [a_ij] be a doubly stochastic matrix of order n and assume that A is an extreme point of Ω_n. Consider the bipartite graph BG(A) representing the nonzero pattern of A. Suppose this graph has a cycle. By alternating +1's and −1's on this cycle, we can construct a matrix C of order n with elements equal to 0, ±1 such that the nonzero pattern of C is contained in the nonzero pattern of A and each line sum of C equals 0. It follows that there is a small positive number ε such that A ± εC are doubly stochastic matrices. Since

A = (1/2)(A + εC) + (1/2)(A − εC),

A is not an extreme point of Ω_n. Thus BG(A) does not have any cycles and hence is a forest of trees. Since each line sum of A equals 1, it follows easily that each of these trees is a tree with exactly one edge, and therefore that A is a permutation matrix.  □
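The proof above is nonconstructive as stated, but a standard constructive variant extracts one permutation matrix at a time: find a nonzero diagonal of the current matrix (Corollary 1.7.2 below guarantees one exists), subtract the smallest entry on it times the corresponding permutation matrix, and repeat. The Python sketch below is our illustration of that well-known procedure, not text from the book.

```python
def birkhoff_decomposition(A, tol=1e-12):
    """Write a doubly stochastic matrix A as a list of (coefficient, permutation) pairs."""
    n = len(A)
    A = [row[:] for row in A]
    pieces = []
    while max(max(row) for row in A) > tol:
        # find a nonzero diagonal of the remaining matrix by augmenting paths
        match = [-1] * n                        # match[j] = row assigned to column j
        def augment(i, seen):
            for j in range(n):
                if A[i][j] > tol and not seen[j]:
                    seen[j] = True
                    if match[j] == -1 or augment(match[j], seen):
                        match[j] = i
                        return True
            return False
        for i in range(n):
            augment(i, [False] * n)
        perm = {match[j]: j for j in range(n)}          # row -> column of the diagonal
        c = min(A[i][perm[i]] for i in range(n))        # smallest entry on the diagonal
        pieces.append((c, perm))
        for i in range(n):
            A[i][perm[i]] -= c
    return pieces

A = [[0.5, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.5]]
for c, perm in birkhoff_decomposition(A):
    print(c, perm)    # two permutation matrices, each with coefficient 0.5
```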
The following corollary is an immediate consequence of Theorem 1.7.1.

Corollary 1.7.2  The term rank of a doubly stochastic matrix of order n equals n.  □

This corollary can be proved directly by first showing that the covering number of a doubly stochastic matrix of order n equals n and then invoking Theorem 1.2.1 of König. This leads to a different proof of Theorem 1.7.1 (see [4]).

We now develop a connection between majorization of vectors and doubly stochastic matrices.

Lemma 1.7.3  Let Y = (y_1, y_2, ..., y_n) be a vector of size n, and let A = [a_ij] be a doubly stochastic matrix of order n. Then the vector X = YA = (x_1, x_2, ..., x_n) is majorized by Y.

Proof.  We may choose permutation matrices P and Q such that YP and XQ are in nonincreasing order. Since X = YA implies that (XQ) = (YP)(P^T A Q), and since P^T A Q is also a doubly stochastic matrix, we may assume that x_1 ≥ x_2 ≥ ··· ≥ x_n and y_1 ≥ y_2 ≥ ··· ≥ y_n.

Let k be an integer with 1 ≤ k ≤ n. Then

Σ_{j=1}^{k} x_j = Σ_{j=1}^{k} Σ_{i=1}^{n} y_i a_ij = Σ_{i=1}^{n} y_i e_ik,    (1.20)

where

0 ≤ e_ik = Σ_{j=1}^{k} a_ij ≤ 1 and Σ_{i=1}^{n} e_ik = k.    (1.21)

Since e_in = 1 (i = 1, 2, ..., n), we see that Σ_{j=1}^{n} x_j = Σ_{i=1}^{n} y_i. Using (1.20), (1.21), and the monotonicity of Y, we get

Σ_{j=1}^{k} x_j − Σ_{i=1}^{k} y_i = Σ_{i=1}^{n} y_i e_ik − Σ_{i=1}^{k} y_i
  = Σ_{i=1}^{n} y_i e_ik − Σ_{i=1}^{k} y_i + y_k (k − Σ_{i=1}^{n} e_ik)
  = Σ_{i=1}^{k} (y_i − y_k)(e_ik − 1) + Σ_{i=k+1}^{n} e_ik (y_i − y_k)
  ≤ 0.

Hence X ⪯ Y.  □

We now turn to the converse of Lemma 1.7.3. Let n be a fixed positive integer, and let i and j be integers with 1 ≤ i < j ≤ n. Let T_n^{(i,j)} be the permutation matrix corresponding to the permutation of {1, 2, ..., n} which interchanges i and j and leaves every other integer fixed. A doubly stochastic matrix of the form

c I_n + (1 − c) T_n^{(i,j)}  (0 ≤ c ≤ 1)    (1.22)

is called an elementary doubly stochastic matrix. Note that if c = 0 then (1.22) is the permutation matrix T_n^{(i,j)} which interchanges i and j; if c = 1, then (1.22) equals I_n.

If Y = (y_1, y_2, ..., y_n), then Y(c I_n + (1 − c) T_n^{(i,j)}) is the n-vector

(y_1, ..., y_{i−1}, c y_i + (1 − c) y_j, y_{i+1}, ..., y_{j−1}, (1 − c) y_i + c y_j, y_{j+1}, ..., y_n),

which differs from Y in at most two components.

We next prove a lemma due to Muirhead [15] and Hardy, Littlewood, and Pólya [11] which gives a strong converse of Lemma 1.7.3.

Lemma 1.7.4  Let X = (x_1, x_2, ..., x_n) and Y = (y_1, y_2, ..., y_n) be nonincreasing vectors of size n such that X ⪯ Y. If X and Y differ in t components, then there are k ≤ t − 1 elementary doubly stochastic matrices A_1, A_2, ..., A_k such that X = Y A_1 A_2 ··· A_k.

Proof.  If X = Y, then A_1 A_2 ··· A_k is an empty product equal to I_n. Now assume that X ≠ Y. Then t ≥ 2, and there exists an index i such that x_i < y_i and then an index j > i such that x_j > y_j. We let i and j be the smallest and largest such indices, respectively. Also let

ε = min{y_i − x_i, x_j − y_j} and c = 1 − ε/(y_i − y_j).

Then 0 < c < 1 and the vector Y' = Y(c I_n + (1 − c) T_n^{(i,j)}) = (y'_1, y'_2, ..., y'_n) satisfies

Y' = (y_1, ..., y_{i−1}, y_i − ε, y_{i+1}, ..., y_{j−1}, y_j + ε, y_{j+1}, ..., y_n).

By Lemma 1.7.3, Y' ⪯ Y. We also observe that y'_i = x_i if ε = y_i − x_i, and y'_j = x_j if ε = x_j − y_j. Therefore Y' differs from X in at most t − 1 components. That X ⪯ Y' easily follows from the facts that X ⪯ Y, that Y' and Y differ in only the two components i and j where i < j, that y'_i + y'_j = y_i + y_j, and that y'_i ≥ x_i. We conclude that X ⪯ Y' and that X and Y' differ in at most t − 1 components. It follows inductively that X = Y'A where A is a product of k' ≤ t − 2 elementary doubly stochastic matrices. Hence X = Y(c I_n + (1 − c) T_n^{(i,j)})A, from which the lemma now follows.  □

A consequence of the proof of Lemma 1.7.4 is the following corollary for nonnegative integral vectors.

Corollary 1.7.5  Let X = (x_1, x_2, ..., x_n) and Y = (y_1, y_2, ..., y_n) be nonincreasing, nonnegative integral vectors of size n such that X ⪯ Y. Then there exist nonincreasing, nonnegative integral vectors

X_0 = X, X_1, ..., X_{p−1}, X_p = Y

of size n such that

X_0 ⪯ X_1 ⪯ ··· ⪯ X_{p−1} ⪯ X_p

and X_i and X_{i+1} differ in exactly two coordinates.  □

Combining Lemmas 1.7.3 and 1.7.4 we get the following theorem of Hardy, Littlewood, and Pólya [11].

Theorem 1.7.6  Let X and Y be two vectors of size n. Then X ⪯ Y if and only if there is a doubly stochastic matrix A such that X = YA.

The theory of majorization is thoroughly developed in [12].
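The proof of Lemma 1.7.4 is algorithmic; the Python sketch below (our rendering of that argument, not the book's text) repeatedly applies the transform from the proof until Y has been carried onto X.

```python
def t_transform_chain(X, Y):
    """Successive vectors produced by the proof of Lemma 1.7.4, starting at Y and ending at X."""
    X, Y = list(X), list(Y)
    chain = [Y[:]]
    while Y != X:
        i = min(k for k in range(len(Y)) if X[k] < Y[k])   # smallest index with x_i < y_i
        j = max(k for k in range(len(Y)) if X[k] > Y[k])   # largest index with x_j > y_j
        eps = min(Y[i] - X[i], X[j] - Y[j])
        Y[i] -= eps
        Y[j] += eps
        chain.append(Y[:])
    return chain

print(t_transform_chain([3, 3, 2], [5, 2, 1]))
# [[5, 2, 1], [4, 2, 2], [3, 3, 2]]
```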

References
[1] C. Berge, Graphs and Hypergraphs, North-Holland, Amsterdam, 1973 (translation and revision of Graphes et hypergraphes, Dunod, Paris, 1970).
[2] B. Bollobás, Modern Graph Theory, Grad. Texts Math. 184, Springer, 1998.
[3] R.A. Brualdi and R. Manber, Chromatic number of classes of matrices of zeros and ones, Discrete Math., 50 (1984).
[4] R.A. Brualdi and H.J. Ryser, Combinatorial Matrix Theory, Cambridge U. Press, Cambridge, 1991.
[5] G. Birkhoff, Tres observaciones sobre el algebra lineal, Univ. Nac. Tucumán Rev. Ser. A (1946).
[6] V. Chungphaisan, Conditions for sequences to be r-graphic, Discrete Math., 7 (1974).
[7] P. Erdős and T. Gallai, Graphs with prescribed degrees of vertices (in Hungarian), Mat. Lapok, 11 (1960).
[8] L.R. Ford, Jr. and D.R. Fulkerson, Flows in Networks, Princeton U. Press, Princeton, 1962.
[9] D.R. Fulkerson, A.J. Hoffman, and M.H. McAndrew, Some properties of graphs with multiple edges, Canad. J. Math., 17 (1965), 166–177.
[10] D.R. Fulkerson and H.J. Ryser, Multiplicities and minimal widths for (0,1)-matrices, Canad. J. Math., 14 (1962).
[11] G.H. Hardy, J.E. Littlewood, and G. Pólya, Inequalities, Cambridge U. Press, Cambridge, 1934 (1st ed.), 1952 (2nd ed.).
[12] A.W. Marshall and I. Olkin, Inequalities: Theory of Majorization and Its Applications, Academic Press, Orlando, 1979.
[13] L. Mirsky, Combinatorial theorems and integral matrices, J. Combin. Theory, 5 (1968).
[14] L. Mirsky, Transversal Theory, Academic Press, New York, 1971.
[15] R.F. Muirhead, Some methods applicable to identities and inequalities of symmetric algebraic functions of n letters, Proc. Edinburgh Math. Soc., 21 (1903).
[16] G. Sierksma and H. Hoogeveen, Seven criteria for integer sequences being graphic, J. Graph Theory, 15 (1991).
[17] D.B. West, Introduction to Graph Theory, 2nd ed., Prentice-Hall, Upper Saddle River, 2001.
[18] R.J. Wilson, Introduction to Graph Theory, Longman, London, 1985.

2
Basic Existence Theorems
for Matrices with
Prescribed Properties
In this chapter we prove some existence theorems for matrices with special
properties. We do not attempt to give a detailed account of such theorems.
Rather, we concentrate mainly on existence theorems that guarantee the
nonemptiness of the classes of matrices that we study in depth in this
book. Some existence theorems for special matrices are derived in [5] as
consequences of fundamental network flow theorems. We do not invoke
network flow theory, preferring in this chapter to derive our theorems using
more elementary mathematical arguments.
2.1 The Gale–Ryser and Ford–Fulkerson Theorems

Let $A = [a_{ij}]$ be an $m$ by $n$ nonnegative matrix. Let

$$r_i = a_{i1} + a_{i2} + \cdots + a_{in} \quad (i = 1, 2, \ldots, m)$$

be the sum of the elements in row $i$ of $A$, and let

$$s_j = a_{1j} + a_{2j} + \cdots + a_{mj} \quad (j = 1, 2, \ldots, n)$$

be the sum of the elements in column $j$ of $A$. Then, as used in Chapter 1,

$$R = (r_1, r_2, \ldots, r_m)$$

is the row sum vector of $A$, and

$$S = (s_1, s_2, \ldots, s_n)$$

is the column sum vector of $A$. The vectors $R$ and $S$ satisfy the fundamental equation

$$r_1 + r_2 + \cdots + r_m = s_1 + s_2 + \cdots + s_n, \tag{2.1}$$

since both sides equal the sum $\tau = \tau(A)$ of all the elements of $A$.
Now let $R = (r_1, r_2, \ldots, r_m)$ and $S = (s_1, s_2, \ldots, s_n)$ be nonnegative vectors satisfying (2.1), with $\tau$, the common value in (2.1), not zero. The $m$ by $n$ matrix $A = [a_{ij}]$ where

$$a_{ij} = \frac{r_i s_j}{\tau} \quad (i = 1, 2, \ldots, m;\ j = 1, 2, \ldots, n)$$

has row sum vector $R$ and column sum vector $S$, and all of its elements are positive if $R$ and $S$ are positive vectors.
We can also inductively construct an $m$ by $n$ nonnegative matrix $A = [a_{ij}]$ with row sum vector $R$ and column sum vector $S$. If $m = 1$, then

$$A = \begin{bmatrix} s_1 & s_2 & \ldots & s_n \end{bmatrix}$$

is the unique such matrix. If $n = 1$, then

$$A = \begin{bmatrix} r_1 & r_2 & \ldots & r_m \end{bmatrix}^T$$

is the unique such matrix. Now assume that $m > 1$ and $n > 1$. We let $a_{11} = \min\{r_1, s_1\}$. First suppose that $a_{11} = r_1$. Let $a_{12} = \cdots = a_{1n} = 0$, and define $R' = (r_2, \ldots, r_m)$ and $S' = (s_1 - r_1, s_2, \ldots, s_n)$. Then

$$r_2 + \cdots + r_m = (s_1 - r_1) + s_2 + \cdots + s_n.$$

Proceeding inductively, there exists an $m - 1$ by $n$ matrix $A'$ with row sum vector $R'$ and column sum vector $S'$. The matrix

$$\begin{bmatrix} r_1 & 0 & \cdots & 0 \\ & A' & \end{bmatrix}$$

is a nonnegative matrix with row sum vector $R$ and column sum vector $S$. If $a_{11} = s_1$, a similar construction works. The matrix inductively constructed in this way has at most $m + n - 1$ positive elements.
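As a concrete illustration, here is a small sketch of this inductive construction with the recursion unwound into a loop (Python with NumPy; the function name and sample data are ours). Each step fills the current "corner" entry with $\min\{r_i, s_j\}$, exactly as in the text.

```python
import numpy as np

def construct_nonnegative(R, S):
    """Iterative form of the inductive construction in the text: returns a
    nonnegative matrix with row sum vector R and column sum vector S
    (assumes sum(R) == sum(S)); at most m + n - 1 entries are positive."""
    R, S = [float(r) for r in R], [float(s) for s in S]
    m, n = len(R), len(S)
    A = np.zeros((m, n))
    i = j = 0
    while i < m and j < n:
        a = min(R[i], S[j])          # the entry a_{ij} = min{r_i, s_j} of the text
        A[i, j] = a
        R[i] -= a
        S[j] -= a
        if R[i] == 0:                # row i is used up: move to the next row
            i += 1
        else:                        # otherwise column j is used up
            j += 1
    return A

R, S = [3, 2, 5], [4, 1, 5]
A = construct_nonnegative(R, S)
print(A)
print(A.sum(axis=1), A.sum(axis=0))   # (3, 2, 5) and (4, 1, 5)
```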
We denote by

$$N(R, S)$$

the set of all nonnegative matrices with row sum vector $R$ and column sum vector $S$. Partially summarizing, we have the following theorem.

Theorem 2.1.1 Let $R = (r_1, r_2, \ldots, r_m)$ and $S = (s_1, s_2, \ldots, s_n)$ be nonnegative vectors. Then $N(R, S)$ is nonempty if and only if the equation (2.1) holds. □
We observe that if $R$ and $S$ are nonnegative integral vectors satisfying the fundamental equation (2.1), then the inductive construction preceding Theorem 2.1.1 produces a nonnegative integral matrix $A$ with row sum vector $R$ and column sum vector $S$. Let

$$Z^+(R, S)$$

denote the set of all nonnegative integral matrices with row sum vector $R$ and column sum vector $S$.

Theorem 2.1.2 Let $R = (r_1, r_2, \ldots, r_m)$ and $S = (s_1, s_2, \ldots, s_n)$ be nonnegative integral vectors. Then $Z^+(R, S)$ is nonempty if and only if (2.1) holds. □
If we impose other requirements on the elements of $A$, then the fundamental equation (2.1) no longer guarantees the existence of a matrix $A$ with the required row and column sum vectors. For instance, there is no 4 by 3 matrix with row sum vector $R = (3, 3, 2, 1)$ and column sum vector $S = (4, 4, 1)$ each of whose elements equals 0 or 1. We denote by

$$A(R, S)$$

the class¹ of all (0,1)-matrices with row sum vector $R$ and column sum vector $S$.

¹By tradition, beginning with Ryser, A(R, S) is referred to as a class, rather than a set, of matrices.

Gale [11] and Ryser [20] independently proved the basic theorem for the existence of a (0,1)-matrix with row sum vector $R$ and column sum vector $S$. Ryser used induction and direct combinatorial reasoning; Gale used the theory of network flows. This theorem is also derived in [5] using network flows and is also derived in [3]. See also [22]. We give the simple inductive proof in [4]. There is no loss in generality in assuming that $R$ and $S$ are nonincreasing vectors.
Theorem 2.1.3 Let $R = (r_1, r_2, \ldots, r_m)$ and $S = (s_1, s_2, \ldots, s_n)$ be nonincreasing vectors of nonnegative integers. Then A(R, S) is nonempty if and only if $S$ is majorized by the conjugate $R^*$ of $R$, that is,

$$S \preceq R^*. \tag{2.2}$$

Proof. Assume there is an $m$ by $n$ (0,1)-matrix $A$ with row sum vector $R$ and column sum vector $S$. Then $s_1 + s_2 + \cdots + s_n = r_1 + r_2 + \cdots + r_m$. If $A$ is partitioned as

$$\begin{bmatrix} B_1 & B_2 \end{bmatrix} \quad (B_1 \text{ is } m \text{ by } l),$$

then the number of 1's in $B_1$ does not exceed the number of 1's in the first $l$ columns of the matrix obtained from $A$ by sliding the 1's in each row as far to the left as possible; that is,

$$\sum_{i=1}^{l} s_i \le \sum_{i=1}^{l} r_i^* \quad (l = 1, 2, \ldots, n - 1).$$

Hence (2.2) holds.
We prove the converse by contradiction. Choose a counterexample with $m + n$ minimal, and for that $m + n$, with $\tau = r_1 + r_2 + \cdots + r_m$ minimal.

Case 1: $\sum_{i=1}^{l} s_i = \sum_{i=1}^{l} r_i^*$ for some $l$ with $0 < l < n$. It is an immediate consequence that for the vectors

$$R_1 = (r_1', \ldots, r_m') = (\min\{l, r_1\}, \ldots, \min\{l, r_m\}) \quad \text{and} \quad S_1 = (s_1, \ldots, s_l),$$

we have $S_1 \preceq R_1^*$, and for the vectors

$$R_2 = (r_1 - r_1', \ldots, r_m - r_m') \quad \text{and} \quad S_2 = (s_{l+1}, \ldots, s_n),$$

we have $S_2 \preceq R_2^*$. By minimality there exist a (0,1)-matrix $A_1$ with row sum vector $R_1$ and column sum vector $S_1$, and a (0,1)-matrix $A_2$ with row sum vector $R_2$ and column sum vector $S_2$. Hence the matrix

$$\begin{bmatrix} A_1 & A_2 \end{bmatrix}$$

has row sum vector $R$ and column sum vector $S$, a contradiction.

Case 2: $\sum_{i=1}^{l} s_i < \sum_{i=1}^{l} r_i^*$ for all $l$ with $0 < l < n$. By minimality, $r_m \ge 1$ and $s_n \ge 1$. Let $R' = (r_1, \ldots, r_{m-1}, r_m - 1)$ and $S' = (s_1, \ldots, s_{n-1}, s_n - 1)$. Then $S' \preceq R'^*$. By minimality again there is an $m$ by $n$ matrix $A' = [a_{ij}']$ with row sum vector $R'$ and column sum vector $S'$. If $a_{mn}' = 0$, then changing this 0 to a 1 we obtain a matrix with row sum vector $R$ and column sum vector $S$, a contradiction. Suppose that $a_{mn}' = 1$. Since $r_m - 1 \le n - 1$, there is a $q$ such that $a_{mq}' = 0$. Since $s_q \ge s_n$, there is a $p$ such that $a_{pq}' = 1$ and $a_{pn}' = 0$. Interchanging 0's with 1's in this 2 by 2 submatrix gives a matrix $A''$ with row sum vector $R'$ and column sum vector $S'$, and we get a contradiction as before. □
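Condition (2.2) is easy to test directly. The sketch below (plain Python; the helper names are ours) computes the conjugate $R^*$ and checks the majorization $S \preceq R^*$; the first example is the 4 by 3 pair from the beginning of this section, for which A(R, S) is empty.

```python
def conjugate(R, n):
    """Conjugate R* of R: r*_j = #{ i : r_i >= j } for j = 1, ..., n."""
    return [sum(1 for r in R if r >= j) for j in range(1, n + 1)]

def majorized(S, T):
    """True if S is majorized by T (vectors of the same length)."""
    S, T = sorted(S, reverse=True), sorted(T, reverse=True)
    if sum(S) != sum(T):
        return False
    ps = pt = 0
    for s, t in zip(S, T):
        ps, pt = ps + s, pt + t
        if ps > pt:
            return False
    return True

def gale_ryser(R, S):
    """Theorem 2.1.3: A(R, S) is nonempty iff S is majorized by R*."""
    return majorized(S, conjugate(R, len(S)))

print(gale_ryser([3, 3, 2, 1], [4, 4, 1]))   # False: the example from the text
print(gale_ryser([2, 2, 1, 1], [3, 2, 1]))   # True
```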
If we assume that $r_i \le n$ $(i = 1, 2, \ldots, m)$, then $S \preceq R^*$ if and only if

$$\sum_{i=1}^{m} (r_i - k)^+ \le \sum_{j=k+1}^{n} s_j \quad (0 \le k \le n - 1), \tag{2.3}$$

with equality if $k = 0$.
Consider the class A(R, S) of all (0,1)-matrices with row sum vector $R = (r_1, r_2, \ldots, r_m)$ and column sum vector $S = (s_1, s_2, \ldots, s_n)$. Theorem 2.1.3 gives a simple necessary and sufficient condition (2.2) for the nonemptiness of A(R, S). This condition requires the checking of only $n - 1$ inequalities and one equality. A different condition was obtained by Ford and Fulkerson [8] by using a theorem in network flows. While this condition requires more inequalities than those in (2.2), it is worthwhile in that it involves a quantity that arises in the investigation of certain parameters associated with A(R, S). We now discuss this condition of Ford and Fulkerson.
Let $R = (r_1, r_2, \ldots, r_m)$ and $S = (s_1, s_2, \ldots, s_n)$ be nonnegative integral vectors with elements summing to the same integer

$$\tau = r_1 + r_2 + \cdots + r_m = s_1 + s_2 + \cdots + s_n.$$

If $K \subseteq \{1, 2, \ldots, m\}$ and $L \subseteq \{1, 2, \ldots, n\}$, we define

$$t_{K,L} = |K||L| + \sum_{i \in \bar{K}} r_i - \sum_{j \in L} s_j.$$

The numbers $t_{K,L}$ are integers and we show below that they are nonnegative if A(R, S) is nonempty.

The structure matrix associated with $R$ and $S$ is the matrix

$$T = T(R, S) = [t_{kl}] \quad (k = 0, 1, \ldots, m;\ l = 0, 1, \ldots, n)$$

of size $m + 1$ by $n + 1$ where

$$t_{kl} = kl + \sum_{i=k+1}^{m} r_i - \sum_{j=1}^{l} s_j \quad (k = 0, 1, \ldots, m;\ l = 0, 1, \ldots, n). \tag{2.4}$$

Under the assumption that $R$ and $S$ are nonincreasing,

$$t_{kl} = \min\{t_{K,L} : K \subseteq \{1, 2, \ldots, m\},\ L \subseteq \{1, 2, \ldots, n\},\ |K| = k,\ |L| = l\}.$$

Some of the elements of $T$ are

$$t_{00} = \tau,\quad t_{0n} = t_{m0} = 0,\quad t_{1n} = n - r_1,\quad t_{m1} = m - s_1,\quad t_{mn} = mn - \tau.$$
We next observe that, if A(R, S) is nonempty, the elements of the structure matrix are nonnegative integers, by explaining their combinatorial significance. Let $A$ be a matrix in A(R, S) and partition $A$ as

$$\begin{bmatrix} A_1 & X \\ Y & A_2 \end{bmatrix},$$

where $A_1$ is of size $k$ by $l$ and where either or both of $A_1$ and $A_2$ may be empty. Then a straightforward calculation shows that

$$t_{kl} = N_0(A_1) + N_1(A_2),$$

the number $N_0(A_1)$ of 0's in $A_1$ plus the number $N_1(A_2)$ of 1's in $A_2$. The theorem of Ford and Fulkerson asserts that under the assumption that $R$ and $S$ are nonincreasing, the nonnegativity of $T$ is also sufficient for the nonemptiness of A(R, S). It can be shown directly that the nonnegativity of $T$ is equivalent to the condition that $S$ is majorized by the conjugate $R^*$ of $R$ (see e.g. [17]). Rather than do this, we use the proof technique of Theorem 2.1.3 as given in [4]. This method of proof was earlier used in [14]. A sequence $U = (u_1, u_2, \ldots, u_n)$ of integers is nearly nonincreasing provided

$$u_i \ge u_j - 1 \quad (1 \le i < j \le n).$$

Thus the sequence $U$ is nearly nonincreasing if and only if there is a sequence $H = (h_1, h_2, \ldots, h_n)$ of 0's and 1's such that $U + H$ is nonincreasing. For example, (5, 5, 4, 5, 5, 3, 4) is a nearly nonincreasing sequence. Michael [16] proved the theorem under the assumption that $R$ and $S$ are only nearly nonincreasing. We prove the theorem with no monotonicity assumption on $R$ and with the assumption that $S$ is nearly nonincreasing.
Theorem 2.1.4 Let $R = (r_1, r_2, \ldots, r_m)$ and $S = (s_1, s_2, \ldots, s_n)$ be nonnegative integral vectors such that $S$ is nearly nonincreasing and $r_1 + r_2 + \cdots + r_m = s_1 + s_2 + \cdots + s_n$. Then A(R, S) is nonempty if and only if

$$t_{kl} \ge 0 \quad (k = 0, 1, \ldots, m;\ l = 0, 1, \ldots, n), \tag{2.5}$$

that is, the structure matrix $T$ is a nonnegative matrix.
Proof. The necessity of (2.5) has already been established. We prove the sufficiency by contradiction. As in the proof of Theorem 2.1.3, we choose a counterexample with $m + n$ minimal, and for that $m + n$, with $\tau = r_1 + r_2 + \cdots + r_m$ minimal. Clearly $t_{00} = \tau > 0$ (otherwise take $A$ to be the zero matrix $O$) and $t_{mn} > 0$ (otherwise take $A$ to be the matrix $J$ of all 1's).

Case 1: $t_{kl} = 0$ for some $k$ and $l$ with $(k, l) \ne (0, n)$ or $(m, 0)$. It is an immediate consequence that the structure matrices for $R_1 = (r_1 - l, \ldots, r_k - l)$, $S_1 = (s_{l+1}, \ldots, s_n)$ and for $R_2 = (r_{k+1}, \ldots, r_m)$, $S_2 = (s_1 - k, \ldots, s_l - k)$ satisfy the nonnegativity criteria (2.5). Since $S_1$ and $S_2$ are nearly monotone, by minimality there exist a (0,1)-matrix $A_1$ with row sum vector $R_1$ and column sum vector $S_1$, and a (0,1)-matrix $A_2$ with row sum vector $R_2$ and column sum vector $S_2$. Hence the matrix

$$\begin{bmatrix} J_{k,l} & A_1 \\ A_2 & O \end{bmatrix}$$

has row sums $R$ and column sums $S$, a contradiction.

Case 2: $t_{kl} > 0$ for all $k$ and $l$ with $(k, l) \ne (0, n)$ or $(m, 0)$. By minimality, $r_m \ge 1$ and $s_n \ge 1$. Let $R' = (r_1, \ldots, r_{m-1}, r_m - 1)$, and let $S' = (s_1, \ldots, s_{n-1}, s_n - 1)$. Then $S'$ is a nearly nonincreasing vector, and the structure matrix for $R'$ and $S'$ is nonnegative. By minimality again there is a matrix $A' = [a_{ij}']$ with row sums $R'$ and column sums $S'$. If $a_{mn}' = 0$, then changing this 0 to a 1 we obtain a matrix with row sums $R$ and column sums $S$, a contradiction. Suppose that $a_{mn}' = 1$. Since $r_m - 1 \le n - 1$, there is a $q$ such that $a_{mq}' = 0$. Since $S$ is nearly nonincreasing, $s_q \ge s_n - 1$ and hence there is a $p$ such that $a_{pq}' = 1$ and $a_{pn}' = 0$. Interchanging 0 with 1 in this 2 by 2 submatrix gives a matrix $A''$ with row sums $R'$ and column sums $S'$, and we get a contradiction as before. □
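The structure matrix itself is simple to compute from (2.4). The following sketch (Python with NumPy; function names ours) builds $T(R, S)$ and applies the nonnegativity test of Theorem 2.1.4 to nonincreasing vectors with equal sums.

```python
import numpy as np

def structure_matrix(R, S):
    """Structure matrix T(R, S) of size (m+1) x (n+1), with t_kl as in (2.4)."""
    m, n = len(R), len(S)
    T = np.zeros((m + 1, n + 1), dtype=int)
    for k in range(m + 1):
        for l in range(n + 1):
            T[k, l] = k * l + sum(R[k:]) - sum(S[:l])
    return T

def ford_fulkerson_nonempty(R, S):
    """Theorem 2.1.4 (for nonincreasing R, S with equal sums): A(R, S) is
    nonempty iff every entry of the structure matrix is nonnegative."""
    return sum(R) == sum(S) and (structure_matrix(R, S) >= 0).all()

R, S = [3, 2, 2, 1], [3, 2, 2, 1]
T = structure_matrix(R, S)
print(T[0, 0], T[-1, -1])            # tau = 8 and mn - tau = 8
print(ford_fulkerson_nonempty(R, S))  # True
```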
Assuming that $R$ and $S$ are nonincreasing, Fulkerson [9] gave conditions for the existence of a matrix $A = [a_{ij}]$ in A(R, S) such that $a_{ii} = 0$ $(i = 1, 2, \ldots, \min\{m, n\})$. These conditions also involve the elements of the structure matrix $T$. We obtain this theorem as a special case of Theorem 1.4.1.

Theorem 2.1.5 Let $R = (r_1, r_2, \ldots, r_m)$ and $S = (s_1, s_2, \ldots, s_n)$ be nonincreasing, nonnegative integral vectors satisfying $r_1 + r_2 + \cdots + r_m = s_1 + s_2 + \cdots + s_n$. There exists an $m$ by $n$ (0,1)-matrix $A = [a_{ij}]$ with row sum vector $R$ and column sum vector $S$ such that $a_{ii} = 0$ $(i = 1, 2, \ldots, \min\{m, n\})$ if and only if

$$t_{kl} - \min\{k, l\} \ge 0 \quad (k = 0, 1, \ldots, m;\ l = 0, 1, \ldots, n). \tag{2.6}$$
Proof. Define the $m$ by $n$ (0,1)-matrix $C = [c_{ij}]$ by letting $c_{ij} = 0$ only for $i = j$ with $i = 1, 2, \ldots, \min\{m, n\}$. By Corollary 1.4.2 the matrix $A$ specified in the theorem exists if and only if

$$\sum_{i \in I} \sum_{j \in J} c_{ij} + \sum_{i \notin I} r_i - \sum_{j \in J} s_j \ge 0 \tag{2.7}$$

for all $I \subseteq \{1, 2, \ldots, m\}$ and $J \subseteq \{1, 2, \ldots, n\}$. Let $|I| = k$ and $|J| = l$. The sum $\sum_{i \in I} \sum_{j \in J} c_{ij} = kl - |I \cap J|$ is minimal when $|I \cap J|$ is maximal. Since $R$ and $S$ are nonincreasing, for fixed $k$ and $l$ the expression on the left side in (2.7) is minimized when $I = \{1, 2, \ldots, k\}$ and $J = \{1, 2, \ldots, l\}$, and in this case it has value

$$kl - \min\{k, l\} + \sum_{i=k+1}^{m} r_i - \sum_{j=1}^{l} s_j = t_{kl} - \min\{k, l\}. \qquad \Box$$
We close this section by recording the following inequalities that must be satisfied if there exists a matrix in A(R, S), where $R = (r_1, r_2, \ldots, r_m)$ and $S = (s_1, s_2, \ldots, s_n)$:

$$\sum_{i=1}^{m} r_i^2 + \sum_{j=1}^{n} s_j^2 \le \tau \left( \max\{m, n\} + \frac{\tau}{\max\{m, n\}} \right),$$

$$\sum_{i=1}^{m} r_i^2 + \sum_{j=1}^{n} s_j^2 \le \tau \max\{m, n\} + \sum_{i=1}^{\min\{m,n\}} r_i s_i.$$

The first of these inequalities is due to Khintchine [12] and the second is due to Matúš (see [15]).
2.2 Tournament Matrices and Landau's Theorem

A nonnegative matrix $A = [a_{ij}]$ of order $n$ is a generalized tournament matrix provided

$$a_{ii} = 0 \quad (i = 1, 2, \ldots, n)$$

and

$$a_{ij} + a_{ji} = 1 \quad (1 \le i < j \le n),$$

that is, provided

$$A + A^T = J_n - I_n.$$
Let $R = (r_1, r_2, \ldots, r_n)$ and $S = (s_1, s_2, \ldots, s_n)$ be the row sum vector and column sum vector of $A$, respectively. Then

$$r_i = \sum_{j \ne i} a_{ij} = \sum_{j \ne i} (1 - a_{ji}) = (n - 1) - \sum_{j \ne i} a_{ji} = (n - 1) - s_i.$$

Hence

$$r_i + s_i = n - 1 \quad (i = 1, 2, \ldots, n).$$

In particular, the column sum vector $S$ of a generalized tournament matrix is determined by its row sum vector $R$. In addition, the sum of all the elements of a generalized tournament matrix of order $n$ is

$$\sum_{i=1}^{n} r_i = \sum_{i=1}^{n} s_i = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} = \binom{n}{2},$$

an integer determined solely by $n$. Let $I$ be a nonempty subset of $\{1, 2, \ldots, n\}$. Then the principal submatrix $A[I]$ of $A$ is also a generalized tournament matrix, and hence

$$\sum_{i \in I} r_i \ge \binom{|I|}{2} \quad \text{and} \quad \sum_{i \in I} s_i \ge \binom{|I|}{2}. \tag{2.8}$$
If $A$ is a generalized tournament matrix of order $n$, then so is $PAP^T$ for each permutation matrix $P$ of order $n$. Thus there is no loss in generality in assuming that the row sum vector $R$ is nondecreasing, and hence that the column sum vector $S$ is nonincreasing. If $R$ is nondecreasing, then (2.8) is equivalent to

$$\sum_{i=1}^{k} r_i \ge \binom{k}{2} \quad \text{and} \quad \sum_{i=n-k+1}^{n} r_i \le \binom{n}{2} - \binom{n-k}{2} \quad (k = 1, 2, \ldots, n).$$
A tournament matrix is a generalized tournament matrix each of whose elements equals 0 or 1. If $A = [a_{ij}]$ is a tournament matrix, then $a_{ij}a_{ji} = 0$ for all $i$ and $j$. The digraph $D(A)$ of a tournament matrix $A$ is called a tournament. We use the notation $T(A)$ for the tournament corresponding to the tournament matrix $A$. A tournament is a digraph obtained from the complete graph $K_n$ with vertex set $\{x_1, x_2, \ldots, x_n\}$ by assigning an orientation to each of its edges.

A principal submatrix $A[I]$ of a tournament matrix $A$ is also a tournament matrix and its digraph is a subtournament of the tournament $T(A)$. We can think of a tournament (or tournament matrix) as representing the outcomes of a round-robin tournament involving $n$ teams with each team playing once against the other $n - 1$ teams and with each game resulting in a winner and a loser (no ties allowed). An arc from the team represented by vertex $x_i$ to the team represented by vertex $x_j$ (a 1 in position $(i, j)$ of $A$) signifies that in the tournament, $x_i$ has beaten $x_j$. Thus $r_i$ equals the number of wins of the team $x_i$; it is for this reason that $r_i$ is called the score of team $x_i$ and $R = (r_1, r_2, \ldots, r_n)$ is also called the score vector of a tournament. In a generalized tournament matrix $A = [a_{ij}]$, $a_{ij}$ can be interpreted as the a priori probability that team $x_i$ beats team $x_j$ (whence the assumption that $a_{ij} + a_{ji} = 1$ for $i \ne j$); the row sum vector then gives the expected number of wins of each player.
Generalized tournament matrices and tournament matrices are related by the following theorem [19]. Let $T_n$ and $T_n^g$ denote the set of all tournament matrices and the set of all generalized tournament matrices, respectively, of order $n$. $T_n^g$ is a convex set in real $n(n - 1)$-dimensional space.²

²The matrices belong to $\mathbb{R}^{n^2}$, but since the main diagonal of a generalized tournament matrix contains only 0's, $T_n^g$ lies in a coordinate subspace of dimension $n(n - 1)$.

Theorem 2.2.1 Let $n$ be a positive integer. Then $T_n^g$ is a convex polytope whose extreme points are the tournament matrices $T_n$.
Proof. Let $A = [a_{ij}]$ be in $T_n^g$. Let $\alpha$ be the number of 0's in $A$. Then $\alpha \le n(n + 1)/2$ with equality if and only if $A$ is a tournament matrix. We argue by backwards induction on $\alpha$. Suppose that $\alpha < n(n + 1)/2$. Since $a_{ij} + a_{ji} = 1$ whenever $i \ne j$, there is a tournament matrix $T = [t_{ij}]$ such that for all $i \ne j$, $a_{ij} = 1$ implies that $t_{ij} = 1$. Let

$$c = \min\{a_{ij} : t_{ij} = 1,\ 1 \le i, j \le n\}.$$

Then $c > 0$ and $B = (A - cT)/(1 - c)$ is in $T_n^g$ and has at least one more 0 than $A$. By induction $B$, and therefore $A$, is a convex combination of tournament matrices. A tournament matrix having only 0 and 1 as elements is clearly an extreme point of $T_n^g$. □
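The induction in this proof can be turned into an explicit procedure. In the sketch below (Python with NumPy; the particular rule for choosing the dominating tournament matrix $T$—put $t_{ij} = 1$ wherever $a_{ij}$ is the larger entry of the pair—is one convenient choice consistent with the proof, not the only one) a generalized tournament matrix is peeled down to a convex combination of tournament matrices.

```python
import numpy as np

def decompose_generalized_tournament(A, tol=1e-12):
    """Write a generalized tournament matrix A as a convex combination of
    tournament matrices, following the induction in the proof of Theorem 2.2.1.
    Returns a list of (weight, tournament_matrix) pairs summing to A."""
    M = np.asarray(A, float).copy()
    n = M.shape[0]
    if n == 1:
        return [(1.0, M)]
    pieces, w = [], 1.0                    # w = weight still to be distributed
    for _ in range(n * (n + 1) // 2):      # zero count increases at every step
        T = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                if M[i, j] >= M[j, i]:
                    T[i, j] = 1.0          # t_ij = 1 where a_ij = 1 is forced
                else:
                    T[j, i] = 1.0
        c = M[T == 1].min()                # c = min{ m_ij : t_ij = 1 } > 0
        if c >= 1 - tol:                   # M is (numerically) the tournament T
            pieces.append((w, T))
            break
        pieces.append((w * c, T))
        M = (M - c * T) / (1 - c)          # M gains at least one more zero
        w *= 1 - c
    return pieces

A = np.array([[0.0, 0.7, 0.4],
              [0.3, 0.0, 0.5],
              [0.6, 0.5, 0.0]])
pieces = decompose_generalized_tournament(A)
recon = sum(wt * T for wt, T in pieces)
print(np.allclose(recon, A), sum(wt for wt, _ in pieces))   # True 1.0
```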
Let $R = (r_1, r_2, \ldots, r_n)$ be a nonnegative vector with

$$r_1 + r_2 + \cdots + r_n = \binom{n}{2}.$$
We denote by

$$T^g(R)$$

the class of all generalized tournament matrices with row sum vector $R$, and by

$$T(R)$$

the class of all tournament matrices with row sum vector $R$. Notice that $T(R) \subseteq T^g(R)$.
Let $A$ be a generalized tournament matrix in $T^g(R)$. The canonical form of $A$ under simultaneous row and column permutations, as described in Theorem 1.3.2 but using the lower block triangular form, has the form

$$PAP^T = \begin{bmatrix} A_1 & O & O & \cdots & O \\ J & A_2 & O & \cdots & O \\ J & J & A_3 & \cdots & O \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ J & J & J & \cdots & A_t \end{bmatrix} \tag{2.9}$$

where all the blocks $J$ below the diagonal are matrices of 1's, and the diagonal blocks $A_1, A_2, \ldots, A_t$ are the irreducible components of $A$. These irreducible components are themselves generalized tournament matrices. It is clear from (2.9) that if $1 \le i < j \le t$, then the sum of the elements in a row of the matrix (2.9) that meets $A_j$ is strictly greater than the sum of the elements in a row that meets $A_i$. It follows that the canonical form (2.9) can be determined simply by simultaneously permuting rows and columns so that the row sums are in nondecreasing order.
Landau [13] characterized the score vectors of tournaments. Moon [18]
characterized the score vectors of generalized tournaments. We follow the
short proof of Landau’s theorem given by Mahmoodian [14] and Thomassen
[21]. Another short proof is given by Bang and Sharp [2].
Theorem 2.2.2 Let $n$ be a positive integer and let $R = (r_1, r_2, \ldots, r_n)$ be a nondecreasing, nonnegative integral vector. Then $T(R)$ is nonempty if and only if

$$\sum_{i=1}^{k} r_i \ge \binom{k}{2} \quad (k = 1, 2, \ldots, n), \ \text{with equality when } k = n. \tag{2.10}$$

Proof. We refer to (2.10) as Landau's condition and to the inequalities in (2.10) as Landau's inequalities. As already shown, Landau's condition is satisfied by a tournament matrix with row sum vector $R$. We prove the converse by contradiction. Choose a counterexample with $n$ smallest and, for that $n$, with $r_1$ smallest. First suppose that there exists a $k$ with $1 \le k < n$ such that

$$\sum_{i=1}^{k} r_i = \binom{k}{2}.$$
The minimality assumption on $n$ implies that $R' = (r_1, r_2, \ldots, r_k)$ is the row sum vector of some tournament matrix $A'$. In addition,

$$\sum_{i=1}^{l} (r_{k+i} - k) = \sum_{i=1}^{k+l} r_i - \binom{k}{2} - kl \ge \binom{k+l}{2} - \binom{k}{2} - kl = \binom{l}{2} \quad (l = 1, 2, \ldots, n - k),$$

with equality when $l = n - k$. Hence the minimality assumption on $n$ also implies that there exists a tournament matrix $A''$ with row sum vector $R'' = (r_{k+1} - k,\ r_{k+2} - k,\ \ldots,\ r_n - k)$. The matrix

$$\begin{bmatrix} A' & O_{k, n-k} \\ J_{n-k, k} & A'' \end{bmatrix}$$

is a tournament matrix with row sum vector $R$.

Now suppose that the inequalities in (2.10) are strict for $k = 1, 2, \ldots, n - 1$. Then $r_1 > 0$, and the vector $R''' = (r_1 - 1, r_2, \ldots, r_{n-1}, r_n + 1)$ satisfies Landau's condition and so by the minimality assumption on $r_1$, there is a tournament matrix $A$ with row sum vector $R'''$. Since $(r_n + 1) - (r_1 - 1) = (r_n - r_1) + 2 \ge 2$, there exists an integer $p$ with $1 < p < n$ such that column $p$ of $A$ has a 0 in row 1 and a 1 in row $n$ (and so row $p$ of $A$ has a 1 in column 1 and a 0 in column $n$). Interchanging the 0 and 1 in column $p$ and in row $p$ gives a tournament matrix in $T(R)$. □
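Landau's condition is a one-pass test on the sorted score vector. A minimal checker (plain Python; the function name and sample vectors are ours):

```python
from math import comb

def landau(R):
    """Theorem 2.2.2: a nonnegative integral vector R is the score vector of a
    tournament iff the sum of its k smallest entries is at least C(k, 2) for
    every k, with equality for k = n."""
    R = sorted(R)
    partial = 0
    for k, r in enumerate(R, start=1):
        partial += r
        if partial < comb(k, 2):
            return False
    return partial == comb(len(R), 2)

print(landau([1, 1, 1]))      # True: the 3-cycle tournament
print(landau([0, 1, 2, 3]))   # True: the transitive tournament on 4 vertices
print(landau([0, 0, 2, 4]))   # False
```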
We remark that Landau's condition (2.10) is equivalent to the majorization

$$(r_n, \ldots, r_2, r_1) \preceq (n - 1, \ldots, 2, 1, 0).$$
Fulkerson [10] reformulated Landau's condition in terms of what we shall call the normalized row sum vector of a tournament matrix or the normalized score vector of a tournament. Let $A = [a_{ij}]$ be a tournament matrix of order $n$ with row sum vector $R = (r_1, r_2, \ldots, r_n)$. Because $a_{ij} + a_{ji} = 1$ for all $i \ne j$ and $a_{ii} = 0$ for all $i$, we obtain

$$\sum_{j=i+1}^{n} a_{ij} - \sum_{j=1}^{i-1} a_{ji} = r_i - (i - 1) \quad (i = 1, 2, \ldots, n). \tag{2.11}$$

Conversely, given $R$, if the elements above the main diagonal of the tournament matrix $A = [a_{ij}]$ of order $n$ satisfy (2.11), then $A$ has row sum vector $R = (r_1, r_2, \ldots, r_n)$. The vector

$$H = (h_1, h_2, \ldots, h_n) = (r_1 - 0,\ r_2 - 1,\ \ldots,\ r_n - (n - 1)),$$

whose components are given by $h_i = r_i - (i - 1)$ $(i = 1, 2, \ldots, n)$, is the normalized row sum vector of $A$. The vector $R$ is nondecreasing if and only if $h_{i-1} - h_i \le 1$ $(i = 2, 3, \ldots, n)$. Since $h_1 = r_1$, we have $h_1 \ge 0$.
The following theorem [10] is a restatement of Theorem 2.2.2 in terms of the normalized row sum vector.

Theorem 2.2.3 Let $H = (h_1, h_2, \ldots, h_n)$ be an integral vector satisfying

$$h_1 \ge 0 \quad \text{and} \quad h_{i-1} - h_i \le 1 \quad (i = 2, 3, \ldots, n).$$

Then $H$ is the normalized row sum vector of a tournament matrix if and only if

$$h_1 + h_2 + \cdots + h_k \ge 0 \quad (k = 1, 2, \ldots, n) \tag{2.12}$$

with equality for $k = n$. The row sum vector of such a tournament matrix is equal to $R = (r_1, r_2, \ldots, r_n)$ where $r_i = h_i + (i - 1)$ $(i = 1, 2, \ldots, n)$. □
Let $p$ and $n$ be positive integers. Instead of a round-robin tournament in which each of $n$ teams $x_1, x_2, \ldots, x_n$ plays one game against every other team, we may also consider a round-robin tournament in which each team plays $p$ games against every other team. A nonnegative integral matrix $A = [a_{ij}]$ of order $n$ is a $p$-tournament matrix provided

$$a_{ii} = 0 \quad (i = 1, 2, \ldots, n) \quad \text{and} \quad a_{ij} + a_{ji} = p \quad (1 \le i < j \le n),$$

that is, provided

$$A + A^T = p(J_n - I_n).$$

The element $a_{ij}$ records the number of games in which team $x_i$ defeats team $x_j$ $(1 \le i, j \le n;\ i \ne j)$. We have

$$\sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} = \sum_{1 \le i < j \le n} (a_{ij} + a_{ji}) = \sum_{1 \le i < j \le n} p = p \binom{n}{2}.$$

We let $T(R; p)$ denote the class of all $p$-tournament matrices with a prescribed row sum vector $R = (r_1, r_2, \ldots, r_n)$. Thus $T(R; 1) = T(R)$. The proof of Theorem 2.2.2 can easily be adapted to prove the following existence theorem for $p$-tournament matrices.
Theorem 2.2.4 Let $p$ and $n$ be positive integers. Let $R = (r_1, r_2, \ldots, r_n)$ be a nondecreasing, nonnegative integral vector. Then $T(R; p)$ is nonempty if and only if

$$\sum_{i=1}^{k} r_i \ge p \binom{k}{2} \quad (k = 1, 2, \ldots, n), \ \text{with equality when } k = n. \tag{2.13}$$
One can consider an even more general setting of a round-robin tournament in which there is a prescribed number $p_{ij}$ of games to be played between each pair of teams $\{x_i, x_j\}$. Let $P = [p_{ij}]$ be a nonnegative integral matrix of order $n$ with 0's on and below the main diagonal. Then a $P$-tournament matrix is a nonnegative integral matrix $A = [a_{ij}]$ of order $n$ satisfying

$$a_{ii} = 0 \quad (i = 1, 2, \ldots, n) \quad \text{and} \quad a_{ij} + a_{ji} = p_{ij} \quad (1 \le i < j \le n),$$

equivalently, $A + A^T = P + P^T$. We have

$$\sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} = \sum_{1 \le i < j \le n} (a_{ij} + a_{ji}) = \sum_{1 \le i < j \le n} p_{ij}.$$

We denote the class of all $P$-tournament matrices with prescribed row sum vector $R = (r_1, r_2, \ldots, r_n)$ by $T(R; P)$.
The following theorem is due to Cruse [7]. Since we shall not make use of this theorem, we do not include a proof.

Theorem 2.2.5 Let $n$ be a positive integer and let $P = [p_{ij}]$ be a nonnegative integral matrix of order $n$ with 0's on and below its main diagonal. Let $R = (r_1, r_2, \ldots, r_n)$ be a nonnegative integral vector. Then $T(R; P)$ is nonempty if and only if

$$\sum_{i \in K} r_i \ge \sum_{i \in K} \sum_{j \in K} p_{ij} \quad (K \subseteq \{1, 2, \ldots, n\}), \tag{2.14}$$

with equality when $K = \{1, 2, \ldots, n\}$.
For generalized tournament matrices we have the following theorem of Moon [18].

Theorem 2.2.6 Let $n$ be a positive integer and let $R = (r_1, r_2, \ldots, r_n)$ be a nondecreasing, nonnegative vector. Then $T^g(R)$ is nonempty if and only if (2.10) holds.
Proof. We have already verified that the conditions (2.10) hold if there is a matrix in $T^g(R)$. Now suppose that the conditions (2.10) hold. We first observe the following. Let $S = (s_1, s_2, \ldots, s_n)$ where

$$s_i = n - 1 - r_i \quad (i = 1, 2, \ldots, n). \tag{2.15}$$

Suppose that there exists a matrix $A = [a_{ij}]$ of order $n$ with row sum vector $R$ and column sum vector $S$ such that

$$a_{ii} = 0 \ (i = 1, 2, \ldots, n) \quad \text{and} \quad 0 \le a_{ij} \le 1 \ (i, j = 1, 2, \ldots, n;\ i \ne j).$$

Let

$$B = [b_{ij}] = \frac{1}{2}(A - A^T + J_n - I_n).$$

Then

$$b_{ii} = 0 \quad (i = 1, 2, \ldots, n),$$

$$0 \le b_{ij} = \frac{a_{ij} - a_{ji} + 1}{2} \le 1 \quad (i \ne j),$$

and

$$b_{ij} + b_{ji} = 1 \quad (i \ne j).$$

In addition, the sum of the elements in row $i$ of $B$ equals

$$\frac{r_i - s_i + n - 1}{2} = r_i.$$

Thus $B$ is in $T^g(R)$. It thus suffices to show that (2.10) implies the existence of such a matrix $A$.

Let $C = [c_{ij}] = J_n - I_n$. Then we seek a matrix $A = [a_{ij}]$ of order $n$ such that

$$0 \le a_{ij} \le c_{ij} \quad (i, j = 1, 2, \ldots, n),$$

$$\sum_{j=1}^{n} a_{ij} = r_i \ (i = 1, 2, \ldots, n), \quad \text{and} \quad \sum_{i=1}^{n} a_{ij} = s_j \ (j = 1, 2, \ldots, n).$$
Since $R$ is nondecreasing and $S$ is nonincreasing, it follows from Corollary 1.4.2 that such a matrix $A$ exists if and only if

$$\sum_{i=k+1}^{n} \sum_{j=1}^{l} c_{ij} + \sum_{i=1}^{k} r_i - \sum_{j=1}^{l} s_j \ge 0 \quad (k = 1, 2, \ldots, n;\ l = 1, 2, \ldots, n). \tag{2.16}$$

First assume that $k \le l$. Then using (2.15) and the definition of $C$, we see that (2.16) is equivalent to

$$l(n - k) - (l - k) + \sum_{i=1}^{k} r_i - \sum_{j=1}^{l} (n - 1 - r_j) \ge 0,$$

that is,

$$-kl + k + \sum_{i=1}^{k} r_i + \sum_{i=1}^{l} r_i \ge 0. \tag{2.17}$$

But by the Landau inequalities,

$$-kl + k + \sum_{i=1}^{k} r_i + \sum_{i=1}^{l} r_i \ge -kl + k + \binom{k}{2} + \binom{l}{2} = \frac{(l - k)(l - k - 1)}{2} \ge 0,$$

and hence (2.17) holds.
Now assume that $k \ge l + 1$. In a similar way we get that (2.16) is equivalent to

$$l(n - k) + \sum_{i=1}^{k} r_i - \sum_{j=1}^{l} (n - 1 - r_j) \ge 0,$$

that is,

$$-kl + l + \sum_{i=1}^{k} r_i + \sum_{j=1}^{l} r_j \ge 0. \tag{2.18}$$

Using Landau's inequalities again, we get

$$-kl + l + \sum_{i=1}^{k} r_i + \sum_{j=1}^{l} r_j \ge -kl + l + \binom{k}{2} + \binom{l}{2} = \frac{(k - l)(k - l - 1)}{2} \ge 0,$$

and hence (2.18) holds. The proof of the theorem is complete. □
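The first half of this proof—passing from a matrix $A$ with row sums $R$, column sums $S = (n-1) \mathbf{1} - R$, zero diagonal, and entries in $[0, 1]$ to the generalized tournament matrix $B = \tfrac{1}{2}(A - A^T + J - I)$—is easy to check numerically. A small sketch (Python with NumPy; the sample matrix $A$ is ours and merely satisfies the stated constraints):

```python
import numpy as np

def symmetrize_to_generalized_tournament(A):
    """The transformation used in the proof of Theorem 2.2.6: from a matrix A
    with zero diagonal, entries in [0, 1], row sums R and column sums
    S = (n - 1) - R, build B = (A - A^T + J - I)/2."""
    n = A.shape[0]
    return 0.5 * (A - A.T + np.ones((n, n)) - np.eye(n))

# R = (1, 1, 1, 3) is nondecreasing and satisfies Landau's condition;
# S = 3 - R = (2, 2, 2, 0); A below has the required sums and bounded entries.
A = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.5, 0.5, 0.0, 0.0],
              [1.0, 1.0, 1.0, 0.0]])
B = symmetrize_to_generalized_tournament(A)
print(B.sum(axis=1))                                        # (1, 1, 1, 3) = R
print(np.allclose(B + B.T, np.ones((4, 4)) - np.eye(4)))    # True
```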
2.3 Symmetric Matrices

Let $R = (r_1, r_2, \ldots, r_n)$ be a nonnegative vector. Also let $p$ be a positive integer. We define several classes of symmetric matrices with row sum vector $R$.

The class of all symmetric, nonnegative matrices with row sum vector $R$ is denoted by $N(R)$. If $R$ is an integral vector, the class of all symmetric, nonnegative integral matrices of order $n$ with row sum vector $R$ is denoted by $Z^+(R)$. The class $Z^+(R; p)$ is the class of all symmetric, nonnegative integral matrices with row sum vector $R$, each of whose elements does not exceed $p$. The subclasses of $N(R)$, $Z^+(R)$, and $Z^+(R; p)$ consisting of those matrices with only 0's on the main diagonal are denoted, respectively, by $N(R)_0$, $Z^+(R)_0$, and $Z^+(R; p)_0$. If $p = 1$, we use $A(R)$ instead of $Z^+(R; 1)$, and $A(R)_0$ instead of $Z^+(R; 1)_0$. Thus $A(R)$ is the set of adjacency matrices of graphs, with a loop permitted at each vertex, on $n$ vertices with degree sequence $(r_1, r_2, \ldots, r_n)$, and $A(R)_0$ is the set of adjacency matrices of (simple) graphs on $n$ vertices with degree sequence $R$. The classes $N(R)$ and $N(R)_0$ are convex polytopes and will be investigated in Chapter 8.

Given a nonnegative vector $R \ne 0$, the diagonal matrix

$$\begin{bmatrix} r_1 & 0 & \cdots & 0 \\ 0 & r_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & r_n \end{bmatrix}$$

is a symmetric, nonnegative matrix with row sum vector $R$, and so $N(R)$ is nonempty.
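These class definitions translate directly into membership tests. The sketch below (Python with NumPy; the function names are ours) checks whether a given matrix lies in $N(R)$, in $Z^+(R; p)$, or in the corresponding subclasses with zero main diagonal, and illustrates the point with the adjacency matrix of a path, which lies in $A(R)_0$.

```python
import numpy as np

def in_N(A, R, tol=1e-9):
    """A belongs to N(R): symmetric, nonnegative, with row sum vector R."""
    A = np.asarray(A, float)
    return (np.allclose(A, A.T, atol=tol) and (A >= -tol).all()
            and np.allclose(A.sum(axis=1), R, atol=tol))

def in_Z_plus(A, R, p=None, zero_diagonal=False):
    """A belongs to Z^+(R), Z^+(R; p), or the subclass with zero diagonal
    (Z^+(R)_0, Z^+(R; p)_0; for p = 1 and zero diagonal this is A(R)_0)."""
    A = np.asarray(A)
    ok = in_N(A, R) and np.issubdtype(A.dtype, np.integer)
    if p is not None:
        ok = ok and (A <= p).all()
    if zero_diagonal:
        ok = ok and (np.diag(A) == 0).all()
    return ok

# Adjacency matrix of a path on 3 vertices: degree sequence R = (1, 2, 1).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
print(in_Z_plus(A, [1, 2, 1], p=1, zero_diagonal=True))   # True
```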