 Perron–Frobenius theorem

In linear algebra, the Perron–Frobenius theorem, proved by Oskar Perron (1907) and Georg Frobenius (1912), asserts that a real square matrix with positive entries has a unique largest real eigenvalue and that the corresponding eigenvector has strictly positive components; it also asserts a similar statement for certain classes of nonnegative matrices. This theorem has important applications to probability theory (ergodicity of Markov chains); to the theory of dynamical systems (subshifts of finite type); to economics (Leontief's input–output model^{[1]} 8.3.6 p. 681); to demography (the Leslie population age distribution model^{[1]} 8.3.7 p. 683); to the mathematical background of internet search engines^{[2]} 15.2 p. 167; and even to the ranking of football teams^{[3]} p. 80.
"In addition to saying something useful, the Perron–Frobenius theory is elegant. It is a testament to the fact that beautiful mathematics eventually tends to be useful, and useful mathematics eventually tends to be beautiful.^{[1]}"
Statement of the Perron–Frobenius theorem
A matrix in which all entries are positive real numbers is here called positive and a matrix whose entries are nonnegative real numbers is here called nonnegative. The eigenvalues of a real square matrix A are complex numbers and collectively they make up the spectrum of the matrix. The exponential growth rate of the matrix powers A^{k} as k → ∞ is controlled by the eigenvalue of A with the largest absolute value. The Perron–Frobenius theorem describes the properties of the leading eigenvalue and of the corresponding eigenvectors when A is a nonnegative real square matrix. Early results were due to Oskar Perron (1907) and concerned positive matrices. Later, Georg Frobenius (1912) found their extension to certain classes of nonnegative matrices.
Positive matrices
Let A = (a_{ij}) be an n × n positive matrix: a_{ij} > 0 for 1 ≤ i, j ≤ n. Then the following statements hold.
 There is a positive real number r, called the Perron root or the Perron–Frobenius eigenvalue, such that r is an eigenvalue of A and any other eigenvalue λ (possibly complex) is strictly smaller than r in absolute value, |λ| < r. Thus, the spectral radius ρ(A) is equal to r.
 The Perron–Frobenius eigenvalue is simple: r is a simple root of the characteristic polynomial of A. Consequently, the eigenspace associated to r is one-dimensional. (The same is true for the left eigenspace, i.e., the eigenspace for A^{T}.)
 There exists an eigenvector v = (v_{1},…,v_{n}) of A with eigenvalue r such that all components of v are positive: A v = r v, v_{i} > 0 for 1 ≤ i ≤ n. (Respectively, there exists a positive left eigenvector w : w^{T} A = r w^{T}, w_{i} > 0.)
 There are no other positive (or even nonnegative) eigenvectors except positive multiples of v (respectively, left eigenvectors except positive multiples of w), i.e. all other eigenvectors must have at least one negative or non-real component.
 lim_{k → ∞} A^{k}/r^{k} = v w^{T}, where the left and right eigenvectors for A are normalized so that w^{T}v = 1. Moreover, the matrix v w^{T} is the projection onto the eigenspace corresponding to r. This projection is called the Perron projection.
 Collatz–Wielandt formula: for all nonnegative nonzero vectors x, let f(x) be the minimum value of [Ax]_{i} / x_{i} taken over all those i such that x_{i} ≠ 0. Then f is a real-valued function whose maximum is the Perron–Frobenius eigenvalue.
 A "min-max" Collatz–Wielandt formula takes a form similar to the one above: for all strictly positive vectors x, let g(x) be the maximum value of [Ax]_{i} / x_{i} taken over i. Then g is a real-valued function whose minimum is the Perron–Frobenius eigenvalue.
 The Perron–Frobenius eigenvalue satisfies the inequalities min_{i} Σ_{j} a_{ij} ≤ r ≤ max_{i} Σ_{j} a_{ij}.
These claims can be found in Meyer^{[1]} chapter 8, claims 8.2.11–8.2.15 on page 667 and exercises 8.2.5, 8.2.7, and 8.2.9 on pages 668–669.
The left and right eigenvectors v and w are usually normalized so that the sum of their components is equal to 1; in this case, they are sometimes called stochastic eigenvectors.
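These statements are easy to check numerically. The sketch below verifies the strict dominance of the Perron root and the positivity of its (stochastically normalized) eigenvector; the 3 × 3 positive matrix is chosen purely for illustration:

```python
import numpy as np

# A sample positive matrix (an assumed example for illustration).
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, 4.0]])

eigvals, eigvecs = np.linalg.eig(A)

# The Perron root r: the eigenvalue of maximal absolute value.
k = np.argmax(np.abs(eigvals))
r = eigvals[k].real

# Every other eigenvalue is strictly smaller in absolute value.
others = np.delete(np.abs(eigvals), k)
assert np.all(others < r)

# The corresponding eigenvector can be scaled to be strictly positive;
# dividing by the component sum gives the "stochastic eigenvector".
v = eigvecs[:, k].real
v = v / v.sum()
assert np.all(v > 0)
```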
Nonnegative matrices
An extension of the theorem to matrices with nonnegative entries is also available. In order to highlight the similarities and differences between the two cases, the following points should be noted: every nonnegative matrix can obviously be obtained as a limit of positive matrices, so one obtains the existence of an eigenvector with nonnegative components; the corresponding eigenvalue is nonnegative and greater than or equal, in absolute value, to all other eigenvalues (see Meyer^{[1]} chapter 8.3 page 670 or Gantmacher^{[4]} chapter XIII.3 theorem 3 page 66 for details). However, the simple examples

A = (0 1; 1 0),  A = (0 1; 0 0)

show that for nonnegative matrices there may exist eigenvalues of the same absolute value as the maximal one (1 and −1 are eigenvalues of the first matrix); moreover, the maximal eigenvalue may not be a simple root of the characteristic polynomial and can be zero, with the corresponding eigenvector (1,0) not strictly positive (second example). So it may seem that most properties break down for nonnegative matrices; however, Frobenius found the right way around the difficulty.
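The degeneracies just described can be confirmed directly. The two matrices below are the standard 2 × 2 examples matching the eigendata mentioned in the text (their exact entries are an assumption of this illustration):

```python
import numpy as np

# Standard 2x2 nonnegative (but not positive) counterexamples.
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # eigenvalues 1 and -1: two eigenvalues of maximal modulus
C = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # eigenvalue 0 is a double root; eigenvector (1, 0) is not strictly positive

assert set(np.round(np.linalg.eigvals(B).real, 9)) == {1.0, -1.0}
assert np.allclose(np.linalg.eigvals(C), 0.0)
```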
The key feature of the theory in the nonnegative case is to identify a special subclass of nonnegative matrices – the irreducible matrices – for which a nontrivial generalization is possible. Namely, although the eigenvalues attaining the maximal absolute value may not be unique, the structure of the maximal eigenvalues is under control: they have the form e^{i2πl/h}r, where h is an integer called the period of the matrix, r is a real strictly positive eigenvalue, and l = 0, 1, ..., h − 1. The eigenvector corresponding to r has strictly positive components (in contrast with the general case of nonnegative matrices, where components are only nonnegative). Moreover, all such eigenvalues are simple roots of the characteristic polynomial. Further properties are described below.
Classification of matrices
Let A be a square matrix (not necessarily positive or even real). The matrix A is irreducible if any of the following equivalent properties holds.
Definition 1: A does not have nontrivial invariant coordinate subspaces. Here a nontrivial coordinate subspace means a linear subspace spanned by any proper nonempty subset of the standard basis vectors. More explicitly, for any linear subspace spanned by standard basis vectors e_{i1}, ..., e_{ik}, 0 < k < n, its image under the action of A is not contained in the same subspace.
Definition 2: A cannot be conjugated into block upper triangular form by a permutation matrix P:

PAP^{−1} = (E F; 0 G),

where E and G are nontrivial (i.e. of size greater than zero) square matrices.
If A is nonnegative other definitions exist:
Definition 3: For every pair of indices i and j, there exists a natural number m such that (A^{m})_{ij} is not equal to 0.
Definition 4: One can associate with a matrix A a certain directed graph G_{A}. It has exactly n vertices, where n is the size of A, and there is an edge from vertex i to vertex j precisely when A_{ij} > 0. Then the matrix A is irreducible if and only if its associated graph G_{A} is strongly connected.
A matrix is reducible if it is not irreducible.
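A practical irreducibility test follows from these definitions. The sketch below uses the fact (stated again under Further properties) that an n × n nonnegative matrix A is irreducible exactly when (I + A)^{n−1} is entrywise positive, a matrix form of strong connectivity of G_{A}; the helper name and the example matrices are choices of this illustration:

```python
import numpy as np

def is_irreducible(A):
    """A nonnegative square matrix A is irreducible iff (I + A)^(n-1)
    is entrywise positive (equivalently, G_A is strongly connected)."""
    n = A.shape[0]
    M = np.linalg.matrix_power(np.eye(n) + (A > 0), n - 1)
    return bool(np.all(M > 0))

cycle = np.array([[0, 1], [1, 0]])   # 2-cycle graph: strongly connected
upper = np.array([[0, 1], [0, 0]])   # no path from vertex 2 back to vertex 1

assert is_irreducible(cycle)
assert not is_irreducible(upper)
```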
Let A be nonnegative. Fix an index i and define the period of index i to be the greatest common divisor of all natural numbers m such that (A^{m})_{ii} > 0. When A is irreducible, the period of every index is the same and is called the period of A. In the irreducible case, the period can equivalently be defined as the greatest common divisor of the lengths of the closed directed paths in G_{A} (see Kitchens^{[5]} page 16). The period is also called the index of imprimitivity (Meyer^{[1]} page 674) or the order of cyclicity.
If the period is 1, A is aperiodic.
A matrix A is primitive if it is nonnegative and its mth power is positive for some natural number m (i.e. the same m works for all pairs of indices). It can be proved that primitive matrices are the same as irreducible aperiodic nonnegative matrices.
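The period can be computed directly from its definition as a gcd of return times. The sketch below does this for the first index of an irreducible matrix (the helper names and example matrices are illustrative assumptions; for an irreducible matrix any index gives the same gcd, and period 1 means the matrix is aperiodic, hence primitive):

```python
import numpy as np
from math import gcd
from functools import reduce

def period(A):
    """gcd of all m <= n^2 with (A^m)_{00} > 0; for an irreducible
    nonnegative matrix this is the period of A (any index works)."""
    n = A.shape[0]
    B = (A > 0).astype(int)
    M = B.copy()                      # M holds the pattern of A^m
    returns = []
    for m in range(1, n * n + 1):
        if M[0, 0] > 0:
            returns.append(m)
        M = (M @ B > 0).astype(int)
    return reduce(gcd, returns, 0)

cycle3 = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])   # pure 3-cycle: period 3
fib = np.array([[1, 1], [1, 0]])                        # loop at index 0: period 1
```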
A positive square matrix is primitive and a primitive matrix is irreducible. All statements of the Perron–Frobenius theorem for positive matrices remain true for primitive matrices. However, a general nonnegative irreducible matrix A may possess several eigenvalues whose absolute value is equal to the spectral radius of A, so the statements need to be correspondingly modified. Actually the number of such eigenvalues is exactly equal to the period. Results for nonnegative matrices were first obtained by Frobenius in 1912.
Perron–Frobenius theorem for irreducible matrices
Let A be an irreducible nonnegative n × n matrix with period h and spectral radius ρ(A) = r. Then the following statements hold.
 The number r is a positive real number and it is an eigenvalue of the matrix A, called the Perron–Frobenius eigenvalue.
 The Perron–Frobenius eigenvalue r is simple. Both right and left eigenspaces associated with r are one-dimensional.
 A has a left eigenvector v with eigenvalue r whose components are all positive.
 Likewise, A has a right eigenvector w with eigenvalue r whose components are all positive.
 The only eigenvectors whose components are all positive are those associated with the eigenvalue r.
 Matrix A has exactly h (where h is the period) complex eigenvalues with absolute value r. Each of them is a simple root of the characteristic polynomial and is the product of r with an hth root of unity.
 Let ω = 2π/h. Then the matrix A is similar to e^{iω}A, consequently the spectrum of A is invariant under multiplication by e^{iω} (corresponding to the rotation of the complex plane by the angle ω).
 If h > 1 then there exists a permutation matrix P such that

PAP^{−1} = (0 A_{1} 0 ⋯ 0; 0 0 A_{2} ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ A_{h−1}; A_{h} 0 0 ⋯ 0),

 where the blocks along the main diagonal are zero square matrices.
 Collatz–Wielandt formula: for all nonnegative nonzero vectors x let f(x) be the minimum value of [Ax]_{i} / x_{i} taken over all those i such that x_{i} ≠ 0. Then f is a real-valued function whose maximum is the Perron–Frobenius eigenvalue.
 The Perron–Frobenius eigenvalue satisfies the inequalities min_{i} Σ_{j} a_{ij} ≤ r ≤ max_{i} Σ_{j} a_{ij}.
Examples show that the (zero) blocks on the diagonal may be of different sizes, the matrices A_{j} need not be square, and h need not divide n.
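A quick numerical illustration of this structure: the matrix below (an assumed example) is irreducible with period h = 2, and it has exactly two eigenvalues of maximal modulus, r and −r = r·e^{iπ}:

```python
import numpy as np

# An irreducible nonnegative matrix with period h = 2 (assumed example):
A = np.array([[0.0, 2.0],
              [8.0, 0.0]])

lam = np.linalg.eigvals(A)
r = np.abs(lam).max()                       # Perron root: here r = 4
on_circle = lam[np.isclose(np.abs(lam), r)]
assert len(on_circle) == 2                  # exactly h eigenvalues of modulus r
assert np.isclose(r, 4.0)
```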
Further properties
Let A be an irreducible nonnegative matrix, then:
 (1+A)^{n−1} is a positive matrix. (Meyer^{[1]} claim 8.3.5 p. 672).
 Wielandt's theorem. If |B| ≤ A (entrywise), then ρ(B) ≤ ρ(A). If equality holds (i.e. if μ = ρ(A)e^{iφ} is an eigenvalue of B), then B = e^{iφ} D A D^{−1} for some diagonal unitary matrix D (i.e. the diagonal entries of D are of the form e^{iΘ_{l}} and the off-diagonal entries are zero). (Meyer^{[1]} claim 8.3.11 p. 675).
 If some power A^{q} is reducible, then it is completely reducible, i.e. for some permutation matrix P one has PA^{q}P^{−1} = diag(A_{1}, A_{2}, ..., A_{d}), where the A_{i} are irreducible matrices having the same maximal eigenvalue. The number d of these matrices is the greatest common divisor of q and h, where h is the period of A. (Gantmacher^{[4]} section XIII.5 theorem 9).
 If c(x) = x^{n} + c_{k_1} x^{n−k_1} + c_{k_2} x^{n−k_2} + ... + c_{k_s} x^{n−k_s} is the characteristic polynomial of A in which only the nonzero coefficients are listed, then the period of A equals the greatest common divisor of k_{1}, k_{2}, ..., k_{s}. (Meyer^{[1]} page 679).
 Cesàro averages: lim_{k → ∞} (1/k) Σ_{i=0}^{k−1} A^{i}/r^{i} = v w^{T}, where the left and right eigenvectors for A are normalized so that w^{T}v = 1. Moreover, the matrix v w^{T} is the spectral projection corresponding to r, the Perron projection. (Meyer^{[1]} example 8.3.2 p. 677).
 Let r be the Perron–Frobenius eigenvalue; then the adjugate matrix of (r − A) is positive (Gantmacher^{[4]} section XIII.2.2 page 62).
 If A has at least one nonzero diagonal element, then A is primitive. (Meyer^{[1]} example 8.3.3 p. 678).
The following facts are also worth mentioning.
 If 0 ≤ A < B, then r_{A} ≤ r_{B}; moreover, if A is irreducible, then the inequality is strict: r_{A} < r_{B}.
One of the definitions of a primitive matrix requires A to be nonnegative with A^{m} positive for some m. One may wonder how large m can be, depending on the size of A. The following answers this question.
 Assume A is a nonnegative primitive matrix of size n; then A^{n²−2n+2} is positive. Moreover, there exists a matrix M such that M^{k} remains not positive (just nonnegative) for all k < n² − 2n + 2; in particular, (M^{n²−2n+1})_{11} = 0.
(Meyer^{[1]} chapter 8 example 8.3.4 page 679 and exercise 8.3.9 p. 685).
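The extremal matrix M in this bound is classically realized by the Wielandt matrix, a directed n-cycle with one extra arc. The sketch below (function names are illustrative assumptions) checks the exponent n² − 2n + 2 for small n:

```python
import numpy as np

def wielandt(n):
    """Wielandt matrix: the n-cycle 1 -> 2 -> ... -> n -> 1 plus the
    extra arc n -> 2 (0-indexed below); the classical extremal example."""
    W = np.zeros((n, n), dtype=int)
    for i in range(n - 1):
        W[i, i + 1] = 1
    W[n - 1, 0] = 1
    W[n - 1, 1] = 1
    return W

def exponent(A):
    """Smallest k with A^k entrywise positive (A assumed primitive)."""
    n = A.shape[0]
    M = A.copy()
    for k in range(1, n * n + 1):
        if np.all(M > 0):
            return k
        M = (M @ A > 0).astype(int)
    return None

# For n = 3 the exponent is 3^2 - 2*3 + 2 = 5; for n = 4 it is 10.
```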
Applications
Numerous books have been written on the subject of nonnegative matrices, and Perron–Frobenius theory is invariably a central feature. So there is a vast application area and the examples given below barely begin to scratch its surface.
Nonnegative matrices
The Perron–Frobenius theorem does not apply directly to general nonnegative matrices. Nevertheless, any reducible square matrix A may be written in upper-triangular block form (known as the normal form of a reducible matrix)^{[6]}

PAP^{−1} = (B_{1} * ⋯ *; 0 B_{2} ⋯ *; ⋮ ⋮ ⋱ ⋮; 0 0 ⋯ B_{h}),

where P is a permutation matrix and each B_{i} is a square matrix that is either irreducible or zero. Now if A is nonnegative then so are all the B_{i}, and the spectrum of A is just the union of their spectra. Therefore many of the spectral properties of A may be deduced by applying the theorem to the irreducible B_{i}.
For example, the Perron root is the maximum of the ρ(B_{i}). While there will still be eigenvectors with nonnegative components, it is quite possible that none of these will be positive.
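A small numerical illustration of the normal form (the matrix is an assumed example, already in block upper-triangular form):

```python
import numpy as np

# A reducible matrix in normal form, with irreducible diagonal blocks
# B1 = [[0, 1], [1, 0]] and B2 = [[2]]:
A = np.array([[0.0, 1.0, 5.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0]])

spec = sorted(np.linalg.eigvals(A).real)
# The spectrum is the union of the blocks' spectra: {-1, 1} U {2};
# the Perron root is max(rho(B1), rho(B2)) = max(1, 2) = 2.
assert np.allclose(spec, [-1.0, 1.0, 2.0])
```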
Stochastic matrices
A row (column) stochastic matrix is a square matrix each of whose rows (columns) consists of nonnegative real numbers whose sum is unity. The theorem cannot be applied directly to such matrices because they need not be irreducible.
If A is row-stochastic then the column vector with each entry 1 is an eigenvector corresponding to the eigenvalue 1, which is also ρ(A) by the remark above. It need not be the only eigenvalue on the unit circle, and the associated eigenspace can be multi-dimensional. If A is row-stochastic and irreducible then the Perron projection is also row-stochastic and all its rows are equal.
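For an irreducible aperiodic row-stochastic matrix this specializes to Markov-chain convergence: powers of A approach a projection whose identical rows are the stationary distribution. The matrix below is an assumed example:

```python
import numpy as np

# A row-stochastic, irreducible, aperiodic (hence primitive) matrix:
A = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.3, 0.3, 0.4]])

ones = np.ones(3)
assert np.allclose(A @ ones, ones)     # eigenvalue 1 with eigenvector (1,1,1)

# Powers converge to the Perron projection, whose rows all equal the
# stationary distribution pi (pi A = pi, sum(pi) = 1):
P = np.linalg.matrix_power(A, 200)
pi = P[0]
assert np.allclose(P, np.outer(ones, pi))
assert np.allclose(pi @ A, pi)
```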
Algebraic graph theory
The theorem has particular use in algebraic graph theory. The "underlying graph" of a nonnegative nsquare matrix is the graph with vertices numbered 1, ..., n and arc ij if and only if A_{ij} ≠ 0. If the underlying graph of such a matrix is strongly connected, then the matrix is irreducible, and thus the theorem applies. In particular, the adjacency matrix of a strongly connected graph is irreducible.^{[7]}^{[8]}
Finite Markov chains
The theorem has a natural interpretation in the theory of finite Markov chains (where it is the matrixtheoretic equivalent of the convergence of an irreducible finite Markov chain to its stationary distribution, formulated in terms of the transition matrix of the chain; see, for example, the article on the subshift of finite type).
Compact operators
Main article: Krein–Rutman theorem

More generally, the theorem can be extended to the case of nonnegative compact operators, which, in many ways, resemble finite-dimensional matrices. These are commonly studied in physics, under the name of transfer operators, or sometimes Ruelle–Perron–Frobenius operators (after David Ruelle). In this case, the leading eigenvalue corresponds to the thermodynamic equilibrium of a dynamical system, and the lesser eigenvalues to the decay modes of a system that is not in equilibrium. Thus, the theory offers a way of discovering the arrow of time in what would otherwise appear to be reversible, deterministic dynamical processes, when examined from the point of view of point-set topology.^{[9]}
Proof methods
A common thread in many proofs is the Brouwer fixed point theorem^{[citation needed]}. Another popular method is that of Wielandt (1950), who used the Collatz–Wielandt formula described above to extend and clarify Frobenius's work (see Gantmacher^{[4]} section XIII.2.2 page 54). A proof based on spectral theory can be found in,^{[10]} from which part of the arguments below are borrowed.
The Perron root is the strictly maximal eigenvalue for positive (and primitive) matrices

 If A is a positive (or more generally primitive) matrix, then there exists a real positive eigenvalue r (the Perron–Frobenius eigenvalue or Perron root), which is strictly greater in absolute value than all other eigenvalues; hence r is the spectral radius of A.
Note that this claim fails for general nonnegative irreducible matrices, which have h eigenvalues of the same absolute value as r, where h is the period of A.
Proof. Consider first the case of positive matrices. Let A be a positive matrix and assume that its spectral radius ρ(A) = 1 (otherwise consider A/ρ(A)). Then there exists an eigenvalue λ on the unit circle, and all the other eigenvalues are at most 1 in absolute value. Suppose that λ ≠ 1. Then there exists a positive integer m such that A^{m} is a positive matrix and the real part of λ^{m} is negative. Let ε be half the smallest diagonal entry of A^{m} and set T = A^{m} − εI, which is yet another positive matrix. Moreover, if Ax = λx then A^{m}x = λ^{m}x, so λ^{m} − ε is an eigenvalue of T. By the choice of m this point lies outside the unit disk, and consequently ρ(T) > 1. On the other hand, all the entries of T are positive and less than or equal to those of A^{m}, so by Gelfand's formula ρ(T) ≤ ρ(A^{m}) ≤ ρ(A)^{m} = 1. This contradiction means that λ = 1 and that there can be no other eigenvalues on the unit circle. The proof for positive matrices is finished.
Exactly the same arguments can be applied to the case of primitive matrices, once one establishes the following simple lemma, which clarifies the properties of primitive matrices.
Lemma: Consider a nonnegative matrix A. Assume there exists m such that A^{m} is positive; then A^{m+1}, A^{m+2}, A^{m+3}, ... are all positive.
Indeed, A^{m+1} = A A^{m}, so it can have a zero element only if some row of A is entirely zero, but in that case the same row of A^{m} would be zero. The lemma is proved.
Having established the lemma, one applies the same arguments as above to primitive matrices to prove the main claim.
Power method and the positive eigenpair

 For a positive (or more generally irreducible nonnegative) matrix A the dominant eigenvector is real and strictly positive (for a general nonnegative A it is, respectively, nonnegative).
This can be argued via the power method, which states that for a sufficiently generic (in the sense discussed below) matrix A the sequence of vectors b_{k+1} = Ab_{k}/‖Ab_{k}‖ converges to the eigenvector with the maximal eigenvalue. (The initial vector b_{0} can be chosen arbitrarily, apart from a set of measure zero.) Starting with a nonnegative vector b_{0}, one gets a sequence of nonnegative vectors b_{k}; hence the limiting vector is also nonnegative. By the power method this limiting vector is the desired dominant eigenvector for A, so the assertion is proved. The corresponding eigenvalue is clearly nonnegative.
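The iteration just described can be sketched as follows (the function name, tolerance, and example matrix are assumptions of this illustration):

```python
import numpy as np

def power_method(A, tol=1e-12, max_iter=10_000):
    """Power iteration b_{k+1} = A b_k / ||A b_k||, started from the
    nonnegative vector (1, ..., 1) as in the argument above."""
    b = np.ones(A.shape[0])
    for _ in range(max_iter):
        nb = A @ b
        nb = nb / np.linalg.norm(nb)
        if np.linalg.norm(nb - b) < tol:
            b = nb
            break
        b = nb
    r = (b @ A @ b) / (b @ b)   # Rayleigh quotient recovers the eigenvalue
    return r, b

# Example: for [[2, 1], [1, 3]] the dominant eigenvalue is (5 + sqrt(5)) / 2.
r_est, b = power_method(np.array([[2.0, 1.0], [1.0, 3.0]]))
```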
To complete the proof, two arguments should be added. First, the power method converges for matrices which do not have several eigenvalues of the same absolute value as the maximal one; the argument of the previous section guarantees this.
Second, one must ensure strict positivity of all components of the eigenvector in the case of irreducible matrices. This follows from the following simple fact, which is of independent interest:
 Lemma: consider a positive (or more generally irreducible nonnegative) matrix A. If v is any nonnegative eigenvector for A, then it is necessarily strictly positive, and the corresponding eigenvalue is also strictly positive.
Proof. Recall that one of the definitions of irreducibility for nonnegative matrices is that for all indices i, j there exists m such that (A^{m})_{ij} is strictly positive. Let v be a nonnegative eigenvector with eigenvalue r, and assume that at least one of its components, say the i-th, is strictly positive. First, the corresponding eigenvalue is strictly positive: indeed, take n such that (A^{n})_{ii} > 0; then r^{n}v_{i} = (A^{n}v)_{i} ≥ (A^{n})_{ii}v_{i} > 0, hence r > 0. Strict positivity of the eigenvector follows in the same manner: for any index j, take m such that (A^{m})_{ji} > 0; then r^{m}v_{j} = (A^{m}v)_{j} ≥ (A^{m})_{ji}v_{i} > 0, hence v_{j} > 0, i.e. the eigenvector is strictly positive. The proof is finished.
Multiplicity one
The proof that the Perron–Frobenius eigenvalue is a simple root of the characteristic polynomial is also elementary. The arguments here are close to the ones in Meyer^{[1]} chapter 8 page 665, where details can be found.

 The eigenspace associated to the Perron–Frobenius eigenvalue r is one-dimensional.
Consider a strictly positive eigenvector v corresponding to r, and suppose there is another eigenvector w with the same eigenvalue, not proportional to v. (The vector w can be chosen real, because A and r are both real, so the null space of A − r has a basis consisting of real vectors.) Assume at least one of the components of w is positive (otherwise multiply w by −1). Take the maximal α such that u = v − αw is nonnegative; then one of the components of u is zero, otherwise α would not be maximal. The vector u is an eigenvector with eigenvalue r. If u were nonzero, the lemma of the previous section would make u strictly positive, contradicting the vanishing component; hence u = 0 and w is proportional to v, a contradiction. The claim is proved.

 There are no Jordan cells corresponding to the Perron–Frobenius eigenvalue r or to the other eigenvalues of the same absolute value.
The idea is the following: if there is a Jordan cell, then the infinity norm ‖(A/r)^{k}‖_{∞} tends to infinity as k → ∞, but this contradicts the existence of the positive eigenvector.
The details are as follows. Assume r = 1, otherwise consider A/r. Let v be the Perron–Frobenius strictly positive eigenvector, so Av = v. Then for every k and every row i, (min_{j} v_{j}) Σ_{j} (A^{k})_{ij} ≤ Σ_{j} (A^{k})_{ij} v_{j} = v_{i} ≤ max_{j} v_{j}, hence

‖A^{k}‖_{∞} ≤ max_{j} v_{j} / min_{j} v_{j}.

So ‖A^{k}‖_{∞} is bounded for all k. This actually gives another proof that there is no eigenvalue of absolute value greater than the Perron–Frobenius one. And it gives more: it contradicts the existence of a Jordan cell for any eigenvalue of absolute value equal to 1 (in particular for the Perron–Frobenius one), because the existence of a Jordan cell implies that ‖A^{k}‖_{∞} is unbounded. For example, for the two-by-two Jordan cell J = (λ 1; 0 λ) one has J^{k} = (λ^{k} kλ^{k−1}; 0 λ^{k}), hence ‖J^{k}‖_{∞} = k + 1 for |λ| = 1, which tends to infinity with k. Since J^{k} = C^{−1}A^{k}C for some invertible C, it follows that ‖A^{k}‖ ≥ ‖J^{k}‖ / (‖C^{−1}‖ ‖C‖), so it also tends to infinity. The resulting contradiction implies that there are no Jordan cells for the corresponding eigenvalues. The proof is finished.
Combining the two claims above together one gets:

 The Perron–Frobenius eigenvalue r is a simple root of the characteristic polynomial.
In the case of non-primitive matrices there exist other eigenvalues which have the same absolute value as r. The same claim is true for them, but it requires more work.
No other nonnegative eigenvectors

 Consider a positive (or more generally irreducible nonnegative) matrix A. The Perron–Frobenius eigenvector is the only (up to multiplication by a constant) nonnegative eigenvector for A.
So all other eigenvectors must contain a negative or complex component. The idea of the proof is as follows: eigenvectors for different eigenvalues should be orthogonal in some sense; but two positive eigenvectors cannot be orthogonal, so they must correspond to the same eigenvalue, and the eigenspace for the Perron–Frobenius eigenvalue is one-dimensional.
The formal proof goes as follows. Assume there exists an eigenpair (λ, y) for A such that the vector y is nonnegative and nonzero. Consider (r, x), where x is the left Perron–Frobenius eigenvector for A (i.e. the Perron–Frobenius eigenvector for A^{T}). Then r x^{T}y = (x^{T}A)y = x^{T}(Ay) = λ x^{T}y; also x^{T}y > 0, since x is strictly positive, so r = λ. But the eigenspace for the Perron–Frobenius eigenvalue r is one-dimensional, as was established before. Hence the nonnegative eigenvector y is a multiple of the Perron–Frobenius one. The assertion is proved.
One can consult Meyer^{[1]} chapter 8 claim 8.2.10 page 666 for details.
Collatz–Wielandt formula

 Consider a positive (or more generally irreducible nonnegative) matrix A. For all nonnegative nonzero vectors x let f(x) be the minimum value of [Ax]_{i} / x_{i} taken over all those i such that x_{i} ≠ 0. Then f is a real-valued function whose maximum is the Perron–Frobenius eigenvalue r.
First observe that r is attained: take x to be the Perron–Frobenius eigenvector v. So one only needs to prove that the value of f on any other vector is less than or equal to r. Consider a vector x and let ξ = f(x), so 0 ≤ ξx ≤ Ax. Let w be the strictly positive left eigenvector for A (the eigenvector for A^{T}). Then w^{T}ξx ≤ w^{T}(Ax) = (w^{T}A)x = r w^{T}x, and since w^{T}x > 0 this gives ξ ≤ r. The proof is finished.
One can consult Meyer^{[1]} chapter 8 page 666 for details.
Perron projection as a limit: A^{k}/r^{k}
Let A be a positive (or more generally primitive) matrix, and let r be its Perron–Frobenius eigenvalue. Then:

 The limit of A^{k}/r^{k} as k → ∞ exists; denote it by P.
 P is a projection operator: P^{2}=P, which commutes with A: AP=PA.
 The image of P is one-dimensional and spanned by the Perron–Frobenius eigenvector v (respectively for P^{T}, by the Perron–Frobenius eigenvector w for A^{T}).
 P = v w^{T}, where v, w are normalized so that w^{T}v = 1.
 Hence P is a positive operator.
Hence P is a spectral projection for the Perron–Frobenius eigenvalue r, and is called the Perron projection. Note that the assertion above is not true for general nonnegative irreducible matrices.
Actually the claims above (except claim 5) are valid for any matrix M possessing an eigenvalue r which is strictly greater than all other eigenvalues in absolute value and is a simple root of the characteristic polynomial. (These requirements hold for primitive matrices, as shown above.) The proof of this more general fact is sketched below.
Proof. One can demonstrate the existence of the limit as follows. Assume M is diagonalizable, so it is conjugate to a diagonal matrix with eigenvalues r_{1}, ..., r_{n} on the diagonal (denote r_{1} = r). Then M^{k}/r^{k} is conjugate to diag(1, (r_{2}/r)^{k}, ..., (r_{n}/r)^{k}), which tends to diag(1, 0, ..., 0) as k → ∞. So the limit exists. The same method works for general M (without the assumption that M is diagonalizable), using the Jordan normal form.
The projection and commutativity properties are elementary corollaries of the definition: M · M^{k}/r^{k} = M^{k}/r^{k} · M; P^{2} = lim M^{2k}/r^{2k} = P. The third fact is also elementary: M(Pu) = M lim M^{k}/r^{k} u = lim r · M^{k+1}/r^{k+1} u, so taking the limit one gets M(Pu) = r(Pu); hence the image of P lies in the r-eigenspace of M, which is one-dimensional by the assumptions.
Denote by v the r-eigenvector for M (and by w the one for M^{T}). The columns of P are multiples of v, because the image of P is spanned by v; respectively, the rows of P are multiples of w^{T}. So P takes the form a·(v w^{T}) for some scalar a. Hence its trace equals a·(w^{T}v). On the other hand, the trace of a projector equals the dimension of its image, which here is exactly one (P acts as the identity on the r-eigenvector of M, so P ≠ 0). One concludes that choosing the normalization w^{T}v = 1 yields P = v w^{T}. The proof is finished.
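The limit and its properties can be checked numerically. In the sketch below the example matrix, its Perron root r = 2, and the eigenvectors v = (2, 1), w = (1, 1) are worked out by hand and assumed for the illustration:

```python
import numpy as np

# Numerical check that A^k / r^k converges to the Perron projection v w^T.
A = np.array([[1.0, 2.0],
              [1.0, 0.0]])
r = 2.0                        # eigenvalues of A are 2 and -1
v = np.array([2.0, 1.0])       # right eigenvector: A v = 2 v
w = np.array([1.0, 1.0])       # left eigenvector:  w A = 2 w
w = w / (w @ v)                # normalize so that w^T v = 1

P = np.linalg.matrix_power(A, 50) / r ** 50
assert np.allclose(P, np.outer(v, w), atol=1e-10)
assert np.allclose(P @ P, P, atol=1e-8)    # P is a projection
assert np.allclose(A @ P, P @ A)           # P commutes with A
```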
Inequalities for Perron–Frobenius eigenvalue
For any nonnegative matrix A its Perron–Frobenius eigenvalue r satisfies the inequality r ≤ max_{i} Σ_{j} a_{ij}.
Actually this is not specific to nonnegative matrices: for any matrix A and any eigenvalue λ of it, |λ| ≤ max_{i} Σ_{j} |a_{ij}|. This is an immediate corollary of the Gershgorin circle theorem. However, there is another, more direct proof. Any matrix-induced norm satisfies ‖A‖ ≥ |λ| for any eigenvalue λ, because for a corresponding eigenvector x, ‖A‖ ≥ ‖Ax‖/‖x‖ = |λ|·‖x‖/‖x‖ = |λ|. The infinity norm of a matrix is the maximum of its row sums: ‖A‖_{∞} = max_{i} Σ_{j} |a_{ij}|. Hence the desired inequality is exactly ‖A‖_{∞} ≥ |λ| applied to the nonnegative matrix A.
Another inequality is min_{i} Σ_{j} a_{ij} ≤ r.
This fact is specific to nonnegative matrices; for general matrices there is nothing similar. To prove it, first suppose that A is positive (not just nonnegative). Then there exists a positive eigenvector w such that Aw = rw and the smallest component of w (say w_{i}) is 1. Then r = (Aw)_{i} ≥ the sum of the numbers in row i of A. Thus the minimum row sum gives a lower bound for r, and this observation can be extended to all nonnegative matrices by continuity.
Another way to see it is via the Collatz–Wielandt formula: taking x = (1, 1, ..., 1) immediately yields the inequality.
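Both row-sum bounds, and the Collatz–Wielandt argument with x = (1, ..., 1), can be confirmed numerically (the matrix is an assumed example):

```python
import numpy as np

# The Perron root is squeezed between the min and max row sums.
A = np.array([[1.0, 2.0, 0.0],
              [3.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])

r = np.abs(np.linalg.eigvals(A)).max()
row_sums = A.sum(axis=1)                   # here (3, 4, 3)
assert row_sums.min() <= r <= row_sums.max()

# Collatz-Wielandt with x = (1, ..., 1): f(x) = min_i [Ax]_i = min row sum.
x = np.ones(3)
assert np.isclose((A @ x).min(), row_sums.min())
```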
Further proofs
Perron projection
The proof now proceeds using spectral decomposition. The trick here is to split the Perron root away from the other eigenvalues. The spectral projection associated with the Perron root is called the Perron projection and it enjoys the following property:

 The Perron projection of an irreducible nonnegative square matrix is a positive matrix.
Perron's findings and also (1)–(5) of the theorem are corollaries of this result. The key point is that a positive projection always has rank one. This means that if A is an irreducible nonnegative square matrix then the algebraic and geometric multiplicities of its Perron root are both one. Also if P is its Perron projection then AP = PA = ρ(A)P so every column of P is a positive right eigenvector of A and every row is a positive left eigenvector. Moreover if Ax = λx then PAx = λPx = ρ(A)Px which means Px = 0 if λ ≠ ρ(A). Thus the only positive eigenvectors are those associated with ρ(A). And there is yet more to come. If A is a primitive matrix with ρ(A) = 1 then it can be decomposed as P ⊕ (1 − P)A so that A^{n} = P + (1 − P)A^{n}. As n increases the second of these terms decays to zero leaving P as the limit of A^{n} as n → ∞.
The power method is a convenient way to compute the Perron projection of a primitive matrix. If v and w are the positive row and column vectors that it generates then the Perron projection is just wv/vw. It should be noted that the spectral projections aren't neatly blocked as in the Jordan form. Here they are overlaid on top of one another and each generally has complex entries extending to all four corners of the square matrix. Nevertheless they retain their mutual orthogonality which is what facilitates the decomposition.
Peripheral projection
The analysis when A is irreducible and nonnegative is broadly similar. The Perron projection is still positive but there may now be other eigenvalues of modulus ρ(A) that negate use of the power method and prevent the powers of (1 − P)A decaying as in the primitive case whenever ρ(A) = 1. So enter the peripheral projection which is the spectral projection of A corresponding to all the eigenvalues that have modulus ρ(A) ....

 The peripheral projection of an irreducible nonnegative square matrix is a nonnegative matrix with a positive diagonal.
Cyclicity
Suppose in addition that ρ(A) = 1 and A has h eigenvalues on the unit circle. If P is the peripheral projection then the matrix R = AP = PA is nonnegative and irreducible, R^{h} = P, and the cyclic group P, R, R^{2}, ..., R^{h−1} represents the harmonics of A. The spectral projection of A at the eigenvalue λ on the unit circle is given by the formula (1/h) Σ_{k=1}^{h} λ^{−k}R^{k}. All of these projections (including the Perron projection) have the same positive diagonal; moreover, choosing any one of them and then taking the modulus of every entry invariably yields the Perron projection. Some donkey work is still needed in order to establish the cyclic properties (6)–(8) but it's essentially just a matter of turning the handle. The spectral decomposition of A is given by A = R ⊕ (1 − P)A, so the difference between A^{n} and R^{n} is A^{n} − R^{n} = (1 − P)A^{n}, representing the transients of A^{n}, which eventually decay to zero. P may be computed as the limit of A^{nh} as n → ∞.
Caveats
The matrices L = , P = , T = , M = provide simple examples of what can go wrong if the necessary conditions are not met. It is easily seen that the Perron and peripheral projections of L are both equal to P, thus when the original matrix is reducible the projections may lose nonnegativity and there is no chance of expressing them as limits of its powers. The matrix T is an example of a primitive matrix with zero diagonal. If the diagonal of an irreducible nonnegative square matrix is nonzero then the matrix must be primitive but this example demonstrates that the converse is false. M is an example of a matrix with several missing spectral teeth. If ω = e^{iπ/3} then ω^{6} = 1 and the eigenvalues of M are {1,ω^{2},ω^{3},ω^{4}} so ω and ω^{5} are both absent.
Terminology
A problem that causes confusion is a lack of standardisation in the definitions. For example, some authors use the terms strictly positive and positive to mean > 0 and ≥ 0 respectively. In this article positive means > 0 and nonnegative means ≥ 0. Another vexed area concerns decomposability and reducibility: irreducible is an overloaded term. For the avoidance of doubt, a nonzero nonnegative square matrix A such that 1 + A is primitive is sometimes said to be connected. Then irreducible nonnegative square matrices and connected matrices are synonymous.^{[11]}
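The connectedness criterion can be tested mechanically: for a nonnegative n × n matrix A, irreducibility is equivalent to (I + A)^{n−1} being strictly positive, which is one standard way of expressing that 1 + A is primitive. A minimal sketch, with example matrices of my own:

```python
import numpy as np

# Irreducibility test: A (nonnegative, n x n) is irreducible exactly when
# (I + A)^{n-1} has all entries strictly positive.
def is_irreducible(A: np.ndarray) -> bool:
    n = A.shape[0]
    return bool(np.all(np.linalg.matrix_power(np.eye(n) + A, n - 1) > 0))

cycle = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [1, 0, 0]])          # a 3-cycle: irreducible (period 3)
reducible = np.array([[1, 1],
                      [0, 1]])         # block-triangular: reducible

print(is_irreducible(cycle), is_irreducible(reducible))  # True False
```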
The nonnegative eigenvector is often normalized so that the sum of its components is equal to unity; in this case, the eigenvector is the vector of a probability distribution and is sometimes called a stochastic eigenvector.
Perron–Frobenius eigenvalue and dominant eigenvalue are alternative names for the Perron root. Spectral projections are also known as spectral projectors and spectral idempotents. The period is sometimes referred to as the index of imprimitivity or the order of cyclicity.
See also
 Z-matrix (mathematics)
 M-matrix
 P-matrix
 L-matrix
 Hurwitz matrix
 Metzler matrix (Quasi-positive matrix)
 Positive operator
Notes
 ^ ^{a} ^{b} ^{c} ^{d} ^{e} ^{f} ^{g} ^{h} ^{i} ^{j} ^{k} ^{l} ^{m} ^{n} ^{o} Meyer, Carl (2000), Matrix Analysis and Applied Linear Algebra, SIAM, ISBN 0898714540, http://www.matrixanalysis.com/Chapter8.pdf
 ^ Langville, Amy; Meyer, Carl (2006), Google page rank and beyond, Princeton University Press, ISBN 0691122024, http://pagerankandbeyond.com
 ^ Keener, James (1993), "The Perron–Frobenius theorem and the ranking of football teams", SIAM Review (SIAM) 35 (1): 80–93, http://links.jstor.org/sici?sici=00361445%28199303%2935%3A1%3C80%3ATPTATR%3E2.0.CO%3B2O
 ^ ^{a} ^{b} ^{c} ^{d} Gantmacher, Felix (2000) [1959], The Theory of Matrices, Volume 2, AMS Chelsea Publishing, ISBN 0821826646, http://books.google.com/books?id=cyX32q8ZP5cC&lpg=PA178&vq=preceding%20section&pg=PA53#v=onepage&q&f=true (1959 edition had different title: "Applications of the theory of matrices". Also the numeration of chapters is different in the two editions.)
 ^ Kitchens, Bruce (1998), Symbolic dynamics: one-sided, two-sided and countable state Markov shifts., Springer, http://books.google.ru/books?id=mCcdC_5crpoC&lpg=PA195&ots=RbFr1TkSiY&dq=kitchens%20perron%20frobenius&pg=PA16#v=onepage&q&f=false
 ^ See 2.43 (page 51) in Varga reference below
 ^ Brualdi, Richard A.; Ryser, Herbert John (1992). Combinatorial Matrix Theory. Cambridge: Cambridge UP. ISBN 0521322650.
 ^ Brualdi, Richard A.; Cvetkovic, Dragos (2009). A Combinatorial Approach to Matrix Theory and Its Applications. Boca Raton, FL: CRC Press. ISBN 9781420082234.
 ^ Mackey, Michael C. (1992). Time's Arrow: The origins of thermodynamic behaviour. New York: Springer-Verlag. ISBN 0387977023.
 ^ Smith, Roger (2006), "A Spectral Theoretic Proof of Perron–Frobenius", Mathematical Proceedings of the Royal Irish Academy (The Royal Irish Academy) 102 (1): 29–35, doi:10.3318/PRIA.2002.102.1.29, ftp://emis.maths.adelaide.edu.au/pub/EMIS/journals/MPRIA/2002/pa102i1/pdf/102a102.pdf
 ^ For surveys of results on irreducibility, see Olga Taussky-Todd and Richard A. Brualdi.
References
Original papers
 Perron, Oskar (1907), "Zur Theorie der Matrices", Mathematische Annalen 64 (2): 248–263, doi:10.1007/BF01449896
 Frobenius, Georg (1912), "Ueber Matrizen aus nicht negativen Elementen", Sitzungsber. Königl. Preuss. Akad. Wiss.: 456–477
 Frobenius, Georg (1908), "Über Matrizen aus positiven Elementen, 1", Sitzungsber. Königl. Preuss. Akad. Wiss.: 471–476
 Frobenius, Georg (1909), "Über Matrizen aus positiven Elementen, 2", Sitzungsber. Königl. Preuss. Akad. Wiss.: 514–518
 Romanovsky, V. (1933), "Sur les zéros des matrices stocastiques", Bulletin de la Société Mathématique de France 61: 213–219, http://www.numdam.org/item?id=BSMF_1933__61__213_0
 Collatz, Lothar (1942), "Einschließungssatz für die charakteristischen Zahlen von Matrizen", Mathematische Zeitschrift 48 (1): 221–226, doi:10.1007/BF01180013
 Wielandt, Helmut (1950), "Unzerlegbare, nicht negative Matrizen", Mathematische Zeitschrift 52 (1): 642–648, doi:10.1007/BF02230720
Further reading
 Abraham Berman, Robert J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, 1994, SIAM. ISBN 0898713218.
 Chris Godsil and Gordon Royle, Algebraic Graph Theory, Springer, 2001.
 A. Graham, Nonnegative Matrices and Applicable Topics in Linear Algebra, John Wiley & Sons, New York, 1987.
 R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, 1990
 S. P. Meyn and R. L. Tweedie, Markov Chains and Stochastic Stability, London: Springer-Verlag, 1993. ISBN 0387198326 (2nd edition, Cambridge University Press, 2009)
 Henryk Minc, Nonnegative matrices, John Wiley & Sons, New York, 1988, ISBN 0471839663
 Seneta, E. Non-negative Matrices and Markov Chains. 2nd rev. ed., Springer Series in Statistics, 1981. (Originally published by Allen & Unwin Ltd., London, 1973) ISBN 9780387297651
 Suprunenko, D.A. (2001), "Perron–Frobenius theorem", in Hazewinkel, Michiel, Encyclopaedia of Mathematics, Springer, ISBN 9781556080104, http://eom.springer.de/P/p072350.htm (The claim that A_{j} has order n/h at the end of the statement of the theorem is incorrect.)
 Richard S. Varga, Matrix Iterative Analysis, 2nd ed., Springer-Verlag, 2002