Permutation matrix
In mathematics, in matrix theory, a permutation matrix is a square (0,1)-matrix that has exactly one entry 1 in each row and each column and 0s elsewhere. Each such matrix represents a specific permutation of $m$ elements and, when used to multiply another matrix, can produce that permutation in the rows or columns of the other matrix.

Definition
Given a permutation $\pi$ of $m$ elements, given in two-line form by

$$\begin{pmatrix} 1 & 2 & \cdots & m \\ \pi(1) & \pi(2) & \cdots & \pi(m) \end{pmatrix},$$

its permutation matrix is the $m \times m$ matrix $P_\pi$ whose entries are all 0 except that in row $i$, the entry $\pi(i)$ equals 1. We may write

$$P_\pi = \begin{pmatrix} \mathbf{e}_{\pi(1)} \\ \mathbf{e}_{\pi(2)} \\ \vdots \\ \mathbf{e}_{\pi(m)} \end{pmatrix},$$

where $\mathbf{e}_j$ denotes a row vector of length $m$ with 1 in the $j$th position and 0 in every other position.
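As an illustrative sketch (not part of the original article), this construction can be written in a few lines of Python with NumPy; the permutation is an assumed example, stored zero-based in one-line form.

import numpy as np

# Assumed example permutation of m = 4 elements in one-line form,
# zero-based: position i maps to pi[i].
pi = np.array([2, 0, 3, 1])

# Row i of P_pi is the basis row vector e_{pi[i]}, so taking the rows
# of the identity matrix in the order given by pi builds the matrix.
P = np.eye(len(pi), dtype=int)[pi]
print(P)
# [[0 0 1 0]
#  [1 0 0 0]
#  [0 0 0 1]
#  [0 1 0 0]]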
Properties
Given two permutations $\pi$ and $\sigma$ of $m$ elements and the corresponding permutation matrices $P_\pi$ and $P_\sigma$,

$$P_\sigma P_\pi = P_{\pi \circ \sigma}.$$
As permutation matrices are orthogonal matrices (i.e., $P_\pi P_\pi^{\mathsf{T}} = I$), the inverse matrix exists and can be written as

$$P_\pi^{-1} = P_{\pi^{-1}} = P_\pi^{\mathsf{T}}.$$
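Both properties are easy to verify numerically. A minimal sketch, using two example permutations assumed for illustration:

import numpy as np

def perm_matrix(pi):
    # Rows of the identity in the order given by pi (zero-based one-line form).
    return np.eye(len(pi), dtype=int)[pi]

pi = np.array([2, 0, 3, 1])      # assumed example permutation
sigma = np.array([1, 3, 2, 0])   # assumed example permutation

# Composition: P_sigma P_pi is the matrix of i -> pi[sigma[i]], i.e. pi o sigma.
assert np.array_equal(perm_matrix(sigma) @ perm_matrix(pi), perm_matrix(pi[sigma]))

# Orthogonality: P P^T = I, so the inverse is the transpose.
P = perm_matrix(pi)
assert np.array_equal(P @ P.T, np.eye(len(pi), dtype=int))
print("ok")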
Multiplying $P_\pi$ times a column vector $\mathbf{g}$ will permute the rows of the vector:

$$P_\pi \mathbf{g} = \begin{pmatrix} \mathbf{e}_{\pi(1)} \\ \mathbf{e}_{\pi(2)} \\ \vdots \\ \mathbf{e}_{\pi(m)} \end{pmatrix} \begin{pmatrix} g_1 \\ g_2 \\ \vdots \\ g_m \end{pmatrix} = \begin{pmatrix} g_{\pi(1)} \\ g_{\pi(2)} \\ \vdots \\ g_{\pi(m)} \end{pmatrix}.$$

Multiplying a row vector $\mathbf{h}$ times $P_\pi$ will permute the columns of the vector:

$$\mathbf{h} P_\pi = \begin{pmatrix} h_{\pi^{-1}(1)} & h_{\pi^{-1}(2)} & \cdots & h_{\pi^{-1}(m)} \end{pmatrix}.$$
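In NumPy terms, these two products amount to indexing by $\pi$ and by $\pi^{-1}$ respectively; a small sketch with assumed example data:

import numpy as np

pi = np.array([2, 0, 3, 1])             # assumed example permutation
P = np.eye(len(pi), dtype=int)[pi]
g = np.array([10.0, 20.0, 30.0, 40.0])  # assumed example vector

# Column vector: (P g)_i = g_{pi(i)}, so the entries are permuted by pi.
assert np.allclose(P @ g, g[pi])

# Row vector: (h P)_j = h_{pi^{-1}(j)}; np.argsort(pi) is the inverse permutation.
h = g
assert np.allclose(h @ P, h[np.argsort(pi)])
print("ok")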
Let "Sn" denote the
symmetric group , or group of permutations, on {1,2,...,"n"}. Since there are "n"! permutations, there are "n"! permutation matrices. By the formulas above, the "n" × "n" permutation matrices form a group under matrix multiplication with the identity matrix as theidentity element .If (1) denotes the identity permutation, then "P"(1) is the
identity matrix .One can view the permutation matrix of a permutation σ as the permutation σ of the columns of the identity matrix "I", or as the permutation σ−1 of the rows of "I".
A permutation matrix is a doubly stochastic matrix. The Birkhoff–von Neumann theorem says that every doubly stochastic matrix is a convex combination of permutation matrices of the same order, and that the permutation matrices are the extreme points of the set of doubly stochastic matrices.

The product $P_\pi M$, premultiplying a matrix $M$ by a permutation matrix $P_\pi$, permutes the rows of $M$: row $\pi(i)$ of $M$ moves to row $i$ of the product. Likewise, $M P_\pi$ permutes the columns of $M$: column $i$ of $M$ moves to column $\pi(i)$.
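This row and column behaviour extends the vector case above and can be checked the same way; a sketch with assumed example data:

import numpy as np

pi = np.array([2, 0, 3, 1])        # assumed example permutation
P = np.eye(len(pi), dtype=int)[pi]
M = np.arange(16.0).reshape(4, 4)  # assumed example matrix

# Premultiplication: row pi(i) of M moves to row i of P M.
assert np.allclose(P @ M, M[pi, :])

# Postmultiplication: column i of M moves to column pi(i) of M P.
assert np.allclose(M @ P, M[:, np.argsort(pi)])
print("ok")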
The map "S""n" → A ⊂ GL("n", Z2) is a faithful representation. Thus, |A| = "n"!.
The trace of a permutation matrix is the number of fixed points of the permutation. If the permutation has fixed points, so that it can be written in cycle form as $\pi = (a_1)(a_2)\cdots(a_k)\sigma$ where $\sigma$ has no fixed points, then $\mathbf{e}_{a_1}, \mathbf{e}_{a_2}, \ldots, \mathbf{e}_{a_k}$ are eigenvectors of the permutation matrix.

From group theory we know that any permutation may be written as a product of transpositions. Therefore, any permutation matrix $P$ factors as a product of row-interchanging elementary matrices, each having determinant $-1$. Thus the determinant of a permutation matrix $P$ is just the signature of the corresponding permutation.
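A short numerical check of both facts, using an assumed example permutation with a single fixed point:

import numpy as np

pi = np.array([1, 0, 2])  # assumed example: swaps positions 0 and 1, fixes 2
P = np.eye(len(pi), dtype=int)[pi]

# The trace counts the fixed points (here just position 2).
assert np.trace(P) == np.sum(pi == np.arange(len(pi))) == 1

# The determinant is the signature: one transposition gives -1.
assert round(np.linalg.det(P)) == -1
print("ok")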
Examples

The permutation matrix $P_\pi$ corresponding to the permutation $\pi = (1\,4\,2\,5\,3)$ (in cycle notation) is

$$P_\pi = \begin{pmatrix} \mathbf{e}_{\pi(1)} \\ \mathbf{e}_{\pi(2)} \\ \mathbf{e}_{\pi(3)} \\ \mathbf{e}_{\pi(4)} \\ \mathbf{e}_{\pi(5)} \end{pmatrix} = \begin{pmatrix} \mathbf{e}_4 \\ \mathbf{e}_5 \\ \mathbf{e}_1 \\ \mathbf{e}_2 \\ \mathbf{e}_3 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \end{pmatrix}.$$

Given a vector $\mathbf{g}$,

$$P_\pi \mathbf{g} = \begin{pmatrix} \mathbf{e}_4 \\ \mathbf{e}_5 \\ \mathbf{e}_1 \\ \mathbf{e}_2 \\ \mathbf{e}_3 \end{pmatrix} \begin{pmatrix} g_1 \\ g_2 \\ g_3 \\ g_4 \\ g_5 \end{pmatrix} = \begin{pmatrix} g_4 \\ g_5 \\ g_1 \\ g_2 \\ g_3 \end{pmatrix}.$$
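This example can be reproduced directly; a sketch that converts the cycle (1 4 2 5 3) into zero-based one-line form, with an assumed example vector:

import numpy as np

# One-line form of the cycle (1 4 2 5 3): pi(1)=4, pi(2)=5, pi(3)=1,
# pi(4)=2, pi(5)=3, written zero-based.
pi = np.array([3, 4, 0, 1, 2])
P = np.eye(5, dtype=int)[pi]
print(P)      # matches the matrix shown above

g = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # assumed example vector
print(P @ g)  # [4. 5. 1. 2. 3.], i.e. (g4, g5, g1, g2, g3)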
Solving for "P"
If we are given two matrices $A$ and $B$ which are known to be related as $B = P A P^{-1}$, but the permutation matrix $P$ itself is unknown, we can find $P$ using eigenvalue decomposition:

$$A = Q_A D Q_A^{-1},$$
$$B = Q_B D Q_B^{-1},$$

where $D$ is a diagonal matrix of eigenvalues, and $Q_A$ and $Q_B$ are the matrices of eigenvectors. The eigenvalues of $A$ and $B$ will always be the same, and $P$ can be computed as $P = Q_B Q_A^{-1}$. In other words, $Q_B = P Q_A$, which means that the eigenvectors of $B$ are simply permuted eigenvectors of $A$.
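A minimal sketch of this procedure in Python with NumPy, assuming $A$ is symmetric with distinct eigenvalues. Since numerical eigensolvers return eigenvalues in arbitrary order and eigenvectors with arbitrary sign, the sketch sorts the eigenvalues and fixes a sign convention before forming $Q_B Q_A^{-1}$; the helper names and the example matrices are assumptions for illustration, not taken from the article.

import numpy as np

def sorted_eigvecs(M):
    # Eigendecomposition with eigenvalues sorted and each eigenvector's sign
    # fixed so that its largest-magnitude entry is positive; this removes the
    # ordering and sign ambiguity of numerical eigensolvers.
    w, Q = np.linalg.eig(M)
    Q = Q[:, np.argsort(w)]
    for k in range(Q.shape[1]):
        if Q[np.argmax(np.abs(Q[:, k])), k] < 0:
            Q[:, k] = -Q[:, k]
    return Q

def solve_for_p(A, B):
    # Recover P from B = P A P^{-1} as P = Q_B Q_A^{-1}, rounded to 0/1.
    Qa, Qb = sorted_eigvecs(A), sorted_eigvecs(B)
    return np.rint(np.real(Qb @ np.linalg.inv(Qa))).astype(int)

# Assumed example: a symmetric matrix with distinct eigenvalues and a known P.
A = np.diag([1.0, 2.0, 3.0]) + 0.1 * np.ones((3, 3))
P_true = np.eye(3, dtype=int)[[1, 0, 2]]
B = P_true @ A @ P_true.T  # P^{-1} = P^T
print(solve_for_p(A, B))   # recovers P_true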
Example

Given two matrices $A$ and $B$ related by $B = P A P^{-1}$, the transformation matrix $P$ that changes $A$ into $B$ is

$$P = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix},$$

which says that the first and second rows as well as the first and second columns of $A$ have been swapped to yield $B$.
After finding the eigenvalues of both $A$ and $B$ and diagonalizing them, we obtain the shared diagonal matrix $D$ of eigenvalues, together with the matrix $Q_A$ of eigenvectors for $A$ and the matrix $Q_B$ of eigenvectors for $B$.
Comparing the first eigenvector (i.e., the first column) of $Q_A$ and $Q_B$, we can write the first column of $P$ by noting that the first element of the eigenvector of $A$ matches the second element of the eigenvector of $B$; thus we put a 1 in the second element of the first column of $P$. Repeating this procedure, we match the second element of the eigenvector of $A$ to the first element of the eigenvector of $B$, so we put a 1 in the first element of the second column of $P$; and the third element to the third element, so we put a 1 in the third element of the third column of $P$.
The resulting matrix is

$$P = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

Comparing it with the matrix $P$ from above, we find they are the same.
Explanation
A permutation matrix will always be of the form

$$P = \begin{bmatrix} \mathbf{e}_{a_1} \\ \mathbf{e}_{a_2} \\ \vdots \\ \mathbf{e}_{a_j} \end{bmatrix},$$

where $\mathbf{e}_{a_i}$ represents the $i$th basis vector (as a row) for $\mathbb{R}^j$, and where

$$\pi = \begin{bmatrix} 1 & 2 & \cdots & j \\ a_1 & a_2 & \cdots & a_j \end{bmatrix}$$

is the permutation form of the permutation matrix.

Now, in performing matrix multiplication, one essentially forms the dot product of each row of the first matrix with each column of the second. In this instance, we will be forming the dot product of each row of this matrix with the vector of elements we want to permute. That is, for example, if $\mathbf{v} = (g_0, \ldots, g_5)^{\mathsf{T}}$,

$$\mathbf{e}_{a_i} \cdot \mathbf{v} = g_{a_i}.$$

So, the product of the permutation matrix with the vector $\mathbf{v}$ above will be a vector of the form $(g_{a_1}, g_{a_2}, \ldots, g_{a_j})$, and this then is a permutation of $\mathbf{v}$ since we have said that the permutation form is

$$\begin{bmatrix} 1 & 2 & \cdots & j \\ a_1 & a_2 & \cdots & a_j \end{bmatrix}.$$

So, permutation matrices do indeed permute the order of elements in vectors multiplied with them.
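The row-by-row dot products can be made explicit in code; a small sketch with assumed example data:

import numpy as np

a = np.array([3, 4, 0, 1, 2])  # assumed example a_i values, zero-based
P = np.eye(len(a), dtype=int)[a]
v = np.array([5.0, 6.0, 7.0, 8.0, 9.0])  # assumed example vector

# Each row e_{a_i} of P picks out v[a_i] via a dot product...
for i, row in enumerate(P):
    assert row @ v == v[a[i]]

# ...and stacking those dot products yields the permuted vector.
assert np.allclose(P @ v, v[a])
print("ok")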
Matrices with constant line sums
The values in each row and each column of a permutation matrix add up to exactly 1. A possible generalization of permutation matrices is the class of nonnegative integer matrices in which the values in each row and each column add up to a constant number $c$. A matrix of this sort is known to be the sum of $c$ permutation matrices.
For example, any nonnegative integer matrix $M$ in which each row and each column adds up to 5 is the sum of 5 permutation matrices.
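One way to compute such a decomposition is to repeatedly extract a permutation supported on the positive entries, here using SciPy's assignment solver (scipy.optimize.linear_sum_assignment); this greedy procedure and the example matrix are assumptions of the sketch, not taken from the article. The weights returned add up to $c$, so expanding the weighted terms gives $c$ permutation matrices.

import numpy as np
from scipy.optimize import linear_sum_assignment

def decompose(M):
    # Split a nonnegative integer matrix with constant row/column sums into
    # weighted permutation matrices; the weights add up to the line sum c.
    M = M.copy()
    terms = []
    while M.sum() > 0:
        # Find a permutation supported on positive entries of M; one exists
        # by Konig's theorem as long as the line sums remain constant.
        rows, cols = linear_sum_assignment((M > 0).astype(int), maximize=True)
        P = np.zeros_like(M)
        P[rows, cols] = 1
        w = M[rows, cols].min()  # largest multiple of P we can remove
        terms.append((w, P))
        M -= w * P
    return terms

M = np.array([[2, 2, 1],  # assumed example: every row and column sums to 5
              [2, 1, 2],
              [1, 2, 2]])
terms = decompose(M)
assert sum(w for w, _ in terms) == 5
assert np.array_equal(sum(w * P for _, P in terms), M)
print([(w, P.tolist()) for w, P in terms])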
See also
* Alternating sign matrix
* Generalized permutation matrix