In this explainer, we will learn how to use matrix multiplication to determine the square and cube of a square matrix.
There are many operations in linear algebra that are very similar to the well-known operations from conventional algebra, such as addition, subtraction, and scaling. There are also additional operations such as matrix multiplication and inversion which to some extent mirror the algebraic properties of their conventional counterparts, while being calculated in a way that is fundamentally more complex.
One operation that is central to both conventional algebra and linear algebra is exponentiation, which is usually referred to as taking the “power” of a number or matrix. In conventional algebra, it is possible to take any number $a$ and raise it to a power $n$, giving $a^n$. It does not matter whether $a$ and $n$ are zero, nonzero, integer, noninteger, rational, irrational, or complex, as the output can always be calculated. The same is not true in linear algebra, where a matrix cannot always be exponentiated. For example, as we will shortly see, it is not possible to exponentiate a nonsquare matrix because the operation is not well defined. As another example, we cannot raise a matrix $A$ to the power $-1$ or any other negative power unless the matrix inverse $A^{-1}$ is known to exist, which is only possible for square matrices with a nonzero determinant. These are just two of the exceptions for dealing with exponentiation in linear algebra, which we now define more rigorously.
Definition: Power of a Square Matrix
For a square matrix $A$ and positive integer $n$, the $n$th power of $A$ is defined by multiplying this matrix by itself repeatedly; that is, $A^n = \underbrace{A \times A \times \cdots \times A}_{n \text{ copies}}$, where there are $n$ copies of the matrix $A$.
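This definition can be sketched directly in code. The short Python function below (using NumPy for the matrix product) computes $A^n$ by repeated multiplication exactly as defined; the matrix values used are hypothetical, chosen purely for illustration:

```python
import numpy as np

def matrix_power(A, n):
    """Compute A^n for a square matrix A and positive integer n
    by repeated matrix multiplication, as in the definition."""
    assert A.shape[0] == A.shape[1], "only square matrices can be exponentiated"
    result = A.copy()
    for _ in range(n - 1):  # n - 1 further multiplications give n copies of A
        result = result @ A
    return result

# A hypothetical 2x2 matrix, used purely for illustration.
A = np.array([[1, 2],
              [3, 4]])
print(matrix_power(A, 2))  # the same result as A @ A
```

In practice one would use `np.linalg.matrix_power`, which agrees with this repeated-multiplication sketch for positive integer exponents.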
It is easiest to demonstrate this definition with a simple, nontrivial example. We define the matrix
To calculate the matrix $A^2$, we multiply the matrix $A$ by itself. In other words, we could write $A^2 = A \times A$.
It now remains to complete the matrix multiplication. Recall that if a matrix has order $m \times n$ and another has order $n \times p$, then their product is well defined, resulting in a matrix of order $m \times p$. In the equation above, we are multiplying a matrix of order $2 \times 2$ by a matrix of order $2 \times 2$, meaning that the output matrix will also have order $2 \times 2$. We must therefore find the matrix on the right-hand side of the equation, where the four entries are to be determined. We complete the calculations with reference to the definition of matrix multiplication: the entry in row $i$ and column $j$ of the product is found by pairing the entries of row $i$ of the left matrix with those of column $j$ of the right matrix, multiplying, and summing. First, we consider the first row of the leftmost matrix and the first column of the middle matrix, which gives the entry in the first row and first column of the rightmost matrix. Next, we take the first row of the leftmost matrix and the second column of the middle matrix, which gives the entry in the first row and second column. We then use the second row of the leftmost matrix with the first column of the middle matrix to find the entry in the second row and first column, and finally the second row with the second column to find the remaining entry. Now that all entries have been computed, we can write down $A^2$.
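The entry-by-entry procedure above can be mirrored directly in code. The matrix below is hypothetical (it is not the explainer's own example matrix); each entry of the square is built from one row of the left factor and one column of the right factor:

```python
import numpy as np

A = np.array([[2, 1],
              [0, 3]])  # hypothetical example matrix

# Build A^2 entry by entry: entry (i, j) pairs row i of A with column j of A.
A2 = np.empty((2, 2), dtype=A.dtype)
for i in range(2):
    for j in range(2):
        A2[i, j] = sum(A[i, k] * A[k, j] for k in range(2))

print(A2)
assert (A2 == A @ A).all()  # matches the built-in matrix product
```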
In this specific example, we have demonstrated how to take the square of a matrix. Naturally, the definition can be extended to higher powers. Given that we have calculated both $A$ and $A^2$, we could also calculate $A^3$ as the product of $A^2$ and $A$. First, we write the two matrices
Then, completing the matrix multiplication directly gives
Likewise, we could calculate $A^4$, $A^5$, and so on. Before practicing some further examples of taking the power of a matrix, we will provide one further result that explains why we may only take the power of a square matrix.
Theorem: Matrix Powers and Square Matrices
Taking the power of a matrix is only well defined if the matrix is square. If $A$ has order $n \times n$, then this order will be common to $A^2$, $A^3$, $A^4$, and so on.
For two matrices $A$ and $B$, the matrix product $AB$ is only well defined if there is the same number of columns in $A$ as there are rows in $B$. If $A$ has order $m \times n$ and $B$ has order $n \times p$, then $AB$ is well defined and has order $m \times p$. If we were only to consider the matrix $A$ and attempt to complete the matrix multiplication $A \times A$, then we would be attempting to multiply a matrix of order $m \times n$ by a matrix of order $m \times n$. This can only be well defined if $m = n$, meaning that $A$ has to be a matrix of order $n \times n$, which implies that it is a square matrix. The order of $A^2$ is therefore identical to that of the original matrix $A$, as is also the case for $A^3$, $A^4$, and so forth.
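The shape argument in this theorem is easy to check numerically: squaring a square matrix preserves its order, while trying to square a nonsquare matrix fails because the inner dimensions do not match. The matrices here are hypothetical:

```python
import numpy as np

square = np.array([[1, 2], [3, 4]])           # order 2x2
nonsquare = np.array([[1, 2, 3], [4, 5, 6]])  # order 2x3

# A square matrix can be squared, and the order is preserved.
assert (square @ square).shape == square.shape

# A 2x3 matrix cannot be multiplied by itself: 3 columns vs 2 rows.
squared_ok = True
try:
    nonsquare @ nonsquare
except ValueError:
    squared_ok = False
print("square of nonsquare matrix well defined:", squared_ok)
```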
Example 1: Finding the Square of a Matrix
For the matrix $A$, write $A^2$ as a multiple of $A$.
Before attempting to write $A^2$ as a multiple of $A$, we need to calculate $A^2$ itself. Completing the necessary matrix multiplication gives
The output matrix is the same as the original matrix $A$, except that every entry has been multiplied by the same constant. We hence find that $A^2$ can be written as this multiple of $A$.
It is seldom the case that we wish to consider the power of a matrix in isolation; normally, we combine matrix powers with other expressions involving, potentially, other matrices. The principles of matrix exponentiation never change, so even when multiple matrices are involved, the working should not be much more difficult.
Example 2: Evaluating Matrix Expressions Involving Powers
Consider the matrices $A$ and $B$. What is the required expression?
We should begin by calculating both $A^2$ and $B^2$ in the usual way. We calculate that
We also have that
Now that we have both $A^2$ and $B^2$, it is straightforward to calculate the required expression:
It is probably unsurprising that we can easily take, for instance, the third and fourth powers of a matrix by employing our understanding of how to find the second power, as we have done above. Following the usual laws of powers and indices, for any positive integers $m$ and $n$ we can write $A^{m+n} = A^m A^n$. We would expect, therefore, that we can write $A^3 = A^2 \times A$ or $A^3 = A \times A^2$, with the two results being the same. This is a perfectly reasonable assumption and is indeed true, which can be demonstrated once it has been shown that matrix multiplication is associative.
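This index law is easy to verify numerically for a particular matrix (the one below is hypothetical): multiplying $A^2$ by $A$ on either side gives the same result, because matrix multiplication is associative:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])  # hypothetical matrix for illustration

A2 = A @ A
left = A2 @ A    # A^2 * A
right = A @ A2   # A * A^2

assert (left == right).all()  # both orderings agree: A^3 either way
assert (left == np.linalg.matrix_power(A, 3)).all()
print(left)
```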
Example 3: Calculating Higher Powers of Matrices
Given the matrix $A$, calculate the required power of $A$.
We should begin by calculating $A^2$ and then using this result to calculate the higher power. We find that
Now we have both of these matrices, which means that we can calculate the higher power as the matrix multiplication between them:
We now have everything necessary to calculate the required expression:
There are alternative methods for calculating powers of a square matrix that allow the process to be simplified. For example, it is possible to find the characteristic polynomial of a square matrix and then employ the Cayley-Hamilton theorem. For an $n \times n$ matrix $A$, this theorem allows $A^n$ to be written in terms of the lower-order powers $A^{n-1}, A^{n-2}, \ldots, A$ and the identity matrix $I_n$. Alternatively, we could take a square matrix and use a similarity transformation to simplify the calculation of any positive integer power. These two options are both very interesting in their own right and require in-depth study to understand, although they are both supremely versatile and elegant results. For the remainder of this explainer, we will continue with our manual method of computing the power of a matrix.
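As a small illustration of the first idea: for a $2 \times 2$ matrix, the Cayley-Hamilton theorem gives $A^2 = \operatorname{tr}(A)\,A - \det(A)\,I$, so the square can be formed from $A$ and $I$ alone, without a full matrix multiplication. The matrix below is hypothetical:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])  # hypothetical 2x2 matrix

# Cayley-Hamilton for a 2x2 matrix: A^2 = tr(A) A - det(A) I.
trace = np.trace(A)     # 1 + 4 = 5
det = np.linalg.det(A)  # 1*4 - 2*3 = -2
A2_ch = trace * A - det * np.eye(2)

assert np.allclose(A2_ch, A @ A)  # agrees with direct multiplication
print(A2_ch)
```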
Example 4: Powers of Matrices
Consider the matrix Find .
The matrix $A$ has order $3 \times 3$, which means that $A^2$ will also have this order. Therefore, we expect to find a matrix of the given form, where the nine entries are to be calculated. We will complete the matrix multiplication in full, illustrating every step completely.
First, we calculate the entry in the first row and first column of the rightmost matrix, by combining the first row of the leftmost matrix with the first column of the middle matrix. Next, the first row with the second column gives the entry in the first row and second column, and the first row with the third column gives the final entry of the first row. We then move on to the second row of the rightmost matrix, resetting to the first column: the second row of the leftmost matrix is paired in turn with the first, second, and third columns of the middle matrix to give the three entries of the second row. The entries of the third row are found in the same way, pairing the third row of the leftmost matrix with each column of the middle matrix. Now that all nine entries of the rightmost matrix have been found, we can write the answer as
Given that taking the power of a matrix involves repeated matrix multiplication, we could reasonably expect the algebraic rules of matrix multiplication to carry over, to some extent, to matrix exponentiation. Even though this is obvious to a degree, it is surprisingly easy to fall back on the rules of conventional algebra when completing questions involving matrices, under the assumption that they will still hold. In the following example, we will treat each statement individually, presenting the relevant properties of matrix multiplication in tandem and explaining why the given statements do or do not hold as a result.
Example 5: Properties of Raising Matrices to a Power
Which of the following statements is true for all matrices $A$ and $B$?
- Matrix multiplication is associative, which means that $(AB)C = A(BC)$. We could continue this rule to obtain results for longer products, and so forth. In the given equation, the left-hand side can, by definition, be written as a repeated product of the matrices involved. Given the associativity property of matrix multiplication, we can regroup this product to match the right-hand side and hence confirm that the given statement is true.
- Conventional algebra is commutative over multiplication. For two real numbers $a$ and $b$, this means that $ab = ba$. This result allows us to take an expression such as $(a+b)^2 = a^2 + ab + ba + b^2$ and use the commutative property to collect the two middle terms of the right-hand side: $(a+b)^2 = a^2 + 2ab + b^2$. However, matrix multiplication is generally not commutative, meaning that $AB \neq BA$ except in special circumstances (such as for diagonal matrices, or simultaneously diagonalizable matrices). Therefore, the expansion $(A+B)^2 = A^2 + AB + BA + B^2$ cannot be simplified under the assumption that $AB = BA$. Hence, the given statement is false.
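A concrete pair of matrices (hypothetical, chosen for illustration) makes this failure visible: the full expansion $A^2 + AB + BA + B^2$ agrees with $(A+B)^2$, while the conventional shortcut $A^2 + 2AB + B^2$ does not:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])  # hypothetical matrices
B = np.array([[0, 1], [1, 0]])

lhs = (A + B) @ (A + B)
full = A @ A + A @ B + B @ A + B @ B    # correct expansion
shortcut = A @ A + 2 * (A @ B) + B @ B  # assumes AB = BA

assert (lhs == full).all()           # the full expansion always holds
assert not (lhs == shortcut).all()   # the shortcut fails here...
assert not (A @ B == B @ A).all()    # ...because these matrices do not commute
print(lhs)
```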
- To complete the matrix multiplication $(AB)^2$, we can begin by writing $(AB)^2 = (AB)(AB) = A(BA)B$, where we have used the associativity property to arrange the final expression. Because matrix multiplication is not commutative, the bracketed term $BA$ cannot be rearranged as $AB$, meaning that we cannot rewrite the final expression as $A(AB)B = A^2B^2$, which would have allowed the simplification $(AB)^2 = A^2B^2$. Given that this is not the case, the statement is false.
- We have that $(A+B)(A-B) = A^2 - AB + BA - B^2$. Since it is generally the case that $AB \neq BA$, the two middle terms do not cancel, and we cannot obtain the simplification given in the question.
- We begin by completing the expansion $(A-B)^2 = A^2 - AB - BA + B^2$. We know that generally $AB \neq BA$, meaning that we cannot write the right-hand side as $A^2 - 2AB + B^2$, and hence the statement in the question is false.
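The same kind of numerical check (again with hypothetical matrices) shows why $(AB)^2$ cannot be rewritten as $A^2B^2$ in general:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])  # hypothetical matrices
B = np.array([[0, 1], [1, 0]])

ab_squared = (A @ B) @ (A @ B)  # (AB)^2 = A(BA)B
a2b2 = (A @ A) @ (B @ B)        # A^2 B^2

assert not (ab_squared == a2b2).all()  # unequal because AB != BA here
print(ab_squared)
print(a2b2)
```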
There are many related topics which bolster the justification for studying matrix exponentiation. When working with a square matrix, it is clear that repeatedly multiplying such a matrix by itself will generally lead to results that are successively more complicated to calculate given the large numbers involved, as we have seen in several of the examples above. It is therefore advantageous to be able to reduce the complexity of these calculations as much as possible. To this end, the Cayley-Hamilton theorem provides an elegant and mathematically gratifying method for calculating the power of a matrix using the characteristic polynomial and matrix powers of a lower order. Under certain circumstances, it is possible to diagonalize a matrix, which significantly reduces the complexity of calculating its integer powers.
- For a square matrix $A$ and positive integer $n$, we define the power of a matrix by repeated matrix multiplication; that is, $A^n = A \times A \times \cdots \times A$, where there are $n$ copies of the matrix $A$ on the right-hand side.
- The power of a matrix is only well defined if the matrix is a square matrix. Furthermore, if $A$ is of order $n \times n$, then this will also be the case for $A^2$, $A^3$, and so on.
- Higher powers of a matrix can be calculated with reference to the lower powers of that matrix. In other words, $A^3 = A^2 \times A$, $A^4 = A^3 \times A$, and so forth.
- The associative and noncommutative nature of matrix multiplication must be fully understood before attempting to simplify expressions involving the powers of a matrix.