In this explainer, we will learn how to identify the properties of determinants and use them to simplify problems.

The determinant of a square matrix is a quantity that provides summary information about the matrix, such as whether it is invertible, that is useful to know in advance of attempting to perform any algebra involving the matrix. If we are only interested in such summary information about matrices that are combined using the operations of matrix algebra, then we would prefer to limit the number of calculations needed to obtain it. Fortunately, the calculation of a determinant is unusually susceptible to simplification, with many options available, especially for square matrices of higher order. The determinant is also imbued with a string of attractive algebraic properties that relate it to several of the other key concepts in linear algebra.

In this explainer, we will describe several key properties of the determinant. Before doing so, we will need to familiarize ourselves with several concepts that will allow us to verify the theorems that we claim to be true. At each stage, we will provide a short demonstration of each new concept that we introduce, although we will have to assume a reasonable level of familiarity with each of these. Although the theorems that we describe apply to all square matrices, it would distract from the key points if we had to perform the many calculations required to find the determinant of a matrix of order 4 × 4 or larger. Therefore, we will restrict ourselves to 2 × 2 and 3 × 3 matrices. Before beginning, we will briefly revise how to calculate the determinants of these types of matrices, beginning with matrices of order 2 × 2.

### Definition: Determinant of a 2 × 2 Matrix

For a 2 × 2 matrix

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},$$

the determinant of $A$ is denoted $\det(A)$ and is given by the formula

$$\det(A) = ad - bc.$$

Beyond a matrix of order 1 × 1, the determinant of a 2 × 2 matrix is the next easiest to calculate, and it is simple, if not entirely trivial. Being able to calculate the determinants of 2 × 2 matrices is an important process to master, as it is a necessary technique for calculating the determinants of matrices of order 3 × 3 or larger. Suppose that we had the matrix below, where we have highlighted each entry to demonstrate the process. The determinant is then calculated as the product of the top-left and bottom-right entries, minus the product of the top-right and bottom-left entries.
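
As a quick illustration of the formula $\det(A) = ad - bc$, here is a minimal Python sketch; the function name `det2` and the sample entries are our own illustrative choices, not taken from the explainer.

```python
def det2(a, b, c, d):
    """Determinant of the 2 x 2 matrix with rows (a, b) and (c, d)."""
    # Product of the main diagonal minus the product of the other diagonal.
    return a * d - b * c

# For the matrix with rows (1, 2) and (3, 4): 1*4 - 2*3 = -2.
print(det2(1, 2, 3, 4))  # -2
```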

We will use this understanding when calculating the determinant of a 3 × 3 matrix. Unlike for a 2 × 2 matrix, when calculating the determinant of a matrix with a higher order, there is a choice of how to proceed with the calculation. The general method for calculating the determinant of a higher-order square matrix is something that we will not cover in this explainer; instead, we choose to use Sarrus’s rule as our preferred approach. Calculating the determinant of a 3 × 3 matrix using Sarrus’s rule relies on being able to calculate the determinants of 2 × 2 matrices that are composed of certain entries of the original matrix. These are three of the “matrix minors” of the original matrix, and they are used as follows.

### Definition: Sarrus’s Rule

Consider a 3 × 3 matrix $A$ with the following entries:

$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}.$$

Then, the determinant can be calculated using Sarrus’s rule as

$$\det(A) = a_{11}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12}\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13}\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}.$$

Sarrus’s rule uses the entries in the top row of a 3 × 3 matrix in conjunction with the determinants of the three matrix minors that exclude the first row and the corresponding column. We will demonstrate how to use Sarrus’s rule by calculating the determinant of the matrix below.

To aid our calculations, we highlight the entries in the top row of the matrix

Then, using Sarrus’s rule, we calculate the determinant as
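
The expansion described above can be sketched in Python. The helper names `det2` and `det3` and the sample matrix are illustrative choices of ours, not the matrix used in this explainer.

```python
def det2(m):
    """Determinant of a 2 x 2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det3(m):
    """3 x 3 determinant via the top row and the three minors
    obtained by deleting row 0 and one column at a time."""
    minor0 = [[m[1][1], m[1][2]], [m[2][1], m[2][2]]]  # delete column 0
    minor1 = [[m[1][0], m[1][2]], [m[2][0], m[2][2]]]  # delete column 1
    minor2 = [[m[1][0], m[1][1]], [m[2][0], m[2][1]]]  # delete column 2
    # Note the alternating signs +, -, + on the top-row entries.
    return (m[0][0] * det2(minor0)
            - m[0][1] * det2(minor1)
            + m[0][2] * det2(minor2))

A = [[2, 0, 1],
     [1, 3, 2],
     [0, 1, 4]]
print(det3(A))  # 2*(12 - 2) - 0 + 1*(1 - 0) = 21
```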

Now that we have shown how to calculate the determinants of matrices with order 2 × 2 and 3 × 3, we will begin to consider some of the most important results that describe the algebraic properties of the determinant. The first of these results is arguably the most important, as we can use it to generate several others of interest. Given that calculating the determinant of a matrix often requires many arithmetic calculations and that matrix multiplication can require a similarly large number of calculations, the following result linking the two is surprisingly concise and elegant, with significant implications in both the computational and the abstract sense.

### Theorem: The Determinant Is Multiplicative with Respect to Matrix Multiplication

Consider two square matrices $A$ and $B$, both of order $n \times n$. Then, the determinant is multiplicative with respect to the matrix multiplication $AB$. In other words,

$$\det(AB) = \det(A)\det(B).$$

There is one immediate consequence of this theorem. Matrix multiplication is generally not commutative, meaning that $AB \neq BA$. However, given that $\det(A)\det(B) = \det(B)\det(A)$, we have that

$$\det(AB) = \det(BA).$$
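
Both statements are easy to spot-check numerically. The concrete matrices below are arbitrary examples of ours, not the ones used later in this explainer.

```python
def det2(M):
    """Determinant of a 2 x 2 matrix given as nested lists."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    """Product of two 2 x 2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 6]]

# det(AB) = det(A) det(B), and det(AB) = det(BA) even though AB != BA.
assert det2(matmul2(A, B)) == det2(A) * det2(B)
assert matmul2(A, B) != matmul2(B, A)
assert det2(matmul2(A, B)) == det2(matmul2(B, A))
print("multiplicative property verified")
```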

We will demonstrate this result with the two matrices

It makes sense to begin by calculating the determinants of both $A$ and $B$. Recalling the definition above, we have that

We then calculate the determinant of $B$:

In order to verify the multiplicative property of the determinant, we will also need to calculate both matrix products $AB$ and $BA$. We can check that

Then, we find

As we stated earlier, it is the case that $AB \neq BA$. The theorem that we gave, however, does not require that $AB = BA$ in order for the equation $\det(AB) = \det(A)\det(B)$ to hold. Now that we have both $AB$ and $BA$, we will calculate their determinants. This is completed in the same manner as in equations (1) and (2). First, we calculate

This is very encouraging, as we can use equations (1)–(3) to check that $\det(AB) = \det(A)\det(B)$, which is true in our case. We also anticipate that we should obtain the same value when calculating the determinant $\det(BA)$. As we expect, we have

We have found what we expected and have provided one check that the determinant is in fact multiplicative with respect to matrix multiplication. We can then use this property when looking to obtain the determinants of matrices that are composed of other matrices by multiplication. As we will see in the following questions, once we understand that $\det(AB) = \det(A)\det(B)$, we do not have to perform the matrix multiplication $AB$ in order to find the determinant of this matrix. This reduces the number of steps needed to calculate the determinant, which is generally a positive.

### Example 1: The Multiplicative Property of the Determinant for 2 × 2 Matrices

Consider the matrices

Without performing the matrix multiplication $AB$, find the determinant $\det(AB)$.

### Answer

We know that the determinant is multiplicative with respect to matrix multiplication. In other words, we have

$$\det(AB) = \det(A)\det(B). \tag{4}$$

In order to calculate $\det(AB)$, we only need to separately calculate both $\det(A)$ and $\det(B)$. We find that

We also calculate

Given equation (4), we can use the results in equations (5) and (6) to give

If we wished, we could check that this result is correct by performing the matrix multiplication $AB$, hence being able to check that

We could also do the same for the matrix $BA$, and we would find that, even though $AB \neq BA$, it is the case that $\det(AB) = \det(BA)$.

The merits of the above theorem will become even more apparent when we begin working with matrices of a higher order. In the following example, we will consider two matrices with order 3 × 3. Multiplying two such matrices together is a computationally heavy task that we would prefer to avoid if possible. If we are only interested in the determinant and not the matrix itself, then there is no need to perform this calculation now that we are armed with the powerful result that we have just used.

### Example 2: The Multiplicative Property of the Determinant for 3 × 3 Matrices

Consider the matrices

Without performing the matrix multiplication $AB$, find the determinant $\det(AB)$.

### Answer

We will use Sarrus’s rule to calculate the determinants of the two matrices. We begin by highlighting the first row in each matrix:

Then, the determinant of $A$ is

The determinant of $B$ is

Since we know that $\det(AB) = \det(A)\det(B)$, we have

One way of checking that the result is correct would be by actually performing the matrix multiplication $AB$:

We could then use Sarrus’s rule on this matrix to show that

We can extend our result about the multiplicative property of the determinant to cover scenarios where we might multiply together more than two matrices. We know that matrix multiplication is associative, which means that $(AB)C = A(BC)$ for any matrices $A$, $B$, and $C$ of compatible orders. We can then take the determinants of both sides of this equation, which gives

$$\det(ABC) = \det(AB)\det(C).$$

Since we know that $\det(AB) = \det(A)\det(B)$, we obtain the full result

$$\det(ABC) = \det(A)\det(B)\det(C),$$

from which we can quickly deduce the most general form of the result, as follows.
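
A short numerical check of the three-matrix case; the matrices below are arbitrary illustrative choices.

```python
def det2(M):
    """Determinant of a 2 x 2 matrix given as nested lists."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(X, Y):
    """Product of two 2 x 2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A, B, C = [[1, 1], [0, 2]], [[3, 0], [1, 1]], [[2, 1], [1, 1]]
ABC = matmul2(matmul2(A, B), C)

# det(ABC) = det(A) det(B) det(C)
assert det2(ABC) == det2(A) * det2(B) * det2(C)
print(det2(ABC))  # 2 * 3 * 1 = 6
```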

### Theorem: General Multiplicative Property of the Determinant

Consider the matrices $A_1, A_2, \ldots, A_k$, all with order $n \times n$. Then, it is the case that

$$\det(A_1 A_2 \cdots A_k) = \det(A_1)\det(A_2)\cdots\det(A_k).$$

### Example 3: Multiplicative Property of the Determinant with More Than Two Matrices

Consider the square matrices $A$, $B$, $C$, and $D$ that all have the same order. Furthermore, suppose that some of the determinants are known:

Use this information to calculate .

### Answer

We know that the determinant is multiplicative, and hence,

$$\det(ABCD) = \det(A)\det(B)\det(C)\det(D).$$

Using the information given to us in the question, we have

Solving the resulting equation gives the required determinant.

The above theorem is just one result that can be deduced from the multiplicative property of the determinant. This property is so powerful that it yields several specific results that are themselves of considerable interest and versatility. We can easily extend the theorem above to another operation of regular interest within linear algebra: taking a positive integer power of a square matrix. Given that matrix exponentiation is just repeated multiplication of a matrix by itself, the above theorem can be used to drastically reduce the number of calculations needed to find the determinant of a matrix that has been raised to a positive integer power. Before we prove the following result, we should recall that for a square matrix $A$ and positive integer $k$, we define the $k$th “power” of the matrix as

$$A^k = A A \cdots A,$$

where there are $k$ instances of the matrix $A$ on the right-hand side. Given that matrix multiplication tends to involve many arithmetic calculations, matrix exponentiation will involve many more calculations for even small values of $k$. If we are able to obtain the determinant $\det(A^k)$ without having to actually calculate $A^k$, then this would be a significant advantage. As ever, there is a theorem to help us in this regard.

### Theorem: The Determinant and Matrix Exponentiation

Consider an $n \times n$ matrix $A$ and a positive integer $k$. Then,

$$\det(A^k) = (\det(A))^k.$$
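
The theorem is easy to verify numerically for a small case. The matrix and exponent below are illustrative choices of ours.

```python
def det2(M):
    """Determinant of a 2 x 2 matrix given as nested lists."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(X, Y):
    """Product of two 2 x 2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow2(A, k):
    """k-th power of a 2 x 2 matrix by repeated multiplication."""
    result = [[1, 0], [0, 1]]  # 2 x 2 identity
    for _ in range(k):
        result = matmul2(result, A)
    return result

A = [[2, 1], [0, 3]]  # det(A) = 6
k = 3
# det(A^k) = (det A)^k: here 6**3 = 216.
assert det2(matpow2(A, k)) == det2(A) ** k
print(det2(matpow2(A, k)))  # 216
```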

### Example 4: Matrix Exponentiation and the Determinant of a 2 × 2 Matrix

Consider the matrix

Without calculating $A^2$, find $\det(A^2)$.

### Answer

We will make use of the result $\det(A^2) = (\det(A))^2$. This means that we only need to calculate the determinant $\det(A)$. Using the formula for the determinant of a 2 × 2 matrix, we have

We can then calculate

We can check that this result is correct by performing the matrix multiplication

The determinant is then calculated:

Had we wished to calculate $\det(A^n)$, for $n$ being any number larger than 2, then it is clear why we would want to avoid having to calculate $A^n$, if at all possible. This is especially true when $A$ is a square matrix with a large order. Unless there is a good reason to do otherwise, if we are solely interested in the determinant $\det(A^n)$, then we would use the above theorem as a shortcut. The larger the value of $n$ and the larger the order of $A$, the more useful this result becomes.

### Example 5: Matrix Exponentiation and the Determinant of a 3 × 3 Matrix

Consider the matrix

Without calculating the power of the matrix, find its determinant.

### Answer

Before calculating the determinant of the matrix power, we will first calculate the determinant of $A$. We will use Sarrus’s rule and first highlight the top row of $A$:

We then employ Sarrus’s rule to give

Given that $\det(A^n) = (\det(A))^n$ for any positive integer $n$, we can calculate

We could check this result in the normal way, by calculating the matrix power explicitly and then using Sarrus’s rule to find its determinant, although this process would take much longer than the efficient and fairly painless working that we completed above. Once again, if there is no actual need to calculate the power itself, then we would clearly choose not to do so. In general, linear algebra can be quite troublesome in terms of the sheer number of calculations involved, so any shortcut to obtaining results is definitely desirable.

We will now give another important application of the multiplicative property of the determinant that can spare the effort of having to perform many calculations. We recall that for a square matrix $A$ of order $n \times n$, the inverse matrix $A^{-1}$ (if it exists) is the unique square matrix of the same order such that $AA^{-1} = A^{-1}A = I$, where $I$ is the $n \times n$ identity matrix. Since the inverse matrix is defined with reference to the original matrix and by the operation of matrix multiplication, we expect that our previous theorem will apply to the inverse of a matrix.

### Theorem: The Determinant of the Multiplicative Inverse Matrix

Consider an $n \times n$ matrix $A$ with inverse $A^{-1}$. Then, it is the case that

$$\det(A^{-1}) = \frac{1}{\det(A)}.$$

We can prove this result as follows. Recalling that the inverse matrix satisfies the result $AA^{-1} = I$, we take the determinant of both sides of this equation. We obtain

$$\det(AA^{-1}) = \det(I).$$

It is a known result that the determinant of the identity matrix is equal to 1, giving

$$\det(AA^{-1}) = 1.$$

The multiplicative property of the determinant means that the left-hand side can be written in an alternative form:

$$\det(A)\det(A^{-1}) = 1.$$

It is then a simple matter to show that

$$\det(A^{-1}) = \frac{1}{\det(A)},$$

as required. This result is useful for exactly the same reason as the results pertaining to matrix multiplication and matrix exponentiation: we can save ourselves a lot of effort if we are only interested in the determinant of the matrix inverse and do not actually need to calculate this matrix. The inverse of a 2 × 2 matrix can be described without too much effort. For a 2 × 2 matrix

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},$$

the inverse matrix is

$$A^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$

We can check directly that $AA^{-1} = A^{-1}A = I$, where $I$ is the 2 × 2 identity matrix. Given that $A$ is a 2 × 2 matrix, we have $\det(A) = ad - bc$, which allows the equation above to be equivalently written as

$$A^{-1} = \frac{1}{\det(A)}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$
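
Using exact fractions, we can implement the 2 × 2 inverse formula above and check that $\det(A^{-1}) = \frac{1}{\det(A)}$; the sample matrix is an arbitrary invertible choice of ours.

```python
from fractions import Fraction

def det2(M):
    """Determinant of a 2 x 2 matrix given as nested lists."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inv2(M):
    """Inverse of a 2 x 2 matrix via (1/det) * [[d, -b], [-c, a]]."""
    d = Fraction(det2(M))
    return [[M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d, M[0][0] / d]]

A = [[3, 1], [4, 2]]  # det(A) = 3*2 - 1*4 = 2
assert det2(inv2(A)) == Fraction(1, det2(A))
print(det2(inv2(A)))  # 1/2
```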

### Example 6: The Determinant and the Inverse of a 2 × 2 Matrix

Consider the matrix

After calculating the determinant $\det(A)$ and the inverse matrix $A^{-1}$, do the results confirm that $\det(A^{-1}) = \frac{1}{\det(A)}$?

### Answer

The determinant of $A$ is calculated as follows:

Given the value of $\det(A)$, we expect that $\det(A^{-1}) = \frac{1}{\det(A)}$. This can be confirmed by first deriving the inverse matrix $A^{-1}$, which is calculated using the general result

$$A^{-1} = \frac{1}{\det(A)}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}. \tag{7}$$

For the matrix $A$, we calculate the inverse matrix using equation (7):

The determinant is then calculated:

This confirms the result that we obtained earlier, which was found without us having to calculate $A^{-1}$.

For square matrices with order 3 × 3 or larger, it is possible to write explicit formulas for the inverse matrix in terms of matrix minors by using the adjoint matrix method. Alternatively, the inverse can be calculated by using elementary row operations to obtain the reduced echelon form of an associated augmented matrix. Although each of these methods has situations in which its use is favorable over the other, both incur a degree of computational complexity that is preferable to avoid, if possible. Accordingly, we will not demonstrate any derivation of the inverse matrices in the next example. Although this would in some ways illustrate the power of the above theorem, the calculation of inverses for larger square matrices is very lengthy and would distract from the purpose of the exercise.

### Example 7: The Determinant and the Inverse of a 3 × 3 Matrix

Consider the matrix

Without calculating $A^{-1}$, find $\det(A^{-1})$.

### Answer

We do not yet know that $A$ is invertible, which means that $A^{-1}$ may not actually exist. The quickest way to find this out is to calculate the determinant $\det(A)$. We highlight the entries in the first row of $A$:

Then we use Sarrus’s rule to give

Given that $\det(A) \neq 0$, we know that the inverse matrix $A^{-1}$ does exist. We use the known theorem $\det(A^{-1}) = \frac{1}{\det(A)}$, and thus we can immediately write down $\det(A^{-1})$.

We could have used either Gauss–Jordan elimination or the adjoint matrix method to calculate the inverse $A^{-1}$ and checked that $AA^{-1} = A^{-1}A = I$, where $I$ is the 3 × 3 identity matrix. The determinant of $A^{-1}$ could then be calculated using Sarrus’s rule:

This confirms the result that we calculated in the example.

The results that we have covered in this explainer are all derived from the original theorem that the determinant is multiplicative with respect to matrix multiplication:

$$\det(AB) = \det(A)\det(B).$$

From this result, we were able to derive two subsequent theorems relating to matrix exponentiation and the matrix inverse. There are many other results relating the determinant to other concepts within linear algebra. For example, the matrix transpose has a curious and rather helpful property in relation to the determinant. If $A$ is a square matrix and $A^T$ is the transpose of this matrix, then taking the determinant of either matrix will give the same result. In other words,

$$\det(A^T) = \det(A).$$

There are other results too. For example, suppose that $A$ is an $n \times n$ matrix and that $c$ is a constant. Then, it is the case that

$$\det(cA) = c^n \det(A).$$

We have not expounded on these two results in this explainer, although there are many situations in which they are useful.
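
Both facts are nonetheless easy to spot-check numerically; the 3 × 3 matrix and the scalar $c = 2$ below are arbitrary illustrative choices.

```python
def det3(m):
    """3 x 3 determinant by expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A = [[1, 0, 2], [0, 3, 1], [1, 1, 1]]
At = [[A[j][i] for j in range(3)] for i in range(3)]  # transpose of A
cA = [[2 * x for x in row] for row in A]              # scalar multiple, c = 2

assert det3(At) == det3(A)           # det(A^T) = det(A)
assert det3(cA) == 2**3 * det3(A)    # det(cA) = c^n det(A), with n = 3
print(det3(A), det3(cA))  # -4 -32
```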

Given that the determinant has several associated, elegant results relating to matrix multiplication and exponentiation, we might expect that similar results hold for the operations of addition and subtraction. This assumption would be misguided, as it can easily be shown that the determinant is not additive. In other words, it will usually be the case that

$$\det(A + B) \neq \det(A) + \det(B).$$

Although there are some matrices for which additivity does hold (such as when either $A$ or $B$ is the zero matrix), the statement above can easily be confirmed by selecting two random matrices of order 2 × 2 and then separately calculating $\det(A + B)$ and $\det(A) + \det(B)$. Almost certainly, the two quantities will not be equal.
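
A single concrete pair of matrices (chosen arbitrarily for illustration) is enough to demonstrate the failure of additivity:

```python
def det2(M):
    """Determinant of a 2 x 2 matrix given as nested lists."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 2], [3, 4]]
B = [[2, 0], [1, 2]]
S = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]  # A + B

print(det2(S))            # 10
print(det2(A) + det2(B))  # -2 + 4 = 2
assert det2(S) != det2(A) + det2(B)
```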

### Key Points

- The determinant is multiplicative with respect to matrix multiplication. In other words, $\det(AB) = \det(A)\det(B)$.
- For a positive integer $n$, it is the case that $\det(A^n) = (\det(A))^n$.
- If $A$ is a square matrix and the inverse $A^{-1}$ exists, then the determinants are related as follows: $\det(A^{-1}) = \frac{1}{\det(A)}$.
- The determinant is not additive (i.e., $\det(A+B) \neq \det(A) + \det(B)$ in general).