In this explainer, we will learn how to identify the properties of determinants and use them to simplify problems.
Irrespective of the method used, calculating the determinant of a matrix will usually involve many manual calculations. For $2 \times 2$ matrices, calculating the determinant is straightforward due to the relatively small number of calculations that are required. However, calculating the determinant of a $3 \times 3$ matrix is already significantly more complicated, involving many more arithmetic operations, meaning that the process is prone to introducing errors. The situation becomes much more fraught still when considering matrices that are of order $4 \times 4$, or any square matrix of a larger order.
Calculating the determinant manually will usually require some variant of the cofactor expansion method, which reduces the calculation of the determinant of an $n \times n$ matrix to a problem where we must calculate determinants of $(n-1) \times (n-1)$ matrices that are derived from the original matrix. These determinants are referred to as matrix minors, and it is this iterative approach that begins to hint at the complexity of manually calculating a determinant. As they will be relevant to discussions of the determinant, let us recall the definition of minors and cofactors.
Definition: Minors and Cofactors
Let $A = (a_{ij})$ be a matrix of order $n \times n$. Then, the minor of element $a_{ij}$ (denoted by $M_{ij}$) is the determinant of the matrix obtained after removing row $i$ and column $j$ from $A$.
The cofactor of element $a_{ij}$ (denoted by $C_{ij}$) is written in terms of the minor as
$$C_{ij} = (-1)^{i+j} M_{ij}.$$
Furthermore, in the $3 \times 3$ matrix case, recall that we can use these definitions to formulate an equation for the determinant.
Definition: Determinant of a 3 × 3 Matrix (Cofactor Expansion)
Let $A = (a_{ij})$ be a $3 \times 3$ matrix. Then, for any fixed $i = 1, 2,$ or 3, the determinant of $A$ is
$$\det(A) = a_{i1}C_{i1} + a_{i2}C_{i2} + a_{i3}C_{i3},$$
where each $C_{ij}$ is the cofactor of entry $a_{ij}$. This is known as the cofactor expansion (or Laplace expansion) along row $i$. Alternatively, for any fixed $j = 1, 2,$ or 3, we have
$$\det(A) = a_{1j}C_{1j} + a_{2j}C_{2j} + a_{3j}C_{3j}.$$
This is the cofactor expansion along column $j$.
Typically, we will consider the cofactor expansion along the first row of the matrix. In that case, we can rewrite the above equation using minors as
$$\det(A) = a_{11}M_{11} - a_{12}M_{12} + a_{13}M_{13}.$$
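To make the expansion concrete, the following is a minimal Python/NumPy sketch of the cofactor expansion along the first row; the example matrix used here is an arbitrary one chosen purely for illustration, not a matrix taken from this explainer.

```python
import numpy as np

def minor(A, i, j):
    """Determinant of the submatrix formed by deleting row i and column j (0-indexed)."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

def det_by_first_row(A):
    """Cofactor expansion along the first row: a11*M11 - a12*M12 + a13*M13 (and so on)."""
    return sum(A[0, j] * (-1) ** j * minor(A, 0, j) for j in range(A.shape[1]))

# Arbitrary example matrix, chosen only for illustration.
A = np.array([[2.0, 1.0, 3.0],
              [0.0, 4.0, 5.0],
              [1.0, 0.0, 6.0]])

print(det_by_first_row(A))   # expansion along the first row
print(np.linalg.det(A))      # built-in determinant, for comparison
```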
Due to the risks involved with manually calculating the determinants of large square matrices, it is preferable to avoid such approaches if at all possible. One way of doing so is to understand the various algebraic properties of the determinant and how these can be used to either reduce the number of calculations that are involved or to reduce the magnitude of the entries of the matrix in question (thereby allowing us to work with smaller numbers when finding the determinant).
If we have a square matrix $A$, then we are often interested in knowing whether $A$ is invertible. In other words, if $A$ is a matrix of order $n \times n$, then we would like to know if there exists another matrix $A^{-1}$ of the same order such that $AA^{-1} = A^{-1}A = I$, where $I$ is the identity matrix of the same order. If a square matrix $A$ has an inverse $A^{-1}$, then the matrix will be imbued with a variety of attractive algebraic properties, which would not be the case if such an inverse did not exist. Conveniently, it is a well-known result that this inverse matrix exists if and only if the determinant of $A$ is nonzero. Therefore, naturally, the average mathematician is highly attuned to circumstances that might quickly allow them to spot whether a determinant is zero. This explainer contains several examples of techniques that allow a square matrix to be identified as having a determinant of zero, without the need for the determinant to be fully calculated.
Given that we are naturally most interested in whether the determinant of a square matrix is zero or nonzero, any results of this type (or even of a similar type) are to our advantage. Our first result of this type relates to expanding a matrix across certain cofactors, but definitely not in the same way as the definition given above!
Property: Cofactor Expansion Using a Different Row or Column
Suppose that $A$ is an $n \times n$ square matrix with entries $a_{ij}$. Suppose that we take a particular row $i$. Then, take a different row $k$ (i.e., $k \neq i$) and calculate the cofactors for this row: $C_{k1}, C_{k2}, \ldots,$ and $C_{kn}$. Then, we have
$$a_{i1}C_{k1} + a_{i2}C_{k2} + \cdots + a_{in}C_{kn} = 0.$$
Similarly, for a column $j$ and the cofactors of a different column $k$ (i.e., $k \neq j$), we have
$$a_{1j}C_{1k} + a_{2j}C_{2k} + \cdots + a_{nj}C_{nk} = 0.$$
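As a quick numerical check of this property (again using an arbitrary illustrative matrix rather than the one discussed below), pairing the entries of one row with the cofactors of a different row should always sum to zero, whereas pairing a row with its own cofactors recovers the determinant.

```python
import numpy as np

def cofactor(A, i, j):
    """Cofactor C_ij = (-1)^(i+j) * M_ij, using 0-indexed row and column indices."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(sub)

# Arbitrary example matrix, not the one used in this explainer.
A = np.array([[2.0, 7.0, 1.0],
              [5.0, 3.0, 8.0],
              [4.0, 0.0, 6.0]])

# Entries of row 2 paired with the cofactors of row 3 (1-indexed): the sum is zero.
mismatched = sum(A[1, j] * cofactor(A, 2, j) for j in range(3))

# Entries of row 3 paired with its own cofactors: the sum recovers det(A).
matched = sum(A[2, j] * cofactor(A, 2, j) for j in range(3))

print(round(mismatched, 10))                            # 0.0
print(round(matched, 10), round(np.linalg.det(A), 10))  # both equal det(A)
```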
The first thing to note is that the calculation in this property is certainly not a calculation of the determinant, but the form given does clearly bear some resemblance to the cofactor expansion. We will demonstrate the result with an example.
Suppose that we consider the matrix and choose to find the cofactors of the third row. In other words, we set $i = 3$ and then calculate the following minors:
Evaluating each of these determinants gives the following values:
Then, from these, multiplying by the sign factor $(-1)^{3+j}$ each time, we get the following cofactors:
Suppose now that we were to multiply these cofactors by the corresponding entries of a different row. In other words, let us choose another row from the matrix that is not the third row. Arbitrarily, let us pick the second row. Using the terminology in the property above, we would set $i = 2$. This gives us the following entries:
Then, multiplying these entries by the corresponding cofactors in row 3 and adding the results, we obtain a total of zero:
This confirms the property above, and although we have not given any explanation of this, we should think of this as a strong hint that there are powerful underlying results relating to the determinant. In fact, the property above can be proven as a consequence of other properties that are given later in this explainer. We will now give an example.
Example 1: Using the Properties of Determinants to Evaluate an Expression Involving Cofactors
Consider the equation
Evaluate
Answer
The given matrices are found by taking the matrix minors of the third row of the given matrix. Labeling this matrix as $A$, the relevant minors are
However, we can also see that the coefficients used in the given expansion are the matrix elements
We know that we have $a_{i1}C_{k1} + a_{i2}C_{k2} + a_{i3}C_{k3} = 0$ for any $3 \times 3$ matrix when it is the case that $i \neq k$. In our situation, the expression can be rewritten as follows: which we can conclude is equal to zero.
With this result in hand, we will now begin to cover other properties that relate to the calculation of the determinant of a matrix (rather than the previous property, which relates to quantities that are calculated in a similar way to the determinant, but which are not themselves determinants). There will be many full examples in this explainer, and in between these, we will give some extra information to fully demonstrate the principles involved. For the sake of consistency, we will choose to work frequently with the matrix
Following on from the discussion above, this is an example of a matrix that has a determinant of zero, a fact that can be verified by expanding the determinant along the top row of entries of the matrix, which we highlight below. This is an example of the cofactor expansion method that we defined above, where the coefficients are the top row of entries of the matrix. The calculations are as follows:
This allows us to complete the calculation
Because the matrix $A$ has a determinant of zero, it is not invertible. This means that there does not exist an inverse $A^{-1}$ such that $AA^{-1} = A^{-1}A = I$, where $I$ is the identity matrix.
This explainer will cover many separate results concerning the determinant, and we will begin by focusing on the transpose of a matrix. As a brief reminder, we can denote a matrix of order $m \times n$ as $A = (a_{ij})$, where $a_{ij}$ represents the entry in row $i$ and column $j$ of $A$. The transpose of $A$ is a matrix of order $n \times m$ and is usually denoted $A^{T}$. This new matrix can be written as $A^{T} = (a_{ji})$. Another way of saying this is that the entry in the $i$th row and $j$th column of $A$ is the entry in the $j$th row and $i$th column of the transpose $A^{T}$.
We can best demonstrate this with a brief example. Suppose that we defined the following matrix $A$ and its transpose $A^{T}$:
For the sake of demonstration, we have highlighted the entry of , which is in the 2nd row and 3rd column of . We can see that this is exactly the same as the highlighted entry in the 3rd row and 2nd column of the matrix transpose . The analogous behavior can be observed for every nondiagonal element, and in this sense, the transpose can be thought of as the operation that flips the matrix around the diagonal entries.
With the above properties in mind, we will now state a property describing how the matrix transpose affects the determinant of a square matrix. The following result is deceptively simple and can often be highly nonintuitive until the governing principles of the determinant are fully understood. Please be assured that the result is indeed correct and, as ever, can be tested on any number of arbitrary examples (as long as we are aware that such a program of work would not constitute a proof).
Property: The Determinant and the Matrix Transpose
Suppose that $A$ is a square matrix and that $A^{T}$ is the transpose of this matrix. Then, it is the case that
$$\det(A) = \det\left(A^{T}\right).$$
In other words, the determinant of the transpose is equal to the determinant of the original matrix.
This property is so simple that it may induce a sense of skepticism in the reader. We can confirm it with reference to the matrices $A$ and $A^{T}$ as defined above. We previously calculated the determinant of $A$ directly, hence showing that $\det(A) = 0$. The result above implies that $\det\left(A^{T}\right) = \det(A) = 0$, which means that the transpose matrix will also have a determinant of zero. We can confirm this by expanding along the top row of entries. We highlight the top row as follows:
Then, we expand across the top row as shown:
This confirms that the determinant of the transpose matrix is also zero, as we expected.
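For readers who would like to test this property on further examples, the following short NumPy sketch checks it on an arbitrary matrix of our own choosing (not the matrix $A$ from this explainer).

```python
import numpy as np

# Arbitrary example matrix (not the matrix A from this explainer).
M = np.array([[1.0, 4.0, 2.0],
              [3.0, 0.0, 5.0],
              [7.0, 6.0, 1.0]])

# det(M) and det(M^T) agree up to floating-point rounding.
print(np.isclose(np.linalg.det(M), np.linalg.det(M.T)))   # True
```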
Example 2: Understanding the Effect of Transposition on the Determinant of a Matrix
Without evaluating the determinant, determine which of the following is equal to it.
Answer
We will begin by labeling the original matrix as $A$. We know that the determinant of a matrix is equal to the determinant of that matrix's transpose, which in this case means that $\det(A) = \det\left(A^{T}\right)$.
A quick way of identifying the matrix transpose can be to examine one particular row of and see how this would be affected under transposition. Suppose that we took the matrix and highlighted the first row, as shown:
One of the properties of matrix transposition is that the first row of $A$ will correspond to the first column of $A^{T}$, since in general, the $i$th row of $A$ must correspond to the $i$th column of the transpose $A^{T}$. In other words, the transpose must be of the following form:
From the options above, we can see that only a and d may refer to taking the determinant of the transpose since the first columns of b and c do not match the first column highlighted above.
We can now identify the correct transpose by highlighting the second row of $A$, which under transposition becomes the second column of $A^{T}$. As we can see, the second row is highlighted:
And once transposed, this becomes the second column, as shown:
This new detail excludes option a, meaning that option d must be the correct answer. Indeed, we can apply the same trick as before to check that the third row of $A$ matches the third column of $A^{T}$. This is verified in the following line of working, which confirms that option d is the correct choice:
We could, of course, separately calculate the determinant of both of these matrices to check that they are equal. Using any preferred method, we would find that the two determinants do indeed take the same value.
The determinant of a transpose matrix being equal to the determinant of the original matrix is a powerful result with a suite of applications. Although this result can occasionally be useful in its own right, one of its main strengths is the ability to reinforce other results that feature the operation of transposition (either explicitly or implicitly). We will see an example of this in the next definition, which provides another way of simplifying the calculation of a determinant when a matrix has particular properties. We mentioned previously how we are often most interested in finding out whether or not a matrix has a zero/nonzero determinant, and the following result provides a powerful tool for being able to draw such conclusions without having to complete the full set of calculations that would normally be necessary in computing the determinant.
Property: The Determinant of a Matrix with Zero Rows or Columns
Suppose that $A$ is a square matrix and that $A$ has at least one zero row or at least one zero column. Then, $\det(A) = 0$.
In this property, a "zero row" means a row of the matrix for which all entries are zero. We could give a demonstration of this using familiar terms by taking a $3 \times 3$ matrix whose top row is zero and then finding the determinant by expanding along the top row. This would quickly show that the determinant must be zero.
As a less immediate demonstration, the following matrix has a zero row in the second row, which we have highlighted:
Given the large number of zeros in this matrix, we can see how a full determinant calculation would be simplified, because we would be performing many operations where we multiply by these zero entries. However, the result above means that we need not complete any manual calculations: we have seen that there is a zero row, and hence the determinant is zero.
We can see how the first result about the determinant and the transpose is also applicable in this situation. We know that the determinant of a matrix is equal to the determinant of its transpose, which means that the transpose of this matrix must also have a determinant of zero. In the equation below, we have given the transpose matrix, now highlighting the second column (which corresponds to the second row of the original matrix). We can see that there is a zero column in the second column of the transpose:
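The same conclusions can be checked numerically. The sketch below uses an arbitrary $4 \times 4$ example with a zero second row (not the matrix shown above) and confirms that both it and its transpose have a determinant of zero.

```python
import numpy as np

# Arbitrary 4x4 example matrix with a zero second row.
M = np.array([[3.0, 1.0, 4.0, 1.0],
              [0.0, 0.0, 0.0, 0.0],
              [2.0, 7.0, 1.0, 8.0],
              [5.0, 9.0, 2.0, 6.0]])

print(np.linalg.det(M))     # 0.0: a zero row forces the determinant to be zero
print(np.linalg.det(M.T))   # 0.0: equivalently, the transpose has a zero column
```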
Example 3: Finding the Determinant of a Matrix That Includes a Row of Zeros
Find the value of
Answer
The determinant of a matrix will be equal to zero if it has at least one zero row or one zero column. The matrix above has a zero row in the third row, which means that the determinant will be zero. We could check this result by manually calculating the determinant using any preferred method. In this case, we will expand along the bottom row using the standard cofactor method.
We begin by highlighting all entries in the third row of the original matrix, which we have labeled as :
Then, we can find the determinant by expanding along the third row, as shown:
There is no need to continue the calculations any further, since every highlighted coefficient is zero, meaning that the total sum will be zero irrespective of the values of the corresponding cofactors.
As well as being a strong standalone result, the previous property can be thought of as a consequence of a combination of several other separate, arguably more powerful, results. At the very least, finding zero rows or zero columns can be thought of as a desirable and fortuitous outcome that is likely to result from taking a broader approach to calculating the determinant of a matrix.
We said before that we would usually like to know whether or not the determinant of a matrix is equal to zero and that we would aim to develop an array of techniques for calculating the determinant that could be used as alternatives to immediately applying one of the standard methods. The next result will meet both of these wishes and will also move us toward a much larger and more powerful set of results regarding the determinant.
Property: The Determinant and a Matrix with a Repeated Row or a Repeated Column
Suppose that $A$ is a square matrix and that there is either a repeated row or a repeated column. Then, $\det(A) = 0$.
To demonstrate the result above, let us consider the following matrix:
In this matrix, the second column is equal to the third column, and the property above implies that the determinant of this matrix must therefore be zero. We can show that this is the case by expanding along the first column of this matrix. First, we highlight the relevant entries, as shown:
Then, we proceed to compute the determinant, as shown:
As we can see, the determinant of the matrix is certainly zero, which is caused by the determinants of the matrix minors all being equal to zero (as we can clearly see without the need for explicit calculation).
The property above applies equally to rows and columns because of their behavior with respect to transposition and the fact that the determinant of a matrix transpose is equal to the determinant of the matrix itself.
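As an optional numerical check, the following sketch builds an arbitrary matrix with two identical columns (chosen for illustration only) and confirms that its determinant vanishes.

```python
import numpy as np

# Arbitrary example matrix whose second and third columns are identical.
B = np.array([[1.0, 5.0, 5.0],
              [2.0, 3.0, 3.0],
              [4.0, 7.0, 7.0]])

print(np.linalg.det(B))   # 0.0 (up to floating-point rounding): repeated column
```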
Example 4: Finding the Determinant of a Matrix That Includes Repeated Rows
Use the properties of determinants to evaluate
Answer
We can see that the matrix has a repeated row, as highlighted:
We know that any square matrix with at least one repeated row or column will have a determinant of zero.
We can check this by calculating the determinant directly, expanding along the third row. We label the given matrix as $A$ and then highlight all entries in the third row, as follows:
Then, we calculate the determinant of $A$ as shown:
This shows that the determinant is indeed zero, which we can see once we have evaluated each of the matrix minors, all of which are equal to zero.
We will now begin to introduce a series of results regarding the determinant of a square matrix, all of which should be considered as part of a larger tool kit. As with any decent tool kit, success comes from how the different tools can be used in conjunction. With the following results for the determinant (as well as those that we have already given), we will normally look to employ a particular combination of them, depending on the specific nature of the problem.
A curious property of determinants is the effect of swapping a pair of rows or a pair of columns. In terms of finding the determinant of a matrix, the following result is arguably not quite as useful as some of the results that will follow it, although it can still be used to simplify calculations and move specific entries from one location to another, more convenient location.
Property: The Effects on the Determinant of Swapping Rows or Columns
Suppose that $A$ is a square matrix. Further suppose that we decide to switch row $i$ with row $j$, or column $i$ with column $j$, where $i \neq j$, to give the new matrix $B$. Then, it is the case that
$$\det(B) = -\det(A).$$
Furthermore, suppose that we decide to complete $m$ nontrivial row swaps or column swaps, resulting in the matrix $B$. Then, it would be the case that
$$\det(B) = (-1)^{m}\det(A).$$
Given that this result is one of the easier types of results to understand, we will apply it directly to an example. Furthermore, we will streamline the working by simply counting the number of row or column swaps that are required to transform the first matrix into the second. Provided that we only use this type of operation, we will obtain a result of the form $\det(B) = (-1)^{m}\det(A)$, where $m$ is the total number of row or column swaps.
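Before working through the example, here is a brief NumPy sketch of the swap property using an arbitrary example matrix: one swap flips the sign of the determinant, and two swaps restore it.

```python
import numpy as np

# Arbitrary example matrix.
A = np.array([[2.0, 1.0, 3.0],
              [4.0, 0.0, 5.0],
              [1.0, 6.0, 2.0]])

B = A[[2, 1, 0], :]                 # one swap: exchange rows 1 and 3
C = A[:, [1, 0, 2]][[2, 1, 0], :]   # two swaps: exchange columns 1 and 2, then rows 1 and 3

print(np.isclose(np.linalg.det(B), -np.linalg.det(A)))   # one swap flips the sign
print(np.isclose(np.linalg.det(C),  np.linalg.det(A)))   # two swaps restore the sign
```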
Example 5: Understanding the Effect of Swapping Rows or Columns on the Determinant of a Matrix
Consider the equation
Find, without expanding, the value of the determinant
Answer
The two matrices appear to be very similar and related by a series of row and column swaps. Assuming that this is the case, we will attempt to manipulate the first matrix into the second matrix using only row or column swap operations, keeping count of how many of these we apply. We recall that completing one row swap or column swap operation induces a sign change in the determinant. If the original matrix is $A$ and the resulting matrix is $B$, then we will have the determinant relation $\det(B) = -\det(A)$. If there are two such operations, then this introduction of the negative sign will be undone, given that $(-1)^{2} = 1$. Following from this, if we complete $m$ row or column swaps, then we will have the determinant relationship $\det(B) = (-1)^{m}\det(A)$.
We start with the original matrix and decide to switch the first and the third columns, giving the new matrix
We have performed one swap operation so far, meaning that currently $m = 1$. Now, we observe that the entries of the first column of this matrix are a match for the entries in the desired matrix, albeit in a different order. This suggests that we should not perform any further column operations that involve the first column. In contrast, we can see that the second and the third columns need to be switched, giving the new matrix
We have completed two swap operations to get to this stage, meaning that at the moment $m = 2$. Inspecting the matrix above, we can see that each column now contains the correct entries, although not in the correct order. This situation is easily remedied, as we can complete our working by swapping the first row with the third row, giving the desired matrix as shown:
We needed a total of three swap operations to achieve this, meaning that $m = 3$. Assuming that the original matrix is labeled $A$ and the desired matrix is labeled $B$, this gives the relationship $\det(B) = (-1)^{3}\det(A) = -\det(A)$. Given the value of $\det(A)$ that we were told initially, the final answer is the negative of that value.
We will now give a result that will assist us in some of the remaining examples, as well as being a generally interesting result about determinants that hints at a deeper mathematical meaning. There are multiple ways of expressing and using the following result, all of which are equivalent. We choose the following form because the statement of the result relates more naturally to the previous result.
Property: The Determinant and Common Factors of a Particular Row or Column
Suppose that $A$ and $B$ are square matrices that are identical except for the entries of a particular row $i$. Furthermore, suppose that each entry of $B$ in this row is a scalar multiple of the corresponding entry in row $i$ of $A$. In other words, for a fixed $i$, suppose that $b_{ij} = c\,a_{ij}$ for all $j$, where $c$ is a constant. Then, it is the case that
$$\det(B) = c\det(A).$$
Similarly, this result also holds when there is a column of $B$ that is a scalar multiple of the corresponding column in $A$.
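As an illustrative check of this property (with an arbitrary example matrix and the constant $c = 4$, neither of which comes from this explainer), scaling a single row by $c$ scales the determinant by the same factor.

```python
import numpy as np

# Arbitrary example matrix (labels A and B are just for this sketch).
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])

B = A.copy()
B[1, :] *= 4.0   # scale the second row of A by the constant c = 4

# det(B) should equal c * det(A).
print(np.isclose(np.linalg.det(B), 4.0 * np.linalg.det(A)))   # True
```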
The above result is particularly useful and can be employed so as to decrease the complexity of calculating the determinant, by means of removing large or inconvenient factors that are common either to a specific row or a specific column. We will demonstrate with the following matrix:
Using a direct method such as cofactor expansion, we could compute this determinant directly. However, we can also use the result above and note that the matrix has two rows that contain common factors. In this case, the second row has a common factor of 2, and the third row has a common factor of 3. Looking first at the second row, we can remove the common factor of 2 and use the property above to obtain the determinant relation:
The numbers inside the rightmost matrix are now smaller, meaning that calculating the determinant of this matrix is safer in the sense that we are less likely to introduce errors. We will not yet calculate the determinant of the rightmost matrix, given that we can also remove a factor of 3 from the third row. Combining this with the previous calculation gives the following:
The new rightmost determinant is now easier to calculate due to the numbers involved being lower. Evaluating this directly using any preferred means recovers the result
This result may not seem as though it provides any immediately useful benefits, but it is most certainly a powerful tool that can be used to simplify the calculation of determinants quite dramatically (albeit usually in tandem with other results). This is especially the case when working with square matrices of order $4 \times 4$ or larger, where one is always desperately searching for any tricks that might make the calculation easier!
Example 6: Understanding How a Common Factor in a Row or Column of a Matrix Affects the Determinant
Given that find a relation between and without expanding either determinant.
Answer
By briefly inspecting both matrices, we can see that the first row of one matrix is the same as the first row of the other after a scale factor of 3 has been applied. Likewise, the same holds for the second rows with a scale factor of 6 and, finally, for the third rows, where the scale factor is 5. With this knowledge, we will now begin to write the determinant of one matrix in terms of the determinant of the other. We will do this by focusing on each of the rows in turn, using the stated scale factors to give a relationship between the determinants.
Given that the first row of is a copy of the first row of after applying a scale factor of 3, we can establish the relationship
We are now in a position where the first row of the rightmost matrix is a match for the first row of the matrix .
Now, we will work on the second row, from which we will remove a scale factor of 6. This establishes the following relationship between the determinants:
Apart from the third row, the rightmost matrix is now a match for the matrix that was given in the question. To complete our working, we will remove the factor of 5 from the third row of the rightmost matrix. This gives
This gives the relationship .
In the previous two sets of working, we only removed scale factors that were positive integers. It is worth noting that this does not have to be the case, and we could easily have removed any scale factor, whether it was noninteger or negative. Depending on the situation, this can be a helpful trick, especially for reducing the number of negative values appearing in a matrix (which otherwise tend to increase the risk of a sign error being made).
We will now introduce one of the most versatile and powerful results that can be used to find the determinant of a matrix. The strength of this result lies as much in its simplicity of use as in its vast utility within linear algebra.
Property: The Invariance Property of the Determinant
Suppose that $A$ is a square matrix and that we obtain matrix $B$ by adding a constant multiple of one row to another row. Then, the determinant is invariant under this operation. In other words,
$$\det(B) = \det(A).$$
The same is true if we add a constant multiple of one column to another column.
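The following short sketch checks the invariance property on an arbitrary example matrix: adding three times the first row to the third row leaves the determinant unchanged.

```python
import numpy as np

# Arbitrary example matrix.
A = np.array([[2.0, 1.0, 3.0],
              [4.0, 0.0, 5.0],
              [1.0, 6.0, 2.0]])

B = A.copy()
B[2, :] += 3.0 * A[0, :]   # add 3 times row 1 to row 3

# The determinant is unchanged by this row operation.
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))   # True
```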
For example, suppose that we worked with the original matrix from the beginning of this explainer:
We already know that $\det(A) = 0$, so we expect to achieve this result. The invariance property of the determinant, when combined with another established result, can achieve this with ease. For example, suppose that we subtracted a copy of the second row from the third row. This would give the matrix
We will not do so yet, but we could check that the determinant of this new matrix is zero; indeed, this must be the case by the invariance property. However, now suppose that we subtracted a copy of the first row from the second row. This gives the new matrix
Now, we can immediately bring in a result from earlier in this explainer, which stated that a matrix with a repeated row or column has a determinant of zero. This means that the matrix above must have a determinant of zero and, by the invariance property, so must the original matrix $A$.
Hopefully, this example has begun to truly demonstrate how the range of results around the determinant of a matrix can be used to calculate this quantity without having to use a direct method. In the example above, after some practice, we would be able to perform most of the arithmetic without needing to write anything down, allowing us to quickly find that the particular value of the determinant is zero. In the following example, we will give another demonstration of this, which will require several techniques that we have previously covered in this explainer.
Example 7: Using the Invariance Property of Matrix Determinants
Find, without expanding, the value of the determinant of
Answer
We will use the invariance property of the determinant to manipulate the expression above into a simpler form. We know that adding a constant multiple of one row to another will leave the determinant unchanged, so we begin by adding a copy of the first row to the third row. This gives
We are now in a very convenient situation, because the third row is just a scalar multiple of the second row. We know that we can factor out any common scale factor from a particular row or column when calculating the determinant, meaning that we can establish the following:
Now, we have found that the second row of the rightmost matrix is just a copy of the third row of this matrix, meaning that there is a repeated row. We know that a matrix with either a repeated row or a repeated column will have a determinant of zero, which must therefore be the case here. Hence, the determinant of the given matrix is zero.
The various results that we have presented so far can often be used in conjunction with each other, in order to obtain extra relevant information about a matrix and its determinant. Usually, we will have to work with some form of clue that can be deduced from the original matrix or a simple manipulation of it. There is no fixed approach for how we might scan a matrix to decide whether we should use a particular technique, and this is more akin to an art form that becomes honed with practice. A demonstration of this is given in the following example.
Example 8: Finding Factors in the Determinant of a Matrix
Select a factor of the determinant
Answer
We notice that there appears to be a relationship between the first and third rows, in that the constant terms of the top row ($-4$, $-8$, and $-8$) are the same as those in the bottom row (4, 8, and 8) except for a difference in sign. This means that if we add the two rows together, then we should expect all of these terms to cancel out. Regardless of our precise aim with the matrix above, if we are even remotely interested in the determinant, then this will be a valuable step.
Recall that, by the invariance property, adding a scalar multiple of one row to another row does not change the value of the determinant. Hence, we add a copy of the third row to the first row, giving
We can see now that the first row features a constant factor of . This includes the second entry that has a value of zero, which could alternatively be written as . This means that we can remove a factor of from the determinant, giving
This shows that is a factor of the determinant of the original matrix, meaning that option c is the correct choice.
We will now introduce a versatile result that can be used to assist in calculating the sum of two separate determinants. We state the result in terms of relationships between the rows of the two matrices, although, by this point, we can probably believe that the result applies equally well to columns.
Property: The Summation Property of Two Determinants
Suppose that $A$ and $B$ are square matrices of the same order and that they are identical except for one row $k$. In other words, suppose that $a_{ij} = b_{ij}$ except for the row of values where $i = k$. Furthermore, create a new matrix $C$ that is identical to $A$ and $B$ except for row $k$, meaning that $c_{ij} = a_{ij} = b_{ij}$ when $i \neq k$. Finally, assume that row $k$ of the matrix $C$ is found as the summation of the corresponding rows in the matrices $A$ and $B$. In other words, assume that $c_{kj} = a_{kj} + b_{kj}$ for all $j$. If these conditions are met, it is the case that
$$\det(C) = \det(A) + \det(B).$$
The analogous result holds for column operations.
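To see the summation property in action computationally, the sketch below uses two arbitrary matrices that agree everywhere except in their third rows (these are not the matrices used in the demonstration that follows).

```python
import numpy as np

# A and B are identical except in their third rows (arbitrary example matrices).
A = np.array([[1.0, 2.0, 0.0],
              [3.0, 1.0, 4.0],
              [2.0, 5.0, 7.0]])
B = np.array([[1.0, 2.0, 0.0],
              [3.0, 1.0, 4.0],
              [6.0, 0.0, 1.0]])

# C copies the shared rows and sums the two differing third rows.
C = A.copy()
C[2, :] = A[2, :] + B[2, :]

print(np.isclose(np.linalg.det(C), np.linalg.det(A) + np.linalg.det(B)))   # True
```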
The formality of this result might obscure how easy it is to apply, so we will give a specific example to help demonstrate the point. Suppose that we had the two matrices
We could calculate both of these determinants directly, in which case we would be able to find their sum by simply adding the two values together.
Above, we took the preemptive step of highlighting the only row in which these two matrices differ. Suppose now that we created a new matrix whose first and second rows are equal to the (shared) first and second rows of the two matrices above, but whose third row is the elementwise sum of their third rows. In other words, we would have
We can then use any preferred method to calculate the determinant of this new matrix, and doing so confirms that it is equal to the sum of the two determinants above, as expected. We will give another demonstration of this in the following example, albeit applied to columns rather than rows.
Example 9: Finding the Sum of Two Determinants Using Properties of Determinants
Use the properties of determinants to evaluate
Answer
We will label the two given matrices as follows:
These two matrices differ only by the highlighted column, with all other entries being identical. Since we are asked to calculate the sum of the two determinants, we may avoid the direct method by exploiting this property. Suppose that we create a new matrix that is identical to the two given matrices except for the second column, which is found by summing the respective entries of each matrix. We know that if this construction is applied to the second column (or indeed, to any column or any row), the determinant of the new matrix will be equal to the sum of the two original determinants.
In this particular case, let
We observe that this matrix has a repeated column, which is especially convenient. This is because it is a known property of square matrices that any repeated row or column implies a determinant of zero. This means that the determinant of the new matrix is zero; and given that we have already established that this determinant equals the sum of the two given determinants, this in turn implies that the required sum is also zero. This can be confirmed by direct methods.
The previous property allowed us to complete the previous example by only having to calculate the determinant of one matrix, rather than two (as it might have initially seemed). Furthermore, this manipulated the problem into a form where it was easy to recognize that the answer was zero, because we were able to identify that the matrix in question had a repeated column. This is just another example of how useful it can be to understand the full range of properties of the determinant and to be able to combine these in order to avoid having to complete many arithmetic operations.
In the final part of this explainer, we will state a very strong result that can be highly useful when calculating the determinant of a matrix. The following result concerns triangular matrices, which we will shortly define. Triangular matrices are incredibly useful in linear algebra and have been studied in depth, with the following result being a particular standout. Even better, the following result will apply to diagonal matrices, which are also central to the study of linear algebra and are a special case of a triangular matrix.
Property: Calculating the Determinant of a Triangular Matrix
Let $A = (a_{ij})$ be a square matrix (a matrix of order $n \times n$) where $a_{ij} = 0$ for all $i > j$ (i.e., let $A$ be an upper triangular matrix). Then, the determinant of $A$ is the product of the diagonal entries; that is,
$$\det(A) = a_{11}a_{22}\cdots a_{nn}.$$
Additionally, suppose that $A$ is a square matrix where $a_{ij} = 0$ for all $i < j$ (i.e., let $A$ be a lower triangular matrix). The determinant of this matrix is also the product of the diagonal entries.
It follows that an upper triangular or a lower triangular matrix has a determinant of zero if one of the diagonal entries is equal to zero. Furthermore, this result applies immediately to diagonal matrices, which are matrices that are simultaneously in upper triangular and lower triangular form. In other words, a matrix would be in diagonal form if $a_{ij} = 0$ for all $i \neq j$.
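As a final numerical check, the sketch below confirms the triangular-matrix property on an arbitrary upper triangular example.

```python
import numpy as np

# Arbitrary upper triangular example matrix.
U = np.array([[2.0, 7.0, 1.0],
              [0.0, 3.0, 5.0],
              [0.0, 0.0, 4.0]])

# The determinant equals the product of the diagonal entries: 2 * 3 * 4 = 24.
print(np.isclose(np.linalg.det(U), np.prod(np.diag(U))))   # True
print(np.linalg.det(U))                                    # approximately 24.0
```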
This result is often helpful when combined with the invariance property of determinants. To illustrate this, we will return for the final time to our original matrix:
This matrix is not in upper triangular or lower triangular form, but it can be manipulated into this form using the invariance property. Suppose that we subtracted four times the first row from the second row and then subtracted seven times the first row from the third row. This would give the new matrix
This matrix is nearly in upper triangular form, and we can complete the process by subtracting twice the second row from the third row, giving
This matrix is now in upper triangular form (the fact that one of the diagonal entries is zero does not contradict the definition). The determinant is therefore the product of the diagonal entries. Given that the third diagonal entry is zero, the product of all diagonal entries is zero and hence the determinant of this matrix is zero (as we already knew!). In the next example, we will see a direct application of this property to a lower triangular matrix.
Example 10: Using the Properties of Determinants to Evaluate Triangular Matrices
Use the properties of determinants to evaluate
Answer
We recall that if a matrix is either in upper triangular or lower triangular form, then the determinant is the product of the diagonal entries. We will label the matrix above as $A$ and observe that it is in lower triangular form, given that $a_{ij} = 0$ for all $i < j$. The determinant is then the product of the diagonal entries. In other words,
In the following example, we will see how we can apply the result about triangular matrices directly to diagonal matrices. The process is effectively identical in that we will only be taking the product of the diagonal entries.
Example 11: Using the Properties of Determinants to Evaluate Diagonal Matrices
Consider the equation
Determine the value of
Answer
The matrix is in diagonal form, which means that the determinant is just the product of the diagonal entries. Labeling this matrix as , we have
Thus, we have the equation , which is rearranged to give . The final answer is then .
We have now completed our exposition of some central results surrounding the concept of the determinant. These represent the core results, being those that we are likely to use most often. As mentioned above, it can take a while to develop an intuition as to when these results should be used. Given a completely random square matrix, there may not be any of the results above that we could easily apply. In other words, no matter how strong our understanding of the results in this explainer is, it is perfectly possible that we will find ourselves in a situation where they offer no real advantage, meaning that we are forced to use a direct method.
Let us summarize what has been learned in this explainer.
Key Points
For each of the following, suppose that $A$ and $B$ are square matrices of order $n \times n$. Then, the following results apply:
- The cofactor expansion of a matrix using coefficients from row $i$ and cofactors from row $k$, where $i \neq k$ (which is not the same as calculating the determinant), gives us $a_{i1}C_{k1} + a_{i2}C_{k2} + \cdots + a_{in}C_{kn} = 0$. Similarly, for a column $j$ and the cofactors of a different column $k$ (i.e., $k \neq j$), we have $a_{1j}C_{1k} + a_{2j}C_{2k} + \cdots + a_{nj}C_{nk} = 0$.
- The determinant of a matrix is unaffected by transposition. In other words, $\det(A) = \det\left(A^{T}\right)$.
- If $A$ contains either a zero row or a zero column, then it will have a determinant of zero.
- If $A$ has at least one repeated row or repeated column, then it will have a determinant of zero.
- Suppose that matrix $B$ can be obtained from matrix $A$ by using $m$ row or column swap operations. Then, we have $\det(B) = (-1)^{m}\det(A)$.
- Suppose that $A$ and $B$ are identical except that there is one row or column of $B$ that is a constant scalar multiple $c$ of the same row or column in $A$. Then, we have $\det(B) = c\det(A)$.
- If matrix $B$ can be obtained from the matrix $A$ by adding a constant multiple of one row to another row, then it will be the case that $\det(B) = \det(A)$. The same is true for the analogous column operation; this is known as the "invariance" property of the matrix determinant.
- Suppose that $A$, $B$, and $C$ are square matrices of the same order and that they are identical except for one row $k$. In other words, assume that $a_{ij} = b_{ij} = c_{ij}$ when $i \neq k$ and that $c_{kj} = a_{kj} + b_{kj}$ for all $j$. If these conditions are met, it is the case that $\det(C) = \det(A) + \det(B)$.
- If a matrix is upper triangular, lower triangular, or diagonal, then the determinant will be equal to the product of the diagonal entries. This implies that if at least one of these entries is zero, then the determinant will be zero.