In this explainer, we will learn how to find the LDU and PLU decompositions of a matrix.

Often, we choose to describe matrices as being of one or more “types,” which are typically determined by the order of the matrix as well as other additional features. For example, one of the simplest types of matrix is the square matrix, which has an equal number of rows and columns. Square matrices are the only type of matrix for which we can define the determinant or the multiplicative inverse, which are two reasons why they are interesting to mathematicians. A subclass of the square matrices is the lower-triangular and upper-triangular matrices, which are even more attractive because their determinant is easy to calculate (being simply the product of the diagonal entries) and any calculation involving them is easier due to the large number of entries that are zero.

Given that triangular matrices have many desirable algebraic properties, one common aim is to split or “decompose” a matrix $A$ into a lower-triangular matrix $L$ and an upper-triangular matrix $U$ such that $A = LU$, which is known as the “LU” decomposition. As well as offering an attractive recipe for the matrix, writing a square matrix as the product of two triangular matrices can make subsequent calculations much easier to complete. The drawback of such an expression is that it cannot always be derived, which some might consider to be a very significant limitation!

In this explainer, we will assume knowledge of elementary matrices and the LU decomposition method using Doolittle’s algorithm (or any other valid method), as there are separate explainers for these. Instead, this explainer will focus on demonstrating how to complete PLU decomposition in several examples. In practice, the completion of a PLU decomposition is very similar to the process used for LU decomposition, only with the possibility of having to use the row-swap operation and the associated elementary matrix (which is also a permutation matrix, in this case). We will first explain several examples where only one row swap is necessary and later we will give examples where more than one is needed.
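Before moving to the worked examples, it may help to see the LU stage in code. The sketch below is a pure-Python illustration of Doolittle-style elimination without row swaps (the 3 × 3 matrix is a hypothetical example of our own, not one from this explainer). It succeeds only while every pivot it meets is nonzero; hitting a zero pivot is exactly the situation that forces us toward a PLU decomposition.

```python
# Doolittle-style LU decomposition without row swaps (a minimal sketch).
# A zero pivot makes it fail, which is the case that PLU decomposition fixes.

def lu_no_pivoting(A):
    """Return (L, U) with L @ U == A, or raise ValueError on a zero pivot."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]  # work on a copy of A
    for k in range(n):
        if U[k][k] == 0:
            raise ValueError(f"zero pivot in row {k}: a row swap (PLU) is needed")
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]  # multiplier for the operation r_i -> r_i - m * r_k
            L[i][k] = m            # record the multiplier in L
            for j in range(n):
                U[i][j] -= m * U[k][j]
    return L, U

# A hypothetical matrix with nonzero pivots decomposes without trouble:
L, U = lu_no_pivoting([[2, 1, 1], [4, 3, 3], [8, 7, 9]])
print(L)  # [[1.0, 0.0, 0.0], [2.0, 1.0, 0.0], [4.0, 3.0, 1.0]]
print(U)  # [[2, 1, 1], [0.0, 1.0, 1.0], [0.0, 0.0, 2.0]]
```

By contrast, any matrix with a zero in the top-left entry makes the sketch raise immediately, which is precisely where the row-swap (permutation) matrices of this explainer enter.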

Consider the following square matrix, which has the pivot entries highlighted:

To complete the LU decomposition of , we would ordinarily choose to use a nonzero pivot entry in the first row to eliminate any pivot entries in the rows below. However, this is not possible with the matrix above because the entry in the first row and first column is zero. The remedial step is simply to use an elementary row matrix of the first type, which corresponds to a row-swap operation. In this case, we use the row-swap operation which has the corresponding elementary matrix

Suppose that we were to create a new, row-equivalent matrix by multiplying the elementary matrix on the left-hand side with the original matrix . Then, the effect would be

The pivots are now aligned in a way that is much more favorable, meaning that we could now operate with the new, row-equivalent matrix to complete the LU decomposition (of this new matrix and not the original matrix , which does not have an LU decomposition). There is still the chance, of course, that we might need to use another elementary row operation of the first type, although we will only do so if absolutely necessary, preferring to use elementary row operations of the second and third types.
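The effect of such a row swap can be checked numerically. The snippet below is a pure-Python sketch using a hypothetical 3 × 3 matrix whose top-left entry is zero, standing in for the matrix above: left-multiplying by the row-swap elementary matrix exchanges the first two rows.

```python
# Multiplying on the left by a row-swap elementary matrix (which is also a
# permutation matrix) exchanges the corresponding rows of the target matrix.

def matmul(X, Y):
    """Naive matrix product of two lists of lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def swap_matrix(n, r, s):
    """Elementary matrix of the first type for the operation r_r <-> r_s (0-indexed)."""
    E = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    E[r], E[s] = E[s], E[r]
    return E

A = [[0, 2, 3],   # zero in the pivot position, so a swap is needed
     [1, 4, 5],
     [6, 7, 8]]
E = swap_matrix(3, 0, 1)
print(matmul(E, A))  # [[1, 4, 5], [0, 2, 3], [6, 7, 8]]
```

After the multiplication, the nonzero entry sits in the top-left pivot position, just as in the worked matrix above.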

Focusing on the pivot entries of the matrix above, we see that to achieve upper-triangular form we need to at least eliminate the pivot entry in the second row of the matrix. This can be completed with the elementary row operation . The corresponding elementary matrix and the inverse are as follows:

Given that these two matrices are inverses of each other, we have that , meaning that we can inject this pairing on the left-hand side of the trivial equation . This gives the matrix equation

The leftmost matrix is in lower-triangular form, as we would expect. The second and third matrices can be combined using matrix multiplication to give

The middle matrix is not yet in upper-triangular form, which means that we have not yet solved the problem properly. That being said, the only entry that is preventing the middle matrix from being in the required form is the pivot in the third row. If we can eliminate this, then we will have solved the problem. As is often the way in linear algebra, we are now faced with a choice as to how we proceed, as there are infinitely many combinations of elementary matrices that will achieve what we need. We choose first to multiply every entry in the third row by 6 using the row operation . The corresponding elementary matrix and inverse are

Given that these two matrices are each otherβs multiplicative inverse, we can insert them between the left and middle matrices in the previous step:

By multiplying together the first pair of matrices and then separately multiplying together the second set of matrices, we obtain

The leftmost matrix is in lower-triangular form, as required, but the middle matrix is not yet in upper-triangular form. Thanks to the previous preparatory step, we can eliminate the pivot in the third row using the row operation , for which the corresponding elementary matrices are

Following the established method, we insert this pairing between the left and middle matrices of the previous step, giving

By taking the product between the first pair of matrices and then taking the product between the second pair of matrices, we obtain the desired form:

The leftmost matrix is in lower-triangular form and the middle matrix is in upper-triangular form. We have therefore decomposed the row-equivalent matrix by the expression , where

Since we originally defined the row-equivalent matrix as a permutation matrix multiplied by the original matrix, we can write the full expression as $PA = LU$. This is known as the PLU decomposition of $A$.

Suppose we had obtained the general expression $PA = LU$, where $P$ was the product of elementary matrices of the first type. This means that $P$ is a product of permutation matrices and is therefore itself a permutation matrix. Each individual row-swap matrix is its own multiplicative inverse, and any permutation matrix $P$ satisfies $P^{-1} = P^{T}$. Therefore, the expression can just as easily be written $A = P^{T}LU$ if this is considered preferable.
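These facts about permutation matrices can be verified directly. The sketch below (pure Python; the helper names are our own) checks that a single row-swap matrix is its own inverse, while a product of two different swaps is inverted by its transpose rather than by itself.

```python
# Each row-swap elementary matrix is its own inverse, but a product of swaps
# is a general permutation matrix, whose inverse is its transpose.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(M):
    return [list(col) for col in zip(*M)]

def swap_matrix(n, r, s):
    E = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    E[r], E[s] = E[s], E[r]
    return E

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
E1 = swap_matrix(3, 0, 1)
E2 = swap_matrix(3, 1, 2)

assert matmul(E1, E1) == I           # one swap undoes itself
P = matmul(E1, E2)                   # a product of two swaps (a 3-cycle)
assert matmul(P, P) != I             # ... which is NOT self-inverse
assert matmul(P, transpose(P)) == I  # ... but P times P^T is the identity
```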

### Example 1: PLU Decomposition of a 3 Γ 3 Matrix

Find the PLU factoring of the matrix

### Answer

We begin by highlighting the pivots in each row. As we can see, there are two pivot entries below the pivot in the first row, both of which will need to be removed in order to achieve upper-triangular form:

The pivot in the second row can be removed using the row operation , for which the corresponding elementary matrices are

Now the trivial equation can be multiplied on the left-hand side by this elementary matrix and its inverse, the product of which must be the identity matrix (which is why this does not affect the truth of the statement ). The result is and taking the product of the two leftmost matrices gives the new matrix equation

The pivot entry in the second row now looks as though it may cause some complications, given that we would ideally want the pivot to appear in the second entry to help achieve upper-triangular form. Nonetheless, we do not have to focus on this yet, as we still need to remove the pivot entry in the third row. The obvious row operation is , which has the elementary matrices

Taking the previous step and injecting the elementary matrix and inverse in between the first and second matrices gives

Combining the first pair of matrices and the second pair of matrices gives the simplified expression

The leftmost matrix is in lower-triangular form, as it should be, although the middle matrix is not in upper-triangular form. Indeed, we see now that it will be impossible to achieve upper-triangular form unless we swap the second and third rows in the middle matrix in the previous step. We can naively use the row operation and the elementary matrices

Injecting into the previous step in the normal way gives the matrix equation which can be simplified by taking the product of the first pair of matrices and then separately doing the same with the second pair of matrices. The outcome is

The middle matrix is in upper-triangular form, as we were hoping to achieve. But something has gone terribly wrong, in that the leftmost matrix is now not in lower-triangular form. This is because we multiplied by the row-swap elementary matrix. However, we can actually take this equation and multiply both sides, on the left, by the permutation matrix . This gives

We can take the product of the first and second matrices on the left-hand side, which will thankfully return a leftmost matrix with lower-triangular form. We also take the product of the two matrices on the right-hand side of the equation, giving

We have therefore achieved the form , where

The magic of this trick is that we can temporarily break the “lower-triangular-ness” of the leftmost matrix in order to move closer to an upper-triangular form in the middle matrix. The leftmost matrix can then be moved back into lower-triangular form by applying the permutation matrix on the left-hand side. Without this step, we would not be able to obtain the two triangular forms that the decomposition requires.

In the following example, we will demonstrate this technique without applying it to a square matrix as we did in the previous two examples. It can be a matter of tedious debate among mathematicians as to whether or not nonsquare matrices can be considered as either lower or upper triangular. For our well-being, we will not engage in such debates and will instead assume that it is acceptable for a nonsquare matrix to be either lower or upper triangular. As we will see, this changes our method very little, if at all, and it is still possible to find the PLU decomposition of a matrix whether or not it is square.

### Example 2: PLU Decomposition of a 3 Γ 5 Matrix

Find the PLU factoring of the matrix

### Answer

We begin by highlighting the pivot entries of the matrix. We need to remove the pivots in the second and third rows with reference to the pivot in the first row:

An obvious choice for the first row operation is , which will eliminate the pivot in the second row. The elementary matrices for this row operation are

Given that these two matrices are inverses of each other, we can multiply the trivial equation on the left-hand side by this pair of matrices, which overall will change nothing but will begin moving towards the LU form that we require. The full equation is and we can take the product of the second and third matrices to give the simplified equation

The left-hand matrix is in lower-triangular form but the middle matrix is not yet in upper-triangular form. Given that the pivot in the second row is in the final entry, we are likely to require a permutation matrix at some point in order to achieve upper-triangular form for the middle matrix. For the time being, we will continue by eliminating the pivot in the third row using the elementary row operation , for which the elementary matrices are

By inserting this into the previous step, we have

By taking the product of the first pair of matrices and then the product of the second pair of matrices, we find the much simplified form

The leftmost matrix is in lower-triangular form but we are yet to manipulate the middle matrix into upper-triangular form. We will be unable to achieve this without using the row-swap operation , which has the elementary matrices

By inserting these matrices in the normal way, we have the expanded and yet equivalent equation

We again take the product of the first pair of matrices and then the product of the second pair of matrices, giving the equation

The middle matrix is now in upper-triangular form but the left-hand matrix is no longer in lower-triangular form, which is a problem that we must fix. Fortunately, the fix is very easy and we just apply the permutation matrix to both sides of the previous equation. This gives

The “lower-triangular-ness” can then be restored by taking the product of the first and second matrices, giving

We have therefore obtained the PLU decomposition for the matrix , with the full specification being encapsulated by the equation , where

Calculating the PLU decomposition of a matrix is a little different from finding the LU decomposition. The key difference between the two is that the former is always possible whereas the latter is not. Once we understand how to proceed with calculating the LU decomposition of a matrix (if possible), there is only a touch of extra effort required to understand how to find the PLU decomposition. The situation becomes only slightly more complicated if more than one permutation matrix is required, which is a situation that we will explore later on in this explainer. For the moment, we will give one further example to practice this method.
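The whole procedure can also be automated. The function below is a pure-Python sketch of our own (not code from this explainer) that performs the elimination with row swaps on a square matrix, returning $P$, $L$, and $U$ with $PA = LU$; following common numerical practice, it swaps in the remaining row whose pivot-column entry is largest in magnitude.

```python
# PLU decomposition of a square matrix: returns (P, L, U) with P @ A == L @ U.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def plu(A):
    n = len(A)
    U = [[float(x) for x in row] for row in A]
    L = [[0.0] * n for _ in range(n)]
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n):
        # Choose the remaining row with the largest entry in column k.
        pivot = max(range(k, n), key=lambda r: abs(U[r][k]))
        if U[pivot][k] == 0:
            continue  # this column is already eliminated
        if pivot != k:  # row swap: update U, P, and the recorded multipliers in L
            U[k], U[pivot] = U[pivot], U[k]
            P[k], P[pivot] = P[pivot], P[k]
            L[k], L[pivot] = L[pivot], L[k]
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    for i in range(n):
        L[i][i] = 1.0
    return P, L, U

# A hypothetical matrix whose top-left entry is zero, forcing at least one swap:
A = [[0, 2, 3], [1, 4, 5], [6, 7, 8]]
P, L, U = plu(A)
PA, LU = matmul(P, A), matmul(L, U)
assert all(abs(PA[i][j] - LU[i][j]) < 1e-9 for i in range(3) for j in range(3))
```

Tracking the swaps inside $P$, $L$, and $U$ as the elimination proceeds is exactly the bookkeeping that the elementary-matrix manipulations in this explainer perform by hand.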

### Example 3: PLU Decomposition of a 4 Γ 3 Matrix

Find the PLU factoring of the matrix

### Answer

Our approach begins by highlighting the pivot entries, all of which are in the first column of the matrix:

To manipulate this matrix into upper-triangular form, we would first need to remove all of the pivot entries that are below the pivot entry in the first row. There are three of these entries to remove and we initially target the pivot in the second row. This can be turned into a zero entry with the row operation , for which the elementary matrices are

Although it is not particularly inspiring to state, it is nevertheless true that . Given that the two elementary matrices are inverses of each other, we could equally write since the product of the two leftmost matrices is the identity matrix . Written out in full, this is

Now, we take the product of the second and third matrices to find

The leftmost matrix is in lower-triangular form, which we will need to maintain to find either an LU or a PLU decomposition. This must be done while manipulating the middle matrix further toward upper-triangular form. We now choose to remove the pivot entry in the third row using the row operation . The elementary matrices that we will need are

By slotting this pair of matrices into the previous step, we obtain the matrix equation

By taking the product of the first pair of matrices and then taking the product of the second pair of matrices, we obtain the simplified expression

The leftmost matrix remains in lower-triangular form and the middle matrix is getting closer to upper-triangular form. We can get closer to this with the row operation , which has the corresponding elementary matrices

Placing these matrices into the previous step gives

Taking the product of the first pair of matrices and then the product of the second pair of matrices gives

The leftmost matrix is still in lower-triangular form and the middle matrix is very close to upper-triangular form. That being said, there is no possible way of getting the middle matrix into upper-triangular form without having to use the first type of elementary row operation, in this particular case the row-swap operation . The elementary matrices that we need are

Although it will temporarily make the leftmost matrix non-lower triangular, we insert this pair of matrices into the previous step in the normal way:

Taking the two matrix products gives

The middle matrix is only one step away from being put into upper-triangular form. The difficulty is that now the leftmost matrix is not in lower-triangular form, which we will need to remedy before we continue with our method. This can be done by taking the equation in the step above and multiplying both sides, on the left, by the permutation matrix :

If we take the product of the first pair of matrices, then we obtain an equation with a left-hand side that is in exactly the form we require:

Now that the leftmost matrix is once again in lower-triangular form, we can proceed with our method in the normal way. There is one more pivot to eliminate in the fourth row before the middle matrix is in upper-triangular form. The final row operation is and the elementary matrices for this are

These elementary matrices are inserted into the previous step in the normal way:

Taking the product between the two pairs of matrices now reveals the final solution to be

The leftmost matrix is in lower-triangular form and the second matrix is in upper-triangular form. Therefore, we have found the matrices such that , hence achieving a PLU decomposition for the original matrix .

We have now seen a range of examples that, while not being without challenge, have only required us to utilize one permutation matrix to achieve a PLU decomposition. Conceivably, we may require more than one permutation matrix to achieve this form, so we must ask how to manage that situation. In fact, the remedy is hardly more complicated than in any of the previous examples. The simplicity of our working will hinge on the fact that the product of two permutation matrices is another permutation matrix, which means that any two such matrices $P_1$ and $P_2$ can be lumped together as a single product $P$, where $P$ is also a permutation matrix.
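This closure fact is easy to check in code. The sketch below (pure Python, with helper names of our own) multiplies two different row-swap matrices and confirms that the product, while no longer a single swap, is still a permutation matrix: every row and every column contains exactly one 1.

```python
# The product of two permutation matrices is again a permutation matrix.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def swap_matrix(n, r, s):
    E = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    E[r], E[s] = E[s], E[r]
    return E

def is_permutation(M):
    """True when every row and every column holds exactly one 1 and zeros elsewhere."""
    n = len(M)
    target = [0] * (n - 1) + [1]
    return (all(sorted(row) == target for row in M)
            and all(sorted(col) == target for col in zip(*M)))

P1 = swap_matrix(4, 0, 2)
P2 = swap_matrix(4, 1, 3)
P = matmul(P1, P2)
assert is_permutation(P)        # still a permutation matrix
assert P != P1 and P != P2      # ... though not an elementary one
```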

We will demonstrate this with the example matrix

Although it is not necessary, it is usually more pragmatic to have a 1 in the top left entry if possible. We can achieve this with the row-swap operation , which has the elementary matrices

Although it will be slightly gratuitous, we will write the matrix equation in full, with the above pair of matrices multiplied as a product onto the left-hand side:

By taking the product of the second and third matrices, we obtain

The problem with this expression is that the leftmost matrix is not in lower-triangular form, which we must remedy before we continue. Fortunately, the solution is very simple and we only have to multiply both sides of the equation on the left by the matrix . Since this matrix is equal to its own inverse, we obtain

Taking the product of the first and second matrices will return the identity matrix , given that is equal to its own inverse. This gives the simplified equation

Now that the pivot entry in the top left is equal to 1, we can use this to eliminate the pivots in the second and third rows. Focusing on the second row, we can use the row operation and the corresponding elementary matrices

Inserting these into the previous step gives the equation

By taking the product of the second and third matrices, we find that

This looks much more promising, because the leftmost matrix is now in lower-triangular form, which we will certainly try to preserve. We need to eliminate the remaining pivot in the third row of the second matrix, for which we employ the row operation . The elementary matrices are and these are placed into the previous step in the usual way:

We take the product of the first pair of matrices and the second pair of matrices in the normal way:

The leftmost matrix is in lower-triangular form, as required, although the second matrix is not yet in the upper-triangular form that we need. In fact, the only way to achieve this is with recourse to another row operation of the first type, in this case the row-swap operation . The permutation matrices for this step are

Completing the usual step gives the equation

We then take the product of the first pair of matrices and the second pair of matrices, giving

Although the second matrix is now in upper-triangular form, we have unfortunately broken the lower-triangular form that was previously held by the leftmost matrix. We can fix this by multiplying the equation above on both sides, on the left, by the permutation matrix . This gives

We have nearly completed our working. All that we need to do now is to take the product of the first two matrices on the left-hand side of the equation as well as the product of the first two matrices on the right-hand side of the equation. This gives

The glory of our method is shown in full. The leftmost matrix is in lower-triangular form and the second matrix is in upper-triangular form. If we define then we have and this means that we have found the correct PLU decomposition.

For the final example of this explainer, we will give another example where more than one permutation matrix is needed in order to complete the PLU decomposition. Ordinarily, when using row operations to complete any process (typically Gauss–Jordan elimination), it would be expected that several row-swap operations may be needed to shift the pivot variables into favorable positions. This technique is normally known as “partial pivoting,” with full “pivoting” referring to column operations as well as row operations.
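In exact arithmetic any nonzero pivot will do, but in floating-point computation partial pivoting conventionally swaps in the remaining row whose pivot-column entry has the largest magnitude, which helps control rounding error. A minimal sketch of that selection rule (pure Python; the names and the example matrix are our own):

```python
# Partial pivoting: pick the row at or below position k whose entry in
# column k has the largest absolute value, then swap it into position k.

def choose_pivot_row(U, k):
    return max(range(k, len(U)), key=lambda r: abs(U[r][k]))

U = [[4.0, 1.0, 2.0],
     [1e-12, 3.0, 5.0],   # a tiny, nearly zero would-be pivot
     [-9.0, 2.0, 7.0]]
k = 0
row = choose_pivot_row(U, k)  # compares |4.0|, |1e-12|, and |-9.0|
print(row)  # 2
U[k], U[row] = U[row], U[k]   # swap row 2 into the pivot position
```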

### Example 4: PLU Decomposition of a 4 Γ 5 Matrix

Which of the following is a correct PLU factorization of the matrix

### Answer

We begin by highlighting the pivot entries of the matrix

The two pivot entries in the first and second rows are not in an ideal position, so it is likely that we will have to use row-swap operations to change this, which will mean that some permutation matrices will be required. We can use the row-swap operation to take the pivot entry in the third row and place this in the top left corner, which is normally our preferred step if possible. The corresponding matrices are

We take the “equation” and multiply the left-hand matrix by the product of these two matrices (which is just the identity matrix ), as shown:

By taking the product of the second and third matrices on the left-hand side, we will obtain a shortened version of this equation, giving

Although the middle matrix is now closer to upper-triangular form than it originally was, the left-hand matrix is not in lower-triangular form. We can correct this by multiplying both sides of the equation, on the left, by the permutation matrix , giving

Since the permutation matrix is equal to its own inverse, the product of the two leftmost matrices is just the identity matrix , which we do not need to write down. More simply, the equation above can be written as

The pivot entry in the second row is still likely to cause us a problem, but we will return to this later. For the moment, we will remove the pivot entry in the fourth row, as this entry precludes the achievement of upper-triangular form. We use the row operation and hence the elementary matrices

Taking the previous step and multiplying the left-hand side, on the left, with these two matrices gives the equivalent equation

The second and third matrices can be combined using matrix multiplication to give the simplified equation

We are now in a better position as the leftmost matrix is in lower-triangular form and the second matrix is moving closer toward upper-triangular form. However, there is still an issue with the pivot entry in the second row, which is to the right of the pivots in the third and fourth rows. To correct this, we now employ the row-swap operation which has the permutation matrices

By injecting these matrices into the previous step, we have the rather garish equation

By taking the product of the second and third matrices, we obtain the slightly neater form

The advantage of this form is that the middle matrix is nearer to being in upper-triangular form. The disadvantage is that the leftmost matrix is no longer in lower-triangular form. The remedy for this is to multiply both sides of the equation, on the left, by the permutation matrix :

By taking the product of the first two matrices on the left-hand side and separately taking the product of the first two matrices on the right-hand side, we obtain the vastly preferable equation

The expression above looks very promising, since the leftmost matrix is a lower-triangular matrix and the second matrix is nearly in upper-triangular form. Additionally, the first matrix on the right-hand side of the equation is still a permutation matrix (even though it is no longer an elementary matrix). We now aim to quickly complete the process by moving the remaining pivot entries into the appropriate position. The first row operation is and the elementary matrices are

These two matrices are then inserted into the previous step in the normal way:

Completing the matrix multiplication for the first pair of matrices and then for the second pair of matrices gives the simplified form

Note that the leftmost matrix is in lower-triangular form and the second matrix is one step away from being in upper-triangular form. The last step is to use the row operation and the elementary matrices

Then, the previous step can be written as

The final step needed is to bring together the two pairs of matrices on the left side of the above equation, giving the final form

We have thus found the three matrices such that , which means that we have found the PLU decomposition of the matrix. Therefore, the correct option is A.

Just as with LU decomposition, finding the PLU decomposition of a matrix is ultimately a matter of understanding the elementary row operations and their counterpart elementary matrices. Given that each elementary matrix is very similar to the identity matrix of appropriate order, each is easy to combine with another matrix by matrix multiplication, with the effect being identical to performing the associated row operation. This means that, even for matrices of large order, finding the LU or PLU decomposition is essentially just a matter of determining a useful series of elementary row operations, with little extra effort required to turn these into the corresponding elementary matrices. Once one of the two types of decomposition is found, the original matrix is expressed in terms of matrices that either are in triangular form (and hence have much more useful algebraic properties) or are permutation matrices (which are very well understood and have convenient properties).
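The equivalence between an elementary matrix and its row operation can be demonstrated directly. The sketch below (pure Python; the helper name and the 3 × 3 matrix are hypothetical) builds the elementary matrix of the third type for a hypothetical operation and checks that left-multiplying by it matches performing the row operation by hand.

```python
# Applying an elementary matrix on the left has the same effect as performing
# the associated row operation directly on the matrix.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def add_multiple_matrix(n, i, j, c):
    """Elementary matrix (third type) for the operation r_i -> r_i + c * r_j (0-indexed)."""
    E = [[1 if a == b else 0 for b in range(n)] for a in range(n)]
    E[i][j] = c
    return E

A = [[1, 4, 5], [2, 0, 3], [6, 7, 8]]  # a hypothetical 3x3 matrix

# Perform "second row minus 2 times first row" directly (0-indexed: i=1, j=0) ...
direct = [row[:] for row in A]
direct[1] = [direct[1][j] - 2 * direct[0][j] for j in range(3)]

# ... and via the corresponding elementary matrix. The results agree exactly.
E = add_multiple_matrix(3, 1, 0, -2)
assert matmul(E, A) == direct
```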

### Key Points

- The LU decomposition of a matrix uses elementary row operations of the second and third types, but it is not always possible to find. If at any point a row operation of the first type is needed, then the PLU decomposition must be calculated instead.
- It is always possible to write a matrix $A$ in terms of a lower-triangular matrix $L$, an upper-triangular matrix $U$, and a permutation matrix $P$, such that $PA = LU$.
- In the PLU decomposition, the permutation matrix $P$ can be expressed as a product of all of the elementary matrices of the first type, taken in order.
- For any permutation matrix $P$, the inverse is $P^{-1} = P^{T}$. One consequence of this is that the PLU decomposition $PA = LU$ can alternatively be expressed as $A = P^{T}LU$, which is often considered favorable.