
Lesson Explainer: Elementary Matrices | Mathematics


In this explainer, we will learn how to identify elementary matrices and their relation to row operations and how to find the inverse of an elementary matrix.

Most of the time, when working with systems of linear equations, our first approach is to create the corresponding augmented coefficient matrix and then manipulate it into reduced echelon form using row operations, in a process most commonly referred to as “Gauss–Jordan elimination.” This approach can be very efficient and is normally the default method programmed into any computer algebra package for solving a system of linear equations. One matter that is often neglected when talking about elementary row operations, or Gauss–Jordan elimination in general, is the fact that every elementary row operation can be encoded by a very simple matrix that differs only slightly from the identity matrix of the same order. Although such matrices might seem unnecessary, being able to operate in this way is actually vitally important when completing algorithms such as the LU or PLU decomposition of a matrix. For these methods, we must also understand the multiplicative inverse of each elementary matrix, as well as take account of the noncommutativity of matrix multiplication by fixing the convention that elementary matrices only ever act on the left-hand side.

By phrasing each elementary row operation in terms of an equivalent elementary matrix, we are able to use the algebraic properties of matrices, including the elegant result that matrix multiplication is associative. If nothing else, phrasing elementary row operations as elementary matrices affords us the opportunity to practice row operations in two ways: by completing Gauss–Jordan elimination and by taking products of elementary matrices. Although the first situation is undeniably more common, there is a substantial minority of cases where the latter approach is preferable. Before we begin demonstrating the versatility of this approach, we will remind ourselves of the elementary row operations.

Definition: Elementary Row Operations

Consider a matrix $A$ of order $m \times n$ with rows labeled $r_1, r_2, \ldots, r_m$. Then, the three elementary row operations that we can perform are as follows:

  • Switching of row $i$ with row $j$, denoted $r_i \leftrightarrow r_j$;
  • Scaling of row $i$ by a nonzero constant $c$, denoted $r_i \rightarrow c r_i$;
  • Adding a scaled version of row $j$ to row $i$, denoted $r_i \rightarrow r_i + c r_j$.

If an elementary row operation is used to transform the matrix $A$ into a new matrix $\tilde{A}$, then we say that these two matrices are “row equivalent.”
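
These three operations are simple enough to experiment with directly. The following is a minimal sketch in Python with NumPy (the function names are our own, and rows are indexed from 0 rather than the 1-based labels used in the text):

```python
import numpy as np

# A sketch of the three elementary row operations acting on a NumPy array.
# The function names are our own; rows are indexed from 0 here, unlike the
# 1-based row labels used in the text. Use float arrays so that scaling by
# fractions behaves as expected.

def swap_rows(A, i, j):
    """First type: r_i <-> r_j."""
    A = A.copy()
    A[[i, j]] = A[[j, i]]
    return A

def scale_row(A, i, c):
    """Second type: r_i -> c * r_i, with c nonzero."""
    A = A.copy()
    A[i] = c * A[i]
    return A

def add_scaled_row(A, i, j, c):
    """Third type: r_i -> r_i + c * r_j."""
    A = A.copy()
    A[i] = A[i] + c * A[j]
    return A
```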

We will not recap the effects of the elementary row operations at this stage, as we will do so later in tandem with their corresponding elementary matrices. At this stage, we might begin to question how the elementary matrices could possibly emulate the effects of the elementary row operations without being absurdly complicated or intractable. In fact, the opposite is true: the elementary matrices are extremely simple, differing from the identity matrix in at most four entries.

Definition: The First Type of Elementary Row Operation and the Corresponding Elementary Matrix

Consider a matrix $A$ and the first type of elementary row operation $r_i \leftrightarrow r_j$, giving the row-equivalent matrix $\tilde{A}$. Then, we can define the corresponding “elementary” matrix $P_{ij}$, which is essentially the identity matrix but with the $i$th and $j$th rows having been swapped. For example, in the $4 \times 4$ case with $i = 2$ and $j = 3$,

$$P_{23} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$

Then, the row-equivalent matrix $\tilde{A}$ can be written as the matrix multiplication $\tilde{A} = P_{ij} A$.

This matrix is actually a particular type of permutation matrix, and the effect of this type of matrix is easily observed. Suppose that we had the matrix

$$A = \begin{pmatrix} 1 & 2 & 1 \\ 0 & 2 & 3 \\ 2 & 0 & 1 \end{pmatrix}$$

and, for some reason, we felt that it would be advantageous to perform the operation $r_2 \leftrightarrow r_3$ in order to find the row-equivalent matrix $\tilde{A}$. This row operation involves swapping the second and third rows, which means that the corresponding elementary matrix will be the identity matrix with the second and third rows swapped:

$$P_{23} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}.$$

As a matter of convention, we multiply the elementary matrix on the left-hand side of $A$. Setting this convention now will be important when we look at the third type of elementary row operation later in this explainer. Given that matrix multiplication is generally noncommutative, it will usually be the case that $AB \neq BA$ for two matrices $A$ and $B$ with compatible orders. Therefore, we fix this convention at this stage and reassert that any elementary matrix will always multiply the matrix of interest on the left-hand side. To obtain the row-equivalent matrix $\tilde{A}$, we now perform the matrix multiplication

$$\tilde{A} = P_{23} A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 2 & 1 \\ 0 & 2 & 3 \\ 2 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 2 & 1 \\ 2 & 0 & 1 \\ 0 & 2 & 3 \end{pmatrix}.$$

As we can see, the result is exactly what we expected: the second and third rows of the matrix have been swapped.
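
To see this numerically, here is a short sketch that builds $P_{ij}$ by swapping two rows of the identity matrix and reproduces the calculation above (the helper name `P` is our own):

```python
import numpy as np

# Build P_ij by swapping rows i and j of the identity matrix, then reproduce
# the worked 3x3 example above. The helper name P is our own; it takes the
# 1-based row labels used in the text.
def P(n, i, j):
    M = np.eye(n)
    M[[i - 1, j - 1]] = M[[j - 1, i - 1]]
    return M

A = np.array([[1, 2, 1],
              [0, 2, 3],
              [2, 0, 1]], dtype=float)

A_tilde = P(3, 2, 3) @ A  # apply r2 <-> r3 from the left
print(A_tilde)            # rows 2 and 3 of A have been swapped
```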

The example above treated a $3 \times 3$ matrix. Recalling that the matrix product $AB$ is defined so long as $A$ has order $m \times n$ and $B$ has order $n \times p$, it is clear that we did not need to work with a square matrix in the previous example. In some ways, this is obvious: row operations are used as part of Gauss–Jordan elimination, and there is no reason why this method should apply only to systems of linear equations that produce a square coefficient matrix. In summary, there is no reason at all why the elementary matrices cannot be applied to nonsquare matrices, as we will see in the following example.

Example 1: Using the First Type of Elementary Matrix

Consider the matrix $$A = \begin{pmatrix} 2 & 0 & 5 & 1 \\ 3 & 5 & 0 & 1 \\ 2 & 1 & 1 & 0 \end{pmatrix}.$$

  1. Write the elementary matrix corresponding to the row operation $r_1 \leftrightarrow r_3$.
  2. Derive the subsequent row-equivalent matrix $\tilde{A}$.

Answer

The matrices $A$ and $\tilde{A}$ are row equivalent and must therefore have the same order, $3 \times 4$, meaning that $P_{13}$ must have order $3 \times 3$ for the matrix multiplication $\tilde{A} = P_{13} A$ to be well defined. The matrix $P_{13}$ must switch the first and third rows of $A$, which means that it must be equal to the identity matrix with the first and third rows swapped. In other words,

$$P_{13} = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}.$$

We can check that this elementary matrix does indeed produce the desired effect by calculating

$$\tilde{A} = P_{13} A = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 2 & 0 & 5 & 1 \\ 3 & 5 & 0 & 1 \\ 2 & 1 & 1 & 0 \end{pmatrix} = \begin{pmatrix} 2 & 1 & 1 & 0 \\ 3 & 5 & 0 & 1 \\ 2 & 0 & 5 & 1 \end{pmatrix}.$$

As we can see, the matrix $\tilde{A}$ is the same as the matrix $A$, only with the first and third rows switched.

At this stage, having defined the elementary matrices $P_{ij}$, it would normally occur to a mathematician to ask how we might calculate the inverse matrix $P_{ij}^{-1}$. There is a risk that this could quickly become an overly computational exercise, wherein we attempt to derive $P_{ij}^{-1}$ using one of the usual methods. The necessary rationale is actually much less painful than this. Suppose that we performed the row-switch operation $r_i \leftrightarrow r_j$ and wanted to undo this change; the obvious thing to do would be to perform the same row-switch operation again, returning the matrix to its original state. In short, this means that the inverse of $P_{ij}$ is the matrix itself. We formalize this understanding in the following theorem.

Theorem: Inverse of the Elementary Matrix $P_{ij}$

The inverse of the matrix $P_{ij}$ is given by the formula $P_{ij}^{-1} = P_{ij}$.

Few matrices are equal to their own inverse, such matrices being referred to as “involutory” (elusive sightings in the wilderness of linear algebra). For the matrices $P_{ij}$, the result above is nearly trivial once we consider the effect of the corresponding elementary row operations. Purely for the sake of demonstration, we will give one example that requires the calculation and use of the inverse of the first type of elementary matrix.
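
As a quick numerical sanity check of the theorem, the following sketch (reusing the helper of our own naming from earlier) verifies that applying $P_{23}$ twice returns the identity matrix:

```python
import numpy as np

# Numerical sanity check of the theorem: P_ij is involutory, so applying it
# twice returns the identity matrix. The helper P is the same sketch as above.
def P(n, i, j):
    M = np.eye(n)
    M[[i - 1, j - 1]] = M[[j - 1, i - 1]]
    return M

P23 = P(3, 2, 3)
print(np.array_equal(P23 @ P23, np.eye(3)))  # True: swapping twice undoes the swap
```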

Example 2: Using the First Type of Elementary Matrix and the Corresponding Inverse Matrix

Consider the matrix $$A = \begin{pmatrix} 3 & 0 & 4 & 8 \\ 2 & 1 & 1 & 0 \\ 1 & 8 & 3 & 2 \\ 0 & 3 & 9 & 7 \end{pmatrix}.$$

  1. Write the elementary matrix corresponding to the row operation $r_2 \leftrightarrow r_4$.
  2. Derive the subsequent row-equivalent matrix $\tilde{A}$.
  3. Is it true that multiplying $\tilde{A}$ by the inverse elementary matrix on the left side will return the original matrix $A$?

Answer

The elementary row-swap operation $r_2 \leftrightarrow r_4$ means that we require the elementary matrix that swaps the second and fourth rows of any compatible matrix. The appropriate elementary matrix is the identity matrix with the second and fourth rows swapped:

$$P_{24} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}.$$

Multiplying the matrix $A$ on the left by this matrix, we obtain the row-equivalent matrix

$$\tilde{A} = P_{24} A = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 3 & 0 & 4 & 8 \\ 2 & 1 & 1 & 0 \\ 1 & 8 & 3 & 2 \\ 0 & 3 & 9 & 7 \end{pmatrix} = \begin{pmatrix} 3 & 0 & 4 & 8 \\ 0 & 3 & 9 & 7 \\ 1 & 8 & 3 & 2 \\ 2 & 1 & 1 & 0 \end{pmatrix}.$$

The effect has been exactly what we desired: the second and fourth rows of the matrix have been swapped.

We should find that multiplying $\tilde{A}$ on the left by the inverse matrix $P_{24}^{-1}$ returns the original matrix $A$. Since the first type of elementary matrix is equal to its own inverse, we have $P_{24}^{-1} = P_{24}$ and hence

$$P_{24}^{-1} \tilde{A} = P_{24} \tilde{A} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 3 & 0 & 4 & 8 \\ 0 & 3 & 9 & 7 \\ 1 & 8 & 3 & 2 \\ 2 & 1 & 1 & 0 \end{pmatrix} = \begin{pmatrix} 3 & 0 & 4 & 8 \\ 2 & 1 & 1 & 0 \\ 1 & 8 & 3 & 2 \\ 0 & 3 & 9 & 7 \end{pmatrix} = A.$$

So, yes, it is true that multiplying $\tilde{A}$ on the left side by the inverse elementary matrix will return the original matrix $A$.

This exceptionally helpful property of the first type of elementary matrix is useful when attempting to calculate the PLU decomposition of a matrix, as well as in many other scenarios. Although the inverses of the second and third types of elementary matrices are slightly more complicated, their similarity to the identity matrix means that their inverses are relatively simple (at least when compared to the average level of complexity involved in calculating the inverse of a square matrix).

We will now move on and consider the second type of elementary row operation: the row-scaling operation. Although this type of operation is not quite as simple as the row-swap operations encoded by the first type of elementary matrix, the elementary matrix for the second type of row operation has the extremely helpful feature of being a diagonal matrix, which is among the easiest types of matrices to work with in an algebraic sense. Although these matrices are not involutory like the first type of elementary matrix, their inverses are still diagonal matrices, which means that they are nearly trivial to calculate.

Definition: The Second Type of Elementary Row Operation and the Corresponding Elementary Matrix

Consider a matrix $A$ and the second type of elementary row operation $r_i \rightarrow c r_i$, where $c \neq 0$, giving the row-equivalent matrix $\tilde{A}$. Then, we can define the corresponding “elementary” matrix $D_i(c)$, which is essentially the identity matrix but with the entry in the $i$th row and $i$th column being equal to $c$ instead of 1. For example, in the $3 \times 3$ case with $i = 2$,

$$D_2(c) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & c & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

Then, the row-equivalent matrix $\tilde{A}$ can be written as the matrix multiplication $\tilde{A} = D_i(c) A$.

In some ways, the second type of elementary matrix is a little more complicated than the first type; in other ways, it is simpler. All we have done is take the $i$th row of the identity matrix and multiply every entry by the constant $c$. Given that the only nonzero entry in this row appears in the $i$th position and has a value of 1, only the value of this entry is altered. We will demonstrate this by example. Suppose that we take the matrix

$$A = \begin{pmatrix} 1 & 3 & 4 & 0 & 1 \\ 0 & 2 & 3 & 1 & 0 \\ 8 & 3 & 2 & 1 & 4 \end{pmatrix}$$

and want to perform the row operation $r_2 \rightarrow 3 r_2$ to achieve the row-equivalent matrix $\tilde{A}$. This row-scaling operation is of the second type of elementary row operation, requiring the elementary matrix $D_2(3)$, which is the same as the identity matrix only with the entry in the second row and column being equal to 3 instead of 1:

$$D_2(3) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

We then multiply the original matrix $A$ on the left side by the elementary matrix $D_2(3)$:

$$\tilde{A} = D_2(3) A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 3 & 4 & 0 & 1 \\ 0 & 2 & 3 & 1 & 0 \\ 8 & 3 & 2 & 1 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 3 & 4 & 0 & 1 \\ 0 & 6 & 9 & 3 & 0 \\ 8 & 3 & 2 & 1 & 4 \end{pmatrix}.$$

The row-equivalent matrix $\tilde{A}$ is the same as the original matrix $A$, only after the second row has been multiplied by 3, which is exactly the effect that we were aiming for.
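
The corresponding construction in code is a one-entry modification of the identity matrix. Here is a minimal NumPy sketch reproducing the calculation above (the helper name `D` is our own):

```python
import numpy as np

# Build D_i(c) by replacing the (i, i) entry of the identity matrix with c,
# then reproduce the 3x5 example above. The helper name D is our own; it
# takes the 1-based row label used in the text.
def D(n, i, c):
    M = np.eye(n)
    M[i - 1, i - 1] = c
    return M

A = np.array([[1, 3, 4, 0, 1],
              [0, 2, 3, 1, 0],
              [8, 3, 2, 1, 4]], dtype=float)

A_tilde = D(3, 2, 3) @ A  # apply r2 -> 3 r2 from the left
print(A_tilde[1])         # [0. 6. 9. 3. 0.]: three times the original second row
```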

Example 3: Using the Second Type of Elementary Matrix

Consider the matrix $$A = \begin{pmatrix} 2 & 0 & 8 & 1 \\ 1 & 1 & 5 & 1 \\ 2 & 4 & 0 & 8 \\ 0 & 3 & 1 & 0 \end{pmatrix}.$$

  1. Write the elementary matrix corresponding to the row operation $r_3 \rightarrow \frac{1}{2} r_3$.
  2. Derive the subsequent row-equivalent matrix $\tilde{A}$.

Answer

Every entry in the third row of $A$ is divisible by 2, meaning that the row operation $r_3 \rightarrow \frac{1}{2} r_3$ should produce only integer entries in the row-equivalent matrix $\tilde{A}$. To obtain the corresponding elementary matrix $D_3\left(\frac{1}{2}\right)$, we take the $4 \times 4$ identity matrix and change the entry in the third row and third column to the value $\frac{1}{2}$. This gives the matrix

$$D_3\left(\tfrac{1}{2}\right) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \tfrac{1}{2} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$

By multiplying the matrix $A$ on the left side by the elementary matrix $D_3\left(\frac{1}{2}\right)$, we find

$$\tilde{A} = D_3\left(\tfrac{1}{2}\right) A = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \tfrac{1}{2} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 2 & 0 & 8 & 1 \\ 1 & 1 & 5 & 1 \\ 2 & 4 & 0 & 8 \\ 0 & 3 & 1 & 0 \end{pmatrix} = \begin{pmatrix} 2 & 0 & 8 & 1 \\ 1 & 1 & 5 & 1 \\ 1 & 2 & 0 & 4 \\ 0 & 3 & 1 & 0 \end{pmatrix}.$$

The resulting matrix $\tilde{A}$ is the same as the original matrix $A$ but with every entry in the third row divided by 2, which exactly mimics the row operation $r_3 \rightarrow \frac{1}{2} r_3$.

As with the first type of elementary matrix, we will be interested in finding the inverse of the second type of elementary matrix. The rationale for the structure of the inverse matrix is as follows: if we use the second type of row operation to scale every entry in row $i$ by a nonzero constant $c$, then this can be undone by once again scaling row $i$, this time by a factor of $\frac{1}{c}$, returning every entry in this row to its original value. Therefore, if the original elementary matrix is $D_i(c)$, then the inverse matrix is $D_i\left(\frac{1}{c}\right)$. This is summarized in the following theorem.

Theorem: Inverse of the Elementary Matrix $D_i(c)$

The inverse of the matrix $D_i(c)$ is given by the formula $D_i(c)^{-1} = D_i\left(\frac{1}{c}\right)$.

Although this result is not quite as convenient as the analogous result for the first type of elementary matrix (which was equal to its own inverse), it is still very useful. Diagonal matrices are among the easiest types of matrices to work with, being commutative with each other and having simplified algebraic properties with regard to multiplication, exponentiation, and inversion. Calculating the inverse of a diagonal matrix is essentially a trivial task, involving none of the complexities that can arise when working with a nondiagonal square matrix, hence the brevity of the above theorem.
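
A quick numerical check of the theorem, using the same style of helper as before:

```python
import numpy as np

# Numerical check of the theorem: D_i(c) composed with D_i(1/c) gives the identity.
def D(n, i, c):
    M = np.eye(n)
    M[i - 1, i - 1] = c
    return M

c = 3.0
print(np.allclose(D(3, 2, c) @ D(3, 2, 1 / c), np.eye(3)))  # True
```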

Example 4: Using the Second Type of Elementary Matrix and the Corresponding Inverse Matrix

Consider the matrix $$A = \begin{pmatrix} 3 & 1 & 0 \\ 1 & 2 & 4 \end{pmatrix}.$$

  1. Write the elementary matrix corresponding to the row operation $r_2 \rightarrow 2 r_2$.
  2. Derive the subsequent row-equivalent matrix $\tilde{A}$.
  3. Is it true that multiplying $\tilde{A}$ by the inverse elementary matrix on the left side will return the original matrix $A$?

Answer

We would like to take the matrix above and implement the elementary row operation $r_2 \rightarrow 2 r_2$. This is a row operation of the second type and can be equivalently expressed by the elementary matrix

$$D_2(2) = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix},$$

which can be thought of as a copy of the $2 \times 2$ identity matrix with the entry in the second row and second column replaced by 2. Multiplying the original matrix $A$ on the left by this matrix, we find the row-equivalent matrix

$$\tilde{A} = D_2(2) A = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix} \begin{pmatrix} 3 & 1 & 0 \\ 1 & 2 & 4 \end{pmatrix} = \begin{pmatrix} 3 & 1 & 0 \\ 2 & 4 & 8 \end{pmatrix}.$$

The effect is precisely what we were expecting: the matrix $\tilde{A}$ is the same as the original matrix $A$ except that every entry in the second row has been multiplied by 2. To return to the original matrix $A$, we can take $\tilde{A}$ and use the inverse row operation $r_2 \rightarrow \frac{1}{2} r_2$. The corresponding elementary matrix is

$$D_2\left(\tfrac{1}{2}\right) = \begin{pmatrix} 1 & 0 \\ 0 & \tfrac{1}{2} \end{pmatrix}.$$

We can check that this works by multiplying $\tilde{A}$ on the left-hand side:

$$D_2\left(\tfrac{1}{2}\right) \tilde{A} = \begin{pmatrix} 1 & 0 \\ 0 & \tfrac{1}{2} \end{pmatrix} \begin{pmatrix} 3 & 1 & 0 \\ 2 & 4 & 8 \end{pmatrix} = \begin{pmatrix} 3 & 1 & 0 \\ 1 & 2 & 4 \end{pmatrix} = A.$$

So, yes, it is true that multiplying $\tilde{A}$ on the left side by the inverse elementary matrix $D_2\left(\frac{1}{2}\right)$ will return the original matrix $A$.

In practice, the third type of elementary row operation is the one that is used most frequently, and it is rightly seen as being more complicated than the other two types of row operation. The first type of row operation simply involves switching two rows, whereas the second type of row operation scales only one row at a time. In contrast, the third type of elementary row operation involves two different rows of the matrix under the combined operation of both scaling and addition, which frequently causes errors due to the more sophisticated nature of the operation. Perhaps surprisingly, the third type of elementary row operation has a corresponding elementary matrix that is still very similar to the identity matrix (much like the elementary matrices for the first and second types of row operations). The third type of elementary matrix will not be equal to its own inverse (as the first type is), and neither will it be diagonal (as the second type is). We must therefore be mindful that the third type of elementary matrix acts only on the left-hand side and does not have an inverse equal to itself.

Definition: The Third Type of Elementary Row Operation and the Corresponding Elementary Matrix

Consider a matrix $A$ and the third type of elementary row operation $r_i \rightarrow r_i + c r_j$, giving the row-equivalent matrix $\tilde{A}$. Then, we can define the corresponding “elementary” matrix $E_{ij}(c)$, which is essentially the identity matrix but with the entry in the $i$th row and $j$th column being equal to $c$ instead of 0. For example, in the $4 \times 4$ case with $i = 3$ and $j = 2$,

$$E_{32}(c) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & c & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$

Then, the row-equivalent matrix $\tilde{A}$ can be written as the matrix multiplication $\tilde{A} = E_{ij}(c) A$.

As ever, the quickest way to demonstrate the efficacy of this technique is with a suitable example. Suppose that we had the matrix

$$A = \begin{pmatrix} 1 & 0 & 4 & 1 \\ 2 & 2 & 2 & 0 \\ -1 & 5 & 1 & 2 \end{pmatrix}$$

and we wished to implement the row operation $r_3 \rightarrow r_3 - 2 r_1$ to obtain the row-equivalent matrix $\tilde{A}$. Then, using the definition above, the corresponding elementary matrix must be a copy of the identity matrix $I_3$, except that the entry in the third row and first column must be equal to $-2$. The correct elementary matrix is therefore

$$E_{31}(-2) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -2 & 0 & 1 \end{pmatrix}.$$

Multiplying $A$ on the left-hand side by this matrix should return the same matrix, only with twice the first row subtracted from the third row. Therefore, the first and second rows should be unchanged, as we find when we complete the multiplication:

$$\tilde{A} = E_{31}(-2) A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -2 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 4 & 1 \\ 2 & 2 & 2 & 0 \\ -1 & 5 & 1 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 4 & 1 \\ 2 & 2 & 2 & 0 \\ -3 & 5 & -7 & 0 \end{pmatrix}.$$
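
As with the other two types, this construction is a one-entry modification of the identity matrix and is easy to check numerically. Here is a minimal NumPy sketch of the example above (the helper name `E` is our own):

```python
import numpy as np

# Build E_ij(c) by placing c in row i, column j of the identity matrix, then
# reproduce the example above. The helper name E is our own; it takes the
# 1-based row and column labels used in the text.
def E(n, i, j, c):
    M = np.eye(n)
    M[i - 1, j - 1] = c
    return M

A = np.array([[ 1, 0, 4, 1],
              [ 2, 2, 2, 0],
              [-1, 5, 1, 2]], dtype=float)

A_tilde = E(3, 3, 1, -2) @ A  # apply r3 -> r3 - 2 r1 from the left
print(A_tilde[2])             # [-3.  5. -7.  0.], as in the worked example
```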

With the third and final type of elementary matrix having been classified and demonstrated, we will soon be in a position to begin combining the various types of elementary matrix together in sequence, much like we would do when applying a series of row operations in sequence (e.g., when completing Gauss–Jordan elimination or manipulating a square matrix into a specific form). Before doing so, we will briefly practice applying the third type of elementary matrix in isolation, and then we will define the corresponding inverse matrix.

Example 5: Using the Third Type of Elementary Matrix

Consider the matrix $$A = \begin{pmatrix} 1 & 3 & 0 \\ 3 & 2 & 9 \\ 4 & 1 & 3 \end{pmatrix}.$$

  1. Write the elementary matrix corresponding to the row operation $r_2 \rightarrow r_2 - 3 r_1$.
  2. Derive the subsequent row-equivalent matrix $\tilde{A}$.

Answer

The row operation $r_2 \rightarrow r_2 - 3 r_1$ is of the third type and involves taking every entry in the second row and subtracting three times the entry in the same column of the first row. The correct elementary matrix is therefore a copy of the identity matrix, only with the entry $-3$ appearing in the second row and first column:

$$E_{21}(-3) = \begin{pmatrix} 1 & 0 & 0 \\ -3 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

Multiplying this matrix by the original matrix $A$ gives

$$\tilde{A} = E_{21}(-3) A = \begin{pmatrix} 1 & 0 & 0 \\ -3 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 3 & 0 \\ 3 & 2 & 9 \\ 4 & 1 & 3 \end{pmatrix} = \begin{pmatrix} 1 & 3 & 0 \\ 0 & -7 & 9 \\ 4 & 1 & 3 \end{pmatrix}.$$

The row-equivalent matrix $\tilde{A}$ is exactly the same as if we had taken the original matrix $A$ and performed the row operation $r_2 \rightarrow r_2 - 3 r_1$.

We now turn our attention to the inverse of the third type of elementary matrix. If we take a general matrix $A$ and perform the row operation $r_i \rightarrow r_i + c r_j$, then the quickest way to undo this is to apply the inverse row operation $r_i \rightarrow r_i - c r_j$, which recovers the original matrix $A$. Therefore, the inverse of the elementary matrix $E_{ij}(c)$ is also of the third type: $E_{ij}(-c)$. This is summarized in the following theorem:

Theorem: Inverse of the Elementary Matrix $E_{ij}(c)$

The inverse of the matrix $E_{ij}(c)$ is given by the formula $E_{ij}(c)^{-1} = E_{ij}(-c)$.
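
Again, the theorem is easy to verify numerically with a short sketch:

```python
import numpy as np

# Numerical check of the theorem: E_ij(c) composed with E_ij(-c) gives the identity.
def E(n, i, j, c):
    M = np.eye(n)
    M[i - 1, j - 1] = c
    return M

print(np.allclose(E(3, 2, 1, -3) @ E(3, 2, 1, 3), np.eye(3)))  # True
```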

Example 6: Using the Third Type of Elementary Matrix and the Corresponding Inverse Matrix

Consider the matrix $$A = \begin{pmatrix} 4 & 1 & 4 & 3 \\ 0 & 4 & -2 & 2 \\ 1 & 0 & 9 & 8 \end{pmatrix}.$$

  1. Write the elementary matrix corresponding to the row operation $r_1 \rightarrow r_1 + \frac{1}{2} r_2$.
  2. Derive the subsequent row-equivalent matrix $\tilde{A}$.
  3. Is it true that multiplying $\tilde{A}$ by the inverse elementary matrix on the left side will return the original matrix $A$?

Answer

The row operation $r_1 \rightarrow r_1 + \frac{1}{2} r_2$ is of the third type. The effect is that we take every entry in the first row and add to it half the value of the entry in the same column of the second row. The corresponding elementary matrix is thus a copy of the identity matrix $I_3$, only with the value $\frac{1}{2}$ in the first row and second column:

$$E_{12}\left(\tfrac{1}{2}\right) = \begin{pmatrix} 1 & \tfrac{1}{2} & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

Multiplying the original matrix $A$ on the left by this matrix, we obtain the row-equivalent matrix

$$\tilde{A} = E_{12}\left(\tfrac{1}{2}\right) A = \begin{pmatrix} 1 & \tfrac{1}{2} & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 4 & 1 & 4 & 3 \\ 0 & 4 & -2 & 2 \\ 1 & 0 & 9 & 8 \end{pmatrix} = \begin{pmatrix} 4 & 3 & 3 & 4 \\ 0 & 4 & -2 & 2 \\ 1 & 0 & 9 & 8 \end{pmatrix},$$

which gives the desired result. The inverse row operation is $r_1 \rightarrow r_1 - \frac{1}{2} r_2$, and if this is applied to the row-equivalent matrix $\tilde{A}$, then we simply return the original matrix $A$. The elementary matrix for this row operation is

$$E_{12}\left(-\tfrac{1}{2}\right) = \begin{pmatrix} 1 & -\tfrac{1}{2} & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

Multiplying $\tilde{A}$ on the left-hand side by this matrix must return the original matrix $A$. We can check this as follows:

$$E_{12}\left(-\tfrac{1}{2}\right) \tilde{A} = \begin{pmatrix} 1 & -\tfrac{1}{2} & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 4 & 3 & 3 & 4 \\ 0 & 4 & -2 & 2 \\ 1 & 0 & 9 & 8 \end{pmatrix} = \begin{pmatrix} 4 & 1 & 4 & 3 \\ 0 & 4 & -2 & 2 \\ 1 & 0 & 9 & 8 \end{pmatrix} = A.$$

So, yes, it is true that multiplying $\tilde{A}$ on the left side by the inverse elementary matrix $E_{12}\left(-\frac{1}{2}\right)$ will return the original matrix $A$.

When working with elementary row operations, it will seldom be the case that we only need a single such operation to solve whatever problem we are working with. For processes such as Gauss–Jordan elimination or LU decomposition, there are usually many elementary row operations that we must apply in sequence before the overall outcome can be achieved. All of our work above would therefore be pretty pointless if the elementary matrices were not accommodating of this frequent requirement!

It is actually quite straightforward to verify that we can chain together elementary matrices in the same way that we can with elementary row operations. Given that we have checked that each of the three types of elementary matrix has the correct effect, we only need to check that it is possible to combine these matrices in sequence. At this point, it is helpful to remember that, when applied to a general matrix $A$, any elementary matrix must return a row-equivalent matrix $\tilde{A}$ of the same order as the original matrix $A$. Suppose that we have a matrix $A$ with order $m \times n$ and an elementary matrix $M$ with order $p \times q$ that is multiplied on the left. In order for the matrix multiplication $\tilde{A} = MA$ to be well defined, there must be the same number of columns in $M$ as there are rows in $A$. Therefore, we must have $q = m$, meaning that $M$ has order $p \times m$ and hence that the matrix $\tilde{A}$ has order $p \times n$. However, we require that $\tilde{A}$ have the same order as $A$, which means that $p = m$, and therefore the order of the elementary matrix $M$ is necessarily $m \times m$.

Since any elementary matrix $M$ must have order $m \times m$, we know that we can combine two of these matrices, with the output being another matrix of order $m \times m$. This logic extends to the product of many such elementary matrices, all having the same order, and we therefore conclude that we can chain together any number of these matrices without changing the order of the output row-equivalent matrix $\tilde{A}$. Note that this is not to say that we can combine these matrices in any order: matrix multiplication is not commutative, so, generally, $M_1 M_2 \neq M_2 M_1$. This makes perfect sense, as we cannot generally apply a series of row operations in an arbitrary order and expect to get the same result.
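
A small numerical illustration of this last point: a row swap and a row scaling, multiplied in the two possible orders, generally give different products (the helper names are our own sketches from earlier):

```python
import numpy as np

# A row swap and a row scaling multiplied in the two possible orders generally
# give different products, mirroring the fact that row operations cannot be
# reordered arbitrarily. The helpers P and D are the same sketches as above.
def P(n, i, j):
    M = np.eye(n)
    M[[i - 1, j - 1]] = M[[j - 1, i - 1]]
    return M

def D(n, i, c):
    M = np.eye(n)
    M[i - 1, i - 1] = c
    return M

M1, M2 = P(3, 1, 2), D(3, 1, 2.0)        # swap r1 <-> r2; scale r1 by 2
print(np.array_equal(M1 @ M2, M2 @ M1))  # False: the two orders disagree
```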

Having given many examples of how to use each of the three types of elementary matrix, it is easy enough to apply these in series, thus encoding a series of row operations applied to a matrix. That being said, it is best to standardize the way in which we approach this in order to avoid confusion. We will consider the matrix

$$A = \begin{pmatrix} 2 & 1 & 0 \\ 1 & 2 & -1 \\ 3 & -4 & 1 \end{pmatrix}.$$

Suppose that we then chose to apply the following row operations in order: $r_1 \leftrightarrow r_2$, $r_3 \rightarrow r_3 - 3 r_1$, and $r_3 \rightarrow \frac{1}{2} r_3$. We will not show the calculations here, but the outcome is the row-equivalent matrix

$$\begin{pmatrix} 1 & 2 & -1 \\ 2 & 1 & 0 \\ 0 & -5 & 2 \end{pmatrix}. \quad (1)$$

The first row operation $r_1 \leftrightarrow r_2$ is of the first type, and the elementary matrix is identical to the $3 \times 3$ identity matrix but with the first and second rows swapped:

$$P_{12} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

We can then multiply the matrix $A$ by this elementary matrix on the left-hand side, giving the row-equivalent matrix

$$\tilde{A}_1 = P_{12} A = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 2 & 1 & 0 \\ 1 & 2 & -1 \\ 3 & -4 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 2 & -1 \\ 2 & 1 & 0 \\ 3 & -4 & 1 \end{pmatrix}.$$

For the row operation $r_3 \rightarrow r_3 - 3 r_1$, we have the matrix

$$E_{31}(-3) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -3 & 0 & 1 \end{pmatrix}.$$

Rather than apply this to $A$, we instead apply it to $\tilde{A}_1$, calling the result $\tilde{A}_2$ to avoid confusion. We find

$$\tilde{A}_2 = E_{31}(-3) \tilde{A}_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -3 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 2 & -1 \\ 2 & 1 & 0 \\ 3 & -4 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 2 & -1 \\ 2 & 1 & 0 \\ 0 & -10 & 4 \end{pmatrix}.$$

The final row operation $r_3 \rightarrow \frac{1}{2} r_3$ has the elementary matrix

$$D_3\left(\tfrac{1}{2}\right) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \tfrac{1}{2} \end{pmatrix}.$$

We can derive the final matrix $\tilde{A}_3$ by multiplying $\tilde{A}_2$ by this matrix:

$$\tilde{A}_3 = D_3\left(\tfrac{1}{2}\right) \tilde{A}_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \tfrac{1}{2} \end{pmatrix} \begin{pmatrix} 1 & 2 & -1 \\ 2 & 1 & 0 \\ 0 & -10 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 2 & -1 \\ 2 & 1 & 0 \\ 0 & -5 & 2 \end{pmatrix}.$$

This returns the row-equivalent matrix that we gave in equation (1). In the above working, we chose to demonstrate each step, and we actually created more work for ourselves than we needed to. Instead of completing every single step, we can derive the answer in a way that is more succinct, without losing any information about which row operations we performed.

Recall that, in the example above, we took the original matrix $A$ and multiplied it by the elementary matrix $P_{12}$ to obtain $\tilde{A}_1 = P_{12} A$.

We then took the resulting matrix $\tilde{A}_1$ and multiplied it on the left-hand side by the elementary matrix $E_{31}(-3)$, using the equation immediately above to give $\tilde{A}_2 = E_{31}(-3) \tilde{A}_1 = E_{31}(-3) (P_{12} A)$.

The final elementary matrix $D_3\left(\frac{1}{2}\right)$ was multiplied by $\tilde{A}_2$ to give the matrix $\tilde{A}_3$. Using the equation above, we find $\tilde{A}_3 = D_3\left(\frac{1}{2}\right) \tilde{A}_2 = D_3\left(\frac{1}{2}\right) \left( E_{31}(-3) (P_{12} A) \right)$.

This expression for $\tilde{A}_3$ may look as though it has not provided any visual improvement, and that is because, at this stage, it has not. For the improvement to be realized, we recall that matrix multiplication is associative, meaning that we can alternatively write

$$\tilde{A}_3 = D_3\left(\tfrac{1}{2}\right) E_{31}(-3) P_{12} A. \quad (2)$$

As the equation above indicates, we can combine all of the elementary matrices into a single matrix before multiplying by the original matrix $A$. This can be completed by multiplying the matrices together, this time starting from the right-hand side (although this choice is arbitrary). We collect all of the elementary matrices together as a new matrix $M$, defined as

$$M = D_3\left(\tfrac{1}{2}\right) E_{31}(-3) P_{12} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \tfrac{1}{2} \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -3 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \tfrac{1}{2} \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & -3 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & -\tfrac{3}{2} & \tfrac{1}{2} \end{pmatrix}.$$

Using equation (2), we can then write the entire series of row operations as the single matrix above, in the highly simplified equation $\tilde{A}_3 = MA$, which can be checked manually if needed. Such concise algebraic expressions are sought after in many areas of linear algebra, as they can then be thought of in a more abstract sense, allowing the full power of linear algebra to be utilized.
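
The same combination can be checked numerically in a few lines; here is a minimal NumPy sketch of the product $M = D_3\left(\frac{1}{2}\right) E_{31}(-3) P_{12}$ and the check $\tilde{A}_3 = MA$:

```python
import numpy as np

# Form M = D3(1/2) E31(-3) P12 as a single matrix and check that M @ A
# reproduces the row-equivalent matrix in equation (1).
P12 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
E31 = np.array([[1, 0, 0], [0, 1, 0], [-3, 0, 1]], dtype=float)
D3 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0.5]])

M = D3 @ E31 @ P12  # associativity lets us form the product of the elementary matrices first

A = np.array([[2, 1, 0],
              [1, 2, -1],
              [3, -4, 1]], dtype=float)
print(M @ A)  # [[1 2 -1], [2 1 0], [0 -5 2]], matching equation (1)
```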

Example 7: Writing a Series of Elementary Row Operations as a Single Matrix

Consider the matrix

$$A = \begin{pmatrix} 1 & 2 & 2 \\ 2 & 3 & 0 \\ -4 & -1 & 3 \end{pmatrix}$$

and the following row operations, performed in the order given, to give the row-equivalent matrix $\tilde{A}$: $r_1 \rightarrow 2 r_1$, $r_3 \rightarrow r_3 - 3 r_1$, $r_3 \rightarrow r_3 + 5 r_2$, $r_1 \leftrightarrow r_2$, and $r_2 \rightarrow r_2 - r_1$.

  1. Write the single matrix $M$ corresponding to the combined row operations that transform $A$ into $\tilde{A}$.
  2. Use $M$ to calculate $\tilde{A}$.

Answer

The quickest way to achieve this is to consider all of the elementary row operations and write down the corresponding elementary matrices. For the first three row operations, $r_1 \rightarrow 2 r_1$, $r_3 \rightarrow r_3 - 3 r_1$, and $r_3 \rightarrow r_3 + 5 r_2$, the elementary matrices are

$$D_1(2) = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad E_{31}(-3) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -3 & 0 & 1 \end{pmatrix}, \quad E_{32}(5) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 5 & 1 \end{pmatrix}.$$

The final two row operations are $r_1 \leftrightarrow r_2$ and $r_2 \rightarrow r_2 - r_1$, which have the elementary matrices

$$P_{12} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad E_{21}(-1) = \begin{pmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

We can write the row-equivalent matrix as a product of these elementary matrices:

$$\tilde{A} = E_{21}(-1) P_{12} E_{32}(5) E_{31}(-3) D_1(2) A = \left( E_{21}(-1) P_{12} E_{32}(5) E_{31}(-3) D_1(2) \right) A = MA,$$

where we have used the associative property of matrix multiplication to group the product of the elementary matrices into the single matrix $M$, which we must now find. The most error-proof way of doing this is to take the product of the two rightmost terms and then move leftward to complete the calculation. To reduce the level of visual discomfort that would be incurred from writing out all of the elementary matrices at once, we do not write the elementary matrices out in full until they are required for the next stage of the calculation. We begin with the two rightmost elementary matrices (which correspond to the first two row operations performed, in order):

$$M = E_{21}(-1) P_{12} E_{32}(5) \left( E_{31}(-3) D_1(2) \right) = E_{21}(-1) P_{12} E_{32}(5) \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -3 & 0 & 1 \end{pmatrix} \begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = E_{21}(-1) P_{12} E_{32}(5) \begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ -6 & 0 & 1 \end{pmatrix}.$$

The rightmost matrix is no longer an elementary matrix of any type, but it still has a reasonably simple form. We continue the process, now absorbing the third row operation into the calculation:

$$M = E_{21}(-1) P_{12} E_{32}(5) \begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ -6 & 0 & 1 \end{pmatrix} = E_{21}(-1) P_{12} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 5 & 1 \end{pmatrix} \begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ -6 & 0 & 1 \end{pmatrix} = E_{21}(-1) P_{12} \begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ -6 & 5 & 1 \end{pmatrix}.$$

There are now only two steps remaining to complete the calculation of $M$. For the fourth row operation, we perform the next stage of the method:

$$M = E_{21}(-1) P_{12} \begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ -6 & 5 & 1 \end{pmatrix} = E_{21}(-1) \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ -6 & 5 & 1 \end{pmatrix} = E_{21}(-1) \begin{pmatrix} 0 & 1 & 0 \\ 2 & 0 & 0 \\ -6 & 5 & 1 \end{pmatrix}.$$

For the final step, we bring in the final row operation, giving the answer

$$M = E_{21}(-1) \begin{pmatrix} 0 & 1 & 0 \\ 2 & 0 & 0 \\ -6 & 5 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 \\ 2 & 0 & 0 \\ -6 & 5 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 2 & -1 & 0 \\ -6 & 5 & 1 \end{pmatrix}. \quad (3)$$

Now that we have calculated the matrix $M$, we can use equation (3) to give $\tilde{A}$. We find that

$$\tilde{A} = MA = \begin{pmatrix} 0 & 1 & 0 \\ 2 & -1 & 0 \\ -6 & 5 & 1 \end{pmatrix} \begin{pmatrix} 1 & 2 & 2 \\ 2 & 3 & 0 \\ -4 & -1 & 3 \end{pmatrix} = \begin{pmatrix} 2 & 3 & 0 \\ 0 & 1 & 4 \\ 0 & 2 & -9 \end{pmatrix}.$$

This is exactly the matrix that we would have found if we had performed the given elementary row operations in order.

There are many possible applications of viewing a series of row operations through the lens of elementary matrices. A common application is found when working with a system of linear equations. Suppose that we began with the square coefficient matrix $A$ and applied a series of elementary row operations to achieve the reduced echelon form $\tilde{A}$. Then, by replicating the previous method, we could form the corresponding elementary matrices and combine them into an overall matrix $M$, such that the reduced echelon matrix could be written as

$$\tilde{A} = MA. \quad (4)$$

Even though the matrix $M$ will generally not be an elementary matrix of any type, it is constructed as the product of elementary matrices, each of which is invertible. This means that $M$ itself must be invertible. Additionally, it is a known result from linear algebra that if a square matrix $A$ of order $n \times n$ is invertible, then the reduced echelon matrix $\tilde{A}$ will be equal to the identity matrix $I$. If this is the case, then equation (4) simplifies to $I = MA$.

The beauty of this technique is now revealed in full: the matrix $M$ is actually the inverse of $A$, since, by definition, the inverse matrix $A^{-1}$ is that which satisfies the equation $I = A^{-1} A = A A^{-1}$.

Furthermore, the multiplicative inverse of a matrix is unique. In finding the matrix $M$, we have therefore found the inverse $A^{-1}$, expressed as a product of elementary matrices.
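
To make this concrete, here is a minimal sketch of the idea: perform Gauss–Jordan elimination using only elementary matrices, accumulating their product $M$ as we go. For simplicity, this sketch assumes $A$ is invertible and that no pivot entry is ever zero, so no row swaps ($P_{ij}$ matrices) are needed; the function name is our own.

```python
import numpy as np

# Accumulate the elementary matrices used during Gauss-Jordan elimination;
# their product M then satisfies M @ A = I, so M is the inverse of A. This
# bare-bones sketch assumes A is invertible and that no pivot is ever zero,
# so no row swaps (P_ij matrices) are needed; the function name is our own.
def inverse_via_elementary(A):
    n = A.shape[0]
    A = A.astype(float).copy()
    M = np.eye(n)  # running product of the elementary matrices applied so far
    for i in range(n):
        Di = np.eye(n)
        Di[i, i] = 1.0 / A[i, i]  # D_i(1/pivot): scale the pivot row so the pivot is 1
        A, M = Di @ A, Di @ M
        for j in range(n):
            if j != i:
                Eji = np.eye(n)
                Eji[j, i] = -A[j, i]  # E_ji(-entry): clear the rest of column i
                A, M = Eji @ A, Eji @ M
    return M  # M @ (original A) = I

A = np.array([[2, 1], [1, 1]], dtype=float)
M = inverse_via_elementary(A)
print(np.allclose(M @ A, np.eye(2)))  # True: M is A^-1, built from elementary matrices
```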

Key Points

  • There are three types of elementary row operations, and each of these can be written in terms of a square matrix that differs from the corresponding identity matrix in at most four entries.
  • By convention, each of the elementary matrices acts on the left-hand side.
  • Every elementary matrix is invertible, with the inverse matrix being straightforward to derive and express.
  • Matrix multiplication is associative, which means that chains of elementary matrices can be multiplied together to represent a sequence of row operations.
