Explainer: Solving a System of Three Equations Using a Matrix Inverse

In this explainer, we will learn how to solve a system of three linear equations using the inverse of the matrix of coefficients.

There are a number of perspectives from which linear algebra can be fruitfully and consistently viewed. One such perspective is to understand matrices as a way of encoding information about how vectors are transformed through space, which provides an algebraic understanding of how we morph points, lines, planes, and higher-dimensional objects. Another perspective is that linear algebra is one manifestation of a more general idea known as vector spaces, wherein attractive abstract properties are used to define many systems that share algebraic properties with linear algebra, conventional algebra, and many other areas of mathematics.

Out of all the possible perspectives that we can have on linear algebra, arguably there is one which contributed most to the development of the subject, at least in the initial stages. Mathematicians throughout history have always had a peculiar attraction towards solving equations. Following the popularization of algebra, it soon became interesting for mathematicians to study simultaneous equations, where two variables must be solved for in tandem. This idea generalizes in a natural way to systems of equations with many unknown variables, which is where we begin to witness the versatility and power of linear algebra in driving the historical developments within mathematics, offering an increasingly well-honed and varied toolkit.

Our goal in this explainer will be to solve systems of linear equations by using our understanding of the inverse of a square matrix. As we will see, any system of linear equations can be expressed strictly in terms of matrices, meaning that we can use our understanding of linear algebra to solve these.

Definition: Matrix Form of a System of Linear Equations

Consider a general system of linear equations in the variables $x_1, x_2, \ldots, x_n$ and the coefficients $a_{ij}$:
$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1,\\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2,\\
&\;\;\vdots\\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m.
\end{aligned}$$

Then we define the coefficient matrix
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$$
and the two vectors
$$\mathbf{u} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \qquad \mathbf{v} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}.$$

Then the system of linear equations can be encapsulated by the matrix equation $A\mathbf{u} = \mathbf{v}$, which in long form would be written as
$$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}.$$
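To make this encoding concrete, here is a minimal Python sketch (the helper name `matvec` is our own, not part of the explainer) that multiplies a coefficient matrix, stored as a list of rows, by a vector of values for the variables; if the result reproduces $\mathbf{v}$, those values solve the system:

```python
def matvec(A, u):
    """Multiply an m x n matrix A (a list of rows) by a length-n vector u."""
    return [sum(a * x for a, x in zip(row, u)) for row in A]

# A small 2 x 2 illustration: the system  3x + y = -1,  -2x + 4y = 24
# is encoded by A and v below, and u = (-2, 5) satisfies A u = v.
A = [[3, 1], [-2, 4]]
v = [-1, 24]
u = [-2, 5]
assert matvec(A, u) == v
```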

We will see later that this way of using matrix multiplication is a very useful approach for expressing a system of linear equations, as it allows us to use any tool from the vast mathematical toolkit of linear algebra. Let us consider, for example, the system of linear equations
$$3x + y = -1, \qquad -2x + 4y = 24. \quad (1)$$


At this stage, we would probably be tempted to solve this system of linear equations using any of the known, standard methods for solving simultaneous equations in two variables. However, we choose to follow the definition above and express the system by constructing the matrices
$$A = \begin{bmatrix} 3 & 1 \\ -2 & 4 \end{bmatrix}, \qquad \mathbf{u} = \begin{bmatrix} x \\ y \end{bmatrix}, \qquad \mathbf{v} = \begin{bmatrix} -1 \\ 24 \end{bmatrix}.$$

The system of linear equations can then be expressed as $A\mathbf{u} = \mathbf{v}$. In long form this is

๏”31โˆ’24๏ ๏“๐‘ฅ๐‘ฆ๏Ÿ=๏”โˆ’124๏ .(2)

It might understandably seem that this representation has not helped us in the slightest, which is where the concept of the matrix inverse can be parachuted in to provide support. Suppose that we were to construct the inverse of $A$ using the well-known formula for the inverse of a $2 \times 2$ matrix. For a general $2 \times 2$ matrix $B = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$, the inverse matrix is
$$B^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.$$
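As a quick computational check of this formula, the following Python sketch (the function name `inverse_2x2` is ours) implements it with exact rational arithmetic:

```python
from fractions import Fraction

def inverse_2x2(B):
    """Inverse of a 2x2 matrix [[a, b], [c, d]] via the ad - bc formula."""
    (a, b), (c, d) = B
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular; no inverse exists")
    s = Fraction(1, det)
    return [[s * d, -s * b], [-s * c, s * a]]

# The matrix A from the worked example; the result equals (1/14)[[4, -1], [2, 3]].
A_inv = inverse_2x2([[3, 1], [-2, 4]])
assert A_inv == [[Fraction(4, 14), Fraction(-1, 14)],
                 [Fraction(2, 14), Fraction(3, 14)]]
```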

For our matrix $A$, we would find that
$$A^{-1} = \frac{1}{3 \times 4 - 1 \times (-2)} \begin{bmatrix} 4 & -1 \\ 2 & 3 \end{bmatrix} = \frac{1}{14} \begin{bmatrix} 4 & -1 \\ 2 & 3 \end{bmatrix}.$$

Suppose now that we were to multiply both sides of equation (2) on the left by the matrix inverse $A^{-1}$. We would obtain
$$\frac{1}{14} \begin{bmatrix} 4 & -1 \\ 2 & 3 \end{bmatrix} \begin{bmatrix} 3 & 1 \\ -2 & 4 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{14} \begin{bmatrix} 4 & -1 \\ 2 & 3 \end{bmatrix} \begin{bmatrix} -1 \\ 24 \end{bmatrix}.$$

Since matrix multiplication is associative, we can group the terms on the left-hand side in the following order:
$$\frac{1}{14} \left( \begin{bmatrix} 4 & -1 \\ 2 & 3 \end{bmatrix} \begin{bmatrix} 3 & 1 \\ -2 & 4 \end{bmatrix} \right) \begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{14} \begin{bmatrix} 4 & -1 \\ 2 & 3 \end{bmatrix} \begin{bmatrix} -1 \\ 24 \end{bmatrix}.$$

As if by magic, we find that the term within the curved brackets is a multiple of the identity matrix. Completing this matrix multiplication gives
$$\frac{1}{14} \begin{bmatrix} 14 & 0 \\ 0 & 14 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{14} \begin{bmatrix} 4 & -1 \\ 2 & 3 \end{bmatrix} \begin{bmatrix} -1 \\ 24 \end{bmatrix},$$
allowing us to take the scaling constant inside the matrix to find
$$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{14} \begin{bmatrix} 4 & -1 \\ 2 & 3 \end{bmatrix} \begin{bmatrix} -1 \\ 24 \end{bmatrix}.$$

Completing the final matrix multiplication on the left-hand side gives
$$\begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{14} \begin{bmatrix} 4 & -1 \\ 2 & 3 \end{bmatrix} \begin{bmatrix} -1 \\ 24 \end{bmatrix}.$$

We now have an expression for $x$ and $y$ in terms of one final matrix multiplication. Carrying this out gives
$$\begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{14} \begin{bmatrix} -28 \\ 70 \end{bmatrix} = \begin{bmatrix} -2 \\ 5 \end{bmatrix}.$$

We can check that $x = -2$ and $y = 5$ are the unique values that solve the system of linear equations in (1), confirming that we have solved the problem.
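The whole procedure above, inverting $A$ and then multiplying $A^{-1}$ by $\mathbf{v}$, can be sketched end to end in a few lines of Python (the helper names are our own):

```python
from fractions import Fraction

def inverse_2x2(B):
    """Inverse of [[a, b], [c, d]], assuming ad - bc is nonzero."""
    (a, b), (c, d) = B
    det = a * d - b * c
    s = Fraction(1, det)
    return [[s * d, -s * b], [-s * c, s * a]]

def matvec(A, u):
    """Matrix-vector product, with A stored as a list of rows."""
    return [sum(a * x for a, x in zip(row, u)) for row in A]

# Solve  3x + y = -1,  -2x + 4y = 24  via  u = A^{-1} v.
A = [[3, 1], [-2, 4]]
v = [-1, 24]
u = matvec(inverse_2x2(A), v)
assert u == [-2, 5]  # x = -2, y = 5, as found above
```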

Rather than focus on the specific problem above, we can show that this method applies to a general system of linear equations, provided that a few conditions are met. Suppose that we have a linear system with as many equations as unknown variables. In other words, we have
$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1,\\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2,\\
&\;\;\vdots\\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n.
\end{aligned}$$

Then define the matrix and two vectors
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}, \qquad \mathbf{u} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \qquad \mathbf{v} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}.$$

Then the system of linear equations can be described by the matrix equation $A\mathbf{u} = \mathbf{v}$, where $A$ is a square matrix. It is crucial that $A$ be square, as the multiplicative inverse is not defined for non-square matrices. Now, supposing that $A^{-1}$ exists, we can multiply both sides of the above equation on the left by it, giving
$$A^{-1}(A\mathbf{u}) = A^{-1}\mathbf{v}.$$

Matrix multiplication is associative, meaning that we can write
$$\left(A^{-1}A\right)\mathbf{u} = A^{-1}\mathbf{v}.$$

By definition, we have that $A^{-1}A = I_n$, where $I_n$ is the $n \times n$ identity matrix. This implies that
$$I_n\mathbf{u} = A^{-1}\mathbf{v}.$$

We also know that the identity matrix $I_n$ leaves a matrix unchanged under matrix multiplication. This allows the final simplification
$$\mathbf{u} = A^{-1}\mathbf{v}.$$

If our goal was to find the vector u, it has clearly been achieved in the above equation. We will now demonstrate how this method can be applied to other systems of linear equations.

Example 1: Using the Inverse Matrix to Solve a System of Linear Equations

Solve the system of linear equations
$$3x + 2y = 8, \qquad 6x - 9y = 3,$$
using the inverse of a matrix.


We create the matrices corresponding to the system of linear equations above. If we assign
$$A = \begin{bmatrix} 3 & 2 \\ 6 & -9 \end{bmatrix}, \qquad \mathbf{u} = \begin{bmatrix} x \\ y \end{bmatrix}, \qquad \mathbf{v} = \begin{bmatrix} 8 \\ 3 \end{bmatrix},$$
then the problem can be equivalently encoded by the matrix equation
$$\begin{bmatrix} 3 & 2 \\ 6 & -9 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 8 \\ 3 \end{bmatrix}.$$

More succinctly, we could write $A\mathbf{u} = \mathbf{v}$.

Our goal will be to solve this equation for $\mathbf{u}$, given that this vector contains the variables $x$ and $y$ that we would like to find. Assuming that the inverse $A^{-1}$ exists, we can multiply both sides of the equation on the left by it, giving
$$A^{-1}(A\mathbf{u}) = A^{-1}\mathbf{v}.$$

Given that matrix multiplication is associative, this statement is equivalent to
$$\left(A^{-1}A\right)\mathbf{u} = A^{-1}\mathbf{v}.$$

By definition, we know that $A^{-1}A = I_2$, where $I_2$ is the $2 \times 2$ identity matrix, giving
$$I_2\mathbf{u} = A^{-1}\mathbf{v}.$$

The identity matrix leaves the vector $\mathbf{u}$ unchanged under matrix multiplication, as $I_2\mathbf{u} = \mathbf{u}$. This implies that
$$\mathbf{u} = A^{-1}\mathbf{v}. \quad (3)$$


We now have a formula for $\mathbf{u}$, provided that we can find $A^{-1}$. To do this, we use the expression for the inverse of a $2 \times 2$ matrix: for
$$B = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, \qquad B^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.$$

Given that $A = \begin{bmatrix} 3 & 2 \\ 6 & -9 \end{bmatrix}$, we have
$$A^{-1} = \frac{1}{3 \times (-9) - 2 \times 6} \begin{bmatrix} -9 & -2 \\ -6 & 3 \end{bmatrix} = -\frac{1}{39} \begin{bmatrix} -9 & -2 \\ -6 & 3 \end{bmatrix}.$$

We can now use equation (3) to find $\mathbf{u}$:
$$\mathbf{u} = A^{-1}\mathbf{v} = -\frac{1}{39} \begin{bmatrix} -9 & -2 \\ -6 & 3 \end{bmatrix} \begin{bmatrix} 8 \\ 3 \end{bmatrix} = -\frac{1}{39} \begin{bmatrix} -78 \\ -39 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}.$$

Earlier, we purposefully defined $\mathbf{u} = \begin{bmatrix} x \\ y \end{bmatrix}$.

We therefore find that $x = 2$ and $y = 1$, as we can check in the original equations.

It might, at this stage, seem as though the method we have just presented is an overly convoluted way of solving a system of two equations in two unknowns. Normally, we would prefer a familiar and simpler technique for solving simultaneous equations of this type. The significance of the matrix inversion method is easier to appreciate when working with matrices of order $3 \times 3$ and above. Moreover, the inverse of a square matrix is often of independent interest, so we may well calculate it anyway, irrespective of the particular problem at hand.

There is a subtler point that we must also consider. Suppose that, in the example above, the system of linear equations were exactly the same except for the quantities on the right-hand side of both equations, as encoded by the vector $\mathbf{v}$. To solve the problem, we would still need to find the inverse matrix $A^{-1}$ and complete the calculation $\mathbf{u} = A^{-1}\mathbf{v}$. In this sense, finding the inverse matrix is a task that solves the system of linear equations for any vector $\mathbf{v}$. Conversely, it may not be possible to find $A^{-1}$ at all, because the matrix $A$ has a determinant of zero. In this situation, the value of $\mathbf{v}$ would be irrelevant, because it would not be possible to solve the problem by this method.
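This reuse of $A^{-1}$ across different right-hand sides is easy to demonstrate in code. In the Python sketch below (the helper names are ours), the inverse of the matrix from Example 1 is computed once and then applied to several vectors $\mathbf{v}$:

```python
from fractions import Fraction

def inverse_2x2(B):
    """Inverse of [[a, b], [c, d]] via the ad - bc formula."""
    (a, b), (c, d) = B
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    s = Fraction(1, det)
    return [[s * d, -s * b], [-s * c, s * a]]

def matvec(A, u):
    return [sum(a * x for a, x in zip(row, u)) for row in A]

# Invert once...
A_inv = inverse_2x2([[3, 2], [6, -9]])

# ...then solve A u = v for as many right-hand sides as we like.
solutions = [matvec(A_inv, v) for v in ([8, 3], [1, 0], [0, 1])]
assert solutions[0] == [2, 1]  # the v from Example 1 gives x = 2, y = 1
```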

Example 2: Using the Inverse Matrix to Solve a System of Linear Equations (with the Gaussโ€“Jordan Method)

Solve the system of linear equations
$$-x + y + z = 8, \qquad -2x + y - z = -5, \qquad 6x - 3y = -6,$$
using the inverse of a matrix.


We begin by assigning the values
$$A = \begin{bmatrix} -1 & 1 & 1 \\ -2 & 1 & -1 \\ 6 & -3 & 0 \end{bmatrix}, \qquad \mathbf{u} = \begin{bmatrix} x \\ y \\ z \end{bmatrix}, \qquad \mathbf{v} = \begin{bmatrix} 8 \\ -5 \\ -6 \end{bmatrix}.$$

This allows us to write the above system of linear equations as
$$\begin{bmatrix} -1 & 1 & 1 \\ -2 & 1 & -1 \\ 6 & -3 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 8 \\ -5 \\ -6 \end{bmatrix}.$$

Equivalently, we can now define the equations in the very neat form $A\mathbf{u} = \mathbf{v}$.

Phrased this way, we now aim to find $\mathbf{u}$, as this vector contains all of the unknown variables $x$, $y$, and $z$. In order to do this, we will first assume that the inverse $A^{-1}$ exists and then multiply both sides of the equation above on the left by this matrix:
$$A^{-1}(A\mathbf{u}) = A^{-1}\mathbf{v}.$$

Given that, by definition, $A^{-1}A = I_3$, where $I_3$ is the $3 \times 3$ identity matrix, and given also that $I_3 B = B$ for any matrix $B$ of order $3 \times n$, we have
$$\mathbf{u} = A^{-1}\mathbf{v}. \quad (4)$$


We now know how to express $\mathbf{u}$ in terms of the inverse matrix $A^{-1}$, which we must now calculate. To do this, we will use the Gauss–Jordan elimination method for calculating the inverse of a square matrix. We remind ourselves of the two matrices
$$A = \begin{bmatrix} -1 & 1 & 1 \\ -2 & 1 & -1 \\ 6 & -3 & 0 \end{bmatrix}, \qquad I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix},$$
which we then join together as
$$\left[ A \mid I_3 \right] = \left[ \begin{array}{ccc|ccc} -1 & 1 & 1 & 1 & 0 & 0 \\ -2 & 1 & -1 & 0 & 1 & 0 \\ 6 & -3 & 0 & 0 & 0 & 1 \end{array} \right].$$

If the inverse $A^{-1}$ exists, then we will be able to use elementary row operations to change the above matrix into the form $\left[ I_3 \mid A^{-1} \right]$. First, we highlight the pivot, which is the first nonzero entry in each row:
$$\left[ \begin{array}{ccc|ccc} -1 & 1 & 1 & 1 & 0 & 0 \\ -2 & 1 & -1 & 0 & 1 & 0 \\ 6 & -3 & 0 & 0 & 0 & 1 \end{array} \right].$$

The $-1$ entry in the top-left corner is fairly convenient, but it would be more helpful if this entry had a value of $1$. We quickly scale the top row with the operation $r_1 \to -r_1$, giving
$$\left[ \begin{array}{ccc|ccc} 1 & -1 & -1 & -1 & 0 & 0 \\ -2 & 1 & -1 & 0 & 1 & 0 \\ 6 & -3 & 0 & 0 & 0 & 1 \end{array} \right].$$

To achieve the desired form, we must obtain the identity matrix $I_3$ on the left-hand side of the joined matrix. The identity matrix has a $1$ in the top-left entry, and the rest of the entries in this column are zero. It is therefore necessary to remove the two pivot entries below it in the first column, using the row operations $r_2 \to r_2 + 2r_1$ and $r_3 \to r_3 - 6r_1$:
$$\left[ \begin{array}{ccc|ccc} 1 & -1 & -1 & -1 & 0 & 0 \\ 0 & -1 & -3 & -2 & 1 & 0 \\ 0 & 3 & 6 & 6 & 0 & 1 \end{array} \right].$$

For similar reasons to the ones above, we would prefer the pivot in the second row to have a value of $1$, so we perform the row operation $r_2 \to -r_2$:
$$\left[ \begin{array}{ccc|ccc} 1 & -1 & -1 & -1 & 0 & 0 \\ 0 & 1 & 3 & 2 & -1 & 0 \\ 0 & 3 & 6 & 6 & 0 & 1 \end{array} \right].$$

Now we will remove the pivot in the third row, since this is directly below the pivot in the second row. The row operation $r_3 \to r_3 - 3r_2$ gives the matrix
$$\left[ \begin{array}{ccc|ccc} 1 & -1 & -1 & -1 & 0 & 0 \\ 0 & 1 & 3 & 2 & -1 & 0 \\ 0 & 0 & -3 & 0 & 3 & 1 \end{array} \right].$$

At this stage, it might be tempting to immediately perform the row operation $r_3 \to -\frac{1}{3}r_3$, which would introduce fractions into the third row and hence into the remainder of the calculations. Although it is by no means necessary, it is usually preferable to avoid this if possible. Because of this, we instead choose the row operation $r_3 \to -r_3$, which gives
$$\left[ \begin{array}{ccc|ccc} 1 & -1 & -1 & -1 & 0 & 0 \\ 0 & 1 & 3 & 2 & -1 & 0 \\ 0 & 0 & 3 & 0 & -3 & -1 \end{array} \right].$$

We also complete the row operations $r_1 \to 3r_1$ and $r_2 \to 3r_2$:
$$\left[ \begin{array}{ccc|ccc} 3 & -3 & -3 & -3 & 0 & 0 \\ 0 & 3 & 9 & 6 & -3 & 0 \\ 0 & 0 & 3 & 0 & -3 & -1 \end{array} \right].$$

We have completed these row operations as a preparatory measure. Now we will remove the nonzero entries above the pivot in the third row, using the row operations $r_2 \to r_2 - 3r_3$ and $r_1 \to r_1 + r_3$. The resulting matrix is
$$\left[ \begin{array}{ccc|ccc} 3 & -3 & 0 & -3 & -3 & -1 \\ 0 & 3 & 0 & 6 & 6 & 3 \\ 0 & 0 & 3 & 0 & -3 & -1 \end{array} \right].$$

We now have the penultimate step of removing the nonzero entry above the pivot in the second row. The row operation $r_1 \to r_1 + r_2$ gives
$$\left[ \begin{array}{ccc|ccc} 3 & 0 & 0 & 3 & 3 & 2 \\ 0 & 3 & 0 & 6 & 6 & 3 \\ 0 & 0 & 3 & 0 & -3 & -1 \end{array} \right].$$

Instead of achieving the form $\left[ I_3 \mid A^{-1} \right]$, we have produced the matrix $\left[ 3I_3 \mid 3A^{-1} \right]$. This is certainly not a failure on our part, since we can now write
$$A^{-1} = \frac{1}{3} \begin{bmatrix} 3 & 3 & 2 \\ 6 & 6 & 3 \\ 0 & -3 & -1 \end{bmatrix}.$$

It can be checked that $A^{-1}A = I_3$, which means that we have found the correct inverse. We can now solve the problem by using equation (4). We have that
$$\mathbf{u} = A^{-1}\mathbf{v} = \frac{1}{3} \begin{bmatrix} 3 & 3 & 2 \\ 6 & 6 & 3 \\ 0 & -3 & -1 \end{bmatrix} \begin{bmatrix} 8 \\ -5 \\ -6 \end{bmatrix} = \frac{1}{3} \begin{bmatrix} -3 \\ 0 \\ 21 \end{bmatrix} = \begin{bmatrix} -1 \\ 0 \\ 7 \end{bmatrix}.$$

This gives the final answers that $x = -1$, $y = 0$, and $z = 7$. It can be checked in the original system of linear equations that these are the correct values.
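The row reduction carried out by hand above can also be automated. The Python sketch below (our own code, not part of the explainer) performs Gauss–Jordan elimination on the augmented matrix $[A \mid I]$ using exact fractions and reproduces the inverse found in this example:

```python
from fractions import Fraction

def inverse_gauss_jordan(A):
    """Invert a square matrix by row reducing [A | I] to [I | A^-1]."""
    n = len(A)
    # Build the augmented matrix [A | I] with exact rational entries.
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Find a row at or below `col` with a nonzero pivot and swap it up.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot becomes 1.
        M[col] = [x / M[col][col] for x in M[col]]
        # Clear the rest of the column.
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [y - f * x for x, y in zip(M[col], M[r])]
    return [row[n:] for row in M]

# The matrix from Example 2; the result is (1/3)[[3, 3, 2], [6, 6, 3], [0, -3, -1]].
A_inv = inverse_gauss_jordan([[-1, 1, 1], [-2, 1, -1], [6, -3, 0]])
assert A_inv == [[1, 1, Fraction(2, 3)],
                 [2, 2, 1],
                 [0, -1, Fraction(-1, 3)]]
```

Unlike the hand calculation, this sketch makes no effort to postpone fractions; exact `Fraction` arithmetic makes that optimization unnecessary.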

In the question above, we used the Gauss–Jordan method for finding the inverse matrix of the corresponding system of linear equations. Using row operations to manipulate a matrix is a fundamental skill in linear algebra, and questions like the one above are an excellent source of practice. Nonetheless, there are other methods for calculating the inverse of a matrix that may be preferable depending on the matrix involved. In the following example, we will use the adjoint matrix method to calculate the necessary matrix inverse. This method is often considered preferable, especially for matrices of order $3 \times 3$, although it applies to square matrices of any order.

Example 3: Using the Inverse Matrix to Solve a System of Linear Equations (with the Adjoint Matrix Method)

Use the inverse of a matrix to solve the system of linear equations
$$-4x - 2y - 9z = -8, \qquad -3x - 2y - 6z = -3, \qquad -x + y - 6z = 7.$$


We will first create the matrices
$$A = \begin{bmatrix} -4 & -2 & -9 \\ -3 & -2 & -6 \\ -1 & 1 & -6 \end{bmatrix}, \qquad \mathbf{u} = \begin{bmatrix} x \\ y \\ z \end{bmatrix}, \qquad \mathbf{v} = \begin{bmatrix} -8 \\ -3 \\ 7 \end{bmatrix}.$$

The system of linear equations can be encoded equivalently by the matrix multiplication
$$\begin{bmatrix} -4 & -2 & -9 \\ -3 & -2 & -6 \\ -1 & 1 & -6 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} -8 \\ -3 \\ 7 \end{bmatrix}.$$

This allows the simplest expression of the system of equations, as $A\mathbf{u} = \mathbf{v}$. By multiplying both sides on the left by the inverse $A^{-1}$ and then simplifying, we can express the vector $\mathbf{u}$ by the expression
$$\mathbf{u} = A^{-1}\mathbf{v}. \quad (5)$$


We would like to calculate $\mathbf{u}$, since this vector has entries that are the unknown variables $x$, $y$, and $z$. To use the equation above to find $\mathbf{u}$, we must first calculate $A^{-1}$. To do this, we will use the adjoint matrix method, which is described as follows.

Using the adjoint matrix method means that we must first calculate the determinant of $A$. Expanding along the first row (Sarrus' rule gives the same value), we obtain
$$|A| = -4 \begin{vmatrix} -2 & -6 \\ 1 & -6 \end{vmatrix} - (-2) \begin{vmatrix} -3 & -6 \\ -1 & -6 \end{vmatrix} + (-9) \begin{vmatrix} -3 & -2 \\ -1 & 1 \end{vmatrix} = -4(18) + 2(12) - 9(-5) = -3. \quad (6)$$
Since the determinant is nonzero, we know that the matrix $A$ is nonsingular and hence the inverse $A^{-1}$ does exist. We have already used three of the matrix minors of $A$ in calculating $|A|$, but to use the adjoint matrix method to calculate $A^{-1}$, it is necessary to list all nine matrix minors of $A$:
$$A_{11} = \begin{bmatrix} -2 & -6 \\ 1 & -6 \end{bmatrix}, \quad A_{12} = \begin{bmatrix} -3 & -6 \\ -1 & -6 \end{bmatrix}, \quad A_{13} = \begin{bmatrix} -3 & -2 \\ -1 & 1 \end{bmatrix},$$
$$A_{21} = \begin{bmatrix} -2 & -9 \\ 1 & -6 \end{bmatrix}, \quad A_{22} = \begin{bmatrix} -4 & -9 \\ -1 & -6 \end{bmatrix}, \quad A_{23} = \begin{bmatrix} -4 & -2 \\ -1 & 1 \end{bmatrix},$$
$$A_{31} = \begin{bmatrix} -2 & -9 \\ -2 & -6 \end{bmatrix}, \quad A_{32} = \begin{bmatrix} -4 & -9 \\ -3 & -6 \end{bmatrix}, \quad A_{33} = \begin{bmatrix} -4 & -2 \\ -3 & -2 \end{bmatrix}.$$

For these $2 \times 2$ matrices, we can calculate the determinants, but we must remember to include the parity (sign) term that is used in creating the adjoint matrix. This included, we have
$$+|A_{11}| = +\begin{vmatrix} -2 & -6 \\ 1 & -6 \end{vmatrix} = 18, \quad -|A_{12}| = -\begin{vmatrix} -3 & -6 \\ -1 & -6 \end{vmatrix} = -12, \quad +|A_{13}| = +\begin{vmatrix} -3 & -2 \\ -1 & 1 \end{vmatrix} = -5,$$
$$-|A_{21}| = -\begin{vmatrix} -2 & -9 \\ 1 & -6 \end{vmatrix} = -21, \quad +|A_{22}| = +\begin{vmatrix} -4 & -9 \\ -1 & -6 \end{vmatrix} = 15, \quad -|A_{23}| = -\begin{vmatrix} -4 & -2 \\ -1 & 1 \end{vmatrix} = 6,$$
$$+|A_{31}| = +\begin{vmatrix} -2 & -9 \\ -2 & -6 \end{vmatrix} = -6, \quad -|A_{32}| = -\begin{vmatrix} -4 & -9 \\ -3 & -6 \end{vmatrix} = 3, \quad +|A_{33}| = +\begin{vmatrix} -4 & -2 \\ -3 & -2 \end{vmatrix} = 2.$$

The cofactor matrix is populated by the right-hand values of the nine equations above:
$$C = \begin{bmatrix} 18 & -12 & -5 \\ -21 & 15 & 6 \\ -6 & 3 & 2 \end{bmatrix}.$$

The adjoint matrix is the transpose of the cofactor matrix:
$$\operatorname{adj}(A) = C^T = \begin{bmatrix} 18 & -21 & -6 \\ -12 & 15 & 3 \\ -5 & 6 & 2 \end{bmatrix}.$$

The inverse matrix is written in terms of the adjoint matrix and the determinant that we calculated in equation (6), using the formula
$$A^{-1} = \frac{1}{|A|}\operatorname{adj}(A) = -\frac{1}{3} \begin{bmatrix} 18 & -21 & -6 \\ -12 & 15 & 3 \\ -5 & 6 & 2 \end{bmatrix}.$$

Now that we know $A^{-1}$, we can solve the original system of linear equations by using equation (5):
$$\mathbf{u} = A^{-1}\mathbf{v} = -\frac{1}{3} \begin{bmatrix} 18 & -21 & -6 \\ -12 & 15 & 3 \\ -5 & 6 & 2 \end{bmatrix} \begin{bmatrix} -8 \\ -3 \\ 7 \end{bmatrix} = -\frac{1}{3} \begin{bmatrix} -123 \\ 72 \\ 36 \end{bmatrix} = \begin{bmatrix} 41 \\ -24 \\ -12 \end{bmatrix}.$$

This means that the solution to the original problem is $x = 41$, $y = -24$, and $z = -12$.
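The adjoint matrix computation above can likewise be sketched in Python (the function names are our own). The code builds the cofactor matrix, transposes it, divides by the determinant, and recovers the same solution:

```python
from fractions import Fraction

def det2(m):
    """Determinant of a 2x2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def minor(A, i, j):
    """The 2x2 matrix left after deleting row i and column j of a 3x3 A."""
    return [[A[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

def inverse_adjoint(A):
    """Inverse of a 3x3 matrix via A^{-1} = adj(A) / det(A)."""
    # Cofactor matrix: signed determinants of the nine minors.
    C = [[(-1) ** (i + j) * det2(minor(A, i, j)) for j in range(3)]
         for i in range(3)]
    det = sum(A[0][j] * C[0][j] for j in range(3))  # expansion along row 1
    if det == 0:
        raise ValueError("matrix is singular")
    # adj(A) is the transpose of C, so A^{-1}[i][j] = C[j][i] / det.
    return [[Fraction(C[j][i], det) for j in range(3)] for i in range(3)]

A = [[-4, -2, -9], [-3, -2, -6], [-1, 1, -6]]
u = [sum(a * b for a, b in zip(row, [-8, -3, 7])) for row in inverse_adjoint(A)]
assert u == [41, -24, -12]  # x, y, z from Example 3
```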

The questions above can be approached with an abstract simplicity once the system of equations is reduced to the matrix equation $A\mathbf{u} = \mathbf{v}$. The benefit of such an expression is that we can treat it in a detached, algebraic sense, which makes it easy to see that the system can be solved by using linear algebra to achieve the matrix equation $\mathbf{u} = A^{-1}\mathbf{v}$. Treating the problem only in this abstract way, however, would belie the computational complexity incurred when calculating the inverse matrix $A^{-1}$. Furthermore, it is not possible to solve the system by this method unless we have an exact form for $A^{-1}$, which will normally involve either the Gauss–Jordan method or the adjoint matrix method. The ability to move between the abstract view and the computational view is a defining characteristic of linear algebra, wherein we must frequently shift our perspective in order to fully understand the problem that we are working with and the techniques that we might employ to solve it. For many mathematicians, this is one of the joys of studying linear algebra, and, given the examples above, it would be hard not to empathise with this perspective in this particular situation.

Key Points

  • A system of linear equations can be encoded by the matrix equation $A\mathbf{u} = \mathbf{v}$, where the aim is to solve the system by finding $\mathbf{u}$.
  • If ๐ด is a square matrix and is invertible, then we can find the matrix ๐ด๏Šฑ๏Šง either by the Gaussโ€“Jordan method or by the adjoint matrix method.
  • If the inverse $A^{-1}$ can be found, then we can use linear algebra to find $\mathbf{u} = A^{-1}\mathbf{v}$.
