Lesson Explainer: Image and Kernel of a Linear Transformation

In this explainer, we will learn how to find the image and basis of the kernel of a linear transformation.

Very often, we will be interested in solving a system of linear equations that is encoded by a matrix equation rather than written out as full equations. There are several advantages to writing the system of equations in matrix form, not least that the calculations become neater and less error prone. Typically, we would take a general system of linear equations
\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1,\\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2,\\
&\;\;\vdots\\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m,
\end{aligned}
\]
and prefer to express this by the matrix equation
\[
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix}
\begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1\\ b_2\\ \vdots\\ b_m \end{pmatrix}.
\]

The matrix on the left-hand side is called the "coefficient matrix" and, out of the three matrices above, it is the matrix that we are most interested in, as it entirely encodes how the variables $x_1, x_2, \ldots, x_n$ will be transformed in order to describe the full system of linear equations. If we were to further define the three matrices as
\[
A = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix},\qquad
\vec{x} = \begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{pmatrix},\qquad
\vec{b} = \begin{pmatrix} b_1\\ b_2\\ \vdots\\ b_m \end{pmatrix},
\]
then we could write the system of equations as $A\vec{x} = \vec{b}$, which is a much neater expression. This also allows us to define a concept that appears frequently in linear algebra, especially in the more abstract interpretations of the subject.
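To make the $A\vec{x} = \vec{b}$ notation concrete, here is a minimal Python sketch; the particular $3 \times 3$ system and the use of NumPy are purely illustrative and not part of the explainer.

```python
import numpy as np

# A small illustrative system, written in the form A x = b:
#    x1 + 2*x2 -   x3 = 4
#  2*x1 -   x2 + 3*x3 = 1
#    x1 +   x2 +   x3 = 3
A = np.array([[1.0,  2.0, -1.0],
              [2.0, -1.0,  3.0],
              [1.0,  1.0,  1.0]])
b = np.array([4.0, 1.0, 3.0])

x = np.linalg.solve(A, b)     # solve the matrix equation A x = b
print(x)
print(np.allclose(A @ x, b))  # True: the solution satisfies every equation
```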

Definition: Kernel of a Matrix

Consider a matrix $A$ with order $m \times n$. Then, the "kernel" of $A$ is denoted $\ker(A)$ and is defined as the set of all matrices $\vec{x}$ of order $n \times 1$ such that $A\vec{x} = \vec{0}$, where $\vec{0}$ is the zero matrix of order $m \times 1$. This is often referred to as finding the solution space of the homogeneous system associated with $A$.

In the definition above, it is more common to refer to both $\vec{x}$ and $\vec{0}$ as "vectors" or "column vectors." Whichever name is used, finding the kernel of the matrix ultimately requires us to find all possible $x_1, x_2, \ldots, x_n$ such that
\[
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix}
\begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{pmatrix}
=
\begin{pmatrix} 0\\ 0\\ \vdots\\ 0 \end{pmatrix}.
\]
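As a brief aside, because the kernel is defined purely by the condition $A\vec{x} = \vec{0}$, checking whether a particular vector belongs to $\ker(A)$ only requires a matrix–vector product. The sketch below uses a hypothetical $2 \times 2$ matrix of our own choosing simply to illustrate this membership test.

```python
import numpy as np

# A singular 2x2 matrix chosen for illustration, so its kernel is nontrivial.
A = np.array([[1, 2],
              [2, 4]])
x_candidate = np.array([2, -1])   # 1*2 + 2*(-1) = 0 and 2*2 + 4*(-1) = 0

print(np.allclose(A @ x_candidate, 0))       # True: (2, -1) lies in ker(A)
print(np.allclose(A @ np.array([1, 1]), 0))  # False: (1, 1) is not in the kernel
```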

This is often referred to as solving the homogeneous system of linear equations associated with the matrix $A$, which is achieved by the familiar method of Gauss–Jordan elimination (or any equivalent approach). We will demonstrate how to find the kernel of a matrix in the following example.

Suppose that we wanted to find the kernel of the matrix
\[
A = \begin{pmatrix} 1 & 1 & 0 & 1\\ -1 & 3 & 2 & 1\\ 4 & 0 & -2 & 1 \end{pmatrix}.
\]

Then, by definition, $\ker(A)$ is the set of all vectors $\vec{x}$ such that $A\vec{x} = \vec{0}$. In other words, we must find all $x_1, x_2, x_3, x_4$ such that
\[
\begin{pmatrix} 1 & 1 & 0 & 1\\ -1 & 3 & 2 & 1\\ 4 & 0 & -2 & 1 \end{pmatrix}
\begin{pmatrix} x_1\\ x_2\\ x_3\\ x_4 \end{pmatrix}
=
\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}.
\]

We can do this by taking the coefficient matrix and using row operations to manipulate it into reduced echelon form, a process more commonly known as Gauss–Jordan elimination. This process is normally helped by highlighting the pivots, which are the first nonzero entries in each row. For the matrix $A$, these are the entries $1$, $-1$, and $4$ in the first column:
\[
A = \begin{pmatrix} 1 & 1 & 0 & 1\\ -1 & 3 & 2 & 1\\ 4 & 0 & -2 & 1 \end{pmatrix}.
\]

To achieve the reduced echelon form of $A$, we must first ensure that the first column is populated only by zeros, except for the first entry. We can eliminate the nonzero entries in the second and third rows with the row operations $r_2 \to r_2 + r_1$ and $r_3 \to r_3 - 4r_1$, which give the matrix
\[
\begin{pmatrix} 1 & 1 & 0 & 1\\ 0 & 4 & 2 & 2\\ 0 & -4 & -2 & -3 \end{pmatrix}.
\]

Next, we need to remove the nonzero entry below the pivot in the second row, which requires the row operation $r_3 \to r_3 + r_2$, giving
\[
\begin{pmatrix} 1 & 1 & 0 & 1\\ 0 & 4 & 2 & 2\\ 0 & 0 & 0 & -1 \end{pmatrix}.
\]

For convenience, we change the sign of the pivot in the third row with the row operation $r_3 \to -r_3$. This gives
\[
\begin{pmatrix} 1 & 1 & 0 & 1\\ 0 & 4 & 2 & 2\\ 0 & 0 & 0 & 1 \end{pmatrix}.
\]

Currently, there are two nonzero entries above the pivot in the third row. These must be removed, which is achieved with the joint row operations $r_2 \to r_2 - 2r_3$ and $r_1 \to r_1 - r_3$:
\[
\begin{pmatrix} 1 & 1 & 0 & 0\\ 0 & 4 & 2 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix}.
\]

We are reasonably close to achieving the reduced echelon form. The preparatory step $r_2 \to \frac{1}{4}r_2$ gives the matrix
\[
\begin{pmatrix} 1 & 1 & 0 & 0\\ 0 & 1 & \frac{1}{2} & 0\\ 0 & 0 & 0 & 1 \end{pmatrix},
\]
and the final row operation $r_1 \to r_1 - r_2$ completes the reduction:
\[
\begin{pmatrix} 1 & 0 & -\frac{1}{2} & 0\\ 0 & 1 & \frac{1}{2} & 0\\ 0 & 0 & 0 & 1 \end{pmatrix}.
\]
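If we want to double-check a row reduction like this one, a computer algebra system can produce the reduced echelon form directly. The following minimal SymPy sketch reproduces the matrix we just obtained by hand.

```python
from sympy import Matrix

# The 3x4 coefficient matrix from the worked example above.
A = Matrix([[ 1, 1,  0, 1],
            [-1, 3,  2, 1],
            [ 4, 0, -2, 1]])

# rref() returns the reduced echelon form together with the pivot columns.
R, pivot_columns = A.rref()
print(R)              # Matrix([[1, 0, -1/2, 0], [0, 1, 1/2, 0], [0, 0, 0, 1]])
print(pivot_columns)  # (0, 1, 3): columns 1, 2, and 4 contain pivots
```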

This matrix is now in reduced echelon form, meaning that we have effectively solved the homogeneous system of linear equations that is defined by the original matrix $A$. To properly express $\ker(A)$, it is best to write the system of linear equations corresponding to the above matrix:
\[
x_1 - \frac{1}{2}x_3 = 0,\qquad x_2 + \frac{1}{2}x_3 = 0,\qquad x_4 = 0.
\]

The three variables $x_1$, $x_2$, and $x_4$ are associated with columns of the reduced echelon matrix that contain pivots, and we will express these in terms of the remaining variable $x_3$, which corresponds to the only column of that matrix that does not contain a pivot. We will express $x_1$, $x_2$, and $x_4$ in terms of $x_3$ as shown:
\[
x_1 = \frac{1}{2}x_3,\qquad x_2 = -\frac{1}{2}x_3,\qquad x_4 = 0.
\]

We also have the trivial equation $x_3 = x_3$, which we can include in the above equations to give the full solution
\[
x_1 = \frac{1}{2}x_3,\qquad x_2 = -\frac{1}{2}x_3,\qquad x_3 = x_3,\qquad x_4 = 0.
\]

It is now simply a matter of writing the solution in vector form by means of an independent parameter. Given that $\ker(A)$ consists of all $\vec{x}$ that solve the associated homogeneous system of linear equations, the kernel is therefore every $\vec{x}$ of the form
\[
\vec{x} = \begin{pmatrix} x_1\\ x_2\\ x_3\\ x_4 \end{pmatrix}
= s\begin{pmatrix} \frac{1}{2}\\ -\frac{1}{2}\\ 1\\ 0 \end{pmatrix}.
\]

We can set the arbitrary parameter $s$ to take any value, and this will produce a vector $\vec{x}$ that solves the homogeneous system. In this sense, $\ker(A)$ is the set of vectors generated by the parameter $s$ acting as a scalar multiple on the set of vectors
\[
\left\{ \begin{pmatrix} \frac{1}{2}\\ -\frac{1}{2}\\ 1\\ 0 \end{pmatrix} \right\}.
\]
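The same kernel can be obtained directly with SymPy's nullspace routine; the short check below recovers the vector $(\frac{1}{2}, -\frac{1}{2}, 1, 0)^T$ found above and confirms that any scalar multiple of it is sent to the zero vector. The particular value of $s$ is an arbitrary choice for illustration.

```python
from sympy import Matrix, Rational

A = Matrix([[ 1, 1,  0, 1],
            [-1, 3,  2, 1],
            [ 4, 0, -2, 1]])

# nullspace() returns a list of column vectors that form a basis of ker(A).
basis = A.nullspace()
print(basis[0].T)          # Matrix([[1/2, -1/2, 1, 0]]): the vector found above

# Any scalar multiple of this vector is sent to the zero vector by A.
s = Rational(7, 3)         # an arbitrary value of the parameter s
print(A * (s * basis[0]))  # Matrix([[0], [0], [0]])
```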

Although we will not give the full definition here, we would therefore say that the above set is a "basis" for $\ker(A)$. A basis of a vector space associated with a matrix $A$ is a way of describing the key ingredients that are necessary for defining that space. The "dimension" of a vector space is the number of vectors in any basis for it, and for the spaces associated with a matrix this is related to the concepts of the matrix rank and the matrix nullity, which are two core properties associated with every matrix.
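Although rank and nullity are not developed further in this explainer, the relationship between them is easy to observe computationally: for the matrix $A$ above, the number of pivot columns (the rank) plus the dimension of the kernel (the nullity) equals the number of columns. A small SymPy check, included here as an aside:

```python
from sympy import Matrix

A = Matrix([[ 1, 1,  0, 1],
            [-1, 3,  2, 1],
            [ 4, 0, -2, 1]])

rank = A.rank()                  # number of pivot columns in the reduced echelon form
nullity = len(A.nullspace())     # dimension of ker(A), i.e. number of free columns
print(rank, nullity)             # 3 1
print(rank + nullity == A.cols)  # True: rank + nullity equals the number of columns
```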

Example 1: Finding the Basis of a Solution Space Where the Coefficient Matrix Is of Order 3 × 3

Find a basis for the solution space of the system
\[
\begin{pmatrix} 0 & -1 & 2\\ 1 & 0 & 1\\ 1 & -2 & 5 \end{pmatrix}
\begin{pmatrix} x\\ y\\ z \end{pmatrix}
=
\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}.
\]

Answer

To find the basis for the solution space, we first need to find the reduced echelon form of the matrix
\[
A = \begin{pmatrix} 0 & -1 & 2\\ 1 & 0 & 1\\ 1 & -2 & 5 \end{pmatrix}.
\]

Before we begin the process of Gauss–Jordan elimination to reduce this matrix to reduced echelon form, we note the first nonzero element of each row, also known as the pivots: here, they are the $-1$ in the first row and the $1$ entries in the second and third rows:
\[
A = \begin{pmatrix} 0 & -1 & 2\\ 1 & 0 & 1\\ 1 & -2 & 5 \end{pmatrix}.
\]

If it is possible to place a 1 entry in the top-left corner, then we would ordinarily choose to do so. We make the row swap $r_1 \leftrightarrow r_3$, which gives the matrix
\[
\begin{pmatrix} 1 & -2 & 5\\ 1 & 0 & 1\\ 0 & -1 & 2 \end{pmatrix}.
\]

Now we need to eliminate the nonzero entry that is below the pivot in the first row. We use the elementary row operation $r_2 \to r_2 - r_1$ to give the matrix
\[
\begin{pmatrix} 1 & -2 & 5\\ 0 & 2 & -4\\ 0 & -1 & 2 \end{pmatrix}.
\]

Now that the pivot in the first row is the only nonzero entry in the column it occupies, we can move on to the second column. Before doing this, we observe that the second row has a common factor of 2 that can be removed by the row operation $r_2 \to \frac{1}{2}r_2$. This gives the matrix
\[
\begin{pmatrix} 1 & -2 & 5\\ 0 & 1 & -2\\ 0 & -1 & 2 \end{pmatrix}.
\]

Now we need to eliminate the nonzero entry that is below the pivot in the second row. We use the row operation $r_3 \to r_3 + r_2$, which produces a matrix that has a zero row:
\[
\begin{pmatrix} 1 & -2 & 5\\ 0 & 1 & -2\\ 0 & 0 & 0 \end{pmatrix}.
\]

We are one step away from reduced echelon form. The last step is to remove the nonzero entry that is above the pivot in the second row, which is completed using the row operation $r_1 \to r_1 + 2r_2$, giving
\[
\begin{pmatrix} 1 & 0 & 1\\ 0 & 1 & -2\\ 0 & 0 & 0 \end{pmatrix}.
\]

This matrix is in reduced echelon form and as such represents the solution to the original system. Now we can convert this matrix into a corresponding set of linear equations, where we omit the third row of the matrix because it is a zero row:
\[
x + z = 0,\qquad y - 2z = 0.
\]

We can now solve these equations for the variables associated with the pivot columns in the reduced echelon form of the matrix, which in this case are $x$ and $y$. This gives
\[
x = -z,\qquad y = 2z.
\]

For the sake of completeness, we must include the parameter $z$ in the statement of our solution, so we append the trivial equation $z = z$ to give
\[
x = -z,\qquad y = 2z,\qquad z = z.
\]

We can then write the solution as a vector:
\[
\begin{pmatrix} x\\ y\\ z \end{pmatrix} = s\begin{pmatrix} -1\\ 2\\ 1 \end{pmatrix}.
\]

Therefore, the basis for the solution space is given by the vector
\[
\left\{ \begin{pmatrix} -1\\ 2\\ 1 \end{pmatrix} \right\}.
\]
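As an optional check, the reduced echelon form and the kernel basis for this example can be reproduced with a few lines of SymPy:

```python
from sympy import Matrix

A = Matrix([[0, -1, 2],
            [1,  0, 1],
            [1, -2, 5]])

R, pivots = A.rref()
print(R)              # Matrix([[1, 0, 1], [0, 1, -2], [0, 0, 0]])
print(pivots)         # (0, 1): the first two columns contain pivots
print(A.nullspace())  # [Matrix([[-1], [2], [1]])]
```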

One of the most persistently rewarding aspects of linear algebra is that the majority of its definitions, theorems, and algorithms apply to matrices of any size. For example, in the questions above, we used Gauss–Jordan elimination on matrices of order $3 \times 4$ and $3 \times 3$ to find the reduced echelon form and hence a solution to the associated homogeneous system of equations. There was, as ever, no special reason to work with matrices of these particular orders. Given that Gauss–Jordan elimination can be applied to matrices of any order, there is no reason why the technique that we have already studied should not apply to matrices of a larger order. In the remaining two examples, we will consider matrices of order $4 \times 4$. As we will see, provided that we are comfortable using row operations to achieve reduced echelon form, these questions are conceptually no more difficult than working with a matrix of any other order, although on average we will have to perform more row operations to reach the reduced echelon form.
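To emphasize that the procedure really is independent of the order of the matrix, here is a sketch of a general-purpose routine. The function name kernel_basis and its structure are our own; it simply automates the steps described above (row-reduce, identify the free columns, and build one basis vector per free variable).

```python
from sympy import Matrix, zeros

def kernel_basis(A: Matrix) -> list:
    """Return a basis of ker(A) for a matrix of any order.

    A sketch of the procedure used in this explainer: row-reduce, identify the
    nonpivot (free) columns, and build one basis vector per free column by
    setting that free variable to 1 and every other free variable to 0.
    """
    R, pivot_cols = A.rref()
    free_cols = [j for j in range(A.cols) if j not in pivot_cols]

    basis = []
    for free in free_cols:
        v = zeros(A.cols, 1)
        v[free] = 1
        # Each pivot variable equals minus the entry of its reduced row
        # in the chosen free column.
        for row, pivot in enumerate(pivot_cols):
            v[pivot] = -R[row, free]
        basis.append(v)
    return basis

# This reproduces the one-vector basis found in Example 1.
A = Matrix([[0, -1, 2], [1, 0, 1], [1, -2, 5]])
print(kernel_basis(A))  # [Matrix([[-1], [2], [1]])]
```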

Example 2: Finding the Basis of a Solution Space Where the Coefficient Matrix Is of Order 4 × 4

Find the general solution to the system of linear equations
\[
\begin{pmatrix} 1 & 1 & 0 & 1\\ 2 & 1 & 1 & 2\\ 1 & 0 & 1 & 1\\ 0 & -1 & 1 & 1 \end{pmatrix}
\begin{pmatrix} x\\ y\\ z\\ w \end{pmatrix}
=
\begin{pmatrix} 0\\ 0\\ 0\\ 0 \end{pmatrix}.
\]

Hence, find a basis for its solution space.

Answer

In order to find a basis for the solution space of the above system of linear equations, we must first take the coefficient matrix
\[
A = \begin{pmatrix} 1 & 1 & 0 & 1\\ 2 & 1 & 1 & 2\\ 1 & 0 & 1 & 1\\ 0 & -1 & 1 & 1 \end{pmatrix}
\]
and manipulate it into reduced echelon form. We begin by noting the pivot entries, which are the first nonzero entries in each row: the entries $1$, $2$, and $1$ in the first column and the entry $-1$ in the second column of the fourth row.

Given that there is a 1 entry in the top-left corner, we will use this to remove the remaining nonzero entries in the first column. We can employ the two elementary row operations $r_2 \to r_2 - 2r_1$ and $r_3 \to r_3 - r_1$ to give
\[
\begin{pmatrix} 1 & 1 & 0 & 1\\ 0 & -1 & 1 & 0\\ 0 & -1 & 1 & 0\\ 0 & -1 & 1 & 1 \end{pmatrix}.
\]

The second row is identical to the third row, which means that one of them can eventually be reduced to a zero row. Since one of the criteria of reduced echelon form is that any zero rows appear at the bottom of the matrix, we preemptively use the row swap operation $r_2 \leftrightarrow r_4$:
\[
\begin{pmatrix} 1 & 1 & 0 & 1\\ 0 & -1 & 1 & 1\\ 0 & -1 & 1 & 0\\ 0 & -1 & 1 & 0 \end{pmatrix}.
\]

Now we can create a zero row in the fourth row by using the operation $r_4 \to r_4 - r_3$, which gives
\[
\begin{pmatrix} 1 & 1 & 0 & 1\\ 0 & -1 & 1 & 1\\ 0 & -1 & 1 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix}.
\]

For the sake of neatness, we scale the pivots in the second and the third rows by using $r_2 \to -r_2$ and $r_3 \to -r_3$, giving
\[
\begin{pmatrix} 1 & 1 & 0 & 1\\ 0 & 1 & -1 & -1\\ 0 & 1 & -1 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix}.
\]

The pivot in the third row is directly below the pivot in the second row, and hence it must be removed using the row operation $r_3 \to r_3 - r_2$, giving
\[
\begin{pmatrix} 1 & 1 & 0 & 1\\ 0 & 1 & -1 & -1\\ 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 \end{pmatrix}.
\]

The pivot in the third row has two nonzero entries above it, which must be removed with the two row operations $r_1 \to r_1 - r_3$ and $r_2 \to r_2 + r_3$. This gives
\[
\begin{pmatrix} 1 & 1 & 0 & 0\\ 0 & 1 & -1 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 \end{pmatrix}.
\]

For the final row operation, we only need to remove the nonzero entry above the pivot in the second row. This is achieved using $r_1 \to r_1 - r_2$, which gives a matrix that is now in reduced echelon form:
\[
\begin{pmatrix} 1 & 0 & 1 & 0\\ 0 & 1 & -1 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 \end{pmatrix}.
\]

Ignoring the zero row in the fourth row, we write out this system of linear equations in full form as
\[
x + z = 0,\qquad y - z = 0,\qquad w = 0.
\]

There are three variables that are associated with pivot columns, and these are $x$, $y$, and $w$. The remaining variable, $z$, is not associated with a pivot column. We then take the above three equations and solve each of them for its pivot variable in terms of $z$, giving
\[
x = -z,\qquad y = z,\qquad w = 0.
\]

For the nonpivot variable $z$, we can also write the rather trivial equation $z = z$. Inserting this between the second and third equations above gives
\[
x = -z,\qquad y = z,\qquad z = z,\qquad w = 0.
\]

Written in vector form, the solution is
\[
\begin{pmatrix} x\\ y\\ z\\ w \end{pmatrix} = s\begin{pmatrix} -1\\ 1\\ 1\\ 0 \end{pmatrix}.
\]

The basis for the solution space is given by the set
\[
\left\{ \begin{pmatrix} -1\\ 1\\ 1\\ 0 \end{pmatrix} \right\}.
\]
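As before, this result can be confirmed computationally; a short SymPy check reproduces both the reduced echelon form and the one-vector basis:

```python
from sympy import Matrix

A = Matrix([[1,  1, 0, 1],
            [2,  1, 1, 2],
            [1,  0, 1, 1],
            [0, -1, 1, 1]])

R, pivots = A.rref()
print(R)              # Matrix([[1, 0, 1, 0], [0, 1, -1, 0], [0, 0, 0, 1], [0, 0, 0, 0]])
print(A.nullspace())  # [Matrix([[-1], [1], [1], [0]])]
```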

In the above examples, we found a basis that contained one vector, which means that the solution space has dimension 1. This may not always be the case: the solution space might have a dimension larger than 1, and we would not usually know this until we had performed at least part of the Gauss–Jordan elimination process. In the following example, we will consider a matrix of order 4 × 4 whose solution space has dimension 2. Although the solution will be a little more involved, it will proceed in essentially the same way as in the two examples above. Instead of one vector, the basis of the solution space will feature two vectors, but otherwise, the final result will be expressed as it was in the previous examples.

Example 3: Finding the Basis of a Solution Space Where the Coefficient Matrix Is of Order 4 × 4

Find a general solution of the system of linear equations
\[
\begin{pmatrix} 1 & 1 & 0 & 1\\ 1 & -1 & 1 & 0\\ 3 & 1 & 1 & 2\\ 3 & 3 & 0 & 3 \end{pmatrix}
\begin{pmatrix} x\\ y\\ z\\ w \end{pmatrix}
=
\begin{pmatrix} 0\\ 0\\ 0\\ 0 \end{pmatrix}
\]
and hence find a basis for its solution space.

Answer

Before finding a basis for the solution space of this system of linear equations, we will need to find the reduced echelon form of the coefficient matrix on the left-hand side. We label this matrix as
\[
A = \begin{pmatrix} 1 & 1 & 0 & 1\\ 1 & -1 & 1 & 0\\ 3 & 1 & 1 & 2\\ 3 & 3 & 0 & 3 \end{pmatrix}.
\]

Before performing Gauss–Jordan elimination to find the reduced echelon form of $A$, we note the pivots, which are the first nonzero entries of each row; here, every pivot lies in the first column, namely the entries $1$, $1$, $3$, and $3$.

We need to remove the nonzero entries that are directly below the pivot in the first row. This can be achieved by the three row operations $r_2 \to r_2 - r_1$, $r_3 \to r_3 - 3r_1$, and $r_4 \to r_4 - 3r_1$, which gives
\[
\begin{pmatrix} 1 & 1 & 0 & 1\\ 0 & -2 & 1 & -1\\ 0 & -2 & 1 & -1\\ 0 & 0 & 0 & 0 \end{pmatrix}.
\]

We have already produced a zero row in the final row, and we can see that the third row is the same as the second row, meaning that this can also be turned into a zero row using the row operation $r_3 \to r_3 - r_2$:
\[
\begin{pmatrix} 1 & 1 & 0 & 1\\ 0 & -2 & 1 & -1\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix}.
\]

The matrix is close to being in reduced echelon form. The pivot in the second row can be scaled to produce an entry of value 1 with the row operation $r_2 \to -\frac{1}{2}r_2$, which gives
\[
\begin{pmatrix} 1 & 1 & 0 & 1\\ 0 & 1 & -\frac{1}{2} & \frac{1}{2}\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix}.
\]

To put the matrix in reduced echelon form, we now only need the row operation $r_1 \to r_1 - r_2$, which gives the final form
\[
\begin{pmatrix} 1 & 0 & \frac{1}{2} & \frac{1}{2}\\ 0 & 1 & -\frac{1}{2} & \frac{1}{2}\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix}.
\]

To find the basis of the solution space, we take the matrix above and write it as a set of equations:
\[
x + \frac{1}{2}z + \frac{1}{2}w = 0,\qquad y - \frac{1}{2}z + \frac{1}{2}w = 0.
\]

There are two variables, $x$ and $y$, that are associated with columns containing pivots. The two remaining variables, $z$ and $w$, are associated with columns that do not contain pivots. We will express the former variables in terms of the latter by solving the two equations for $x$ and $y$ to give
\[
x = -\frac{1}{2}z - \frac{1}{2}w,\qquad y = \frac{1}{2}z - \frac{1}{2}w.
\]

For the nonpivot variables, we could also choose to write $z = z$ and $w = w$, making the complete set of equations
\[
x = -\frac{1}{2}z - \frac{1}{2}w,\qquad y = \frac{1}{2}z - \frac{1}{2}w,\qquad z = z,\qquad w = w.
\]

Then, the complete solution can be parameterized by two variables $s$ and $t$ and written in vector form as
\[
\begin{pmatrix} x\\ y\\ z\\ w \end{pmatrix}
= s\begin{pmatrix} -\frac{1}{2}\\ \frac{1}{2}\\ 1\\ 0 \end{pmatrix}
+ t\begin{pmatrix} -\frac{1}{2}\\ -\frac{1}{2}\\ 0\\ 1 \end{pmatrix}.
\]

The basis of the solution space is, therefore,
\[
\left\{ \begin{pmatrix} -\frac{1}{2}\\ \frac{1}{2}\\ 1\\ 0 \end{pmatrix},\ \begin{pmatrix} -\frac{1}{2}\\ -\frac{1}{2}\\ 0\\ 1 \end{pmatrix} \right\}.
\]
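A final computational check confirms that SymPy recovers the same two basis vectors, one for each of the free variables $z$ and $w$:

```python
from sympy import Matrix

A = Matrix([[1,  1, 0, 1],
            [1, -1, 1, 0],
            [3,  1, 1, 2],
            [3,  3, 0, 3]])

# One basis vector per free variable (z and w), matching the set above.
for v in A.nullspace():
    print(v.T)
# Matrix([[-1/2, 1/2, 1, 0]])
# Matrix([[-1/2, -1/2, 0, 1]])
```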

We can see that, in many ways, finding the basis for a solution space is essentially the same as performing Gauss–Jordan elimination to achieve reduced echelon form and then just writing the solution in a particular format. This does not quite do justice to the concept of a "basis," which is of far greater importance than our treatment of the topic in this explainer would suggest. As with so many topics in linear algebra, the ability to calculate a basis is dependent on us having a strong understanding of one of the several key concepts (in this case, Gauss–Jordan elimination). This interplay between an abstract concept and a widely applicable computational method is one of the distinct hallmarks of linear algebra and occurs repeatedly throughout the subject.

Key Points

  • The kernel of a matrix $A$ is denoted $\ker(A)$ and is the set of all vectors $\vec{x}$ that solve the equation $A\vec{x} = \vec{0}$.
  • The kernel is also referred to as the solution space of the corresponding homogeneous system of linear equations.
  • The kernel of $A$ is calculated by finding the reduced echelon form of the matrix using Gauss–Jordan elimination and then writing the solution in a particular way.
  • A basis of the kernel of $A$ is obtained from the nonpivot columns in the reduced echelon form of $A$: once the trivial equations for the nonpivot variables have been included, each nonpivot (free) variable contributes one basis vector to the general solution.
