Lesson Explainer: Image and Kernel of Linear Transformation | Mathematics

In this explainer, we will learn how to find the image and basis of the kernel of a linear transformation.

Very often, we will be interested in solving a system of linear equations that is encoded by a matrix equation rather than written out as full equations. There are several advantages to writing the system of equations in matrix form, not least that the calculations become neater and less error prone. Typically, we would take a general system of linear equations
$$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1,$$
$$a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2,$$
$$\vdots$$
$$a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m$$
and prefer to express this by the matrix equation
$$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}.$$

The matrix on the left-hand side is called the “coefficient matrix” and, out of the three matrices above, it is the one that we are most interested in, as it entirely encodes how the variables $x_1, x_2, \ldots, x_n$ will be transformed in order to describe the full system of linear equations. If we were to further define the three matrices as
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}, \quad x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \quad b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix},$$
then we could write the system of equations as $Ax = b$, which is a much neater expression. This also allows us to define a concept that appears frequently in linear algebra, especially in the more abstract interpretations of the subject.
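
As a quick illustration of this encoding, here is a minimal sketch in Python using the SymPy library (both the library choice and the particular 3×3 system below are assumptions made for this illustration; they are not taken from the lesson).

```python
from sympy import Matrix, symbols

x1, x2, x3 = symbols("x1 x2 x3")

# An illustrative 3x3 system (entries made up for this sketch):
#   x1 + 2*x2 + 3*x3 = 6
#   2*x1 + 5*x2 + 2*x3 = 4
#   6*x1 - 3*x2 +   x3 = 2
A = Matrix([[1, 2, 3],
            [2, 5, 2],
            [6, -3, 1]])      # coefficient matrix
x = Matrix([x1, x2, x3])      # column of unknowns
b = Matrix([6, 4, 2])         # right-hand side

# Multiplying out A*x reproduces the left-hand sides of the three equations,
# so the single matrix equation A*x = b encodes the whole system.
print(A * x)       # Matrix([[x1 + 2*x2 + 3*x3], [2*x1 + 5*x2 + 2*x3], [6*x1 - 3*x2 + x3]])

# The unique solution of this particular made-up system
print(A.solve(b))  # Matrix([[0], [0], [2]])
```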

Definition: Kernel of a Matrix

Consider a matrix 𝐴 with order 𝑚×𝑛. Then, the “kernel” of 𝐴 is denoted ker(𝐴) and is defined as the set of all matrices 𝑥 of order 𝑛×1 such that 𝐴𝑥=0, where 0 is the zero matrix of order 𝑚×1. This is often referred to as finding the solution space of the homogeneous system associated to 𝐴.

In the definition above, it is more common to refer to both 𝑥 and 0 as “vectors” or “column vectors.” Whichever name is used, finding the kernel of the matrix ultimately requires us to find all possible $x_1, x_2, \ldots, x_n$ such that
$$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}.$$

This is often referred to as solving the homogeneous system of linear equations that is associated to the matrix 𝐴, which is achieved with the familiar recourse to Gauss–Jordan elimination (or any equivalent method). We will demonstrate how to find the kernel of a matrix in the following example.

Suppose that we wanted to find the kernel of the matrix
$$A = \begin{pmatrix} 1 & 1 & 0 & 1 \\ -1 & 3 & 2 & 1 \\ 4 & 0 & -2 & 1 \end{pmatrix}.$$

Then, by definition, ker(𝐴) is the set of all vectors 𝑥 such that 𝐴𝑥=0. In other words, we must find all $x_1, x_2, x_3, x_4$ such that
$$\begin{pmatrix} 1 & 1 & 0 & 1 \\ -1 & 3 & 2 & 1 \\ 4 & 0 & -2 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$

We can do this by taking the coefficient matrix and using row operations to manipulate this matrix into reduced echelon form, a process more often known as Gauss–Jordan elimination. This process is normally helped by highlighting the pivots, which are the first nonzero entries in each row. For matrix 𝐴, the pivots are shown in bold as follows:
$$A = \begin{pmatrix} \mathbf{1} & 1 & 0 & 1 \\ \mathbf{-1} & 3 & 2 & 1 \\ \mathbf{4} & 0 & -2 & 1 \end{pmatrix}.$$

To achieve the reduced echelon form of 𝐴, we must first ensure that the first column is populated only by zeros, except for the first entry. We can eliminate the nonzero entries in the second and third rows with the row operations $r_2 \to r_2 + r_1$ and $r_3 \to r_3 - 4r_1$, which give the matrix
$$\begin{pmatrix} 1 & 1 & 0 & 1 \\ 0 & 4 & 2 & 2 \\ 0 & -4 & -2 & -3 \end{pmatrix}.$$

Next, we need to remove the nonzero entry below the pivot in the second row, which requires the row operation $r_3 \to r_3 + r_2$, giving
$$\begin{pmatrix} 1 & 1 & 0 & 1 \\ 0 & 4 & 2 & 2 \\ 0 & 0 & 0 & -1 \end{pmatrix}.$$

For neatness, we change the sign of the pivot in the third row with the row operation $r_3 \to -r_3$. This gives
$$\begin{pmatrix} 1 & 1 & 0 & 1 \\ 0 & 4 & 2 & 2 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$

Currently, there are two nonzero entries above the pivot in the third row. These must be removed, which is achieved with the pair of row operations $r_2 \to r_2 - 2r_3$ and $r_1 \to r_1 - r_3$:
$$\begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 4 & 2 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$

We are reasonably close to achieving the reduced echelon form. The preparatory step $r_2 \to \frac{1}{4}r_2$ gives the matrix
$$\begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & \frac{1}{2} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},$$
and the final step is completed with the row operation $r_1 \to r_1 - r_2$:
$$\begin{pmatrix} 1 & 0 & -\frac{1}{2} & 0 \\ 0 & 1 & \frac{1}{2} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$

This matrix is now in reduced echelon form, meaning that we have effectively solved the homogeneous system of linear equations that is defined by the original matrix 𝐴. To properly express ker(𝐴), it is best to write the corresponding system of linear equations of the above matrix:
$$x_1 - \frac{1}{2}x_3 = 0, \qquad x_2 + \frac{1}{2}x_3 = 0, \qquad x_4 = 0.$$

The three variables $x_1$, $x_2$, and $x_4$ are associated with columns of the reduced echelon matrix that contain pivots, and we will express these in terms of the remaining variable $x_3$, which corresponds to the only column of that matrix that does not contain a pivot. We will express $x_1$, $x_2$, and $x_4$ in terms of $x_3$ as shown:
$$x_1 = \frac{1}{2}x_3, \qquad x_2 = -\frac{1}{2}x_3, \qquad x_4 = 0.$$

We also have the trivial equation $x_3 = x_3$, which we can include in the above equations to give the full solution
$$x_1 = \frac{1}{2}x_3, \qquad x_2 = -\frac{1}{2}x_3, \qquad x_3 = x_3, \qquad x_4 = 0.$$

It is now simply a matter of writing the solution in vector form, by means of an independent parameter. Given that ker(𝐴) is the set of all 𝑥 that solve the associated homogeneous system of linear equations, the kernel is therefore any 𝑥 such that
$$x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = s\begin{pmatrix} \frac{1}{2} \\ -\frac{1}{2} \\ 1 \\ 0 \end{pmatrix}.$$

We can set the arbitrary parameter 𝑠 to take any value, and this will produce a vector 𝑥 that solves the homogeneous system. In this sense, ker(𝐴) is the set of vectors that is generated by the parameter 𝑠 when used as a scalar multiple of the vector
$$\begin{pmatrix} \frac{1}{2} \\ -\frac{1}{2} \\ 1 \\ 0 \end{pmatrix}.$$

Although we will not explain the definition here, we would therefore say that the set containing this vector is a “basis” for ker(𝐴). A basis of a vector space, such as the kernel of a matrix 𝐴, is a way of describing the key ingredients that are necessary for generating that space. The “dimension” of a vector space, which is the number of vectors in any basis for it, is related to the concepts of the matrix rank and the matrix nullity, which are two core properties associated with every matrix.
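
Although the explainer works entirely by hand, a minimal sketch using Python with the SymPy library (the library choice is an assumption of this illustration, not part of the lesson) can be used to check the computation above: it reproduces the reduced echelon form, the kernel basis, and the rank and nullity just mentioned.

```python
from sympy import Matrix, Rational

# The 3x4 matrix A from the worked example above
A = Matrix([[ 1, 1,  0, 1],
            [-1, 3,  2, 1],
            [ 4, 0, -2, 1]])

# rref() returns the reduced echelon form together with the pivot columns.
R, pivot_columns = A.rref()
print(R)              # Matrix([[1, 0, -1/2, 0], [0, 1, 1/2, 0], [0, 0, 0, 1]])
print(pivot_columns)  # (0, 1, 3): the columns of x1, x2, x4; x3 is the free variable

# nullspace() returns a basis of ker(A), one vector per nonpivot column.
basis = A.nullspace()
print(basis)          # [Matrix([[1/2], [-1/2], [1], [0]])]

# Any scalar multiple of the basis vector is sent to the zero vector.
s = Rational(7, 3)
print(A * (s * basis[0]))    # Matrix([[0], [0], [0]])

# Rank and nullity: 3 pivot columns and 1 nonpivot column, summing to 4.
print(A.rank(), len(basis))  # 3 1
```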

Example 1: Finding the Basis of a Solution Space Where the Coefficient Matrix Is of Order 3 × 3

Find a basis for the solution space of the system
$$\begin{pmatrix} 0 & -1 & 2 \\ 1 & 0 & 1 \\ 1 & -2 & 5 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$

Answer

To find the basis for the solution space, we first need to find the row-reduced echelon form of the matrix
$$A = \begin{pmatrix} 0 & -1 & 2 \\ 1 & 0 & 1 \\ 1 & -2 & 5 \end{pmatrix}.$$

Before we begin the process of Gauss–Jordan elimination to reduce this matrix to reduced echelon form, we will highlight the first nonzero element of each row, also known as the pivots, in bold:
$$A = \begin{pmatrix} 0 & \mathbf{-1} & 2 \\ \mathbf{1} & 0 & 1 \\ \mathbf{1} & -2 & 5 \end{pmatrix}.$$

If it is possible to place a 1 entry in the top-left corner, then we would ordinarily choose to do this. We make the row swap $r_1 \leftrightarrow r_3$, which gives the matrix
$$\begin{pmatrix} 1 & -2 & 5 \\ 1 & 0 & 1 \\ 0 & -1 & 2 \end{pmatrix}.$$

Now we need to eliminate the nonzero entry that is below the pivot in the first row. We use the elementary row operation $r_2 \to r_2 - r_1$ to give the matrix
$$\begin{pmatrix} 1 & -2 & 5 \\ 0 & 2 & -4 \\ 0 & -1 & 2 \end{pmatrix}.$$

Now that the pivot in the first row is the only nonzero entry in the column it occupies, we can move on to the second column. Before doing this, we observe that the second row has a factor of 2 that can be removed by the row operation $r_2 \to \frac{1}{2}r_2$. This gives the matrix
$$\begin{pmatrix} 1 & -2 & 5 \\ 0 & 1 & -2 \\ 0 & -1 & 2 \end{pmatrix}.$$

Now we need to eliminate the nonzero entry that is below the pivot in the second row. We use the row operation $r_3 \to r_3 + r_2$, which produces a matrix that has a zero row:
$$\begin{pmatrix} 1 & -2 & 5 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \end{pmatrix}.$$

We are one step away from reduced echelon form. The last step is to remove the nonzero entry that is above the pivot in the second row, which is completed using the row operation $r_1 \to r_1 + 2r_2$, giving
$$\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \end{pmatrix}.$$

This matrix is in reduced echelon form and as such represents the solution to the original system. Now we can convert this matrix into a corresponding set of linear equations, where we omit the third row of the matrix because this is a zero row:
$$x + z = 0, \qquad y - 2z = 0.$$

We can now solve these equations for the variables associated with the pivot columns in the reduced echelon form of the matrix, which in this case are 𝑥 and 𝑦. This gives
$$x = -z, \qquad y = 2z.$$

For the sake of completeness, we must include the parameter 𝑧 in the statement of our solution, so we append the trivial equation 𝑧=𝑧 to give
$$x = -z, \qquad y = 2z, \qquad z = z.$$

We can then write the solution as a vector:
$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = s\begin{pmatrix} -1 \\ 2 \\ 1 \end{pmatrix}.$$

Therefore, the basis for the solution space is given by the vector
$$\begin{pmatrix} -1 \\ 2 \\ 1 \end{pmatrix}.$$
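
As a quick check (again using SymPy, which is an assumption of this illustration rather than part of the question), the same answer can be recovered in a couple of lines:

```python
from sympy import Matrix

# Coefficient matrix of Example 1
A = Matrix([[0, -1, 2],
            [1,  0, 1],
            [1, -2, 5]])

print(A.rref()[0])    # Matrix([[1, 0, 1], [0, 1, -2], [0, 0, 0]])
print(A.nullspace())  # [Matrix([[-1], [2], [1]])] -- the basis vector found above
```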

One of the most persistently rewarding aspects of linear algebra is that the majority of its definitions, theorems, and algorithms apply to matrices of a whole range of orders. For example, in the questions above, we used Gauss–Jordan elimination on matrices of order 3×4 and 3×3 to find the reduced echelon form and hence a solution to the associated homogeneous system of equations. There was, as ever, no real reason why we chose to work with matrices of these particular orders. Given that Gauss–Jordan elimination can be applied to matrices of any order, there is no reason why the technique that we have already studied should not apply to matrices of a larger order. In the remaining two examples, we will consider matrices with order 4×4. As we will see, provided that we are comfortable with using row operations to achieve reduced echelon form, these questions are conceptually no more difficult than working with a matrix of any other order, although on average we will have to perform more row operations to find the reduced echelon form.

Example 2: Finding the Basis of a Solution Space Where the Coefficient Matrix Is of Order 4 × 4

Find the general solution to the system of linear equations
$$\begin{pmatrix} 1 & 1 & 0 & 1 \\ 2 & 1 & 1 & 2 \\ 1 & 0 & 1 & 1 \\ 0 & -1 & 1 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}.$$

Hence, find a basis for its solution space.

Answer

In order to find a basis for the solution space of the above system of linear equations, we must first take the coefficient matrix
$$A = \begin{pmatrix} 1 & 1 & 0 & 1 \\ 2 & 1 & 1 & 2 \\ 1 & 0 & 1 & 1 \\ 0 & -1 & 1 & 1 \end{pmatrix}$$
and manipulate this into reduced echelon form. We will highlight the first nonzero entries in each row, which are known as the pivot entries, in bold:
$$A = \begin{pmatrix} \mathbf{1} & 1 & 0 & 1 \\ \mathbf{2} & 1 & 1 & 2 \\ \mathbf{1} & 0 & 1 & 1 \\ 0 & \mathbf{-1} & 1 & 1 \end{pmatrix}.$$

Given that there is a 1 entry in the top-left corner, we will use this to remove the remaining nonzero entries in the first column. We can employ the two elementary row operations $r_2 \to r_2 - 2r_1$ and $r_3 \to r_3 - r_1$ to give
$$\begin{pmatrix} 1 & 1 & 0 & 1 \\ 0 & -1 & 1 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & -1 & 1 & 1 \end{pmatrix}.$$

The second row is identical to the third row, which means that one of them can be reduced to a zero row. We know that one criterion of reduced echelon form is that any zero rows are at the bottom of the matrix, so we preemptively use the row swap operation $r_2 \leftrightarrow r_4$:
$$\begin{pmatrix} 1 & 1 & 0 & 1 \\ 0 & -1 & 1 & 1 \\ 0 & -1 & 1 & 0 \\ 0 & -1 & 1 & 0 \end{pmatrix}.$$

Now we can create a zero row in the fourth row by using the operation $r_4 \to r_4 - r_3$, which gives
$$\begin{pmatrix} 1 & 1 & 0 & 1 \\ 0 & -1 & 1 & 1 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$

For the sake of neatness, we scale the pivots in the second and the third rows by using $r_2 \to -r_2$ and $r_3 \to -r_3$:
$$\begin{pmatrix} 1 & 1 & 0 & 1 \\ 0 & 1 & -1 & -1 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$

The pivot in the third row is directly below the pivot in the second row, and hence it must be removed using the row operation $r_3 \to r_3 - r_2$, giving
$$\begin{pmatrix} 1 & 1 & 0 & 1 \\ 0 & 1 & -1 & -1 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$

The pivot in the third row has two nonzero entries above it, which must be removed with the two row operations $r_1 \to r_1 - r_3$ and $r_2 \to r_2 + r_3$. This gives
$$\begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$

For the final row operation, we only need to remove the nonzero entry above the pivot in the second row. This is achieved using $r_1 \to r_1 - r_2$, which gives a matrix that is now in reduced echelon form:
$$\begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$

Ignoring the zero row in the fourth row, we write out this system of linear equations in full form as
$$x + z = 0, \qquad y - z = 0, \qquad w = 0.$$

There are three variables that are associated with pivot columns, and these are 𝑥, 𝑦, and 𝑤. The remaining variable, 𝑧, is not associated with a pivot column. We then take the above three equations and solve each of them for the pivot variables in terms of 𝑧, giving
$$x = -z, \qquad y = z, \qquad w = 0.$$

For the nonpivot variable 𝑧, we can also write the rather trivial equation 𝑧=𝑧. Inserting this between the second and third equations above gives
$$x = -z, \qquad y = z, \qquad z = z, \qquad w = 0.$$

Written in vector form, the solution is
$$\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix} = s\begin{pmatrix} -1 \\ 1 \\ 1 \\ 0 \end{pmatrix}.$$

The basis for the solution space is given by the set
$$\left\{ \begin{pmatrix} -1 \\ 1 \\ 1 \\ 0 \end{pmatrix} \right\}.$$
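
The same result can be confirmed by machine; the short sketch below again assumes the SymPy library, which is not part of the question itself.

```python
from sympy import Matrix

# Coefficient matrix of Example 2
A = Matrix([[1,  1, 0, 1],
            [2,  1, 1, 2],
            [1,  0, 1, 1],
            [0, -1, 1, 1]])

print(A.rref()[0])    # Matrix([[1, 0, 1, 0], [0, 1, -1, 0], [0, 0, 0, 1], [0, 0, 0, 0]])
print(A.nullspace())  # [Matrix([[-1], [1], [1], [0]])] -- one basis vector, as found above
```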

In the above examples, we found a basis that contained one vector, which means that the dimension of the solution space is 1. This may not always be the case, because the solution space might have a dimension larger than 1, and we would not usually know this until we had performed at least part of the Gauss–Jordan elimination process. In the following example, we will consider a matrix with order 4×4 whose solution space has dimension 2. Although the solution will be a little more convoluted, it will principally be the same as in the two examples that we gave above. Instead of one vector, the basis of the solution space will feature two vectors, but otherwise, the final result will be expressed as it was in the previous examples.

Example 3: Finding the Basis of a Solution Space Where the Coefficient Matrix Is of Order 4 × 4

Find a general solution of the system of linear equations
$$\begin{pmatrix} 1 & 1 & 0 & 1 \\ 1 & -1 & 1 & 0 \\ 3 & 1 & 1 & 2 \\ 3 & 3 & 0 & 3 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}$$
and hence find a basis for its solution space.

Answer

Before finding the basis of this system of linear equations, we will need to find the reduced echelon form of the coefficient matrix on the left-hand side. We label this matrix as
$$A = \begin{pmatrix} 1 & 1 & 0 & 1 \\ 1 & -1 & 1 & 0 \\ 3 & 1 & 1 & 2 \\ 3 & 3 & 0 & 3 \end{pmatrix}.$$

Before performing Gauss–Jordan elimination to find the reduced echelon form of 𝐴, we highlight the pivots, which are the first nonzero entries of each row, in bold:
$$A = \begin{pmatrix} \mathbf{1} & 1 & 0 & 1 \\ \mathbf{1} & -1 & 1 & 0 \\ \mathbf{3} & 1 & 1 & 2 \\ \mathbf{3} & 3 & 0 & 3 \end{pmatrix}.$$

We need to remove the nonzero entries that are directly below the pivot in the first row. This can be achieved by the three row operations $r_2 \to r_2 - r_1$, $r_3 \to r_3 - 3r_1$, and $r_4 \to r_4 - 3r_1$, which gives
$$\begin{pmatrix} 1 & 1 & 0 & 1 \\ 0 & -2 & 1 & -1 \\ 0 & -2 & 1 & -1 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$

We have already produced a zero row in the final row and we can see that the third row is the same as the second row, meaning that this can also be turned into a zero row using the row operation $r_3 \to r_3 - r_2$:
$$\begin{pmatrix} 1 & 1 & 0 & 1 \\ 0 & -2 & 1 & -1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$

The matrix is close to being in reduced echelon form. The pivot in the second row can be scaled to produce an entry of value 1 with the row operation $r_2 \to -\frac{1}{2}r_2$, which gives
$$\begin{pmatrix} 1 & 1 & 0 & 1 \\ 0 & 1 & -\frac{1}{2} & \frac{1}{2} \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$

To put the matrix in reduced echelon form, we now only need the row operation $r_1 \to r_1 - r_2$, which gives the final form
$$\begin{pmatrix} 1 & 0 & \frac{1}{2} & \frac{1}{2} \\ 0 & 1 & -\frac{1}{2} & \frac{1}{2} \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$

To find the basis of the linear system, we need to take the matrix above and write this as a set of equations:
$$x + \frac{1}{2}z + \frac{1}{2}w = 0, \qquad y - \frac{1}{2}z + \frac{1}{2}w = 0.$$

There are two variables, 𝑥 and 𝑦, that are associated with columns that contain pivots. The two remaining variables, 𝑧 and 𝑤, are associated with columns that do not contain pivots. We will express the former variables in terms of the latter by solving the two equations for 𝑥 and 𝑦 to give
$$x = -\frac{1}{2}z - \frac{1}{2}w, \qquad y = \frac{1}{2}z - \frac{1}{2}w.$$

For the nonpivot variables, we could also choose to write 𝑧=𝑧 and 𝑤=𝑤, making the complete set of equations
$$x = -\frac{1}{2}z - \frac{1}{2}w, \qquad y = \frac{1}{2}z - \frac{1}{2}w, \qquad z = z, \qquad w = w.$$

Then, the complete solution can be parameterized with two variables 𝑠 and 𝑡 and written in vector form as
$$\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix} = s\begin{pmatrix} -\frac{1}{2} \\ \frac{1}{2} \\ 1 \\ 0 \end{pmatrix} + t\begin{pmatrix} -\frac{1}{2} \\ -\frac{1}{2} \\ 0 \\ 1 \end{pmatrix}.$$

The basis of the solution space is, therefore,
$$\left\{ \begin{pmatrix} -\frac{1}{2} \\ \frac{1}{2} \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} -\frac{1}{2} \\ -\frac{1}{2} \\ 0 \\ 1 \end{pmatrix} \right\}.$$
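
Once more, this can be checked with a short sketch (assuming the SymPy library, which is not part of the question); note that it returns one basis vector for each of the two nonpivot columns.

```python
from sympy import Matrix

# Coefficient matrix of Example 3
A = Matrix([[1,  1, 0, 1],
            [1, -1, 1, 0],
            [3,  1, 1, 2],
            [3,  3, 0, 3]])

R, pivot_columns = A.rref()
print(R)              # rows (1, 0, 1/2, 1/2), (0, 1, -1/2, 1/2) and two zero rows
print(pivot_columns)  # (0, 1): x and y are the pivot variables; z and w are free

# Two nonpivot columns, so the basis of the solution space contains two vectors.
print(A.nullspace())  # [Matrix([[-1/2], [1/2], [1], [0]]), Matrix([[-1/2], [-1/2], [0], [1]])]
```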

We can see that, in many ways, finding the basis for a solution space is essentially the same as performing Gauss–Jordan elimination to achieve reduced echelon form and then just writing the solution in a particular format. This does not quite do justice to the concept of a “basis,” which is of far greater importance than our treatment of the topic in this explainer would suggest. As with so many topics in linear algebra, the ability to calculate a basis is dependent on us having a strong understanding of one of the several key concepts (in this case, Gauss–Jordan elimination). This interplay between an abstract concept and a widely applicable computational method is one of the distinct hallmarks of linear algebra and occurs repeatedly throughout the subject.
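
To make the computational side of this point concrete, the sketch below implements Gauss–Jordan elimination from scratch for a matrix of any order (this is an illustrative implementation written for this walkthrough, not code from the lesson; it assumes exact rational arithmetic via Python's fractions module). Running it on the 3×4 matrix from the first worked example reproduces the reduced echelon form found by hand.

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination: return the reduced echelon form of a matrix
    given as a list of rows, using exact rational arithmetic."""
    m = [[Fraction(entry) for entry in row] for row in rows]
    n_rows, n_cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(n_cols):
        # Find a row at or below pivot_row with a nonzero entry in this column.
        pivot = next((r for r in range(pivot_row, n_rows) if m[r][col] != 0), None)
        if pivot is None:
            continue                                        # no pivot in this column
        m[pivot_row], m[pivot] = m[pivot], m[pivot_row]     # row swap
        # Scale the pivot row so that the pivot entry becomes 1.
        scale = m[pivot_row][col]
        m[pivot_row] = [entry / scale for entry in m[pivot_row]]
        # Eliminate every other nonzero entry in the pivot column.
        for r in range(n_rows):
            if r != pivot_row and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == n_rows:
            break
    return m

# The 3x4 matrix from the first worked example, reduced in one call
reduced = rref([[1, 1, 0, 1], [-1, 3, 2, 1], [4, 0, -2, 1]])
print([[str(entry) for entry in row] for row in reduced])
# [['1', '0', '-1/2', '0'], ['0', '1', '1/2', '0'], ['0', '0', '0', '1']]
```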

Key Points

  • The kernel of a matrix 𝐴 is denoted ker(𝐴) and is the set of all vectors 𝑥 that solve the equation 𝐴𝑥=0.
  • The kernel is also referred to as the solution space of the corresponding homogeneous system of linear equations.
  • The kernel of 𝐴 is calculated by finding the reduced echelon form of this matrix using Gauss–Jordan elimination and then writing the solution in a particular way.
  • A basis of the kernel of 𝐴 is obtained by writing the general solution of the homogeneous system in terms of the variables corresponding to the nonpivot columns of the reduced echelon form of 𝐴 (including the trivial equations for those variables), which gives one basis vector per nonpivot variable.
