In this explainer, we will learn how to combine the operations of addition, subtraction, scalar multiplication, and transposing matrices.
Once a matrix has been defined, there are many operations that can be performed on it. At the simplest level, two matrices of equal dimension can be combined with two of the most familiar mathematical tools: addition and subtraction. There is also the action of scalar multiplication performed on a matrix, to some extent mimicking our conventional understanding of multiplication. These operations alone would be enough to endow linear algebra with properties that are interesting enough to warrant a full analysis and discussion. The operation of transposition, in itself an apparently inert concept, can also be combined with addition, subtraction, and scalar multiplication to produce algebraic structures that are far richer than conventional algebra. This explainer will explore all of these operations and the ways in which they can be combined and utilized.
At this stage, it is worth noting that other operations exist in linear algebra, such as matrix exponentiation and inversion, which make their own significant contributions to this field. Equally, the introduction of concepts such as the determinant and the trace can be combined with this understanding, each producing a related algebraic structure that interacts with all of the operations that we have previously described. Although such ideas are beyond the scope of this explainer, please be aware that the definitions, theorems, and examples below are only part of a much larger picture.
Definition: Matrix Addition and Subtraction
Consider two matrices A and B, both having order m × n and being described by the expressions
A = (a_ij), B = (b_ij).
The matrix A + B is then created by adding the two matrices entry by entry. In other words, if C = A + B, then
c_ij = a_ij + b_ij.
Similarly, subtraction is also performed entry by entry. If we define D = A − B, then
d_ij = a_ij − b_ij.
The first point to note is that A and B can only be combined using addition or subtraction if they have the same order. If, for example, A had 3 rows and 2 columns while B had 2 rows and 4 columns, then there would be no way to combine them using either addition or subtraction, because the orders 3 × 2 and 2 × 4 differ.
Now consider two new matrices that both have 2 rows and 3 columns, and so can be combined using either addition or subtraction. To add the two matrices, we work entry by entry, adding each entry of the first matrix to the corresponding entry of the second.
Subtraction is also completed entry by entry, in exactly the same way.
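The rules above can be sketched in code. The following is an illustrative helper of our own (not part of the explainer): a matrix is stored as a list of rows, and the functions check that the orders agree before combining entry by entry.

```python
def order(M):
    """Return the order (rows, columns) of a matrix stored as a list of rows."""
    return (len(M), len(M[0]))

def add(A, B):
    """Entry-by-entry sum; only defined when the two orders agree."""
    if order(A) != order(B):
        raise ValueError("matrices must have the same order")
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    """Entry-by-entry difference, with the same order requirement."""
    if order(A) != order(B):
        raise ValueError("matrices must have the same order")
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# Two illustrative matrices, each with 2 rows and 3 columns.
A = [[1, 2, 3],
     [4, 5, 6]]
B = [[6, 5, 4],
     [3, 2, 1]]
print(add(A, B))  # [[7, 7, 7], [7, 7, 7]]
print(sub(A, B))  # [[-5, -3, -1], [1, 3, 5]]
```

Attempting `add(A, [[1, 2], [3, 4]])` would raise a `ValueError`, mirroring the fact that matrices of different orders cannot be added.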
We will now practice one example of this.
Example 1: Addition of Matrices
Working entry by entry, we add each entry of the first matrix to the corresponding entry of the second, giving the sum of the two matrices.
Although we were not asked to calculate it, we could also show that the difference of the two matrices is found in exactly the same way, subtracting entry by entry.
The operations of addition and subtraction appear relatively innocuous in isolation, and it is seldom the case that adding or subtracting two matrices will be interesting in itself. Instead, it is more likely that these operations will be combined with other operations. One of the most common is the matrix transpose, which we now define.
Definition: Matrix Transpose
Consider a matrix A with m rows and n columns, which is specified by the formula
A = (a_ij).
Then, the matrix “transpose” A^T is a matrix with n rows and m columns that is calculated from the entries of A by the formula
(A^T)_ij = a_ji.
The original matrix A has order m × n and the matrix transpose A^T has order n × m.
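The definition (A^T)_ij = a_ji translates directly into code. This sketch uses a helper of our own (not from the explainer), again storing a matrix as a list of rows.

```python
def transpose(M):
    """Return the transpose of M: entry (i, j) of the result is entry (j, i) of M."""
    return [list(col) for col in zip(*M)]

A = [[1, 2, 3],
     [4, 5, 6]]          # order 2 x 3
print(transpose(A))      # [[1, 4], [2, 5], [3, 6]], order 3 x 2
print(transpose(transpose(A)) == A)  # True: transposing twice recovers A
```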
Matrix transposition is usually thought of as “switching the rows for the columns” or “flipping along the diagonal entries.” Both of these concepts are equivalent and can be demonstrated by example. Consider a matrix A of order 2 × n. The matrix transpose, A^T, will therefore be of order n × 2, with entries that we have not yet calculated.
To populate the entries of this matrix, we take the first row of A and write these entries in order as the first column of A^T.
To populate the remaining entries, we now take the second row of A and write the entries in order as the second column of A^T.
Given the two matrices A and A^T, we can now see why we might describe the operation of transposition as switching the rows with the columns. We can also take both of these matrices and highlight only the diagonal entries.
We can see that the diagonal entries a_11 and a_22 are the same in A and A^T. Given that the diagonal entries are unchanged, we can now see why we claim to have “flipped” the original matrix around these entries when transposing.
Example 2: Transpose of a Matrix
Given a matrix A of order 2 × n, find A^T.
Since A is of order 2 × n, the transpose A^T will be of order n × 2, hence having a form whose entries are yet to be calculated.
The diagonal entries will remain unchanged.
We then write the first row of A as the first column of A^T.
Finally, we write the second row of A as the second column of A^T, which completes the transpose.
In this explainer, we have already seen how matrix addition is only possible between two matrices if they have the same order. This can be combined with the knowledge that matrix transposition will take a matrix of order m × n and produce a matrix of order n × m. Unless m = n, which would give a “square” matrix, the order of a matrix is different from the order of its transpose. Both of these ideas are covered in the following question.
Example 3: Addition and Transposition of Matrices
Given two matrices A and B, find A + B^T, if possible.
The matrix A is of order m × n and the matrix B is of order n × m. We know that the order of B^T will therefore be m × n, which is the same as the order of A, meaning that these two matrices can be combined using addition (or subtraction). The sum A + B^T is then calculated entry by entry.
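A computation of this kind can be sketched with hypothetical matrices; the entries below are illustrative choices of our own (not taken from the question), with A of order 3 × 2 and B of order 2 × 3 so that A + B^T is defined.

```python
def transpose(M):
    """Transpose a matrix stored as a list of rows."""
    return [list(col) for col in zip(*M)]

def add(A, B):
    """Entry-by-entry sum of two matrices of equal order."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 0],
     [2, 3],
     [4, 5]]        # order 3 x 2
B = [[1, 2, 3],
     [4, 5, 6]]     # order 2 x 3, so transpose(B) has order 3 x 2
print(add(A, transpose(B)))  # [[2, 4], [4, 8], [7, 11]]
```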
Frequently, when working with conventional algebra, we are asked to find an unknown quantity from a given equation. The same question can be asked when working with linear algebra, although there are more possible operations that may be involved, such as transposition.
Example 4: Solving Matrix Equations with Addition and Transposition
Given an equation of the form X^T + A = B, where A and B are known matrices, determine the matrix X.
Note that A is of order m × n, which means that X^T must also be of order m × n, as otherwise the sum X^T + A would not be possible.
We can also recall the property that, for a matrix M, taking the transpose of the matrix twice will return the original matrix M. In other words, (M^T)^T = M. We can therefore take the given equation and transpose both sides; since the transpose of a sum is the sum of the transposes, this gives
(X^T)^T + A^T = B^T.
Since (X^T)^T = X, the left-hand side of the equation simplifies to X + A^T.
Taking the transpose of the matrix on the right-hand side gives B^T.
We can now subtract A^T from both sides, giving X = B^T − A^T.
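The solution method can be sketched in code, assuming an equation of the form X^T + A = B; the matrices below are illustrative entries of our own, not those of the question.

```python
def transpose(M):
    """Transpose a matrix stored as a list of rows."""
    return [list(col) for col in zip(*M)]

def add(A, B):
    """Entry-by-entry sum."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    """Entry-by-entry difference."""
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# Illustrative data for an equation of the form X^T + A = B.
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# Transposing both sides gives X + A^T = B^T, so X = B^T - A^T.
X = sub(transpose(B), transpose(A))
print(X)  # [[4, 4], [4, 4]]

# Check the candidate solution against the original equation.
print(add(transpose(X), A) == B)  # True
```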
We can check that this matrix does indeed solve the original equation.
Matrix multiplication between two matrices A and B is well defined providing that the orders of the two matrices are compatible. By this we mean that we can define the matrix product AB as long as A has the same number of columns as B has rows. In other words, if A is of order m × n and B is of order n × p, then the matrix product AB is well defined because A has n columns and B has n rows. The resulting matrix AB is of order m × p.
Generally, the matrix product AB has a different value to the matrix product BA, meaning that generally AB ≠ BA and showing that matrix multiplication is generally “not commutative.” This is in contrast to conventional algebra, where two numbers a and b are known to be “commutative” over multiplication, meaning that ab = ba. Furthermore, if the orders of A and B allow the matrix product AB to exist, this does not mean that the product BA exists. In fact, AB and BA are only both well defined if A is of order m × n and B is of order n × m.
Although matrix multiplication has a more elaborate definition for matrices of larger order, for 2 × 2 matrices this product is easy to define. Consider two 2 × 2 matrices A and B with entries a_ij and b_ij respectively.
The easiest way to avoid making errors when calculating the matrix product AB is to work entry by entry. Since A has order 2 × 2 and B has order 2 × 2, the matrix AB has order 2 × 2. Therefore, we are looking to find a 2 × 2 matrix whose entries are yet to be calculated. To calculate the entry in the first row and first column of AB, we combine the first row of A with the first column of B.
The entry is then calculated by multiplying the paired elements together in order and summing the results, giving (AB)_11 = a_11 b_11 + a_12 b_21.
The entry in the first row and second column of AB is calculated using the first row of A and the second column of B, where we perform the calculation (AB)_12 = a_11 b_12 + a_12 b_22.
To calculate the entry in the second row and first column of AB, we take the second row of A and the first column of B, giving (AB)_21 = a_21 b_11 + a_22 b_21. Finally, we use the second row of A and the second column of B to calculate that (AB)_22 = a_21 b_12 + a_22 b_22.
Note that calculating BA in this instance will generally give a different result to AB.
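The four entry formulas above can be written out directly for 2 × 2 matrices. The sketch below uses illustrative matrices of our own choosing, and the two printed products confirm that AB and BA can differ.

```python
def matmul2(A, B):
    """Product of two 2 x 2 matrices: entry (i, j) combines row i of A
    with column j of B, multiplying pairs and summing."""
    return [
        [A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
        [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]],
    ]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul2(A, B))  # [[2, 1], [4, 3]]
print(matmul2(B, A))  # [[3, 4], [1, 2]] -- generally AB != BA
```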
In this explainer alone, we have described the matrix operations of addition/subtraction, transposition, and multiplication (for 2 × 2 matrices, at least). Although there are plenty more operations involving matrices, we will now practice two questions where all of the ideas covered in this explainer are combined.
Example 5: Solving Matrix Equations
Given a pair of matrix equations, where one matrix is specified only through its transpose, determine the transpose of a product of the matrices involved.
We first need to calculate the unknown matrix from the rightmost equation. First, recall that (M^T)^T = M for any matrix M; then, we can take the matrix transpose of both sides of that equation.
Simplifying the left-hand side and taking the transpose of the right-hand side gives an equation for the matrix itself.
Taking the remaining matrix from the left-hand side of the equation to the right-hand side gives this matrix explicitly.
We were initially asked to calculate the transpose of a matrix product. To do this, we first write out any transposes appearing in the product.
After that we compute the matrix product.
Then, we take the transpose of the result to obtain the answer.
With more knowledge of linear algebra, the calculations above could have been slightly simplified by utilizing the matrix property that (AB)^T = B^T A^T. When working only with addition/subtraction, transposition, and multiplication, there are a range of other algebraic properties that emerge, giving rise to an algebraic system with structures that would never occur in conventional algebra. This set of rules is only enriched by the inclusion of other matrix operations such as scaling, inversion, and exponentiation, with related concepts such as the determinant and the trace also contributing to this.
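The property (AB)^T = B^T A^T can be checked numerically. The general product routine and the matrices below are an illustrative sketch of our own, not part of the explainer.

```python
def transpose(M):
    """Transpose a matrix stored as a list of rows."""
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    """General matrix product: entry (i, j) sums A[i][k] * B[k][j] over k."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2, 0], [3, 1, 4]]    # order 2 x 3
B = [[2, 1], [0, 1], [1, 3]]  # order 3 x 2
print(matmul(A, B))                         # [[2, 3], [10, 16]]
print(transpose(matmul(A, B)) ==
      matmul(transpose(B), transpose(A)))   # True: (AB)^T equals B^T A^T
```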
The key points from this explainer are as follows:
- Matrix addition or subtraction is only well defined between two matrices if they have the same order.
- Matrix addition or subtraction is completed entry by entry.
- For a matrix of order m × n, the matrix transpose returns a matrix of order n × m.
- The matrix transpose switches the rows with the columns. Equivalently, the matrix transpose “flips” around the diagonal entries.
- The matrix product AB is only well defined if A has order m × n and B has order n × p, in which case AB has order m × p.
- Matrix multiplication is not commutative, meaning that generally AB ≠ BA.