Explainer: Transpose of a Matrix

In this explainer, we will learn how to find the transpose of a matrix, elements of a given row and column after transposing, and a matrix’s dimensions after transposing.

The transpose of a matrix 𝐴 is a relatively recent concept in linear algebra. The main ideas of this field were developed over more than two millennia, arguably beginning around the years 300–200 BC as a way of solving systems of linear equations. A better, more complete understanding of linear algebra was developed in the late 1600s and the 1700s, principally by Leibniz and later Lagrange, with the introduction of essential concepts such as the determinant. The eminent mathematician Gauss worked intensively on linear algebra in the early 1800s, refining the elimination method for solving systems of linear equations that, following Jordan's later extension, became known as the powerful Gauss-Jordan algorithm.

Given the range of sophisticated concepts that drove the embryonic study of linear algebra, it is perhaps surprising that a relatively simple concept, the matrix transpose, was not defined until 1858, by Cayley, by which point many key pillars of linear algebra had already been constructed and well understood. Despite this development occurring relatively late, the matrix transpose is so important a concept that it forms the basis of many theorems and results that are studied by all students of linear algebra.

We will begin by defining the matrix transpose and will then illustrate this concept with an example, before completing some more problems.

Definition: Matrix Transpose

Consider a matrix 𝐴 that is specified by the formula 𝐴 = (π‘Žα΅’β±Ό), where π‘Žα΅’β±Ό is the entry in row 𝑖 and column 𝑗.

The matrix “transpose” 𝐴ᵀ is then the matrix that is composed of the elements of 𝐴 by the formula 𝐴ᵀ = (π‘Žβ±Όα΅’).

As ever in linear algebra, the definition of this particular concept is not completely clear until it has been demonstrated by examples. Transposing a matrix 𝐴 has the effect of “flipping” the matrix along the diagonal entries. An alternative way of viewing this operation is that the transpose of 𝐴 switches the rows with the columns. Therefore, if 𝐴 has π‘š rows and 𝑛 columns, then the transpose 𝐴ᵀ will have 𝑛 rows and π‘š columns. Again, this is easier to demonstrate than it is to describe, so we will now provide an illustrative example.
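As a concrete sketch of this row-for-column switch, the definition translates into a few lines of pure Python (an illustrative helper of our own, storing a matrix as a list of rows; this is not a library routine):

```python
def transpose(A):
    """Return the transpose of a matrix stored as a list of rows."""
    # Entry (j, i) of the result is entry (i, j) of the input,
    # so row j of the result is column j of the input.
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

# A matrix with 3 rows and 2 columns...
M = [[1, 2],
     [3, 4],
     [5, 6]]

# ...transposes to a matrix with 2 rows and 3 columns.
print(transpose(M))  # [[1, 3, 5], [2, 4, 6]]
```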

Consider the matrix

𝐴 = [  3   0 ]
    [ −2   1 ]
    [  4   7 ].

This matrix has 3 rows and 2 columns, and therefore the transpose will have 2 rows and 3 columns, hence having the form

𝐴ᵀ = [ ∗  ∗  ∗ ]
     [ ∗  ∗  ∗ ],

where the ∗ symbols represent values that are yet to be calculated.

Now we populate 𝐴ᵀ by taking the first row of 𝐴, which is [ 3  0 ], and writing the elements in the same order but now as the first column of 𝐴ᵀ:

𝐴ᵀ = [ 3  ∗  ∗ ]
     [ 0  ∗  ∗ ].

We then write the second row of 𝐴, [ −2  1 ], as the second column of 𝐴ᵀ:

𝐴ᵀ = [ 3  −2  ∗ ]
     [ 0   1  ∗ ].

Finally, we write the third row of 𝐴, [ 4  7 ], as the third column of 𝐴ᵀ:

𝐴ᵀ = [ 3  −2  4 ]
     [ 0   1  7 ],

hence completing the matrix transpose.

Note that if we write 𝐴 and 𝐴ᵀ next to each other and look only at the diagonal entries, as below,

𝐴 = [  3   0 ]
    [ −2   1 ]
    [  4   7 ],

𝐴ᵀ = [ 3  −2  4 ]
     [ 0   1  7 ],

then we observe that the diagonal entries (here 3 and 1) are unchanged. This is true whenever we take the transpose of a matrix and is the reason why we often simply refer to the transpose of a matrix as “flipping” along the diagonal entries.

To explain this, we refer to the definition above. For a matrix 𝐴 = (π‘Žα΅’β±Ό), the transpose is calculated using the same entries but referring to the row position as the column position and vice versa, which is encapsulated by the expression 𝐴ᵀ = (π‘Žβ±Όα΅’). For example, the entry π‘Žβ‚‚β‚ refers to the entry in the second row and the first column of 𝐴. Switching the 𝑖 index and the 𝑗 index gives π‘Žβ‚β‚‚, which corresponds to the entry in the first row and second column. This can be observed for the matrices 𝐴 and 𝐴ᵀ above.

However, the diagonal entries are where the row and column numbers are the same, meaning that 𝑖 = 𝑗, giving the entries π‘Žα΅’α΅’. Even if the row index and the column index are switched, the result is the same entry π‘Žα΅’α΅’. Therefore, all diagonal entries are unchanged by transposition, which is a key guiding result when computing the transpose of a matrix.
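This invariance of the diagonal is easy to verify computationally. The pure-Python sketch below (illustrative helpers of our own) checks it for the example matrix used above:

```python
def transpose(A):
    # zip(*A) iterates over the columns of a list-of-rows matrix.
    return [list(col) for col in zip(*A)]

A = [[3, 0],
     [-2, 1],
     [4, 7]]
T = transpose(A)

# The diagonal entries a_ii (here 3 and 1) are unchanged by transposition.
for i in range(min(len(A), len(A[0]))):
    assert A[i][i] == T[i][i]
print([A[i][i] for i in range(min(len(A), len(A[0])))])  # [3, 1]
```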

Example 1: Finding the Transpose of a Matrix

Find the transpose of the matrix

[ 6  −5  6 ]
[ 1   6  8 ].


We label this matrix as 𝐴. This has 2 rows and 3 columns, which means that 𝐴ᵀ will have 3 rows and 2 columns. Therefore, 𝐴ᵀ will take the form

𝐴ᵀ = [ ∗  ∗ ]
     [ ∗  ∗ ]
     [ ∗  ∗ ],

where the ∗ represent entries that must be found.

Since the diagonal entries are unchanged when transposing a matrix, we copy them (6 and 6) straight from the original matrix into the transpose matrix, as shown:

𝐴ᵀ = [ 6  ∗ ]
     [ ∗  6 ]
     [ ∗  ∗ ].

We then take the first row of the original matrix, [ 6  −5  6 ], and write its entries as the first column of the transpose matrix:

𝐴ᵀ = [  6  ∗ ]
     [ −5  6 ]
     [  6  ∗ ].

Then, we take the second row, [ 1  6  8 ], and write its entries in order as the second column of the transpose matrix:

𝐴ᵀ = [  6  1 ]
     [ −5  6 ]
     [  6  8 ].
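As a quick confirmation of this worked answer, the same transpose can be computed with Python's built-in `zip` (a sketch of our own, not part of the exercise itself):

```python
def transpose(A):
    # zip(*A) yields the columns of A, which become the rows of the transpose.
    return [list(col) for col in zip(*A)]

A = [[6, -5, 6],
     [1, 6, 8]]
print(transpose(A))  # [[6, 1], [-5, 6], [6, 8]]
```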

Given that the matrix transpose is usually straightforward to calculate, it is unlikely that this operation would be interesting unless it had either some special algebraic properties or some useful applications. As luck would have it, the matrix transpose has both. As well as being useful in the definition of symmetric and skew-symmetric matrices (both of which are highly important concepts), the matrix transpose is endowed with a range of convenient algebraic properties, one of which is as follows.

Theorem: Double Application of the Matrix Transpose

For a matrix 𝐴, applying the matrix transpose twice returns the original matrix. In other words, (𝐴ᵀ)ᵀ = 𝐴.

This may be obvious, given that the transpose of a matrix would flip it along the diagonal entries and then applying the transpose again would simply flip it back. Using the alternative understanding, the matrix transpose would switch the rows and columns and applying this action again would switch them back. However, to properly illustrate that this is indeed the case, we consider the following example.
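A minimal computational check of the theorem (an illustrative sketch of our own, using the 3-row matrix from the earlier demonstration):

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[3, 0],
     [-2, 1],
     [4, 7]]

# Transposing flips the matrix along its diagonal; transposing again flips it back.
assert transpose(transpose(A)) == A
print(transpose(transpose(A)) == A)  # True
```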

Example 2: The Transpose of the Transpose

Given the matrix

𝐴 = [ −8  4   3 ]
    [  4  1  −1 ],

find (𝐴ᵀ)ᵀ.


The matrix 𝐴 has 2 rows and 3 columns and so the matrix 𝐴ᵀ will have 3 rows and 2 columns:

𝐴ᵀ = [ ∗  ∗ ]
     [ ∗  ∗ ]
     [ ∗  ∗ ].

Knowing that the diagonal entries are unchanged, we immediately populate these entries in 𝐴ᵀ:

𝐴ᵀ = [ −8  ∗ ]
     [  ∗  1 ]
     [  ∗  ∗ ].

The first row of 𝐴 then becomes the first column of 𝐴ᵀ:

𝐴ᵀ = [ −8  ∗ ]
     [  4  1 ]
     [  3  ∗ ].

Then, the second row of 𝐴 becomes the second column of 𝐴ᵀ:

𝐴ᵀ = [ −8   4 ]
     [  4   1 ]
     [  3  −1 ].

Now we wish to find the transpose of 𝐴ᵀ, which we denote (𝐴ᵀ)ᵀ. Due to 𝐴ᵀ having 3 rows and 2 columns, the transpose (𝐴ᵀ)ᵀ will have 2 rows and 3 columns:

(𝐴ᵀ)ᵀ = [ ∗  ∗  ∗ ]
        [ ∗  ∗  ∗ ].

We can identify that 𝐴 and (𝐴ᵀ)ᵀ have the same number of rows and columns, which is encouraging, since otherwise there would have been no possibility of the two matrices being equal. As before, we first populate the diagonal entries of the unknown matrix:

(𝐴ᵀ)ᵀ = [ −8  ∗  ∗ ]
        [  ∗  1  ∗ ].

Now we rewrite the first row of 𝐴ᵀ as the first column of (𝐴ᵀ)ᵀ:

(𝐴ᵀ)ᵀ = [ −8  ∗  ∗ ]
        [  4  1  ∗ ].

The same process is then applied for the second row and the second column:

(𝐴ᵀ)ᵀ = [ −8  4  ∗ ]
        [  4  1  ∗ ].

Finally, we write the entries in the third row as the entries of the third column:

(𝐴ᵀ)ᵀ = [ −8  4   3 ]
        [  4  1  −1 ].

We have therefore shown, in this example, that (𝐴ᵀ)ᵀ = 𝐴.

We could have equally proven this result with reference to the definition that 𝐴 = (π‘Žα΅’β±Ό), 𝐴ᵀ = (π‘Žβ±Όα΅’).

Given that taking the transpose switches the row index with the column index, we would find that applying the transpose to 𝐴ᵀ = (π‘Žβ±Όα΅’) gives (𝐴ᵀ)ᵀ = (π‘Žα΅’β±Ό), thus showing that (𝐴ᵀ)ᵀ = 𝐴.

As we have already discussed, transposing a matrix once has the effect of switching the number of rows and columns. In other words, if 𝐴 has π‘š rows and 𝑛 columns, then 𝐴ᵀ will have 𝑛 rows and π‘š columns. This result can alternatively be summarized by the following theorem and example.

Theorem: Matrix Order and Transpose

If a matrix 𝐴 has order π‘š × 𝑛, then 𝐴ᵀ has order 𝑛 × π‘š.

A consequence of this theorem is that if 𝐴 is a square matrix, then 𝐴ᵀ will also be a square matrix of the same order. This can be easily shown: a square matrix 𝐴 has, by definition, the same number of rows and columns, giving it an order of 𝑛 × 𝑛. Even after we switch the rows for the columns in the transpose matrix, there will still be the same number of rows and columns, meaning that 𝐴ᵀ will also be a matrix of order 𝑛 × 𝑛 and hence a square matrix of the same dimension as the original matrix 𝐴.
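In code, the theorem says that transposition swaps the two components of a matrix's order. A short pure-Python sketch (the `order` helper is our own illustrative naming):

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def order(A):
    """Return the order of a list-of-rows matrix as (rows, columns)."""
    return (len(A), len(A[0]))

A = [[1, 2, 3],
     [4, 5, 6]]          # a 2 x 3 matrix
print(order(A))             # (2, 3)
print(order(transpose(A)))  # (3, 2)
```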

Example 3: The Order of a Transpose Matrix

If 𝑋 is a matrix of order 4 × 1, then what is the order of the matrix 𝑋ᵀ?


For a matrix with order π‘š × 𝑛, the transpose of the matrix has order 𝑛 × π‘š. Following this result, if 𝑋 has order 4 × 1, then the transpose 𝑋ᵀ is a matrix of order 1 × 4.

Now that we are more familiar with calculating the transpose of a matrix, we will solve two problems featuring this idea. Note that, in the following problems, the transpose of a matrix appears as part of a series of other algebraic operations involving matrices, which is very often the case when working in linear algebra.

Example 4: Equations Involving Matrix Transposition

Given that

𝐡 = [ 1  −3 ]
    [ 7  −3 ]

and (𝐡 − 𝐡ᵀ)ᵀ = 𝐴, determine the value of π‘Žβ‚β‚‚ + π‘Žβ‚‚β‚.


The order of 𝐡 is 2 × 2, meaning that this is a square matrix. Therefore, 𝐡ᵀ will also be a square matrix of order 2 × 2. Given that

𝐡 = [ 1  −3 ]
    [ 7  −3 ],

we find that

𝐡ᵀ = [  1   7 ]
     [ −3  −3 ].

This gives

𝐡 − 𝐡ᵀ = [  0  −10 ]
         [ 10    0 ].

We are asked to calculate 𝐴 = (𝐡 − 𝐡ᵀ)ᵀ, which gives

𝐴 = [   0  10 ]
    [ −10   0 ].

It can be observed that the matrix 𝐴 is equal to the negative of its own transpose, which is represented algebraically as 𝐴 = −𝐴ᵀ. This means that 𝐴 is in fact a “skew-symmetric” matrix, which is an important type of matrix that is defined with reference to the matrix transpose. It is the case for all skew-symmetric matrices that π‘Žα΅’β±Ό + π‘Žβ±Όα΅’ = 0, which is validated in the matrix 𝐴 above, where we find that π‘Žβ‚β‚‚ + π‘Žβ‚‚β‚ = 10 + (−10) = 0.
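The whole calculation in this example, including the skew-symmetry check, can be reproduced in a short pure-Python sketch (the `transpose` and `sub` helpers are our own illustrations):

```python
def transpose(M):
    return [list(col) for col in zip(*M)]

def sub(M, N):
    """Entrywise subtraction of two matrices of the same order."""
    return [[x - y for x, y in zip(rm, rn)] for rm, rn in zip(M, N)]

B = [[1, -3],
     [7, -3]]

A = transpose(sub(B, transpose(B)))  # A = (B - B^T)^T
print(A)                  # [[0, 10], [-10, 0]]
print(A[0][1] + A[1][0])  # 0, as expected for a skew-symmetric matrix
```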

Example 5: Transposition and Matrix Subtraction

Given the matrices

π‘Œ = [ −4   2 ]         𝑋 = [  4   4 ]
    [  2  −7 ],            [ −1  −7 ],

does (π‘Œ − 𝑋)ᵀ = π‘Œᵀ − 𝑋ᵀ?


First, we calculate the left-hand side. Subtracting gives

π‘Œ − 𝑋 = [ −8  −2 ]
        [  3   0 ],

and transposing this result gives

(π‘Œ − 𝑋)ᵀ = [ −8  3 ]
           [ −2  0 ].

For the right-hand side of the given equation, we first observe that π‘Œ is equal to its own transpose (meaning that this is a “symmetric” matrix). We can therefore write π‘Œᵀ = π‘Œ and hence simplify the following calculation:

π‘Œᵀ − 𝑋ᵀ = π‘Œ − 𝑋ᵀ = [ −4   2 ]  −  [ 4  −1 ]  =  [ −8  3 ]
                    [  2  −7 ]     [ 4  −7 ]     [ −2  0 ].

We have therefore shown for this example that (π‘Œ − 𝑋)ᵀ = π‘Œᵀ − 𝑋ᵀ.
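Both sides of the equation in this example can be computed and compared in a pure-Python sketch (illustrative helpers of our own):

```python
def transpose(M):
    return [list(col) for col in zip(*M)]

def sub(M, N):
    return [[x - y for x, y in zip(rm, rn)] for rm, rn in zip(M, N)]

Y = [[-4, 2],
     [2, -7]]
X = [[4, 4],
     [-1, -7]]

assert transpose(Y) == Y  # Y is symmetric
# The two sides of the identity agree.
assert transpose(sub(Y, X)) == sub(transpose(Y), transpose(X))
print(transpose(sub(Y, X)))  # [[-8, 3], [-2, 0]]
```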

The example above actually points towards a much more general result, which relates the operation of transposition to the operations of addition and subtraction. To demonstrate this result, we define the matrices

𝐴 = [ −3  7  1 ]         𝐡 = [ 4  8  −4 ]
    [  9  8  3 ],            [ 7  7   0 ].

We first choose to calculate (𝐴 + 𝐡)ᵀ. Adding gives

𝐴 + 𝐡 = [  1  15  −3 ]
        [ 16  15   3 ],

and transposing this result gives

(𝐴 + 𝐡)ᵀ = [  1  16 ]
           [ 15  15 ]
           [ −3   3 ].

Next, we calculate

𝐴ᵀ + 𝐡ᵀ = [ −3  9 ]     [  4  7 ]     [  1  16 ]
          [  7  8 ]  +  [  8  7 ]  =  [ 15  15 ]
          [  1  3 ]     [ −4  0 ]     [ −3   3 ].

It is the case in this example that (𝐴 + 𝐡)ᵀ = 𝐴ᵀ + 𝐡ᵀ. Had we wished to, we could also have shown that (𝐴 − 𝐡)ᵀ = 𝐴ᵀ − 𝐡ᵀ. These two results are not accidental and can be summarized by the following theorem.

Theorem: Matrix Transpose and Addition/Subtraction

If 𝐴 and 𝐡 are two matrices of the same order, then (𝐴 ± 𝐡)ᵀ = 𝐴ᵀ ± 𝐡ᵀ.

We would say that matrix transposition is “distributive” with respect to addition and subtraction.
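A sketch of the addition case of this theorem, using the same matrices 𝐴 and 𝐡 as in the demonstration above (the helpers are our own illustrations):

```python
def transpose(M):
    return [list(col) for col in zip(*M)]

def add(M, N):
    return [[x + y for x, y in zip(rm, rn)] for rm, rn in zip(M, N)]

A = [[-3, 7, 1],
     [9, 8, 3]]
B = [[4, 8, -4],
     [7, 7, 0]]

# Transposition distributes over addition.
assert transpose(add(A, B)) == add(transpose(A), transpose(B))
print(transpose(add(A, B)))  # [[1, 16], [15, 15], [-3, 3]]
```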

There are many other key properties of matrix transposition that are defined in reference to other concepts in linear algebra, such as the determinant, matrix multiplication, and matrix inverses. When working in linear algebra, knowledge of the matrix transpose is therefore a vital and robust part of any mathematician’s tool kit.

Key Points

  1. For a matrix defined as 𝐴 = (π‘Žα΅’β±Ό), the transpose matrix is defined as 𝐴ᵀ = (π‘Žβ±Όα΅’).
  2. In practical terms, the matrix transpose is usually thought of as either (a) flipping along the diagonal entries or (b) “switching” the rows for columns.
  3. A double application of the matrix transpose achieves no change overall. In other words, (𝐴ᵀ)ᵀ = 𝐴.
  4. If the order of 𝐴 is π‘š × 𝑛, then the order of 𝐴ᵀ is 𝑛 × π‘š.
  5. The matrix transpose is “distributive” with respect to matrix addition and subtraction, as summarized by the formula (𝐴 ± 𝐡)ᵀ = 𝐴ᵀ ± 𝐡ᵀ.
