As well as the normal matrix product, we’ll also need another type of matrix multiplication, referred to as the Kronecker product. (The Kronecker product is sometimes referred to as the tensor product, not quite correctly; there is a technical difference which is not important for our purposes. When I first learned about the Kronecker product we generally called it the tensor product, so I might slip and use that word sometimes.)

The normal matrix product and the Kronecker product produce fundamentally different things. The normal product is simply composition of operators or transformations: if I multiply matrices \(A\) and \(B\) and then apply the result to a vector, it’s just as though I applied one and then the other. The Kronecker product, on the other hand, applies different operators to different spaces or subspaces. If I have an operator \(A\) that applies to the \(x\)- and \(y\)-axes, and an operator \(B\) that applies to the \(z\)- and \(w\)-axes, then their Kronecker product \(C=A\otimes B\) applies to the full space spanned by \((x,y,z,w)\). However, the \((x,y)\) subspace is still transformed the same way it would have been if we had just applied \(A\), and the \((z,w)\) subspace is transformed as if we had only applied \(B\). What we’ve achieved is an operator that applies to the full space, while the individual subspaces are affected in the same way as before. The Kronecker product allows us to produce higher-dimensional operators, while the standard product produces composite operators.

For our purposes we will define the Kronecker product by $$A\otimes B = \begin{bmatrix}A_{00}B & A_{01}B & \dots & A_{0n}B \\ A_{10}B & A_{11}B & \dots & A_{1n}B\\ \vdots & \vdots & \ddots &\vdots\\ A_{m0}B & A_{m1}B & \dots & A_{mn}B\end{bmatrix},$$ where \(A\) is an \(m\times n\) matrix and \(B\) has any size. This can be interpreted as literally writing a copy of matrix \(B\) for each element of \(A\), and multiplying that copy by the corresponding element of \(A\). Obviously this tends to produce very large matrices, which is why we will prefer to do actual calculations with a computer. It’s very easy in principle, but because we will often be dealing with high-dimensional spaces the results are impractical to write down by hand. It also turns out the code features a slightly tricky looping structure, so it’s not as easy to write as some of the other functions we’ve written. When I first wrote it, I shamelessly pilfered it from Rosetta Code, and then modified it to produce a list comprehension version. (Assuming nobody has taken it down, the list comprehension version you find on Rosetta Code was submitted by me.)
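To see the definition in action on the smallest interesting case, here is a pair of \(2\times 2\) matrices worked out by hand: each element of the first matrix scales a full copy of the second,

$$\begin{bmatrix}1 & 2\\ 3 & 4\end{bmatrix}\otimes\begin{bmatrix}0 & 1\\ 1 & 0\end{bmatrix} = \begin{bmatrix}1\cdot\begin{bmatrix}0 & 1\\ 1 & 0\end{bmatrix} & 2\cdot\begin{bmatrix}0 & 1\\ 1 & 0\end{bmatrix}\\ 3\cdot\begin{bmatrix}0 & 1\\ 1 & 0\end{bmatrix} & 4\cdot\begin{bmatrix}0 & 1\\ 1 & 0\end{bmatrix}\end{bmatrix} = \begin{bmatrix}0 & 1 & 0 & 2\\ 1 & 0 & 2 & 0\\ 0 & 3 & 0 & 4\\ 3 & 0 & 4 & 0\end{bmatrix}.$$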

I won’t bore you with all of those details. Instead I’ll just write the function, and then send you here if you want to break it down a little further. Comparing the normal `for` loop version and the list comprehension version will make it more clear than I could in any reasonable amount of text.

```python
def kronecker(matrix1, matrix2):
    count = range(len(matrix2))
    return [[num1 * num2 for num1 in elem1 for num2 in matrix2[row]]
            for elem1 in matrix1 for row in count]
```
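For comparison, here’s the same logic sketched with explicit `for` loops (the name `kronecker_loops` is just mine; it behaves identically to the comprehension above):

```python
def kronecker_loops(matrix1, matrix2):
    # Equivalent to the list comprehension version, written out longhand
    result = []
    for row_a in matrix1:        # each row of A...
        for row_b in matrix2:    # ...is paired with each row of B
            new_row = []
            for a in row_a:      # each element of A's row
                for b in row_b:  # scales a full row of B
                    new_row.append(a * b)
            result.append(new_row)
    return result
```

Reading the loops from the outside in, and then reading the comprehension left to right, shows they traverse the matrices in exactly the same order.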

This is definitely hard to parse and highlights nicely why list comprehensions aren’t *always* the best choice. I may return here later to update this page with a full explanation.

The reason we want the Kronecker product is twofold:

**We can use it to combine qubit states**: When we eventually talk about qubits, we’ll learn that they each occupy their own individual 2-dimensional space. If we want two or more qubits, we need to combine these spaces together, and we do that with the Kronecker product. If we take the Kronecker product of two qubit states, which are represented as 2-dimensional complex vectors, we end up with a 4-dimensional state vector which must occupy a 4-dimensional space.

**Most operators are only defined for a single qubit**: To apply them to higher numbers of qubits we need to create their higher-dimensional analogues, and we do that through the Kronecker product.
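A quick sketch of the first point, using the `kronecker` function from above (repeated here so the snippet runs on its own) to combine two single-qubit states into a two-qubit state:

```python
def kronecker(matrix1, matrix2):
    # Same function as defined earlier on this page
    count = range(len(matrix2))
    return [[num1 * num2 for num1 in elem1 for num2 in matrix2[row]]
            for elem1 in matrix1 for row in count]

# Single-qubit basis states as 2-dimensional column vectors
ket0 = [[1], [0]]  # |0>
ket1 = [[0], [1]]  # |1>

# Combining them yields a 4-dimensional column vector
print(kronecker(ket0, ket1))  # → [[0], [1], [0], [0]]
```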

For example, if we have two qubits called \(|\psi\rangle\) and \(|\phi\rangle\) then their full state is denoted \(|\psi\rangle\otimes|\phi\rangle\). If we want to apply operator \(A\) to the first qubit and \(B\) to the second, then we write \(A\otimes B\) to form the full operator and apply it by writing $$(A\otimes B)(|\psi\rangle\otimes|\phi\rangle)\equiv A|\psi\rangle\otimes B|\phi\rangle.$$ I’ll clarify both of these points further when we actually start working with qubits.
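We can check that identity numerically. This sketch reuses the `kronecker` function from above, plus a small `matmul` helper for the ordinary matrix product (a hypothetical name — we haven’t added a matrix product to our repertoire yet):

```python
def kronecker(matrix1, matrix2):
    # Same function as defined earlier on this page
    count = range(len(matrix2))
    return [[num1 * num2 for num1 in elem1 for num2 in matrix2[row]]
            for elem1 in matrix1 for row in count]

def matmul(m1, m2):
    # Hypothetical helper: ordinary (composition) matrix product
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*m2)]
            for row in m1]

A = [[0, 1], [1, 0]]   # bit-flip operator on the first qubit
B = [[1, 0], [0, 1]]   # identity on the second qubit
psi = [[1], [0]]       # |psi> = |0>
phi = [[0], [1]]       # |phi> = |1>

lhs = matmul(kronecker(A, B), kronecker(psi, phi))  # (A ⊗ B)(|psi> ⊗ |phi>)
rhs = kronecker(matmul(A, psi), matmul(B, phi))     # A|psi> ⊗ B|phi>
assert lhs == rhs  # both sides agree
```

The two sides match: applying the combined 4×4 operator to the combined state gives the same vector as applying each operator to its own qubit and then combining.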

Let’s take a break from the more abstract parts of the mathematics. Next time we’ll talk briefly about the first thing we’ll need to build a quantum computer: qubits.

## Summary

So far our functional repertoire is:

`rows(matrix)`

: count the number of rows in a matrix.

`columns(matrix)`

: count the number of columns in a matrix.

`add(matrix1, matrix2)`

: compute the element-wise sum of two matrices.

`subtract(matrix1, matrix2)`

: compute the element-wise difference of two matrices.

`zeroes(num_rows, num_cols=-1)`

: generate a matrix of the given size filled with zeroes.

`ones(num_rows, num_cols=-1)`

: generate a matrix of the given size filled with ones.

`eye(size)`

: generate a square identity matrix of the given size.

`scalar_matrix(scalar, matrix)`

: compute the product of a scalar value and a matrix.

`get_row(matrix, row_num)`

: grab a particular matrix row and return it as a valid matrix.

`get_column(matrix, col_num)`

: grab a particular matrix column and return it as a valid matrix.

`normalize(vector)`

: return a vector scaled to a norm of 1.

`transpose(matrix)`

: return the transpose of a matrix or vector.

`conjugate_matrix(matrix)`

: compute the element-wise conjugate of a matrix.

`adjoint(matrix)`

: compute the conjugate transpose of a matrix.

`is_unitary(matrix)`

: check a matrix for unitarity.

`kronecker(matrix1, matrix2)`

: compute the Kronecker product of two matrices.

Previous: Normalization and unitary matrices

Next: Qubits and quantum circuits