3.3. Hilbert spaces and tensor calculus

We’ll need the mathematics of Hilbert spaces, as that’s where quantum state vectors live. However, as mentioned earlier, it is assumed that the reader has taken at least an introductory course in quantum theory, so this section will be brief. For a more detailed review, any introductory text on quantum mechanics will do.

A Hilbert space is a vector space equipped with an inner product and complete with respect to the norm that inner product induces; in finite dimensions, completeness is automatic. In particular we are concerned with finite-dimensional complex Hilbert spaces. We will try to stick exclusively to Dirac’s bra-ket notation, where \(|x\rangle,|y\rangle,|z\rangle\), etc. are vectors in a complex Hilbert space \(\mathcal H\). The length of a vector in this space is the usual Euclidean norm, also called the 2-norm, given by \(\||\psi\rangle\|=\sqrt{\langle\psi|\psi\rangle}\), and we will generally assume and insist that state vectors are normalized to unit length.
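As a concrete illustration (a minimal NumPy sketch with arbitrary example numbers, not anything from the text), here is how the inner product, the 2-norm, and normalization look in code:

```python
import numpy as np

# A vector in a 2-dimensional complex Hilbert space (not yet normalized).
psi = np.array([1 + 1j, 2 - 1j])

# The inner product <phi|psi>: np.vdot conjugates its first argument,
# which matches the bra-ket convention of conjugating the bra.
norm_squared = np.vdot(psi, psi).real  # <psi|psi> is real and non-negative
norm = np.sqrt(norm_squared)           # the 2-norm of |psi>

psi = psi / norm                       # normalize to unit length
print(np.vdot(psi, psi).real)          # 1.0
```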

A Hilbert space has a number of basis vectors (or states) equal to the dimension of the space. For example, a 3-dimensional complex Hilbert space may have basis states \(|x_0\rangle,|x_1\rangle,|x_2\rangle\). (I will try to always zero-index my states and vectors, that is, start at subscript 0 rather than 1, because that is the usual convention in software programming, which is where this series will eventually lead.) A general state is then a linear combination of the basis states, i.e.$$|\psi\rangle=\sum_{i=0}^{d-1}\alpha_i|x_i\rangle,$$where \(\alpha_i\in\mathbb C\) and, for a normalized state, \(\sum_i|\alpha_i|^2=1\).

A \(d\)-dimensional Hilbert space is also equipped with the concept of operators, which can usually be represented by square matrices of size \(d\times d\). In particular for quantum mechanics we are interested in normal and unitary operators. Unitary operators \(U\) and normal operators \(N\) are respectively defined to be those which satisfy$$U^*U=UU^*=I,\quad N^*N=NN^*,$$ where \(I\) is the identity operator of appropriate size and the asterisk denotes the adjoint (complex conjugate transpose). Unitary operators are a subclass of normal operators. We will also be using projection operators, defined by$$P=P^2=P^*.$$We will sometimes need to use the matrix representation of operators, especially when solving these problems in software. Methods for finding this matrix representation, as well as various other theorems and results related to Hilbert spaces and operators, will be introduced as needed. One of the most important properties of unitary operators for our purposes is that their inverse is simply their adjoint, \(U^{-1}=U^*\), and we’ll want to keep this in mind going forward.
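To make these defining relations concrete, here is a short NumPy check (an illustrative sketch; the Hadamard matrix is used only because it is a convenient, well-known unitary):

```python
import numpy as np

# The Hadamard matrix: a convenient example of a unitary operator.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

# Unitarity: U* U = U U* = I  (.conj().T is the adjoint)
print(np.allclose(H.conj().T @ H, np.eye(2)))     # True
print(np.allclose(H @ H.conj().T, np.eye(2)))     # True

# The inverse of a unitary is its adjoint: U^{-1} = U*
print(np.allclose(np.linalg.inv(H), H.conj().T))  # True

# A projection operator: P = P^2 = P*
P = np.array([[1, 0],
              [0, 0]], dtype=complex)
print(np.allclose(P, P @ P) and np.allclose(P, P.conj().T))  # True
```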

To represent composite quantum systems of multiple potentially correlated subsystems, we use the Kronecker product (very often, not quite correctly, called the tensor product) in the standard basis, which in our field is usually referred to as the computational basis. Taking \(A\) as an \(m\times n\) matrix and \(B\) as a matrix of any size, the most general definition of the Kronecker product that we will need is given by$$
A\otimes B=\begin{bmatrix}
A_{0,0}B & A_{0,1}B & \dots & A_{0,n-1}B \\
A_{1,0}B & A_{1,1}B & \dots & A_{1,n-1}B \\
\vdots & \vdots & \ddots & \vdots \\
A_{m-1,0}B & A_{m-1,1}B & \dots & A_{m-1,n-1}B
\end{bmatrix},$$where each block \(A_{i,j}B\) is the scalar \(A_{i,j}\) multiplying a full copy of \(B\), and so has the same size as \(B\). I will list the properties of the Kronecker product as they become necessary. They can all be derived directly from the definition just stated.
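As a sanity check (a small NumPy sketch with arbitrary example matrices), NumPy’s `np.kron` follows exactly this block definition:

```python
import numpy as np

# Two small matrices to illustrate the Kronecker product (arbitrary values).
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 5],
              [6, 7]])

# np.kron scales a full copy of B by each entry of A.
AkB = np.kron(A, B)

# Build the same thing explicitly from the block definition to compare.
blocks = [[A[i, j] * B for j in range(A.shape[1])] for i in range(A.shape[0])]
explicit = np.block(blocks)

print(np.allclose(AkB, explicit))  # True
print(AkB.shape)                   # (4, 4): each entry of A became a 2x2 block
```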

The Kronecker product can also be used to combine Hilbert spaces into larger Hilbert spaces. The Hilbert space \(\mathcal H\otimes\mathcal G\) is defined as the space spanned by \(\{|h_i\rangle\otimes|g_j\rangle\}\), where \(\{|h_i\rangle\}\) are the basis vectors of \(\mathcal H\) and \(\{|g_j\rangle\}\) are the basis vectors of \(\mathcal G\); its dimension is therefore the product of the two dimensions. The definition above has given us all we need to calculate these basis elements: a column vector is just a matrix with one column, so we simply take the Kronecker product of each pair of basis states according to that definition.
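For instance (again a minimal NumPy sketch), combining two copies of a 2-dimensional space yields a 4-dimensional space whose basis states are the Kronecker products of each pair of basis vectors:

```python
import numpy as np

# Computational basis of a 2-dimensional Hilbert space.
x0 = np.array([1, 0], dtype=complex)
x1 = np.array([0, 1], dtype=complex)

# Basis of the 4-dimensional product space: one Kronecker product per pair.
basis = [np.kron(a, b) for a in (x0, x1) for b in (x0, x1)]
for v in basis:
    print(v.real)
# [1. 0. 0. 0.]
# [0. 1. 0. 0.]
# [0. 0. 1. 0.]
# [0. 0. 0. 1.]
```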

We won’t always be using the matrix representation of operators, so we need a basis-free definition of the Kronecker product to work with. The Kronecker product of two operators \(A\otimes B\) is defined as the operator with the following action:$$A\otimes B:|x\rangle\otimes|y\rangle\mapsto A|x\rangle\otimes B|y\rangle.$$The salient point is that the operator on the left acts only on the state or space on the left, and likewise on the right. It is not generally the case that the rightmost operator will even be defined in a way that allows it to act on the leftmost space, and vice versa. For Kronecker products of more than two spaces or states, the \(n\)th operator acts only on the \(n\)th state or space. Operators that are to act on only one part of a product space may be written as $$A\otimes I\otimes I\otimes\dots,\quad I\otimes A\otimes I\otimes \dots,\quad\textrm{etc.}$$
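The following sketch (random example operators and states, nothing specific to the text) verifies this action numerically, including the \(A\otimes I\) pattern for an operator acting on only one subsystem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary operators and normalized states on two 2-dimensional spaces.
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
x = rng.standard_normal(2) + 1j * rng.standard_normal(2)
y = rng.standard_normal(2) + 1j * rng.standard_normal(2)
x, y = x / np.linalg.norm(x), y / np.linalg.norm(y)

# (A ⊗ B)(|x> ⊗ |y>) = A|x> ⊗ B|y>
lhs = np.kron(A, B) @ np.kron(x, y)
rhs = np.kron(A @ x, B @ y)
print(np.allclose(lhs, rhs))  # True

# A ⊗ I acts only on the left subsystem; the right factor is untouched.
lhs2 = np.kron(A, np.eye(2)) @ np.kron(x, y)
rhs2 = np.kron(A @ x, y)
print(np.allclose(lhs2, rhs2))  # True
```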