
from qutip import *
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 100

Lecture 8: The Lindblad Master Equation



In this interactive notebook, we will introduce a tool for calculating the average evolution of a system while it is being continuously measured. In particular, it will allow us to calculate the resulting density matrix for the case that we do not know the measurement outcome! Of course, if we start with a pure state, then as these measurements occur, we will lose information about our state, and its purity will decrease (with some exceptions).


8.1. Introduction

We will not derive the Lindblad equation here, but some intuition will be provided.

When we measure some system, label it $S$, we let it interact with a measurement apparatus, label it $A$, such that we can extract information about the state of $S$ via $A$. Once we have extracted this information from $A$ with sufficient accuracy, we collapse the state of $S$.

The interaction between the system and apparatus leads to entanglement between the two, and to properly describe them, we need a joint density matrix $\rho_{SA}$ that describes the combined system $S + A$.

Let's assume we measure $A$ but lose track of the outcome.

  • How could we then write down an accurate density matrix for the system $S$?
  • Put more directly, how can we obtain $\rho_S$ from $\rho_{SA}$?

One way, mentioned last time,

  1. is to average over all possible outcomes, including collapses at all possible times, which is the Monte Carlo Wave Function (MCWF) method.

  2. Another method is to take the "partial trace" of the joint density matrix to obtain an effective description of $S$ as an average response to the (unknown) measurement. This is denoted as follows:

$$\rho_S = \mathrm{Tr}_A \left[ \rho_{SA} \right]$$

and, from this, the Lindblad equation can be derived (with some skipped steps):

$$\dot\rho = -\frac{i}{\hbar}\left[H, \rho\right] + \sum_n \left( c_n \rho c_n^\dagger - \frac{1}{2}\left\{ c_n^\dagger c_n, \rho \right\} \right)$$
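As a quick sanity check of the partial-trace idea, here is a minimal QuTiP sketch (the entangled system-apparatus state is an arbitrary illustration, not part of the derivation):

```python
from qutip import basis, tensor

# Entangled system-apparatus state: (|0>_S |0>_A + |1>_S |1>_A)/sqrt(2)
psi_SA = (tensor(basis(2, 0), basis(2, 0)) +
          tensor(basis(2, 1), basis(2, 1))).unit()
rho_SA = psi_SA * psi_SA.dag()   # joint density matrix (pure)

# Trace out the apparatus (subsystem index 1) to get rho_S
rho_S = rho_SA.ptrace(0)

print(rho_S)                     # maximally mixed: diag(0.5, 0.5)
print((rho_SA * rho_SA).tr())    # purity of the joint state: 1 (pure)
print((rho_S * rho_S).tr())      # purity of the reduced state: 0.5 (mixed)
```

Even though the joint state is pure, the reduced description of $S$ alone is mixed, which is exactly the loss of information the Lindblad equation tracks.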

  1. The first part should be familiar -- it is just the time-dependent Schrödinger equation (in density-matrix form), which describes the coherent evolution in the absence of measurement.
  2. The second term is known as the Lindbladian (or dissipator), and appears to be more complicated.

    • $c_n$ is a collapse operator
    • These operators have units of $\sqrt{\text{rate}}$, i.e. $1/\sqrt{\text{time}}$
    • After a "quantum jump", the state of $S$ collapses onto an eigenstate of $c_n$; these eigenstates are related to our observables

For a concrete example, we may measure the position $x$ of a harmonic oscillator with an average rate of $\gamma$ "measurements" per second. Since we are measuring only one quantity, $x$, the single collapse operator takes the form

$$c = \sqrt{\gamma}\, \hat{x}$$
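In code, this collapse operator could be built as follows (a sketch; the truncation `N` and rate `gamma` are arbitrary choices):

```python
import numpy as np
from qutip import destroy

N = 20          # truncated Fock-space dimension (arbitrary choice)
gamma = 0.5     # average measurement rate, units of 1/s (arbitrary choice)

a = destroy(N)                   # HO annihilation operator
x = (a + a.dag()) / np.sqrt(2)   # dimensionless position quadrature

c_x = np.sqrt(gamma) * x         # collapse operator for position measurement
print(c_x.isherm)                # x is Hermitian, so c_x is too
```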

We could also consider a spin-1/2 particle in a magnetic field oriented in the $z$-direction, giving the Hamiltonian

$$H = \frac{\hbar \omega}{2} \sigma_z$$

We will measure photons emitted by a spin in the state $|\uparrow\rangle$, which is similar to how magnetic resonance imaging (MRI) works. Since the up state can only decay to the ground state, and the ground state can't decay any further, we can use the spin-lowering operator $\sigma_-$ as our collapse operator:

$$c = \sqrt{\gamma}\, \sigma_-$$

Analogy with coupled LC circuits

If "guessing" the collapse operator in this way is unsatisfying, then we can try to be more detailed in the following way.

We can imagine two LC circuits that are weakly coupled, where one of them models a spin oscillating in a magnetic field, and the other represents the bath into which photons are emitted by the spin when it decays. The energy of the system is

$$H = \hbar\omega_a\, a^\dagger a + \hbar\omega_b\, b^\dagger b + \hbar g\, (a^\dagger + a)(b^\dagger + b)$$

where the last term represents the weak coupling via a mutual inductance.

The operators $a, a^\dagger$ correspond to the spin system, and the operators $b, b^\dagger$ to the photon system. Expanding the coupling term gives four terms. Analyzing them:

  • $a^\dagger b^\dagger$ creates excitations in both systems and violates energy conservation (technically, this term is "fast rotating" and averages out in certain approximations)

  • $a^\dagger b$ does nothing if we are in a dark room, i.e. we assume the photon bath begins empty (in its ground state)

  • $a b$ has the same issue as the first term, so we neglect it as well

  • $a b^\dagger$ represents the transfer of 1 quantum of energy (a photon) from the spin to the bath (decay), so this is the process we are interested in

This picture also gives us a method of estimating the rate $\gamma$ of the collapse operator from the strength of the mutual-inductance coupling. (For reference, a decay timescale of 8 hours has been observed in Nature Materials 17, 313 (2018), but spins typically decay much more quickly.)
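The "dark room" argument above can be verified directly with QuTiP tensor-product operators (a minimal sketch; two-level truncations for both modes are an assumption made here for brevity):

```python
from qutip import basis, destroy, qeye, tensor

N = 2                                # two levels per mode suffice for this check
a = tensor(destroy(N), qeye(N))      # "spin" mode operator
b = tensor(qeye(N), destroy(N))      # "photon bath" mode operator

# Spin excited, bath empty (the "dark room"): |1>_a |0>_b
psi = tensor(basis(N, 1), basis(N, 0))

print((a.dag() * b * psi).norm())    # absorption term: nothing to absorb -> 0
print((a * b.dag() * psi).norm())    # decay term: moves the quantum to the bath
```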

Returning to our problem, we assume the spin starts out in $|\uparrow\rangle$,

  1. so we can write down $\rho(0) = |\uparrow\rangle\langle\uparrow|$ and $c = \sqrt{\gamma}\,\sigma_-$ (dropping the "hat" for brevity)
  2. We can then compute all of the terms needed to form the Lindbladian: $c \rho c^\dagger$ and $\frac{1}{2}\{c^\dagger c, \rho\}$
  3. Which allows us to form a matrix of differential equations to solve for the time evolution of $\rho$:
  4. Writing out the equations explicitly: $\dot\rho_{11} = -\gamma \rho_{11}$, $\dot\rho_{22} = \gamma \rho_{11}$, $\dot\rho_{12} = -(i\omega + \gamma/2)\rho_{12}$, $\dot\rho_{21} = (i\omega - \gamma/2)\rho_{21}$
  5. The boundary conditions for these differential equations are that only $\rho_{11}(0) = 1$, with the other three elements equal to $0$ at $t = 0$.
  6. The solutions are $\rho_{11}(t) = e^{-\gamma t}$, $\rho_{22}(t) = 1 - e^{-\gamma t}$, $\rho_{12}(t) = \rho_{21}(t) = 0$

So the population in the excited state exponentially decays to the ground state. What if we had started with a superposition? Then, the off-diagonal terms (the coherences) of the initial density matrix would not be 0, and their differential equations would also have decaying solutions. Interestingly, the factor of $1/2$ in the Lindbladian's coherence terms would cause the initial coherences to decay with a rate $\gamma/2 = 1/(2T_1)$, where $T_1 = 1/\gamma$ is the average time for the initial population to decay. This means that even in the absence of a pure dephasing effect, energy relaxation alone sets an upper limit on the longevity of coherences in initial superposition states.
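This coherence-decay rate can be checked numerically (a sketch, assuming an arbitrary rate `gamma` and working in the rotating frame, i.e. with a zero Hamiltonian, so only the decay is visible):

```python
import numpy as np
from qutip import basis, mesolve, sigmam, sigmaz

gamma = 0.2                                   # energy-relaxation rate (arbitrary)
H = 0 * sigmaz()                              # rotating frame: no coherent evolution
psi0 = (basis(2, 0) + basis(2, 1)).unit()     # equal superposition

t = np.linspace(0, 10, 101)
result = mesolve(H, psi0, t, [np.sqrt(gamma) * sigmam()])

# Extract the coherence rho_01 and compare it with the analytic envelope,
# which decays at gamma/2 -- half the population-decay rate
rho_01 = np.array([rho.full()[0, 1] for rho in result.states])
print(np.allclose(np.abs(rho_01), 0.5 * np.exp(-gamma * t / 2), atol=1e-4))
```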

8.2. Decay of a spin with the Lindblad master equation

Now, we will use QuTiP to solve the problem discussed above: the decay of a spin due to the emission of a photon. Activate the interactive mode for more insight!

Setting up the problem

Initial state

We will first create the initial state $|\uparrow\rangle$ in the 2-dimensional Hilbert space.

Note that basis(N,0) is completely identical to the function fock(N,0) we used earlier. We use this one here just because it is (maybe) strange to call the levels of a spin "Fock" states.

In the last line, we will print out the state we created to make sure it is what we think it is.

N = 2
psi0 = basis(N,0)
psi0 = psi0.unit()
print(psi0)
Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket
Qobj data =
[[1.]
 [0.]]
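We can quickly confirm that the two functions agree:

```python
from qutip import basis, fock

# basis(N, n) and fock(N, n) construct the same ket
print(basis(2, 0) == fock(2, 0))
```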

Pauli matrices

Now, we need an operator with the following matrix for our "collapse" operator, which is the spin-lowering operator:

$$\sigma_- = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$$

This is identical, as it turns out, to the "creation" operator for the harmonic oscillator (HO). (This is because spin-up has higher energy but occupies the first row of the basis, whereas for the HO the first row is the ground state.)
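We can check this correspondence directly:

```python
from qutip import create, sigmam

# The spin-lowering operator has the same matrix as the HO creation
# operator truncated to 2 levels
print(sigmam() == create(2))
```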

But, again, it would be confusing to write code for a spin that uses HO operators. Fortunately, QuTiP has built-in functions to generate the Pauli matrices:

http://qutip.org/docs/latest/apidoc/functions.html#qutip.operators.sigmam

We will use sigmam() to make the lowering operator:

sig_m = sigmam()
print(sig_m)
Quantum object: dims = [[2], [2]], shape = (2, 2), type = oper, isherm = False
Qobj data =
[[0. 0.]
 [1. 0.]]

And, we can double-check that it does what we think it should:

print(sig_m*psi0)
Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket
Qobj data =
[[0.]
 [1.]]

Good, this is indeed what we expect!

Hamiltonian

Now, we need the Hamiltonian. For the Hamiltonian, we will pick just the $\sigma_z$ operator:

H = 0.5 * sigmaz()
print(H)
Quantum object: dims = [[2], [2]], shape = (2, 2), type = oper, isherm = True
Qobj data =
[[ 0.5  0. ]
 [ 0.  -0.5]]

This implicitly means that we are choosing a Larmor frequency:

$$\omega = \frac{1}{\hbar}$$

This may seem mega-fast ($\sim 10^{34}$ Hz), and it is! It turns out, however, that when QuTiP solves time-dependent Hamiltonians, it rescales the time variables in the problem such that $\hbar = 1$. Our choice of $H$ leads to an angular Larmor frequency of:

$$\omega = 1$$

in "QuTiP" variables. This is not a big deal: it just changes the rescaling of the time axis of any plots. But one should be careful to remember what the time axis means when performing a simulation.

Even more convenient for plotting, though, is to choose $\omega = 2\pi$ so that the oscillations we will see, for example in $\langle \sigma_x \rangle$ for a superposition state, will have a period of 1 second (1 time unit):

w = 1 * 2 * np.pi 
H = 0.5 * w * sigmaz()

Solving the master equation

  • We now have everything we need to solve the (Lindblad) master equation!
  • Look up, just in case: QuTiP Master Equation Solver Documentation

  • To do this, we need to use the qutip function mesolve(): QuTiP mesolve() Documentation

  • For QuTiP to solve the master equation, we need to give it a few things:

    • H: the Hamiltonian
    • psi0: the quantum state at (density matrix or wave function)
    • t: A list of times for which you would like the solution
    • c_ops: A python "list" of collapse operators

For our collapse operator, we will need to scale the $\sigma_-$ operator by $\sqrt{\gamma}$, where $\gamma$ is the average rate that the collapse is applied (the average number of collapses that occur per second):

$$c = \sqrt{\gamma}\, \sigma_-$$

Like $\omega$, the rate $\gamma$ has units equivalent to 1 / second. If we choose $\gamma = 0.1\,\omega$, then with our choice of $\omega = 2\pi$, how many oscillations will we see in $\langle \sigma_x \rangle$, for example, if we started with a superposition state? For an envelope $e^{-\gamma t / 2}$, we would then observe about 3 oscillations by the time the amplitude decays by $1/e$.
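To test this estimate, we can jump slightly ahead and run the solver for a superposition initial state (a sketch; the superposition start and the use of e_ops are choices made here for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt
from qutip import basis, mesolve, sigmam, sigmax, sigmaz

w = 1 * 2 * np.pi                 # Larmor frequency: one oscillation per time unit
gamma = 0.1 * w                   # collapse rate
H = 0.5 * w * sigmaz()

# Equal superposition, so <sigma_x> oscillates at the Larmor frequency
psi0 = (basis(2, 0) + basis(2, 1)).unit()

t = np.linspace(0, 10, 401)
result = mesolve(H, psi0, t, [np.sqrt(gamma) * sigmam()], e_ops=[sigmax()])

plt.plot(t, result.expect[0])
plt.xlabel("t (QuTiP 'seconds')")
plt.ylabel(r"$\langle \sigma_x \rangle$")
plt.show()
```

Counting the visible oscillations before the envelope dies off should give roughly the number estimated above.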

Since we are performing only one measurement, all we have to do is put this operator in square brackets to create our c_ops python list:

gamma = 0.1 * w
c_ops = [np.sqrt(gamma) * sig_m]

Now, we need to create the time array at which we want QuTiP to give us the solution of the master equation:

t = np.linspace(0, 10, 401) 

And then we can just call mesolve():

result = mesolve(H, psi0, t, c_ops)
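As an aside, mesolve() can also compute expectation values for us directly if we pass the optional e_ops argument; result.expect then holds the arrays. A self-contained sketch repeating the setup above:

```python
import numpy as np
from qutip import basis, mesolve, projection, sigmam, sigmaz

w = 1 * 2 * np.pi
gamma = 0.1 * w
H = 0.5 * w * sigmaz()
psi0 = basis(2, 0)
c_ops = [np.sqrt(gamma) * sigmam()]
t = np.linspace(0, 10, 401)

# With e_ops given, result.expect holds the expectation values directly
result = mesolve(H, psi0, t, c_ops,
                 e_ops=[projection(2, 0, 0), projection(2, 1, 1)])
rho11, rho22 = result.expect      # excited- and ground-state populations vs time
```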

Finally, we will use the projection operator to extract values of the density matrix to plot:

plt.plot(t, np.real(expect(result.states, projection(2, 0, 0))), label=r'$\rho_{11}$')
plt.plot(t, np.real(expect(result.states, projection(2, 1, 1))), label=r'$\rho_{22}$')
plt.ylabel(r"$\rho_{ij}$")
plt.xlabel("t (QuTiP 'seconds')")
plt.legend()
plt.show()
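Finally, recall from the introduction that losing track of measurement outcomes can reduce the purity of the state. We can watch this happen by starting from a superposition (a sketch; purity is computed here as $\mathrm{Tr}(\rho^2)$):

```python
import numpy as np
import matplotlib.pyplot as plt
from qutip import basis, mesolve, sigmam, sigmaz

w = 1 * 2 * np.pi
gamma = 0.1 * w
H = 0.5 * w * sigmaz()
psi0 = (basis(2, 0) + basis(2, 1)).unit()   # superposition, so purity can change

t = np.linspace(0, 10, 401)
result = mesolve(H, psi0, t, [np.sqrt(gamma) * sigmam()])

# Purity Tr(rho^2): 1 for a pure state, 1/2 for a maximally mixed qubit
purity = np.real([(rho * rho).tr() for rho in result.states])
plt.plot(t, purity)
plt.xlabel("t (QuTiP 'seconds')")
plt.ylabel(r"Tr$(\rho^2)$")
plt.show()
```

For this pure decay channel the purity dips below 1 and then climbs back toward 1, since the spin eventually settles into the (pure) ground state: one of the "exceptions" where a continuously measured state ends up pure again.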
