$$S_F(\omega)=\frac{1}{2\pi}\lim_{\varepsilon\to+0}\int dE\,\rho\,W(E)\int d\tilde E\,\rho'\,F_{nn'}^{2}\left[\frac{\hbar^{2}\varepsilon}{\left(\tilde E-\hbar\omega\right)^{2}+\left(\hbar\varepsilon\right)^{2}}+\frac{\hbar^{2}\varepsilon}{\left(\tilde E+\hbar\omega\right)^{2}+\left(\hbar\varepsilon\right)^{2}}\right],\tag{7.121}$$

where Ẽ ≡ E_n – E_{n'}.
~
38 An alternative popular measure of the spectral density of a process F( t) is S
2
F() F f / d = 4 SF(), where
= /2 is the “cyclic” frequency (measured in Hz).
Chapter 7
Page 25 of 50
QM: Quantum Mechanics
Due to the smallness of the parameter ε (which should be much smaller than all genuine energies of the problem, including k_BT, ℏω, E_n, and E_{n'}), each of the internal integrals in Eq. (121) is dominated by an infinitesimal vicinity of one point, Ẽ = ±ℏω. In these vicinities, the state densities, the matrix elements, and the Gibbs probabilities do not change considerably, and may be taken out of the integral, which may then be worked out explicitly:39
$$S_F(\omega)=\frac{1}{2\pi}\int dE\left[\left(\rho\rho'WF^{2}\right)_{+}+\left(\rho\rho'WF^{2}\right)_{-}\right]\lim_{\varepsilon\to+0}\int_{-\infty}^{+\infty}\frac{\hbar^{2}\varepsilon\,d\tilde E}{\tilde E^{2}+\left(\hbar\varepsilon\right)^{2}}=\frac{\hbar}{2}\int\left[\left(\rho\rho'WF^{2}\right)_{+}+\left(\rho\rho'WF^{2}\right)_{-}\right]dE,\tag{7.122}$$
where the indices ± mark the functions' values at the special points Ẽ = ±ℏω, i.e. at E_{n'} = E_n ∓ ℏω. The
physics of these points becomes simple if we interpret the state n, for which the equilibrium Gibbs distribution function equals W_n, as the initial state of the environment, and n' as its final state. Then the top-sign point corresponds to E_{n'} = E_n – ℏω, i.e. to the result of emission of one energy quantum ℏω of the "observation" frequency ω by the environment to the system s of our interest, while the bottom-sign point, E_{n'} = E_n + ℏω, corresponds to the absorption of such a quantum by the environment. As Eq. (122) shows, both processes give similar, positive contributions to the force fluctuations.
The situation is different for the Fourier image of the response function G(τ),40

$$\chi(\omega)\equiv\int_0^\infty G(\tau)\,e^{i\omega\tau}\,d\tau,\tag{7.123}$$
that is usually called either the generalized susceptibility or the response function – in our case, of the environment. Its physical meaning is that, according to Eq. (107), the complex function χ(ω) ≡ χ'(ω) + iχ''(ω) relates the Fourier amplitudes of the generalized coordinate and the generalized force:41
$$\hat F_\omega=\chi(\omega)\,\hat x_\omega.\tag{7.124}$$
The physics of its imaginary part χ''(ω) is especially clear. Indeed, if x represents a sinusoidal classical process, say
$$x(t)=x_0\cos\omega t=\frac{x_0}{2}e^{-i\omega t}+\frac{x_0}{2}e^{i\omega t},\qquad\text{i.e. }x_{\pm\omega}=\frac{x_0}{2},\tag{7.125}$$
39 Using, e.g., MA Eq. (6.5a). (The imaginary parts of the integrals vanish, because the integration in infinite limits may be always re-centered to the finite points Ẽ = ±ℏω.) A math-enlightened reader may have noticed that the integrals might be taken without the introduction of the small ε, using the Cauchy theorem – see MA Eq. (15.1).
40 The integration in Eq. (123) may be extended to the whole time axis, –∞ < τ < +∞, if we complement the definition (107) of the function G(τ) for τ > 0 with its definition as G(τ) ≡ 0 for τ < 0, in correspondence with the causality principle.
41 In order to prove this relation, it is sufficient to plug the expression x̂_s = x̂_ω e^{–iωt}, or any sum of such exponents, into Eqs. (107) and then use the definition (123). This (simple) exercise is highly recommended to the reader.
then, in accordance with the correspondence principle, Eq. (124) should hold for the c-number complex amplitudes F_ω and x_ω, enabling us to calculate the time dependence of the force as

$$F(t)=F_\omega e^{-i\omega t}+F_{-\omega}e^{i\omega t}=\chi(\omega)\frac{x_0}{2}e^{-i\omega t}+\chi(-\omega)\frac{x_0}{2}e^{i\omega t}=\frac{x_0}{2}\left(\chi'+i\chi''\right)e^{-i\omega t}+\frac{x_0}{2}\left(\chi'-i\chi''\right)e^{i\omega t}=x_0\left[\chi'\cos\omega t+\chi''\sin\omega t\right].\tag{7.126}$$
We see that χ''(ω) weighs the force's part (frequently called quadrature) that is π/2-shifted from the coordinate x, i.e. is in phase with its velocity, and hence characterizes the time-averaged power flow from the system into its environment, i.e. the energy dissipation rate:42
$$\mathscr{P}=-\overline{F(t)\,\dot x(t)}=-\overline{x_0\left[\chi'\cos\omega t+\chi''\sin\omega t\right]\,x_0\omega\left(-\sin\omega t\right)}=\frac{x_0^{2}\,\omega}{2}\,\chi''.\tag{7.127}$$
Let us calculate this function from Eqs. (108) and (123), just as we have done for the spectral density of fluctuations:
$$\chi''(\omega)\equiv\operatorname{Im}\int_0^\infty G(\tau)\,e^{i\omega\tau}d\tau=\lim_{\varepsilon\to+0}\operatorname{Im}\left\{\frac{i}{\hbar}\sum_{n,n'}W_nF_{nn'}F_{n'n}\int_0^\infty\left[\exp\left\{i\frac{\tilde E\tau}{\hbar}\right\}-\text{c.c.}\right]e^{i\omega\tau-\varepsilon\tau}d\tau\right\}$$
$$=\lim_{\varepsilon\to+0}\operatorname{Im}\left\{i\sum_{n,n'}W_nF_{nn'}F_{n'n}\left[\frac{1}{\hbar\varepsilon-i\left(\tilde E+\hbar\omega\right)}-\frac{1}{\hbar\varepsilon+i\left(\tilde E-\hbar\omega\right)}\right]\right\}$$
$$=\sum_{n,n'}W_nF_{nn'}F_{n'n}\,\lim_{\varepsilon\to+0}\left[\frac{\hbar\varepsilon}{\left(\tilde E+\hbar\omega\right)^{2}+\left(\hbar\varepsilon\right)^{2}}-\frac{\hbar\varepsilon}{\left(\tilde E-\hbar\omega\right)^{2}+\left(\hbar\varepsilon\right)^{2}}\right].\tag{7.128}$$
Making the transfer (118) from the double sum to the double integral, and then the integration variable transfer (120), we get
$$\chi''(\omega)=\lim_{\varepsilon\to+0}\int dE\,\rho\,W(E)\int d\tilde E\,\rho'\,F_{nn'}^{2}\left[\frac{\hbar\varepsilon}{\left(\tilde E+\hbar\omega\right)^{2}+\left(\hbar\varepsilon\right)^{2}}-\frac{\hbar\varepsilon}{\left(\tilde E-\hbar\omega\right)^{2}+\left(\hbar\varepsilon\right)^{2}}\right].\tag{7.129}$$
Now using the same argument about the smallness of the parameter ε as above, we may take the state densities, the matrix elements of the force, and the Gibbs probabilities out of the integrals, and work out the remaining integrals, getting a result very similar to Eq. (122):
$$\chi''(\omega)=\pi\int\left[\left(\rho\rho'WF^{2}\right)_{-}-\left(\rho\rho'WF^{2}\right)_{+}\right]dE.\tag{7.130}$$
42 The minus sign in Eq. (127) is due to the fact that according to Eq. (90), F is the force exerted on our system (s) by the environment, so that the force exerted by our system on the environment is –F. With this sign clarification, the expression 𝒫 = –Fẋ ≡ –Fv for the instant power flow is evident if x is the usual Cartesian coordinate of a 1D particle. However, according to analytical mechanics (see, e.g., CM Chapters 2 and 10), it is also valid for any {generalized coordinate, generalized force} pair which forms the interaction Hamiltonian (90).
In order to relate these two results, it is sufficient to notice that according to Eq. (24), the Gibbs probabilities W_± are related by a coefficient depending only on the temperature T and the observation frequency ω:
$$W_\pm=W\!\left(E\pm\frac{\hbar\omega}{2}\right)=\frac{1}{Z}\exp\left\{-\frac{E\pm\hbar\omega/2}{k_BT}\right\}=W(E)\exp\left\{\mp\frac{\hbar\omega}{2k_BT}\right\},\tag{7.131}$$

where E ≡ (E_n + E_{n'})/2 is the mean of the initial and final energies of the environment,
so that both the spectral density (122) and the dissipative part (130) of the generalized susceptibility may be expressed via the same integral over the environment energies:
$$S_F(\omega)=\hbar\cosh\left(\frac{\hbar\omega}{2k_BT}\right)\int\rho\!\left(E+\frac{\hbar\omega}{2}\right)\rho\!\left(E-\frac{\hbar\omega}{2}\right)W(E)\,F^{2}\,dE,\tag{7.132}$$
$$\chi''(\omega)=2\pi\sinh\left(\frac{\hbar\omega}{2k_BT}\right)\int\rho\!\left(E+\frac{\hbar\omega}{2}\right)\rho\!\left(E-\frac{\hbar\omega}{2}\right)W(E)\,F^{2}\,dE,\tag{7.133}$$
and hence are universally related as

$$S_F(\omega)=\frac{\hbar}{2\pi}\,\chi''(\omega)\coth\frac{\hbar\omega}{2k_BT}.\tag{7.134}$$
This is, finally, the much-celebrated Callen-Welton fluctuation-dissipation theorem (FDT). It reveals a fundamental, intimate relationship between these two effects of the environment ("no dissipation without fluctuation") – hence the name. A curious feature of the FDT is that Eq. (134) includes the same function of temperature as the average energy (26) of a quantum oscillator of frequency ω, though, as the reader could witness, the notion of the oscillator was by no means used in its derivation. As we will see in the next section, this fact leads to rather interesting consequences and even conceptual opportunities.
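The mutual consistency of Eqs. (132)-(134) is easy to check numerically. In the Python sketch below (not part of the original text; units with ℏ = k_B = 1, and an arbitrary value of the common integral of Eqs. (132)-(133)), ℏ cosh must equal (ℏ/2π)·2π sinh·coth at any temperature and frequency:

```python
import numpy as np

hbar = kB = 1.0
I = 0.42          # the common integral of Eqs. (132)-(133), arbitrary value
T = 0.8

for omega in [0.01, 0.3, 3.0, 30.0]:
    xi = hbar * omega / (2 * kB * T)
    S_F = hbar * np.cosh(xi) * I                        # Eq. (132)
    chi_pp = 2 * np.pi * np.sinh(xi) * I                # Eq. (133)
    fdt = (hbar / (2 * np.pi)) * chi_pp / np.tanh(xi)   # Eq. (134)
    print(omega, S_F, fdt)   # identical at every frequency
```

At the lowest frequency of the list, S_F also approaches the classical value (k_BT/π)χ''/ω discussed next.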
In the classical limit, ℏω << k_BT, the FDT is reduced to
$$S_F(\omega)=\frac{k_BT}{\pi}\,\frac{\chi''(\omega)}{\omega}=\frac{k_BT}{\pi}\,\frac{\operatorname{Im}\chi(\omega)}{\omega}.\tag{7.135}$$
In most systems of interest, the last fraction is close to a finite (positive) constant within a substantial range of relatively low frequencies. Indeed, expanding the right-hand side of Eq. (123) into the Taylor series in small ω, we get
$$\chi(\omega)=\chi(0)+i\omega\eta+\ldots,\qquad\text{with}\quad\chi(0)=\int_0^\infty G(\tau)\,d\tau\quad\text{and}\quad\eta\equiv\int_0^\infty G(\tau)\,\tau\,d\tau.\tag{7.136}$$
Since the temporal Green's function G(τ) is real by definition, the Taylor expansion of χ''(ω) ≡ Im χ(ω) at ω = 0 starts with the linear term ηω, where η is a certain real coefficient, and unless η = 0, is dominated by this term at small ω. The physical sense of the constant η becomes clear if we consider an environment that provides a force described by a simple, well-known kinematic friction law,

$$\hat F=-\eta\,\dot{\hat x},\qquad\text{with }\eta\geq 0,\tag{7.137}$$
where η is usually called the drag coefficient. For the Fourier images of coordinate and force, this law gives the relation F_ω = iωη x_ω, so that according to Eq. (124),
$$\chi''(\omega)=\operatorname{Im}\chi(\omega)=\eta\,\omega,\qquad\text{i.e. }\eta\geq 0.\tag{7.138}$$
With this approximation, and in the classical limit, the FDT (134) is reduced to the well-known Nyquist formula:43
$$S_F(\omega)=\frac{\eta\,k_BT}{\pi},\qquad\text{i.e. }\left\langle F^{2}\right\rangle_{\Delta\nu}=4\eta\,k_BT\,\Delta\nu.\tag{7.139}$$
According to Eq. (112), if such a constant spectral density44 persisted at all frequencies, it would correspond to a delta-correlated process F( t), with
$$K_F(\tau)=2\pi S_F(0)\,\delta(\tau)=2\eta\,k_BT\,\delta(\tau)\tag{7.140}$$
- cf. Eqs. (82) and (83). Since in the classical limit the right-hand side of Eq. (109) is negligible, and the correlation function may be considered an even function of time, the symmetrized function under the integral in Eq. (113) may be rewritten just as ⟨F(τ)F(0)⟩. In the limit of relatively low observation frequencies (in the sense that ω is much smaller than not only the quantum frontier k_BT/ℏ but also the frequency scale of the function χ''(ω)/ω), Eq. (138) may be used to recast Eq. (135) in the form45
$$\eta=\lim_{\omega\to 0}\frac{\chi''(\omega)}{\omega}=\frac{1}{k_BT}\int_0^\infty\left\langle F(\tau)F(0)\right\rangle d\tau.\tag{7.141}$$
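Eq. (141) can be exercised on a classical model force correlation ⟨F(τ)F(0)⟩ = C₀e^{–τ/τc}, for which the Green-Kubo integral and the ω → 0 limit of χ''(ω)/ω (restored from the classical FDT (135)) must coincide. A Python sketch, not from the original text; the model and its parameters are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import quad

C0, tau_c, kBT = 3.0, 0.2, 1.5     # illustrative model parameters

corr = lambda tau: C0 * np.exp(-tau / tau_c)   # model <F(tau)F(0)>

# Right-hand side of Eq. (141): the Green-Kubo integral
eta_kubo = quad(corr, 0, np.inf)[0] / kBT

def chi_pp_over_omega(omega):
    # S_F(omega) = (1/pi) * int_0^inf <F(tau)F(0)> cos(omega*tau) dtau,
    # then chi''(omega)/omega restored by inverting Eq. (135)
    S_F = quad(lambda tau: corr(tau) * np.cos(omega * tau), 0, 40 * tau_c)[0] / np.pi
    return (np.pi / kBT) * S_F

print(chi_pp_over_omega(1e-3 / tau_c), eta_kubo)   # nearly equal
```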
To conclude this section, let me return for a minute to the questions formulated in our earlier discussion of dephasing in the two-level model. In that problem, the dephasing time scale is T₂ = 1/2D_φ. Hence the classical approach to the dephasing, used in Sec. 3, is adequate if ℏD_φ << k_BT. Next, we may identify the operators f̂ and σ̂_z participating in Eq. (70) with, respectively, the operators F̂ and x̂ participating in the general Eq. (90). Then the comparison of Eqs. (82), (89), and (140) yields
$$\frac{1}{T_2}\equiv 2D_\varphi=\frac{4k_BT}{\hbar^{2}}\,\eta,\tag{7.142}$$
43 Actually, the 1928 work by H. Nyquist was about the electronic noise in resistors, just discovered experimentally by his Bell Labs colleague John Bertrand Johnson. For an Ohmic resistor, as the dissipative "environment" of the electric circuit it is connected with, Eq. (137) is just the Ohm's law, and may be recast as either V = –R(dQ/dt) ≡ –RI, or I = –G(dΦ/dt) ≡ –GV. Thus for the voltage V across an open circuit, η corresponds to its resistance R, while for the current I in a short circuit, to its conductance G = 1/R. In this case, the fluctuations described by Eq. (139) are referred to as the Johnson-Nyquist noise. (Because of this important application, any model leading to Eq. (138) is commonly referred to as the Ohmic dissipation, even if the physical nature of the variables x and F is quite different from voltage and current.)
44 A random process whose spectral density may be reasonably approximated by a constant is frequently called the white noise, because it is a random mixture of all possible sinusoidal components with equal weights, reminding the spectral composition of the natural white light.
45 Note that in some fields (especially in physical kinetics and chemical physics), this particular limit of the Nyquist formula is called the Green-Kubo (or just "Kubo") formula. However, in view of the FDT development history (described above), it is much more reasonable to associate these names with Eq. (109) – as is done in most fields of physics.
so that, for the model described by Eq. (137) with a temperature-independent drag coefficient η, the rate of dephasing by a classical environment is proportional to its temperature.
7.5. The Heisenberg-Langevin approach
The fluctuation-dissipation theorem offers a very simple and efficient, though limited, approach to the analysis of the system of interest (s in Fig. 1). It is to write the Heisenberg equations (4.199) of motion of its relevant operators, which would now include the environmental force operator, and to explore these equations using the Fourier transform and the Wiener-Khinchin theorem (112)-(113). This approach to classical equations of motion is commonly associated with the name of Langevin,46 so that its extension to the dynamics of Heisenberg-picture operators is frequently referred to as the Heisenberg-Langevin (or "quantum Langevin", or "Langevin-Lax"47) approach to open system analysis.
Perhaps the best way to describe this method is to demonstrate how it works for the very important case of a 1D harmonic oscillator, so that the generalized coordinate x of Sec. 4 is just the oscillator’s coordinate. For the sake of simplicity, let us assume that the environment provides the simple Ohmic dissipation described by Eq. (137) – which is a very good approximation in many cases.
As we already know from Chapter 5, the Heisenberg equations of motion for the operators of coordinate and momentum of the oscillator, in the presence of an external force F(t), are

$$\dot{\hat x}=\frac{\hat p}{m},\qquad\dot{\hat p}=-m\omega_0^{2}\,\hat x+\hat F,\tag{7.143}$$
so that using Eqs. (92) and (137), we get
$$\dot{\hat x}=\frac{\hat p}{m},\qquad\dot{\hat p}=-m\omega_0^{2}\,\hat x-\eta\,\dot{\hat x}+\hat{\tilde F}(t).\tag{7.144}$$
Combining Eqs. (144), we may write their system as a single differential equation
$$m\,\ddot{\hat x}+\eta\,\dot{\hat x}+m\omega_0^{2}\,\hat x=\hat{\tilde F}(t),\tag{7.145}$$
which is similar to the well-known classical equation of motion of a damped oscillator under the effect of an external force. In view of Eqs. (5.29) and (5.35), whose corollary is the Ehrenfest theorem (5.36), this may look unsurprising, but please note again that the approach discussed in the previous section justifies such a quantitative description of the drag force in quantum mechanics – necessarily in parallel with the accompanying fluctuation force.
For the Fourier images of the operators, defined similarly to Eq. (115), Eq. (145) gives the following relation,
46 A 1908 work by Paul Langevin was the first systematic development of Einstein’s ideas (1905) on the Brownian motion, using the random force language, as an alternative to Smoluchowski’s approach using the probability density language – see Sec. 6 below.
47 Indeed, perhaps the largest credit for the extension of the Langevin approach to quantum systems belongs to Melvin J. Lax, whose work in the early 1960s was motivated mostly by quantum electronics applications – see, e.g., his monograph M. Lax, Fluctuation and Coherent Phenomena in Classical and Quantum Physics, Gordon and Breach, 1968, and references therein.
$$\hat x_\omega=\frac{\hat F_\omega}{m\left(\omega_0^{2}-\omega^{2}\right)-i\,\omega\eta},\tag{7.146}$$
which should also be well known to the reader from the classical theory of forced oscillations.48 However, since these Fourier components are still Heisenberg-picture operators, and their "values" for different ω generally do not commute, we have to tread carefully. The best way to proceed is to write a copy of Eq. (146) for the frequency (–ω'), and then combine these equations to form a symmetrical combination similar to that used in Eq. (114). The result is
$$\frac{1}{2}\left\langle\hat x_\omega\hat x_{-\omega'}+\hat x_{-\omega'}\hat x_\omega\right\rangle=\frac{1}{\left[m\left(\omega_0^{2}-\omega^{2}\right)-i\omega\eta\right]\left[m\left(\omega_0^{2}-\omega'^{2}\right)+i\omega'\eta\right]}\cdot\frac{1}{2}\left\langle\hat F_\omega\hat F_{-\omega'}+\hat F_{-\omega'}\hat F_\omega\right\rangle.\tag{7.147}$$
Since the spectral density definition similar to Eq. (114) is valid for any observable, in particular for x, Eq. (147) allows us to relate the symmetrized spectral densities of coordinate and force:

$$S_x(\omega)=\frac{S_F(\omega)}{\left|m\left(\omega_0^{2}-\omega^{2}\right)-i\,\omega\eta\right|^{2}}=\frac{S_F(\omega)}{m^{2}\left(\omega_0^{2}-\omega^{2}\right)^{2}+\eta^{2}\omega^{2}}.\tag{7.148}$$
Now using an analog of Eq. (116) for x, we can calculate the coordinate’s variance:
$$\left\langle x^{2}\right\rangle=K_x(0)=\int_{-\infty}^{+\infty}S_x(\omega)\,d\omega=2\int_0^\infty\frac{S_F(\omega)\,d\omega}{m^{2}\left(\omega_0^{2}-\omega^{2}\right)^{2}+\eta^{2}\omega^{2}},\tag{7.149}$$
where now, in contrast to the notation used in Sec. 4, the sign … means averaging over the usual statistical ensemble of many systems of interest – in our current case, of many harmonic oscillators.
If the coupling to the environment is so weak that the drag coefficient η is small (in the sense that the oscillator's dimensionless Q-factor is large, Q ≡ mω₀/η >> 1), this integral is dominated by the resonance peak in a narrow vicinity, |ω – ω₀| << ω₀, of its resonance frequency, and we can take the relatively smooth function S_F(ω) out of the integral, thus reducing it to a table form:49
$$\left\langle x^{2}\right\rangle\approx 2S_F(\omega_0)\int_0^\infty\frac{d\omega}{m^{2}\left(\omega_0^{2}-\omega^{2}\right)^{2}+\eta^{2}\omega^{2}}\approx\frac{2S_F(\omega_0)}{\omega_0^{2}}\int_{-\infty}^{+\infty}\frac{d\xi}{4m^{2}\xi^{2}+\eta^{2}}=\frac{\pi}{m\,\omega_0^{2}\,\eta}\,S_F(\omega_0),\tag{7.150}$$

where ξ ≡ ω – ω₀.
With the account of the FDT (134) and of Eq. (138), this gives50
$$\left\langle x^{2}\right\rangle=\frac{\pi}{m\,\omega_0^{2}\,\eta}\cdot\frac{\hbar}{2\pi}\,\eta\,\omega_0\coth\frac{\hbar\omega_0}{2k_BT}=\frac{\hbar}{2m\omega_0}\coth\frac{\hbar\omega_0}{2k_BT}.\tag{7.151}$$
48 If necessary, see CM Sec. 5.1.
49 See, e.g., MA Eq. (6.5a).
50 Note that this calculation remains correct even if the dissipation's dispersion law deviates from the Ohmic model (138), provided that the drag coefficient η is replaced with its effective value Im χ(ω₀)/ω₀, because the effects of the environment are only felt, by the oscillator, at its oscillation frequency.
But this is exactly Eq. (48), which was derived in Sec. 2 from the Gibbs distribution, without any explicit account of the environment – though keeping it in mind by using the notion of the thermally-equilibrium ensemble.51
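The resonance approximation (150) is easy to check against a direct numerical evaluation of the integral in Eq. (149) with a frequency-independent S_F. The Python sketch below is not part of the original text; the oscillator parameters are arbitrary, chosen to give a large Q:

```python
import numpy as np
from scipy.integrate import quad

m, omega0, eta = 1.0, 10.0, 0.05   # Q = m*omega0/eta = 2000 >> 1
S_F0 = 0.3                         # constant force spectral density, arbitrary

# Eq. (149) with S_F(omega) = S_F0 = const
integrand = lambda w: S_F0 / (m**2 * (omega0**2 - w**2)**2 + eta**2 * w**2)
exact = 2 * quad(integrand, 0, 20 * omega0, points=[omega0], limit=400)[0]

approx = np.pi * S_F0 / (m * omega0**2 * eta)   # Eq. (150)
print(exact, approx)
```

For a constant S_F the two values coincide for any Q; for a frequency-dependent S_F(ω), Eq. (150) remains accurate as long as S_F is smooth across the resonance width ~η/m.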
Notice that in the final form of Eq. (151), the coefficient η, which characterizes the oscillator-to-environment interaction strength, has canceled! Does this mean that in Sec. 4 we toiled in vain? By no means. First of all, the result (150), augmented by the FDT (134), has an important conceptual value.
For example, let us consider the low-temperature limit k_BT << ℏω₀, where Eq. (151) is reduced to

$$\left\langle x^{2}\right\rangle=\frac{\hbar}{2m\omega_0}\equiv\frac{x_0^{2}}{2}.\tag{7.152}$$
Let us ask a naïve question: what exactly is the origin of this coordinate's uncertainty? From the point of view of the usual quantum mechanics of absolutely closed (Hamiltonian) systems, there is no doubt: this non-vanishing variance of the coordinate is the result of the finite spatial extension of the ground-state wavefunction (2.275), reflecting Heisenberg's uncertainty relation – which in turn results from the fact that the operators of coordinate and momentum do not commute. However, from the point of view of the Heisenberg-Langevin equation (145), the variance (152) is an inalienable part of the oscillator's response to the fluctuation force F̃(t) exerted by the environment at frequencies ω ≈ ω₀. Though it is impossible to refute the former, absolutely legitimate point of view, in many applications it is easier to subscribe to the latter standpoint and treat the coordinate's uncertainty as the result of the so-called quantum noise of the environment, which, in equilibrium, obeys the FDT (134). This notion has received numerous confirmations in experiments that did not include any oscillators with their own frequencies ω₀ close to the noise measurement frequency ω.52
The second advantage of the Heisenberg-Langevin approach is that it is possible to use Eq. (148) to calculate the (experimentally measurable!) distribution S_x(ω), i.e. to decompose the fluctuations into their spectral components. This procedure is not restricted to the limit of small η (i.e. of large Q); for any damping, we may just plug the FDT (134) into Eq. (148). For example, let us have a look at the so-called quantum diffusion. A free 1D particle, moving in a viscous medium providing it with the Ohmic damping (137), may be considered as the particular case of a 1D harmonic oscillator (145), but with ω₀ = 0, so that combining Eqs. (134) and (149), we get
$$\left\langle x^{2}\right\rangle=2\int_0^\infty\frac{S_F(\omega)\,d\omega}{m^{2}\omega^{4}+\eta^{2}\omega^{2}}=\frac{\hbar\eta}{\pi}\int_0^\infty\coth\frac{\hbar\omega}{2k_BT}\,\frac{d\omega}{\omega\left(m^{2}\omega^{2}+\eta^{2}\right)}.\tag{7.153}$$
This integral has two divergences. The first one, of the type ∫dω/ω² at the lower limit, is just a classical effect: according to Eq. (85), the particle's displacement variance grows with time, so it cannot have the finite time-independent value that Eq. (153) tries to calculate. However, we still can use that result to single out the quantum effects on diffusion – say, by comparing it with a similar but purely classical case. These effects are prominent at high frequencies, especially if the quantum noise overcomes the thermal noise before the dynamic cut-off, i.e. if
51 By the way, the simplest way to calculate S_F(ω), i.e. to derive the FDT, is to require that Eqs. (48) and (150) give the same result for an oscillator with any eigenfrequency ω₀. This is exactly the approach used by H. Nyquist (for the classical case) – see also SM Sec. 5.5.
52 See, for example, R. Koch et al., Phys. Rev. B 26, 74 (1982).
$$k_BT\ll\frac{\hbar\eta}{m}.\tag{7.154}$$
In this case, there is a broad range of frequencies where the quantum noise gives a substantial contribution to the integral:
$$\left\langle x^{2}\right\rangle_{\rm Q}\sim\frac{\hbar\eta}{\pi}\int_{k_BT/\hbar}^{\eta/m}\frac{d\omega}{\eta^{2}\,\omega}=\frac{\hbar}{\pi\eta}\ln\frac{\hbar\eta}{m\,k_BT}.\tag{7.155}$$
Formally, this contribution diverges at either m → 0 or T → 0, but this logarithmic (i.e. extremely weak) divergence is readily quenched by almost any change of the environment model at very high frequencies, where the "Ohmic" approximation (136) becomes unrealistic.
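The logarithmic estimate (155) can be compared with a direct numerical evaluation of the quantum excess of the integral (153) over its classical limit. A Python sketch, not from the original text (ℏ = m = η = 1 units, parameter values illustrative); agreement is expected only up to a constant of order one under the logarithm:

```python
import numpy as np
from scipy.integrate import quad

hbar = m = eta = 1.0
kBT = 1.0e-4            # deep quantum regime, kBT << hbar*eta/m

def quantum_excess(u):
    w = np.exp(u)       # logarithmic substitution omega = e^u
    coth = 1.0 / np.tanh(hbar * w / (2 * kBT))
    # integrand of Eq. (153) minus its classical (coth -> 2kBT/hbar*omega) part
    return (hbar * eta / np.pi) * (w * coth - 2 * kBT / hbar) \
        / (w**2 * (m**2 * w**2 + eta**2)) * w

x2_quantum = quad(quantum_excess, -18.0, 8.0, limit=400)[0]
estimate = (hbar / (np.pi * eta)) * np.log(hbar * eta / (m * kBT))   # Eq. (155)
print(x2_quantum, estimate)   # agree to logarithmic accuracy
```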
The Heisenberg-Langevin approach is very powerful, because its straightforward generalizations enable analyses of fluctuations in virtually arbitrary linear systems, i.e. the systems described by linear differential (or integro-differential) equations of motion, including those with many degrees of freedom, and distributed systems (continua) – and such systems prevail in many fields of physics. However, this approach also has its limitations. The main one is that if the equations of motion of the Heisenberg operators are not linear, there is no linear relation, such as Eq. (146), between the Fourier images of the generalized forces and the generalized coordinates, and as a result, there is no simple relation, such as Eq. (148), between their spectral densities. In other words, if the Heisenberg equations of motion are nonlinear, there is no regular simple way to use them to calculate the statistical properties of the observables.
For example, let us return to the dephasing problem described by Eqs. (68)-(70), and assume that the deterministic and fluctuating parts of the effective force f exerted by the environment are characterized by relations similar, respectively, to Eqs. (124) and (134). Now writing the Heisenberg equations of motion for the two remaining spin operators, and using the commutation relations between them, we get
$$\dot{\hat\sigma}_x=\frac{1}{i\hbar}\left[\hat\sigma_x,\hat H\right]=-\frac{1}{i\hbar}\left(c+\hat{\tilde f}\right)\left[\hat\sigma_x,\hat\sigma_z\right]=\frac{2}{\hbar}\left(c+\hat{\tilde f}\right)\hat\sigma_y,\tag{7.156}$$
and a similar equation for σ̂_y. Such nonlinear equations cannot be used to calculate the statistical properties of the Pauli operators in this system exactly – at least analytically.
For some calculations, this problem may be circumvented by linearization: if we are only interested in small fluctuations of the observables, their nonlinear Heisenberg equations of motion, such as Eq. (156), may be linearized with respect to small deviations of the operators about their (generally, time-dependent) deterministic "values", and then the resulting linear equations for the operator variations may be solved either as has been demonstrated above, or (if the deterministic "values" evolve in time) using their Fourier expansions. Sometimes such an approach gives relatively simple and important results,53 but for many other problems it is insufficient, leaving a lot of space for alternative methods.
53 For example, the formula used for processing the experimental results by R. Koch et al. (mentioned above) had been derived in this way. (This derivation will be suggested to the reader as an exercise.)
7.6. Density matrix approach
The main alternative approach to the dynamics of open quantum systems, which is essentially a generalization of the one discussed in Sec. 2, is to extract the final results of interest from the dynamics of the density operator of our system s. Let us discuss this approach in detail.54
We already know that the density matrix allows the calculation of the expectation value of any observable of the system s – see Eq. (5). However, our initial recipe (6) for the density matrix element calculation, which requires the knowledge of the exact state (2) of the whole Universe, is not too practicable, while the von Neumann equation (66) for the density matrix evolution is limited to cases in which probabilities Wj of the system states are fixed – thus excluding such important effects as the energy relaxation. However, such effects may be analyzed using a different assumption – that the system of interest interacts only with a local environment that is very close to its thermally-equilibrium state described, in the stationary-state basis, by a diagonal density matrix with the elements (24).
This calculation is facilitated by the following general observation. Let us number the basis states of the full local system (the system of our interest plus its local environment) by l, and use Eq. (5) to write
$$\left\langle A\right\rangle=\operatorname{Tr}\left(\hat A\,\hat w_l\right)=\sum_{l,l'}A_{ll'}\,w_{l'l}=\sum_{l,l'}\left\langle l\right|\hat A\left|l'\right\rangle\left\langle l'\right|\hat w_l\left|l\right\rangle,\tag{7.157}$$
where ŵ_l is the density operator of this local system. At a weak interaction between the system s and the local environment e, their states reside in different Hilbert spaces, so that we can write

$$\left|l\right\rangle=\left|s_j\right\rangle\otimes\left|e_k\right\rangle,\tag{7.158}$$
and if the observable A depends only on the coordinates of the system s of our interest, we may reduce Eq. (157) to the form similar to Eq. (5):
$$\left\langle A\right\rangle=\sum_{j,j';\,k,k'}\left\langle e_k\right|\left\langle s_j\right|\hat A\left|s_{j'}\right\rangle\left|e_{k'}\right\rangle\left\langle e_{k'}\right|\left\langle s_{j'}\right|\hat w_l\left|s_j\right\rangle\left|e_k\right\rangle=\sum_{j,j'}A_{jj'}\left\langle s_{j'}\right|\Big(\sum_k\left\langle e_k\right|\hat w_l\left|e_k\right\rangle\Big)\left|s_j\right\rangle\equiv\operatorname{Tr}_s\left(\hat A\,\hat w\right),\tag{7.159}$$
where

$$\hat w\equiv\sum_k\left\langle e_k\right|\hat w_l\left|e_k\right\rangle\equiv\operatorname{Tr}_e\,\hat w_l,\tag{7.160}$$
showing how exactly the density operator ŵ of the system s may be calculated from ŵ_l.
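Eqs. (157)-(160) are easy to exercise numerically: construct a random density matrix of the composite system, reduce it by the partial trace (160), and check that Tr_s(Âŵ) reproduces the full-space expectation value of Â⊗Î. A Python sketch (not part of the original text; the dimensions and all matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
ds, de = 3, 4                       # dimensions of system s and environment e

# A random valid (Hermitian, positive, unit-trace) density matrix of l = s x e
M = rng.standard_normal((ds*de, ds*de)) + 1j*rng.standard_normal((ds*de, ds*de))
w_l = M @ M.conj().T
w_l /= np.trace(w_l)

# Partial trace over the environment, Eq. (160)
w_s = np.trace(w_l.reshape(ds, de, ds, de), axis1=1, axis2=3)

# A random Hermitian observable acting on the system s only
B = rng.standard_normal((ds, ds)) + 1j*rng.standard_normal((ds, ds))
A = B + B.conj().T

lhs = np.trace(A @ w_s).real                          # Tr_s(A w), Eq. (159)
rhs = np.trace(np.kron(A, np.eye(de)) @ w_l).real     # <A x 1> in the full space
print(lhs, rhs)   # identical
```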
Now comes the key physical assumption of this approach: since we may select the local environment e to be much larger than the system s of our interest, we may consider the composite system l as a Hamiltonian one, with time-independent probabilities of its stationary states, so that for the description of the evolution in time of its full density operator ŵ_l (again, in contrast to that, ŵ, of the system of our interest) we may use the von Neumann equation (66). Partitioning its right-hand side in accordance with Eq. (68), we get:
$$i\hbar\,\dot{\hat w}_l=\left[\hat H_s,\hat w_l\right]+\left[\hat H_e,\hat w_l\right]+\left[\hat H_{\rm int},\hat w_l\right].\tag{7.161}$$
54 As in Sec. 4, the reader not interested in the derivation of the basic equation (181) of the density matrix evolution may immediately jump to the discussion of this equation and its applications.
The next step is to use the perturbation theory to solve this equation in the lowest order in Ĥ_int, which would yield, for the evolution of ŵ, a non-vanishing contribution due to the interaction. For that, Eq.
(161) is not very convenient, because its right-hand side contains two other terms, of a much larger scale than the interaction Hamiltonian. To mitigate this technical difficulty, the interaction picture that was discussed at the end of Sec. 4.6, is very natural. (It is not necessary though, and I will use this picture mostly as an exercise of its application – unfortunately, the only example I can afford in this course.) As a reminder, in that picture (whose entities will be marked with index “I”, with the unmarked operators assumed to be in the Schrödinger picture), both the operators and the state vectors (and hence the density operator) depend on time. However, the time evolution of the operator of any observable A is described by an equation similar to Eq. (67), but with the unperturbed part of the Hamiltonian only – see Eq. (4.214). In model (68), this means
$$i\hbar\,\dot{\hat A}_I=\left[\hat A_I,\hat H_0\right],\tag{7.162}$$
where the unperturbed Hamiltonian consists of two parts defined in different Hilbert spaces:

$$\hat H_0=\hat H_s+\hat H_e.\tag{7.163}$$
On the other hand, the state vector's dynamics is governed by the interaction evolution operator û_I that obeys Eqs. (4.215). Since this equation, using the interaction-picture Hamiltonian (4.216),

$$\hat H_I=\hat u_0^{\dagger}\,\hat H_{\rm int}\,\hat u_0,\tag{7.164}$$
is absolutely similar to the ordinary Schrödinger equation using the full Hamiltonian, we may repeat all arguments given at the beginning of Sec. 3 to prove that the dynamics of the density operator in the interaction picture of a Hamiltonian system is governed by the following analog of the von Neumann equation (66):
$$i\hbar\,\dot{\hat w}_I=\left[\hat H_I,\hat w_I\right],\tag{7.165}$$
where the index l is dropped for the notation simplicity. Since this equation is similar in structure (with the opposite sign) to the Heisenberg equation (67), we may use the solution Eq. (4.190) of the latter equation to write its analog:
$$\hat w_I(t)=\hat u_I(t,0)\,\hat w(0)\,\hat u_I^{\dagger}(t,0).\tag{7.166}$$
It is also straightforward to verify that in this picture, the expectation value of any observable A may be found from an expression similar to the basic Eq. (5):
$$\left\langle A\right\rangle=\operatorname{Tr}\left(\hat A_I\,\hat w_I\right),\tag{7.167}$$
showing again that the interaction and Schrödinger pictures give the same final results.
In the most frequent case of the factorable interaction (90),55 Eq. (162) is simplified for both operators participating in that product – for each one in its own way. In particular, for Â = x̂, it yields:

55 A similar analysis of a more general case, when the interaction with the environment has to be represented as a sum of products of the type (90), may be found, for example, in the monograph by K. Blum, Density Matrix Theory and Applications, 3rd ed., Springer, 2012.
$$i\hbar\,\dot{\hat x}_I=\left[\hat x_I,\hat H_0\right]=\left[\hat x_I,\hat H_s\right]+\left[\hat x_I,\hat H_e\right].\tag{7.168}$$
Since the coordinate operator is defined in the Hilbert space of our system s, it commutes with the Hamiltonian of the environment, so that we finally get
$$i\hbar\,\dot{\hat x}_I=\left[\hat x_I,\hat H_s\right].\tag{7.169}$$
On the other hand, if Â = F̂, this operator is defined in the Hilbert space of the environment, and commutes with the Hamiltonian of the unperturbed system s. As a result, we get
$$i\hbar\,\dot{\hat F}_I=\left[\hat F_I,\hat H_e\right].\tag{7.170}$$
This means that with our time-independent unperturbed Hamiltonians, Ĥ_s and Ĥ_e, the time evolution of the interaction-picture operators is rather simple. In particular, the analogy between Eq. (170) and Eq. (93) allows us to immediately write the following analog of Eq. (94):
$$\hat F_I(t)=\exp\left\{\frac{i}{\hbar}\hat H_et\right\}\hat F(0)\exp\left\{-\frac{i}{\hbar}\hat H_et\right\},\tag{7.171}$$
so that in the stationary-state basis |n⟩ of the environment,
$$F_{I\,nn'}(t)=\exp\left\{\frac{i}{\hbar}E_nt\right\}F_{nn'}(0)\exp\left\{-\frac{i}{\hbar}E_{n'}t\right\}=F_{nn'}(0)\exp\left\{i\frac{E_n-E_{n'}}{\hbar}t\right\},\tag{7.172}$$
and similarly (but in the basis of the stationary states of the system s) for the operator x̂_I. As a result, the right-hand side of Eq. (164) may be also factored:
$$\hat H_I(t)=\hat u_0^{\dagger}(t,0)\,\hat H_{\rm int}\,\hat u_0(t,0)=-\exp\left\{\frac{i}{\hbar}\left(\hat H_s+\hat H_e\right)t\right\}\hat x\hat F\exp\left\{-\frac{i}{\hbar}\left(\hat H_s+\hat H_e\right)t\right\}$$
$$=-\exp\left\{\frac{i}{\hbar}\hat H_st\right\}\hat x\exp\left\{-\frac{i}{\hbar}\hat H_st\right\}\exp\left\{\frac{i}{\hbar}\hat H_et\right\}\hat F(0)\exp\left\{-\frac{i}{\hbar}\hat H_et\right\}=-\hat x_I(t)\,\hat F_I(t).\tag{7.173}$$
So, the transfer to the interaction picture has taken some time, but now it enables a smooth ride.56
Indeed, just as in Sec. 4, we may rewrite Eq. (165) in the integral form:
t
1
w ˆ
ˆ
, ˆ
;
(7.174)
I t
H I t' w I t' dt'
i
plugging this result into the right-hand side of Eq. (165), we get
$$\dot{\hat w}_I(t)=-\frac{1}{\hbar^{2}}\int^{t}\left[\hat H_I(t),\left[\hat H_I(t'),\hat w_I(t')\right]\right]dt'=-\frac{1}{\hbar^{2}}\int^{t}\left[\hat x(t)\hat F(t),\left[\hat x(t')\hat F(t'),\hat w_I(t')\right]\right]dt',\tag{7.175}$$

where, for the notation's brevity, from this point on I will strip the operators x̂ and F̂ of their index "I". (I hope their time dependence indicates the interaction picture clearly enough.)

56 If we used either the Schrödinger or the Heisenberg picture instead, the forthcoming Eq. (175) would pick up a rather annoying multitude of fast-oscillating exponents, of different time arguments, on its right-hand side.
So far, this equation is exact (and cannot be solved analytically), but this is a good time to notice that even if we approximate the density operator on its right-hand side by its unperturbed, factorable
“value” (corresponding to no interaction between the system s and its thermally-equilibrium environment e),57
$$\hat w_I(t')=\hat w(t')\otimes\hat w_e,\qquad\text{with}\quad\left\langle e_n\right|\hat w_e\left|e_{n'}\right\rangle=W_n\,\delta_{nn'},\tag{7.176}$$
where en are the stationary states of the environment and Wn are the Gibbs probabilities (24), Eq. (175) still describes nontrivial time evolution of the density operator. This is exactly the first non-vanishing approximation (in the weak interaction) we have been looking for. Now using Eq. (160), we find the equation of evolution of the density operator of the system of our interest:
$$\dot{\hat w}(t)=-\frac{1}{\hbar^{2}}\int^{t}\operatorname{Tr}_e\left[\hat x(t)\hat F(t),\left[\hat x(t')\hat F(t'),\hat w(t')\otimes\hat w_e\right]\right]dt',\tag{7.177}$$
where the trace is over the stationary states of the environment. To spell out the right-hand side of Eq.
(177), note again that the coordinate and force operators commute with each other (but not with themselves at different time moments!) and hence may be swapped at will, so that we may write

$$\operatorname{Tr}_e\left[\ldots,\left[\ldots,\ldots\right]\right]=\hat x(t)\hat x(t')\hat w(t')\sum_{n,n'}W_nF_{nn'}(t)F_{n'n}(t')-\hat x(t)\hat w(t')\hat x(t')\sum_{n,n'}W_{n'}F_{nn'}(t)F_{n'n}(t')$$
$$-\ \hat x(t')\hat w(t')\hat x(t)\sum_{n,n'}W_{n'}F_{nn'}(t')F_{n'n}(t)+\hat w(t')\hat x(t')\hat x(t)\sum_{n,n'}W_nF_{nn'}(t')F_{n'n}(t).\tag{7.178}$$
Since the summation over both indices n and n' in this expression is over the same energy level set (of all stationary states of the environment), we may swap these indices in any of the sums. Doing this only in the terms including the factors W_{n'}, we turn them into W_n, so that this factor becomes common:

$$\operatorname{Tr}_e\left[\ldots,\left[\ldots,\ldots\right]\right]=\sum_{n,n'}W_n\Big\{\hat x(t)\hat x(t')\hat w(t')\,F_{nn'}(t)F_{n'n}(t')-\hat x(t)\hat w(t')\hat x(t')\,F_{n'n}(t)F_{nn'}(t')$$
$$-\ \hat x(t')\hat w(t')\hat x(t)\,F_{n'n}(t')F_{nn'}(t)+\hat w(t')\hat x(t')\hat x(t)\,F_{nn'}(t')F_{n'n}(t)\Big\}.\tag{7.179}$$
Now using Eq. (172), we get
$$\begin{aligned}
\mathrm{Tr}_n\big[\ldots,[\ldots,\ldots]\big]
= \sum_{n,n'} W_n |F_{nn'}|^2 \bigg\{&\ \hat x(t)\hat x(t')\hat w(t')\,\exp\Big\{\frac{i\tilde E(t-t')}{\hbar}\Big\}
 - \hat x(t)\hat w(t')\hat x(t')\,\exp\Big\{-\frac{i\tilde E(t-t')}{\hbar}\Big\} \\
&- \hat x(t')\hat w(t')\hat x(t)\,\exp\Big\{\frac{i\tilde E(t-t')}{\hbar}\Big\}
 + \hat w(t')\hat x(t')\hat x(t)\,\exp\Big\{-\frac{i\tilde E(t-t')}{\hbar}\Big\} \bigg\} \\
= \sum_{n,n'} W_n |F_{nn'}|^2 &\cos\frac{\tilde E(t-t')}{\hbar}\,\big[\hat x(t),[\hat x(t'),\hat w(t')]\big]
 + i \sum_{n,n'} W_n |F_{nn'}|^2 \sin\frac{\tilde E(t-t')}{\hbar}\,\big[\hat x(t),\{\hat x(t'),\hat w(t')\}\big],
\end{aligned} \qquad (7.180)$$

where (as before) $\tilde E \equiv E_n - E_{n'}$.
Comparing the two double sums participating in this expression with Eqs. (108) and (111), we see that they are nothing other than, respectively, the symmetrized correlation function and the temporal Green's 57 For notation simplicity, the fact that here (and in all following formulas) the density operator ŵ of the system s of our interest is taken in the interaction picture is just implied.
function (multiplied by ℏ/2) of the time-difference argument τ ≡ t – t' ≥ 0. As a result, Eq. (177) takes a compact form:
Density matrix: time evolution

$$\dot{\hat w}(t) = -\frac{1}{\hbar^2}\int_{-\infty}^{t} K_F(t-t')\,\big[\hat x(t),[\hat x(t'),\hat w(t')]\big]\,dt' + \frac{i}{2\hbar}\int_{-\infty}^{t} G(t-t')\,\big[\hat x(t),\{\hat x(t'),\hat w(t')\}\big]\,dt'. \qquad (7.181)$$
Let me hope that the readers (especially the ones who have braved through this derivation) enjoy this beautiful result as much as I do. It gives an equation for the time evolution of the density operator of the system of our interest (s), with the effects of its environment represented only by two real, c-number functions of τ: one (K_F) describing the environment's force fluctuations, and the other one (G) representing the environment's ensemble-averaged response to the system's evolution. And most spectacularly, these are exactly the same functions that participate in the alternative, Heisenberg-Langevin approach to the problem, and are hence related to each other by the fluctuation-dissipation theorem (134).
After a short celebration, let us acknowledge that Eq. (181) is still an integro-differential equation, and needs to be solved together with Eq. (169) for the system coordinate's evolution. Such equations do not allow explicit analytical solutions, except in a few very simple (and not very interesting) cases. For most applications, further simplifications should be made. One of them is based on the fact (which was already discussed in Sec. 3) that both environmental functions participating in Eq. (181) tend to zero when their argument τ becomes much larger than the environment's correlation time τ_c, independent of the system-to-environment coupling strength. If the coupling is sufficiently weak, the time scales T_{nn'} of the evolution of the density matrix elements, following from Eq. (181), are much longer than this correlation time, and also than the characteristic time scale of the coordinate operator's evolution. In this limit, all arguments t' of the density operator giving substantial contributions to the right-hand side of Eq. (181) are so close to t that it does not matter whether its argument is t' or just t.
This simplification, ŵ(t') → ŵ(t), is known as the Markov approximation.58
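The spirit of the Markov approximation may be illustrated on a scalar toy model (not the operator equation itself, which is only a loose analog): a decay equation with an exponentially decaying memory kernel of short correlation time τ_c. When τ_c is much shorter than the relaxation time 1/γ, replacing w(t') by w(t) under the integral turns the equation into the memoryless form dw/dt ≈ −γw. The kernel shape, rates, and step sizes below are arbitrary illustrative choices:

```python
import numpy as np

# Toy scalar analog of Eq. (181): dw/dt = -int_0^t K(t-t') w(t') dt',
# with kernel K(tau) = (gamma/tau_c) exp(-tau/tau_c) of correlation time
# tau_c.  In the Markov limit tau_c << 1/gamma the kernel integrates to
# gamma, and the solution approaches the memoryless decay exp(-gamma*t).
def evolve(gamma, tau_c, t_max, dt):
    n = int(t_max / dt)
    t = np.arange(n) * dt
    K = (gamma / tau_c) * np.exp(-t / tau_c)   # memory kernel K(tau)
    w = np.empty(n)
    w[0] = 1.0
    for i in range(1, n):
        # left-Riemann convolution of the kernel with the stored history
        integral = np.sum(K[:i][::-1] * w[:i]) * dt
        w[i] = w[i - 1] - integral * dt        # explicit Euler step
    return t, w

gamma, tau_c = 1.0, 0.01                        # tau_c << 1/gamma
t, w = evolve(gamma, tau_c, t_max=3.0, dt=0.001)
markov = np.exp(-gamma * t)                     # Markov-approximation solution
err = np.max(np.abs(w - markov))
print(f"max deviation from the Markov solution: {err:.3e}")
```

The residual deviation is of the order of γτ_c, i.e. it vanishes in the limit in which the approximation is justified.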
However, this approximation alone is still insufficient for finding the general solution of Eq.
(181). Substantial further progress is possible in two important cases. The most important of them is when the intrinsic Hamiltonian Ĥ_s of the system s of our interest does not depend on time explicitly and has a discrete eigenenergy spectrum E_n,59 with well-separated levels:

$$|E_n - E_{n'}| \gg \frac{\hbar}{T_{nn'}}. \qquad (7.182)$$
Let us see what this condition yields for Eq. (181), rewritten for the matrix elements in the stationary-state basis in the Markov approximation:
58 Named after Andrey Andreyevich Markov (1856-1922; in older Western literature, "Markoff"), a mathematician famous for his general theory of the so-called Markov processes, whose future evolution is completely determined by their present state rather than by their pre-history.
59 Here, rather reluctantly, I will use this standard notation, En, for the eigenenergies of our system of interest (s), in the hope that the reader will not confuse these discrete energy levels with the quasi-continuous energy levels of its environment (e), participating in particular in Eqs. (108) and (111). As a reminder, by this stage of our calculations, the environment levels have disappeared from our formulas, leaving behind their functionals K_F(τ) and G(τ).
$$\dot w_{nn'} = -\frac{1}{\hbar^2}\int_{-\infty}^{t} K_F(t-t')\,\big[\hat x(t),[\hat x(t'),\hat w]\big]_{nn'}\,dt' + \frac{i}{2\hbar}\int_{-\infty}^{t} G(t-t')\,\big[\hat x(t),\{\hat x(t'),\hat w\}\big]_{nn'}\,dt'. \qquad (7.183)$$
After spelling out the commutators, the right-hand side of this expression includes four operator products, which differ “only” by the operator order. Let us first have a look at one of these products,
$$\big[\hat x(t)\,\hat x(t')\,\hat w\big]_{nn'} = \sum_{m,m'} x_{nm}(t)\,x_{mm'}(t')\,w_{m'n'}, \qquad (7.184)$$
where the indices m and m' run over the same set of stationary states of the system s of our interest as the indices n and n'. According to Eq. (169) with a time-independent Ĥ_s, the matrix elements x_{nn'} (in the stationary-state basis) oscillate in time as exp{iω_{nn'}t}, so that
$$\big[\hat x(t)\,\hat x(t')\,\hat w\big]_{nn'} = \sum_{m,m'} x_{nm}\,x_{mm'}\,\exp\{i(\omega_{nm}t + \omega_{mm'}t')\}\,w_{m'n'}, \qquad (7.185)$$
where on the right-hand side, the coordinate matrix elements are in the Schrödinger picture, and the usual notation (6.85) is used for the quantum transition frequencies:
$$\omega_{nn'} \equiv \frac{E_n - E_{n'}}{\hbar}. \qquad (7.186)$$
According to the condition (182), frequencies ω_{nn'} with n ≠ n' are much higher than the speed of evolution of the density matrix elements (in the interaction picture!) – on both the left-hand and right-hand sides of Eq. (183). Hence, on the right-hand side of Eq. (183), we may keep only the terms that do not oscillate with these frequencies ω_{nn'}, because rapidly oscillating terms would give negligible contributions to the density matrix dynamics.60 For that, in the double sum (185) we should save only the terms whose exponents depend only on the difference (t – t'), because only they give (after the integration over t') a slowly changing contribution to the right-hand side.61 These terms must have ω_{nm} + ω_{mm'} = 0, i.e. (E_n – E_m) + (E_m – E_{m'}) ≡ E_n – E_{m'} = 0. For a non-degenerate energy spectrum, this requirement means m' = n; as a result, the double sum is reduced to a single one:
$$\big[\hat x(t)\,\hat x(t')\,\hat w\big]_{nn'} = w_{nn'}\sum_{m} x_{nm}\,x_{mn}\,\exp\{i\omega_{nm}(t-t')\} = w_{nn'}\sum_{m} |x_{nm}|^2 \exp\{i\omega_{nm}(t-t')\}. \qquad (7.187)$$
Another product, $\big[\hat w\,\hat x(t')\,\hat x(t)\big]_{nn'}$, which appears on the right-hand side of Eq. (183), may be simplified absolutely similarly, giving
$$\big[\hat w\,\hat x(t')\,\hat x(t)\big]_{nn'} = \sum_{m} |x_{n'm}|^2 \exp\{i\omega_{n'm}(t'-t)\}\,w_{nn'}. \qquad (7.188)$$
These expressions hold whether n and n' are equal or not. The situation is different for the two other products on the right-hand side of Eq. (183), with ŵ sandwiched between x̂(t) and x̂(t'). For example,
$$\big[\hat x(t)\,\hat w\,\hat x(t')\big]_{nn'} = \sum_{m,m'} x_{nm}(t)\,w_{mm'}\,x_{m'n'}(t') = \sum_{m,m'} x_{nm}\,w_{mm'}\,x_{m'n'}\,\exp\{i(\omega_{nm}t + \omega_{m'n'}t')\}. \qquad (7.189)$$
60 This is essentially the same rotating-wave approximation (RWA) as was used in Sec. 6.5.
61 As was already discussed in Sec. 4, the lower-limit substitution (t' = –∞) in the integrals participating in Eq. (183) gives zero, due to the finite-time "memory" of the system, expressed by the decay of the correlation and response functions at large values of the time delay τ = t – t'.
For this term, the same requirement – to keep only the terms whose time dependence is a function of (t – t') alone – yields a different condition: ω_{nm} + ω_{m'n'} = 0, i.e.
$$E_n - E_m + E_{m'} - E_{n'} = 0. \qquad (7.190)$$
Here the double sum’s reduction is possible only if we make an additional assumption that all interlevel energy distances are unique, i.e. our system of interest has no equidistant levels (such as in the harmonic oscillator). For the diagonal elements ( n = n’), the RWA requirement is reduced to m = m’, giving sums over all diagonal elements of the density matrix:
$$\big[\hat x(t)\,\hat w\,\hat x(t')\big]_{nn} = \sum_{m} |x_{nm}|^2 \exp\{i\omega_{nm}(t-t')\}\,w_{mm}. \qquad (7.191)$$
(Another similar term, $\big[\hat x(t')\,\hat w\,\hat x(t)\big]_{nn}$, is just the complex conjugate of (191).) However, for off-diagonal matrix elements (n ≠ n'), the situation is different: Eq. (190) may be satisfied only if m = n and also m' = n', so that the double sum is reduced to just one, non-oscillating term:
$$\big[\hat x(t)\,\hat w\,\hat x(t')\big]_{nn'} = x_{nn}\,w_{nn'}\,x_{n'n'}, \quad \text{for } n \neq n'. \qquad (7.192)$$
The second similar term, $\big[\hat x(t')\,\hat w\,\hat x(t)\big]_{nn'}$, is exactly the same, so that in one of the integrals of Eq. (183) these terms add up, while in the second one, they cancel.
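The no-equidistant-levels assumption behind this reduction is easy to probe numerically for any given spectrum: the sketch below counts the nontrivial solutions of the resonance condition (190) for a hypothetical five-level equidistant (harmonic-oscillator-like) spectrum and for a particle-in-a-box spectrum, whose interlevel distances are all unique. The function name and tolerance are illustrative choices:

```python
from itertools import product

def nontrivial_resonances(E, tol=1e-9):
    """Count index quadruples (n, m, m', n') with n != n' satisfying the
    resonance condition E_n - E_m + E_m' - E_n' = 0 (cf. Eq. (190)),
    other than the trivial solution m = n, m' = n' kept by the RWA."""
    count = 0
    N = len(E)
    for n, m, mp, np_ in product(range(N), repeat=4):
        if n == np_:
            continue                     # off-diagonal elements only
        if m == n and mp == np_:
            continue                     # the trivial (kept) solution
        if abs(E[n] - E[m] + E[mp] - E[np_]) < tol:
            count += 1
    return count

harmonic = [n + 0.5 for n in range(5)]   # equidistant levels
box = [n * n for n in range(1, 6)]       # particle in a box: unique gaps

print("equidistant spectrum:", nontrivial_resonances(harmonic))
print("particle in a box:   ", nontrivial_resonances(box))
```

For the equidistant spectrum the count is nonzero (the double-sum reduction fails), while for the box spectrum only the trivial solution survives.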
This is why the final equations of evolution look different for the diagonal and off-diagonal elements of the density matrix. For the former case (n = n'), Eq. (183) is reduced to the so-called master equation 62 relating the diagonal elements w_{nn} of the density matrix, i.e. the energy level occupancies W_n:63
$$\begin{aligned}
\dot W_n = \sum_{m \neq n} |x_{nm}|^2 \int_0^{\infty} \Big\{&\ \frac{1}{\hbar^2} K_F(\tau)\,(W_m - W_n)\big[\exp\{i\omega_{nm}\tau\} + \exp\{-i\omega_{nm}\tau\}\big] \\
&+ \frac{i}{2\hbar}\,G(\tau)\,(W_m + W_n)\big[\exp\{i\omega_{nm}\tau\} - \exp\{-i\omega_{nm}\tau\}\big] \Big\}\,d\tau,
\end{aligned} \qquad (7.193)$$

where τ ≡ t – t'. Changing the summation index notation from m to n', we may rewrite the master equation in its canonical form
Master equation

$$\dot W_n = \sum_{n' \neq n} \big( \Gamma_{n' \to n}\,W_{n'} - \Gamma_{n \to n'}\,W_n \big), \qquad (7.194)$$
where the coefficients
Interlevel transition rates

$$\Gamma_{n' \to n} = \frac{2|x_{nn'}|^2}{\hbar^2} \int_0^{\infty} \Big[ K_F(\tau)\cos\omega_{nn'}\tau - \frac{\hbar}{2}\,G(\tau)\sin\omega_{nn'}\tau \Big]\,d\tau, \qquad (7.195)$$
are called the interlevel transition rates.64 Eq. (194) has a very clear physical meaning of the level occupancy dynamics (i.e. the balance of the probability flows ΓW) due to quantum transitions between 62 The master equations, first introduced to quantum mechanics in 1928 by W. Pauli, are sometimes called the
“Pauli master equations”, or “kinetic equations”, or “rate equations”.
63 As Eq. (193) shows, the term with m = n would vanish and thus may be legitimately excluded from the sum.
64 As Eq. (193) shows, the reverse rate Γ_{n→n'} is described by Eq. (195) as well, provided that the indices n and n' are swapped in all components of its right-hand side, including the swap ω_{nn'} → ω_{n'n} = –ω_{nn'}.
the energy levels (see Fig. 7), in our current case caused by the interaction between the system of our interest and its environment.

Fig. 7.7. Probability flows in a discrete-spectrum system. Solid arrows: the exchange between the two energy levels, n and n', described by one term in the master equation (194); dashed arrows: other transitions to/from these two levels.
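The master equation (194) is straightforward to integrate numerically. The sketch below evolves a hypothetical three-level system whose "up" rates are fixed from a common "down" rate by the detailed-balance ratio (197); the occupancies then relax to the Gibbs distribution (24) while conserving the total probability. All parameter values are arbitrary illustrative choices:

```python
import numpy as np

# Sketch: integrate the master equation (194) for a hypothetical 3-level
# system.  The "up" rates follow from the "down" rates via the detailed
# balance ratio (197), so the stationary state must be the Gibbs one.
E = np.array([0.0, 1.0, 2.5])       # level energies, in units of k_B*T
kT = 1.0
gamma_down = 0.3                    # common downward rate (assumption)

N = len(E)
G = np.zeros((N, N))                # G[n, n'] = Gamma_{n' -> n}
for n in range(N):
    for n2 in range(N):
        if n == n2:
            continue
        if E[n] < E[n2]:            # downward transition n' -> n
            G[n, n2] = gamma_down
        else:                       # upward rate fixed by Eq. (197)
            G[n, n2] = gamma_down * np.exp(-(E[n] - E[n2]) / kT)

def rhs(W):
    # dW_n/dt = sum_{n'} (Gamma_{n'->n} W_{n'} - Gamma_{n->n'} W_n)
    return G @ W - G.sum(axis=0) * W

W = np.array([0.0, 0.0, 1.0])       # start in the top level
dt = 0.01
for _ in range(20000):              # evolve to t = 200 >> relaxation time
    W = W + dt * rhs(W)             # simple Euler step

gibbs = np.exp(-E / kT)
gibbs /= gibbs.sum()
print("stationary occupancies:", np.round(W, 4))
print("Gibbs distribution:    ", np.round(gibbs, 4))
```

Note that the right-hand side (194) sums to zero over n, so the Euler steps conserve ΣW_n up to rounding errors; the Gibbs distribution is an exact fixed point of the iteration.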
The Fourier transforms (113) and (123) enable us to express the two integrals in Eq. (195) via, respectively, the symmetrized spectral density S_F(ω) of the environment's force fluctuations and the imaginary part χ''(ω) of the generalized susceptibility, both at the frequency ω = ω_{nn'}. After that, we may use the fluctuation-dissipation theorem (134) to exclude the former function, finally getting65
Transition rates via χ''(ω)

$$\Gamma_{n' \to n} = \frac{|x_{nn'}|^2}{\hbar}\,\chi''(\omega_{nn'})\Big[\coth\frac{\hbar\omega_{nn'}}{2k_BT} - 1\Big] = \frac{2|x_{nn'}|^2}{\hbar}\,\frac{\chi''(\omega_{nn'})}{\exp\{(E_n - E_{n'})/k_BT\} - 1}. \qquad (7.196)$$
Note that since the imaginary part χ'' of the generalized susceptibility is an odd function of frequency, Eq. (196) is in compliance with the Gibbs distribution at arbitrary temperature. Indeed, according to this equation, the ratio of the "up" and "down" rates for each pair of levels equals
"
"
E
E
n' n
nn'
n'n
exp n
n'
.
(7.197)
exp{( E E ) / k T} 1 exp{( E E ) / k T} 1
k T
n n'
n
n'
B
n'
n
B
B
On the other hand, according to the Gibbs distribution (24), in thermal equilibrium the level populations should be in the same proportion. Hence, Eq. (196) complies with the so-called detailed balance equation,

Detailed balance

$$W_n\,\Gamma_{n \to n'} = W_{n'}\,\Gamma_{n' \to n}, \qquad (7.198)$$
valid in the equilibrium for each pair { n, n’}, so that all right-hand sides of all Eqs. (194), and hence the time derivatives of all Wn vanish – as they should. Thus, the stationary solution of the master equations indeed describes the thermal equilibrium correctly.
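The reduction (197) to the Boltzmann factor relies only on χ''(ω) being odd, and is easy to verify numerically. The sketch below takes the Ohmic form χ'' = ηω merely as an example of an odd function, and evaluates the rate ratio of Eq. (196) (the common prefactor cancels); all numerical values are illustrative:

```python
import math

# Numeric check of Eq. (197): for an odd chi''(omega) -- the Ohmic form
# chi'' = eta*omega is taken just as an example -- the ratio of the "up"
# and "down" rates (196) equals the Boltzmann factor exp{(E_n'-E_n)/k_B T}.
hbar, kT, eta = 1.0, 0.7, 2.0        # illustrative values
E_n, E_n2 = 1.3, 0.4                 # an arbitrary pair of levels

def chi2(w):                         # odd function of frequency
    return eta * w

def rate(E_a, E_b):
    # Gamma_{b -> a} per Eq. (196), omitting the factor 2|x_ab|^2/hbar,
    # which is common to both rates and cancels in their ratio
    w_ab = (E_a - E_b) / hbar
    return chi2(w_ab) / (math.exp((E_a - E_b) / kT) - 1.0)

ratio = rate(E_n, E_n2) / rate(E_n2, E_n)     # Gamma_{n'->n}/Gamma_{n->n'}
boltzmann = math.exp((E_n2 - E_n) / kT)
print(ratio, boltzmann)
```

The two printed numbers coincide to rounding accuracy, for any choice of the level pair and temperature.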
The system of master equations (194), frequently complemented by additional terms on their right-hand sides, describing interlevel transitions due to other factors (e.g., by an external ac force with a frequency close to one of ω_{nn'}), is the key starting point for practical analyses of many quantum systems, notably including optical quantum amplifiers and generators (lasers). It is important to remember that 65 It is straightforward (and highly recommended to the reader) to show that at low temperatures (k_BT << E_{n'} – E_n), Eq. (196) gives the same result as the Golden Rule formula (6.111), with Â = x̂. (The low-temperature condition ensures that the initial occupancy of the excited level n is negligible, as was assumed at the derivation of Eq. (6.111).)
they are strictly valid only in the rotating-wave approximation, i.e. if Eq. (182) is well satisfied for all n and n’ of substance.
For a particular but very important case of a two-level system (with, say, E_1 > E_2), the rate Γ_{1→2} may be interpreted (especially in the low-temperature limit k_BT << ℏω_{12} = E_1 – E_2, when Γ_{1→2} >> Γ_{2→1}) as the reciprocal characteristic time 1/T_1 ≡ Γ_{1→2} of the energy relaxation process that brings the diagonal elements of the density matrix to their thermally-equilibrium values (24). For the Ohmic dissipation described by Eqs. (137)-(138), Eq. (196) yields
Energy relaxation time

$$\frac{1}{T_1} \equiv \Gamma_{1 \to 2} = \frac{2\eta\,|x_{12}|^2}{\hbar} \times \begin{cases} \omega_{12}, & \text{for } k_BT \ll \hbar\omega_{12}, \\[2pt] k_BT/\hbar, & \text{for } \hbar\omega_{12} \ll k_BT. \end{cases} \qquad (7.199)$$
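Both limits of Eq. (199) may be checked against the full Ohmic rate Γ₁→₂ = (2η|x₁₂|²/ℏ) ω₁₂/[1 − exp{−ℏω₁₂/k_BT}], which follows from Eq. (196) with χ''(ω) = ηω (this intermediate expression is derived here, not quoted from the text). All numbers are arbitrary, in consistent illustrative units:

```python
import math

# Full Ohmic rate following from Eq. (196) with chi''(omega) = eta*omega:
# Gamma_{1->2} = (2*eta*|x12|^2/hbar) * w12 / (1 - exp(-hbar*w12/(k_B*T))).
# Below we check that it reduces to the two limits quoted in Eq. (199).
hbar, eta, x12_sq, w12 = 1.0, 0.5, 2.0, 3.0   # illustrative values

def gamma_12(kT):
    return (2 * eta * x12_sq / hbar) * w12 / (1 - math.exp(-hbar * w12 / kT))

low_T = gamma_12(kT=0.01 * hbar * w12)         # k_B*T << hbar*w12
high_T = gamma_12(kT=100 * hbar * w12)         # k_B*T >> hbar*w12

low_ratio = low_T / ((2 * eta * x12_sq / hbar) * w12)
high_ratio = high_T / ((2 * eta * x12_sq / hbar**2) * (100 * hbar * w12))
print(low_ratio)    # -> ~1: low-T limit (2*eta*|x12|^2/hbar)*w12
print(high_ratio)   # -> ~1: high-T limit (2*eta*|x12|^2/hbar^2)*k_B*T
```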
This relaxation time T_1 should not be confused with the characteristic time T_2 of the off-diagonal element decay, i.e. of dephasing, which was already discussed in Sec. 3. In this context, let us see what Eqs. (183) have to say about the dephasing rates. Taking into account our intermediate results (187)-(192), and merging the non-oscillating components (with m = n and m = n') of the sums (187) and (188) with the terms (192), which also do not oscillate in time, we get the following equation:66
$$\begin{aligned}
\dot w_{nn'} = -\,w_{nn'} \int_0^{\infty} \bigg\{&\ \frac{1}{\hbar^2} K_F(\tau) \Big[ \sum_{m \neq n} |x_{nm}|^2 \exp\{i\omega_{nm}\tau\} + \sum_{m \neq n'} |x_{n'm}|^2 \exp\{-i\omega_{n'm}\tau\} + (x_{nn} - x_{n'n'})^2 \Big] \\
&- \frac{i}{2\hbar}\,G(\tau) \Big[ \sum_{m \neq n} |x_{nm}|^2 \exp\{i\omega_{nm}\tau\} - \sum_{m \neq n'} |x_{n'm}|^2 \exp\{-i\omega_{n'm}\tau\} + x_{nn}^2 - x_{n'n'}^2 \Big] \bigg\}\,d\tau. \qquad (7.200)
\end{aligned}$$