Essential Graduate Physics by Konstantin K. Likharev - HTML preview


$$ w(x,x')\,Z \rightarrow \delta(x-x'), \quad\text{at}\quad T \rightarrow \infty. \tag{7.40}$$

Since in this limit the average kinetic energy of the particle is not smaller than its potential energy in any fixed potential profile, Eq. (40) is a general property of the density matrix (33).

Let us discuss the following curious feature of Eq. (36): if we replace k_BT with ħ/i(t − t₀), and x' with x₀, the un-normalized density matrix wZ for a free particle turns into the particle's propagator – cf. Eq. (2.49). This is not just an occasional coincidence. Indeed, in Chapter 2 we saw that the propagator of a system with an arbitrary stationary Hamiltonian may be expressed via its stationary eigenfunctions as15

15 Due to the delta-normalization of the eigenfunctions, the density matrix (34) for the free particle (and any system with a continuous eigenvalue spectrum) is normalized as
$$ \int w(x,x')\,Z\,dx' = \int w(x,x')\,Z\,dx = 1. $$

Chapter 7

Page 8 of 50

Essential Graduate Physics

QM: Quantum Mechanics

$$ G(x,t;x_0,t_0) = \sum_n \psi_n(x)\,\exp\left\{-i\frac{E_n}{\hbar}(t-t_0)\right\}\psi_n^*(x_0). \tag{7.41}$$

Comparing this expression with Eq. (33), we see that the replacements
$$ \frac{i}{\hbar}(t-t_0) \rightarrow \frac{1}{k_B T}, \qquad x_0 \rightarrow x', \tag{7.42}$$

turn the pure-state propagator G into the un-normalized density matrix wZ of the same system in thermodynamic equilibrium. This important fact, rooted in the formal similarity of the Gibbs distribution (24) with the Schrödinger equation’s solution (1.69), enables a theoretical technique of the so-called thermodynamic Green’s functions, which is especially productive in condensed matter physics.16
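Since this correspondence is exact, it is easy to verify numerically; here is a minimal sketch in Python, using the Gaussian free-particle forms of the density matrix (36) and the propagator (2.49), with arbitrary parameter values:

```python
import numpy as np

# Free-particle un-normalized density matrix (36): wZ = (m kT / 2 pi hbar^2)^{1/2}
#   * exp{-m kT (x - x')^2 / 2 hbar^2}, evaluated at a complex "temperature".
# Free-particle propagator (2.49): G = (m / 2 pi i hbar (t - t0))^{1/2}
#   * exp{i m (x - x0)^2 / 2 hbar (t - t0)}.
m, hbar, dt = 1.3, 0.7, 2.1        # arbitrary parameter values (hbar != 1 on purpose)
x, x0 = 0.4, -1.1

kT = hbar / (1j * dt)              # the substitution (42): 1/k_BT -> i (t - t0) / hbar

wZ = np.sqrt(m * kT / (2 * np.pi * hbar**2)) * np.exp(-m * kT * (x - x0)**2 / (2 * hbar**2))
G = np.sqrt(m / (2j * np.pi * hbar * dt)) * np.exp(1j * m * (x - x0)**2 / (2 * hbar * dt))

assert abs(wZ - G) < 1e-12         # the two expressions coincide identically
```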

For our current purposes, we can employ Eq. (42) to re-use some wave mechanics results – in particular, the following formula for the harmonic oscillator's propagator:
$$ G(x,t;x_0,t_0) = \left(\frac{m\omega_0}{2\pi i\hbar\,\sin[\omega_0(t-t_0)]}\right)^{1/2}\exp\left\{\frac{im\omega_0\left[(x^2+x_0^2)\cos[\omega_0(t-t_0)]-2xx_0\right]}{2\hbar\,\sin[\omega_0(t-t_0)]}\right\}, \tag{7.43}$$
which may be readily proved to satisfy the Schrödinger equation for the Hamiltonian (5.62), with the appropriate initial condition, G(x, t₀; x₀, t₀) = δ(x − x₀). Making the substitution (42), we immediately get

Harmonic oscillator in thermal equilibrium:
$$ w(x,x')\,Z = \left(\frac{m\omega_0}{2\pi\hbar\,\sinh(\hbar\omega_0/k_B T)}\right)^{1/2}\exp\left\{-\frac{m\omega_0\left[(x^2+x'^2)\cosh(\hbar\omega_0/k_B T)-2xx'\right]}{2\hbar\,\sinh(\hbar\omega_0/k_B T)}\right\}. \tag{7.44}$$

As a sanity check, at very low temperatures, k_BT ≪ ħω₀, both hyperbolic functions participating in this expression are very large and nearly equal, and it yields
$$ w(x,x')\,Z \rightarrow \left[\left(\frac{m\omega_0}{\pi\hbar}\right)^{1/4}\exp\left\{-\frac{m\omega_0 x^2}{2\hbar}\right\}\right]\exp\left\{-\frac{\hbar\omega_0}{2k_B T}\right\}\left[\left(\frac{m\omega_0}{\pi\hbar}\right)^{1/4}\exp\left\{-\frac{m\omega_0 x'^2}{2\hbar}\right\}\right]. \tag{7.45}$$

In each of the expressions in square brackets we can readily recognize the ground state's wavefunction (2.275) of the oscillator, while the middle exponent is just the statistical sum (24) in the low-temperature limit, when it is dominated by the ground-level contribution:
$$ Z \rightarrow \exp\left\{-\frac{\hbar\omega_0}{2k_B T}\right\}, \quad\text{at}\quad T \rightarrow 0. \tag{7.46}$$

As a result, Z in both parts of Eq. (45) may be canceled, and the density matrix in this limit is described by Eq. (31), with the ground state as the only state of the system. This is natural when the temperature is too low for the thermal excitation of any other state.
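The closed form (44) may be cross-checked against the defining spectral sum (33) over the oscillator's stationary states; here is a numerical sketch in Python, in the units ħ = m = ω₀ = 1, with an arbitrary temperature and grid:

```python
import numpy as np

def hermite_functions(nmax, x):
    """Normalized oscillator eigenfunctions psi_0..psi_nmax (hbar = m = omega_0 = 1),
    built by the numerically stable three-term recurrence for Hermite functions."""
    h = np.zeros((nmax + 1,) + x.shape)
    h[0] = np.pi**-0.25 * np.exp(-x**2 / 2)
    if nmax >= 1:
        h[1] = np.sqrt(2.0) * x * h[0]
    for n in range(2, nmax + 1):
        h[n] = np.sqrt(2.0 / n) * x * h[n - 1] - np.sqrt((n - 1) / n) * h[n - 2]
    return h

T = 0.7                                   # k_B T in units of hbar * omega_0 (arbitrary)
x = np.linspace(-3, 3, 61)
X, Xp = np.meshgrid(x, x, indexing='ij')

# Spectral sum (33): wZ(x, x') = sum_n psi_n(x) psi_n(x') exp{-E_n / k_B T}
h, hp = hermite_functions(40, X), hermite_functions(40, Xp)
E = np.arange(41) + 0.5
wZ_sum = np.einsum('n,nij,nij->ij', np.exp(-E / T), h, hp)

# Closed form (44) in the same units
s, c = np.sinh(1 / T), np.cosh(1 / T)
wZ_44 = np.sqrt(1 / (2 * np.pi * s)) * np.exp(-((X**2 + Xp**2) * c - 2 * X * Xp) / (2 * s))

assert np.max(np.abs(wZ_sum - wZ_44)) < 1e-8
```

(The truncation of the sum at n = 40 is safe here, because the Boltzmann weight of that level is below 10⁻²⁵ at the chosen temperature.)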

16 I will have no time to discuss this technique and have to refer the interested reader to special literature.

Probably, the most famous text of that field is A. Abrikosov, L. Gor'kov, and I. Dzyaloshinski, Methods of Quantum Field Theory in Statistical Physics, Prentice-Hall, 1963. (Later reprintings are available from Dover.)

Returning to arbitrary temperatures, Eq. (44) with coinciding arguments gives the following expression for the probability density:17
$$ w(x,x)\,Z \equiv w(x)\,Z = \left(\frac{m\omega_0}{2\pi\hbar\,\sinh(\hbar\omega_0/k_B T)}\right)^{1/2}\exp\left\{-\frac{m\omega_0 x^2}{\hbar}\tanh\frac{\hbar\omega_0}{2k_B T}\right\}. \tag{7.47}$$

This is just a Gaussian function of x, with the following variance:
$$ \langle x^2\rangle = \frac{\hbar}{2m\omega_0}\coth\frac{\hbar\omega_0}{2k_B T}. \tag{7.48}$$

To compare this result with our earlier ones, it is useful to recast it as
$$ \langle U\rangle \equiv \frac{m\omega_0^2}{2}\langle x^2\rangle = \frac{\hbar\omega_0}{4}\coth\frac{\hbar\omega_0}{2k_B T} = \frac{E}{2}. \tag{7.49}$$

Comparing this expression with Eq. (26), we see that the average value of the potential energy is exactly one-half of the total energy – the other half being the average kinetic energy. This is just what we could expect, because according to Eqs. (5.96)-(5.97), such a relation holds for each Fock state and hence should also hold for their classical mixture.
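The step from Eq. (47) to Eq. (48) is just the variance of a Gaussian distribution, and may be cross-checked numerically; a minimal Python sketch, in the units ħ = m = ω₀ = 1 and with an arbitrary temperature:

```python
import numpy as np

T = 1.5                                   # k_B T in units of hbar * omega_0 (arbitrary)
x = np.linspace(-12, 12, 20001)

w = np.exp(-x**2 * np.tanh(1 / (2 * T)))  # Eq. (47), up to an x-independent factor
var_num = np.sum(x**2 * w) / np.sum(w)    # <x^2> by direct summation on the grid
var_48 = 0.5 / np.tanh(1 / (2 * T))       # Eq. (48): <x^2> = (1/2) coth(1 / 2T)

assert abs(var_num - var_48) < 1e-8
```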

Unfortunately, besides the trivial case (30) of coinciding arguments, it is hard to give a straightforward interpretation of the density matrix in terms of the system's measurements. This is a fundamental difficulty, which has been well explored in terms of the Wigner function (sometimes called the "Wigner-Ville distribution")18 defined as
$$ W(X,P) \equiv \frac{1}{2\pi\hbar}\int w\!\left(X+\frac{\tilde X}{2},\,X-\frac{\tilde X}{2}\right)\exp\left\{-\frac{iP\tilde X}{\hbar}\right\}d\tilde X. \tag{7.50}$$

From the mathematical standpoint, this is just the Fourier transform of the density matrix in one of two new coordinates defined by the following relations (see Fig. 2):
$$ X \equiv \frac{x+x'}{2}, \quad \tilde X \equiv x-x', \quad\text{so that}\quad x = X+\frac{\tilde X}{2}, \quad x' = X-\frac{\tilde X}{2}. \tag{7.51}$$

Physically, the new argument X may be interpreted as the average position of the particle during the time interval (t − t'), and X̃ as the distance passed by it during that interval, so that P characterizes the momentum of the particle during that motion. As a result, the Wigner function is a mathematical construct intended to characterize the system's probability distribution simultaneously in the coordinate and the momentum space – for 1D systems, on the phase plane [X, P], which we had discussed earlier – see Fig. 5.8. Let us see how fruitful this intention is.

17 I have to confess that this notation is imperfect, because strictly speaking, w( x, x’) and w( x) are different functions, and so are the functions w( p, p’) and w( p) used below. In the perfect world, I would use different letters for them all, but I desperately want to stay with “w” for all the probability densities, and there are not so many good fonts for this letter. Let me hope that the difference between these functions is clear from their arguments and the context.

18 It was introduced in 1932 by Eugene Wigner on the basis of a general ( Weyl-Wigner) transform suggested by Hermann Weyl in 1927 and re-derived in 1948 by Jean Ville on a different mathematical basis.


[Figure: the (x, x') plane, showing the rotated coordinates X and X̃.]
Fig. 7.2. The coordinates X and X̃ employed in the Weyl-Wigner transform (50). They differ from the coordinates obtained by the rotation of the reference frame by the angle π/4 only by factors √2 and 1/√2, describing scale stretch.

First of all, we may write the Fourier transform reciprocal to Eq. (50):
$$ w\!\left(X+\frac{\tilde X}{2},\,X-\frac{\tilde X}{2}\right) = \int W(X,P)\exp\left\{\frac{iP\tilde X}{\hbar}\right\}dP. \tag{7.52}$$

For the particular case X̃ = 0, this relation yields
$$ w(X) \equiv w(X,X) = \int W(X,P)\,dP. \tag{7.53}$$

Hence the integral of the Wigner function over the momentum P gives the probability density to find the system at point X – just as it does for a classical distribution function w cl( X, P).19

Next, the Wigner function has a similar property for integration over X. To prove this, we may first introduce the momentum representation of the density matrix, in full analogy with its coordinate representation (27):

$$ w(p,p') \equiv \langle p|\hat w|p'\rangle. \tag{7.54}$$

Inserting, as usual, two identity operators, in the form given by Eq. (4.252), into the right-hand side of this equality, we get the following relation between the momentum and coordinate representations:
$$ w(p,p') = \iint dx\,dx'\,\langle p|x\rangle\langle x|\hat w|x'\rangle\langle x'|p'\rangle = \frac{1}{2\pi\hbar}\iint dx\,dx'\exp\left\{-\frac{ipx}{\hbar}\right\}w(x,x')\exp\left\{\frac{ip'x'}{\hbar}\right\}. \tag{7.55}$$

This is of course nothing else than the unitary transform of an operator from the x-basis to the p-basis, similar to the first form of Eq. (4.272). For coinciding arguments, p = p', Eq. (55) is reduced to
$$ w(p) \equiv w(p,p) = \frac{1}{2\pi\hbar}\iint dx\,dx'\,w(x,x')\exp\left\{-\frac{ip(x-x')}{\hbar}\right\}. \tag{7.56}$$

Now using Eq. (29) and then Eq. (4.265), this function may be represented as
$$ w(p) = \frac{1}{2\pi\hbar}\sum_j W_j\iint dx\,dx'\,\psi_j(x)\psi_j^*(x')\exp\left\{-\frac{ip(x-x')}{\hbar}\right\} = \sum_j W_j\,\psi_j(p)\,\psi_j^*(p), \tag{7.57}$$

and hence interpreted as the probability density of the particle’s momentum at value p. Now, in the variables (51), Eq. (56) has the form

19 Such function, used to express the probability dW to find the system in a small area of the phase plane as dW = w cl( X, P) dXdP, is a major notion of the (1D) classical statistics – see, e.g., SM Sec. 2.1.


$$ w(p) = \frac{1}{2\pi\hbar}\iint w\!\left(X+\frac{\tilde X}{2},\,X-\frac{\tilde X}{2}\right)\exp\left\{-\frac{ip\tilde X}{\hbar}\right\}d\tilde X\,dX. \tag{7.58}$$

Comparing this equality with the definition (50) of the Wigner function, we see that
$$ w(P) = \int W(X,P)\,dX. \tag{7.59}$$

Thus, according to Eqs. (53) and (59), the integrals of the Wigner function over either the coordinate or the momentum give the probability densities to find the system at a certain value of the counterpart variable. This is of course the main requirement on any quantum-mechanical candidate for the best analog of the classical probability density w_cl(X, P).
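Both sum rules may be verified numerically directly from the definition (50); the Python sketch below does this for the oscillator's ground state (in the units ħ = m = ω₀ = 1), for which both marginals are the same Gaussian:

```python
import numpy as np

x = np.linspace(-6, 6, 241)                       # X grid
p = np.linspace(-6, 6, 241)                       # P grid
u = np.linspace(-12, 12, 961)                     # integration variable X-tilde
du, dp, dx = u[1] - u[0], p[1] - p[0], x[1] - x[0]

psi = lambda s: np.pi**-0.25 * np.exp(-s**2 / 2)  # oscillator ground state

kernel = np.exp(-1j * np.outer(p, u))             # e^{-iPu} for all (P, u) pairs
W = np.empty((x.size, p.size))
for i, X in enumerate(x):
    f = psi(X + u / 2) * psi(X - u / 2)           # pure-state density matrix (31) in variables (51)
    W[i] = (kernel @ f).real * du / (2 * np.pi)   # definition (50), simple quadrature

w_X = W.sum(axis=1) * dp                          # Eq. (53): integral over the momentum
w_P = W.sum(axis=0) * dx                          # Eq. (59): integral over the coordinate

assert np.max(np.abs(w_X - psi(x)**2)) < 1e-6     # equals |psi(X)|^2
assert np.max(np.abs(w_P - psi(p)**2)) < 1e-6     # the same Gaussian form in P for this state
```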

Let us see how the Wigner function looks for the simplest systems at thermodynamic equilibrium. For a free 1D particle, we can use Eq. (34), ignoring for simplicity the normalization issues:
$$ W(X,P) \propto \int \exp\left\{-\frac{mk_B T\tilde X^2}{2\hbar^2}\right\}\exp\left\{-\frac{iP\tilde X}{\hbar}\right\}d\tilde X. \tag{7.60}$$

The usual Gaussian integration yields:
$$ W(X,P) = \text{const}\times\exp\left\{-\frac{P^2}{2mk_B T}\right\}. \tag{7.61}$$

We see that the function is independent of X (as it should be for this translation-invariant system), and coincides with the Gibbs distribution (24). We could get the same result directly from classical statistics. This is natural because, as we know from Sec. 2.2, free motion is essentially not quantized – at least in terms of its energy and momentum.

Now let us consider a substantially quantum system, the harmonic oscillator. Plugging Eq. (44) into Eq. (50), for that system in thermal equilibrium it is easy to show (and hence is left for the reader's exercise) that the Wigner function is also Gaussian, now in both its arguments:
$$ W(X,P) = \text{const}\times\exp\left\{-\left(\frac{m\omega_0^2 X^2}{2}+\frac{P^2}{2m}\right)C\right\}, \tag{7.62}$$
though the coefficient C is now different from 1/k_BT, and tends to that limit only at high temperatures, k_BT ≫ ħω₀. Moreover, for a Glauber state, the Wigner function also gives a very plausible result – a Gaussian distribution similar to Eq. (62), but properly shifted from the origin to the central point of the state – see Sec. 5.5.20

Unfortunately, for some other possible states of the harmonic oscillator, e.g., any pure Fock state with n > 0, the Wigner function takes negative values in some regions of the [X, P] plane – see Fig. 3.21 (Such plots were the basis of my, admittedly very imperfect, classical images of the Fock states in Fig. 5.8.)

20 Please note that in the notation of Sec. 5.5, the capital letters X and P mean not the arguments of the Wigner function, but the Cartesian coordinates of the central point (5.102), i.e. the classical complex amplitude of the oscillations.

21 Spectacular experimental measurements of this function (for n = 0 and n = 1) were carried out recently by E. Bimbard et al., Phys. Rev. Lett. 112, 033601 (2014).


Fig. 7.3. The Wigner functions W( X, P) of a harmonic oscillator, in a few of its stationary (Fock) states n: (a) n = 0, (b) n = 1; (c) n = 5. Graphics by J. S. Lundeen; adapted from http://en.wikipedia.org/wiki/Wigner_function as a public-domain material.

The same is true for most other quantum systems and their states. Indeed, this fact could be predicted just by looking at the definition (50) applied to a pure quantum state, in which the density function may be factored – see Eq. (31):
$$ W(X,P) = \frac{1}{2\pi\hbar}\int \psi\!\left(X+\frac{\tilde X}{2}\right)\psi^*\!\left(X-\frac{\tilde X}{2}\right)\exp\left\{-\frac{iP\tilde X}{\hbar}\right\}d\tilde X. \tag{7.63}$$

Changing the argument P (say, at fixed X), we are essentially changing the spatial “frequency” (wave number) of the wavefunction product’s Fourier component we are calculating, and we know that their Fourier images typically change sign as the frequency is changed. Hence the wavefunctions should have some high-symmetry properties to avoid this effect. Indeed, the Gaussian functions (describing, for example, the Glauber states, and in their particular case, the ground state of the harmonic oscillator) have such symmetry, but many other functions do not.

Hence if the Wigner function was taken seriously as the quantum-mechanical analog of the classical probability density w cl( X, P), we would need to interpret the negative probability of finding the particle in certain elementary intervals dXdP – which is hard to do. However, the function is still used for a semi-quantitative interpretation of mixed states of quantum systems.
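For the lowest Fock states, this negativity is easy to check numerically at the origin X = P = 0, where the integral (63) reduces to a simple overlap; the Python sketch below (ħ = m = ω₀ = 1) recovers the known value W(0, 0) = −1/π for the n = 1 state:

```python
import numpy as np

u = np.linspace(-14, 14, 4001)
du = u[1] - u[0]
psi1 = lambda s: np.sqrt(2.0) * s * np.pi**-0.25 * np.exp(-s**2 / 2)  # n = 1 Fock state

# Eq. (63) at the origin X = P = 0 reduces to the overlap integral below;
# since psi1 is odd, the integrand is negative everywhere
W00 = np.sum(psi1(u / 2) * psi1(-u / 2)) * du / (2 * np.pi)

assert W00 < 0                       # negative => not a classical probability density
assert abs(W00 + 1 / np.pi) < 1e-8   # the known value W(0,0) = -1/pi for n = 1
```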

7.3. Open system dynamics: Dephasing

So far we have discussed the density operator as something given at a particular time instant. Now let us discuss how it is formed, i.e. its evolution in time, starting from the simplest case when the probabilities Wj participating in Eq. (15) are time-independent – for this or that reason, to be discussed in a moment. In this case, in the Schrödinger picture, we may rewrite Eq. (15) as
$$ \hat w(t) = \sum_j |w_j(t)\rangle\,W_j\,\langle w_j(t)|. \tag{7.64}$$

Taking a time derivative of both sides of this equation, multiplying them by iħ, and applying Eq. (4.158) to the basis states w_j, with the account of the fact that the Hamiltonian operator is Hermitian, we get

$$ i\hbar\dot{\hat w} = i\hbar\sum_j\Big(|\dot w_j(t)\rangle W_j\langle w_j(t)| + |w_j(t)\rangle W_j\langle\dot w_j(t)|\Big) = \sum_j\Big(\hat H|w_j(t)\rangle W_j\langle w_j(t)| - |w_j(t)\rangle W_j\langle w_j(t)|\hat H\Big) = \hat H\sum_j|w_j(t)\rangle W_j\langle w_j(t)| - \sum_j|w_j(t)\rangle W_j\langle w_j(t)|\,\hat H. \tag{7.65}$$

Now using Eq. (64) again (twice), we get the so-called von Neumann equation:22
$$ i\hbar\dot{\hat w} = [\hat H, \hat w]. \tag{7.66}$$

Note that this equation is similar in structure to Eq. (4.199) describing the time evolution of time-independent operators in the Heisenberg picture:
$$ i\hbar\dot{\hat A} = [\hat A, \hat H], \tag{7.67}$$

besides the opposite order of the operators in the commutator – equivalent to the change of sign of the right-hand side. This should not be too surprising, because Eq. (66) belongs to the Schrödinger picture of quantum dynamics, while Eq. (67), to its Heisenberg picture.
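For a time-independent Hamiltonian, Eq. (66) is solved by ŵ(t) = exp{−iĤt/ħ} ŵ(0) exp{+iĤt/ħ}; the Python sketch below illustrates this solution for a two-level system with a diagonal Hamiltonian and arbitrary parameter values, showing that the probabilities stay constant while the off-diagonal element only acquires a phase factor:

```python
import numpy as np

hbar, cz = 1.0, 0.8
H = cz * np.diag([1.0, -1.0])                     # a diagonal two-level Hamiltonian
w0 = np.array([[0.6, 0.3 - 0.1j],
               [0.3 + 0.1j, 0.4]])                # a valid density matrix (Tr = 1, Hermitian)

t = 2.7
U = np.diag(np.exp(-1j * np.diag(H) * t / hbar))  # exp{-iHt/hbar} (H is diagonal)
wt = U @ w0 @ U.conj().T                          # solution of the von Neumann equation (66)

assert np.allclose(np.diag(wt), np.diag(w0))                 # W_1, W_2 do not change
assert np.isclose(abs(wt[0, 1]), abs(w0[0, 1]))              # |w_12| conserved without environment
assert np.isclose(wt[0, 1], w0[0, 1] * np.exp(-2j * cz * t / hbar))  # pure phase rotation
```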

The most important case when the von Neumann equation is (approximately) valid is when the "own" Hamiltonian Ĥ_s of the system s of our interest is time-independent, and its interaction with the environment is so small that its effect on the system's evolution during the considered time interval is negligible, but it had lasted so long that it gradually put the system into a non-pure state – for example, but not necessarily, into the classical mixture (24).23 (This is an example of the second case discussed in Sec. 1, when we need the mixed-ensemble description of the system even if its current interaction with the environment is negligible.) If the interaction with the environment is stronger, and hence is not negligible at the considered time interval, Eq. (66) is generally not valid,24 because the probabilities Wj may change in time. However, this equation may still be used for a discussion of one major effect of the environment, namely dephasing (also called "decoherence"), within a simple model.

Let us start with the following general model of a system interacting with its environment, which will be used throughout this chapter:
$$ \hat H = \hat H_s + \hat H_{\rm int} + \hat H_e\{\lambda\}, \tag{7.68}$$

22 In some texts, it is called the “Liouville equation”, due to its philosophical proximity to the classical Liouville theorem for the classical distribution function w cl( X, P) – see, e.g., SM Sec. 6.1 and in particular Eq. (6.5).

23 In the last case, the statistical operator is diagonal in the stationary state basis and hence commutes with the Hamiltonian. Hence the right-hand side of Eq. (66) vanishes, and it shows that in this basis, the density matrix is completely time-independent.

24 Very unfortunately, this fact is not explained in some textbooks, which quote the von Neumann equation without proper qualifications.


where {} denotes the (huge) set of degrees of freedom of the environment.25 Evidently, this model is useful only if we may somehow tame the enormous size of the Hilbert space of these degrees of freedom, and so work out the calculations all way to a practicably simple result. This turns out to be possible mostly if the elementary act of interaction of the system and its environment is in some sense small. Below, I will describe several cases when this is true; the classical example is the Brownian particle interacting with the molecules of the surrounding gas or fluid.26 (In this example, a single hit by a molecule changes the particle’s momentum by a minor fraction.) On the other hand, the model (68) is not very productive for a particle interacting with the environment consisting of similar particles, when a single collision may change its momentum dramatically. In such cases, the methods discussed in the next chapter are more relevant.

Now let us analyze a very simple model of an open two-level quantum system, with its intrinsic Hamiltonian having the form
$$ \hat H_s = c_z\hat\sigma_z, \tag{7.69}$$

similar to the Pauli Hamiltonian (4.163),27 and a factorable, bilinear interaction – cf. Eq. (6.145) and its discussion:
$$ \hat H_{\rm int} = \hat f\{\lambda\}\,\hat\sigma_z, \tag{7.70}$$
where f̂ is a Hermitian operator depending only on the set {λ} of environmental degrees of freedom ("coordinates"), defined in their Hilbert space – different from that of the two-level system. As a result, the operators f̂ and Ĥ_e{λ} commute with σ̂_z – and with any other intrinsic operator of the two-level system. Of course, any realistic Ĥ_e{λ} is extremely complex, so how much we will be able to achieve without specifying it may be a pleasant surprise for the reader.

Before we proceed to the analysis, let us recognize two examples of two-level systems that may be described by this model. The first example is a spin-½ in an external magnetic field of a fixed direction (taken for the axis z), which includes both an average component B̄_z and a random (fluctuating) component B̃_z(t) induced by the environment. As it follows from Eq. (4.163b), it may be described by the Hamiltonian (68)-(70) with
$$ c_z = -\frac{\gamma\hbar}{2}\,\bar B_z, \qquad \hat f(t) = -\frac{\gamma\hbar}{2}\,\hat{\tilde B}_z(t). \tag{7.71}$$

25 Note that by writing Eq. (68), we are treating the whole system, including the environment, as a Hamiltonian one. This can always be done if the accounted part of the environment is large enough so that the processes in the system s of our interest do not depend on the type of boundary between this part and the “external” (even larger) environment; in particular, we may assume the total system to be closed, i.e. Hamiltonian.

26 The theory of the Brownian motion, the effect first observed experimentally by biologist Robert Brown in the 1820s, was pioneered by Albert Einstein in 1905 and developed in detail by Marian Smoluchowski in 1906-1907 and Adriaan Fokker in 1913. Due to this historic background, in some older texts, the approach described in the balance of this chapter is called the "quantum theory of the Brownian motion". Let me, however, emphasize that due to the later progress of experimental techniques, quantum-mechanical behaviors, including the environmental effects in them, have been observed in a rapidly growing number of various quasi-macroscopic systems, for which this approach is quite applicable. In particular, this is true for most systems being explored as possible qubits of prospective quantum computing and encryption systems – see Sec. 8.5 below.

27 As we know from Secs. 4.6 and 5.1, such Hamiltonian is sufficient to lift the energy level degeneracy.


Another example is a particle in a symmetric double-well potential U_s (Fig. 4), with a barrier between the wells sufficiently high to be practically impenetrable, and an additional force F(t), exerted by the environment, so that the total potential energy is U(x, t) = U_s(x) − F(t)x. If the force, including its static part F and fluctuations F̃(t), is sufficiently weak, we can neglect its effects on the shape of the potential wells and hence on the localized wavefunctions ψ_L,R, so that the force effect is reduced to the variation of the difference E_L − E_R = −F(t)Δx between the eigenenergies. As a result, the system may be described by Eqs. (68)-(70) with
$$ c_z = -\frac{F\,\Delta x}{2}, \qquad \hat f(t) = -\frac{\hat{\tilde F}(t)\,\Delta x}{2}. \tag{7.72}$$

[Figure: the symmetric double-well potential U_s(x), the localized wavefunctions ψ_L and ψ_R, and the tilt −F(t)x.]
Fig. 7.4. Dephasing in a double-well system.

Let us start our general analysis of the model described by Eqs. (68)-(70) by writing the equation of motion for the Heisenberg operator σ̂_z(t):
$$ i\hbar\dot{\hat\sigma}_z = [\hat\sigma_z, \hat H] = (c_z + \hat f\,)[\hat\sigma_z, \hat\sigma_z] = 0, \tag{7.73}$$

showing that in our simple model (68)-(70), the operator σ̂_z does not evolve in time. What does this mean for the observables? For an arbitrary density matrix of any two-level system,
$$ \mathrm{w} = \begin{pmatrix} w_{11} & w_{12}\\ w_{21} & w_{22}\end{pmatrix}, \tag{7.74}$$

we can readily calculate the trace of the operator σ̂_z ŵ. Indeed, since operator traces are basis-independent, we can do this in any basis, in particular in the usual z-basis:
$$ \langle\sigma_z\rangle = \mathrm{Tr}\,(\hat\sigma_z\hat w) = \mathrm{Tr}\left[\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}\begin{pmatrix} w_{11} & w_{12}\\ w_{21} & w_{22}\end{pmatrix}\right] = w_{11} - w_{22} = W_1 - W_2. \tag{7.75}$$

Since, according to Eq. (5), σ̂_z may be considered the operator for the difference of the numbers of particles in the basis states 1 and 2, in the case (73) the difference W_1 − W_2 does not depend on time, and since the sum of these probabilities is also fixed, W_1 + W_2 = 1, both of them are constant. The physics of this simple result is especially clear for the model shown in Fig. 4: since the potential barrier separating the potential wells is so high that tunneling through it is negligible, the interaction with the environment cannot move the system from one well into the other.

It may look like nothing interesting may happen in such a simple situation, but in a minute we will see that this is not true. Due to the time independence of W 1 and W 2, we may use the von Neumann equation (66) to describe the density matrix evolution. In the usual z-basis:


$$ i\hbar\dot{\mathrm{w}} = i\hbar\begin{pmatrix}\dot w_{11} & \dot w_{12}\\ \dot w_{21} & \dot w_{22}\end{pmatrix} = [\mathrm{H},\mathrm{w}] = (c_z+\hat f\,)[\sigma_z,\mathrm{w}] = (c_z+\hat f\,)\left[\begin{pmatrix}1&0\\0&-1\end{pmatrix}\begin{pmatrix}w_{11}&w_{12}\\w_{21}&w_{22}\end{pmatrix}-\begin{pmatrix}w_{11}&w_{12}\\w_{21}&w_{22}\end{pmatrix}\begin{pmatrix}1&0\\0&-1\end{pmatrix}\right] = (c_z+\hat f\,)\begin{pmatrix}0 & 2w_{12}\\ -2w_{21} & 0\end{pmatrix}. \tag{7.76}$$

This result means that while the diagonal elements, i.e. the probabilities of the states, do not evolve in time (as we already know), the off-diagonal elements do change; for example,
$$ i\hbar\dot w_{12} = 2(c_z+\hat f\,)\,w_{12}, \tag{7.77}$$

with a similar but complex-conjugate equation for w_21. The solution of this linear differential equation (77) is straightforward, and yields
$$ w_{12}(t) = w_{12}(0)\exp\left\{-\frac{2ic_z}{\hbar}t\right\}\exp\left\{-\frac{2i}{\hbar}\int_0^t\hat f(t')\,dt'\right\}. \tag{7.78}$$
The first exponent is a deterministic c-number factor, while in the second one, f̂(t) is still an operator in the Hilbert space of the environment, which, from the point of view of the two-level system of our interest, is a random function of time. The time-average part of this function may be included in c_z, so in what follows we will assume that it equals zero.

Let us start from the limit when the environment behaves classically.28 In this case, the operator in Eq. (78) may be considered as a classical random function of time f(t), provided that we average its effects over a statistical ensemble of many functions f(t) describing many (macroscopically similar) experiments. For a small time interval t = dt → 0, we can use the Taylor expansion of the exponent, truncating it after the quadratic term:
$$ \left\langle\exp\left\{-\frac{2i}{\hbar}\int_0^{dt}f(t')\,dt'\right\}\right\rangle \approx 1 - \frac{2i}{\hbar}\left\langle\int_0^{dt}f(t')\,dt'\right\rangle + \frac{1}{2}\left\langle\left(-\frac{2i}{\hbar}\right)^2\int_0^{dt}f(t')\,dt'\int_0^{dt}f(t'')\,dt''\right\rangle = 1 - \frac{2}{\hbar^2}\int_0^{dt}\!dt'\!\int_0^{dt}\!dt''\,\langle f(t')f(t'')\rangle = 1 - \frac{2}{\hbar^2}\int_0^{dt}\!dt'\!\int_0^{dt}\!dt''\,K_f(t'-t''). \tag{7.79}$$

Here we have used the facts that the statistical average of f(t) is equal to zero, while the second average, called the correlation function, in a statistically- (i.e. macroscopically-) stationary state of any environment may only depend on the time difference τ ≡ t' − t'':
$$ \langle f(t')f(t'')\rangle = K_f(t'-t'') \equiv K_f(\tau). \tag{7.80}$$

If this difference is much larger than some time scale τ_c, called the correlation time of the environment, the values f(t') and f(t'') are completely independent (uncorrelated), as illustrated in Fig. 5a, so that at τ → ∞, the correlation function has to tend to zero. On the other hand, at τ = 0, i.e. t' = t'', the correlation function is just the variance of f:

28 This assumption is not in contradiction with the need for a quantum treatment of the two-level system s, because a typical environment is large, and hence has a very dense energy spectrum, with the distances between adjacent levels that may be readily bridged by thermal excitations of small energies, often making it essentially classical.

$$ K_f(0) = \langle f^2\rangle, \tag{7.81}$$

and has to be positive. As a result, the function looks (semi-quantitatively) as shown in Fig. 5b.

[Figure: (a) a sample of a random process f(t); (b) its correlation function ⟨f(t')f(t'')⟩ versus t' − t'', decaying over the correlation time τ_c.]
Fig. 7.5. (a) A typical random process and (b) its correlation function – schematically.
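These features are easy to reproduce with a synthetic random process. The Python sketch below uses an exponentially-correlated (Ornstein-Uhlenbeck-type) process – just one convenient stand-in for f(t), not implied by the model – to estimate the correlation function numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, tau_c, var = 0.01, 1.0, 1.0        # time step, correlation time, variance <f^2>
N = 400_000

# A synthetic exponentially-correlated random process, for which
# K_f(tau) = <f^2> exp{-|tau|/tau_c} -- one simple realization of Fig. 5
a = np.exp(-dt / tau_c)
f = np.empty(N)
f[0] = 0.0
for k in range(1, N):
    f[k] = a * f[k - 1] + np.sqrt(var * (1 - a * a)) * rng.standard_normal()

def K(lag):                            # estimator of the correlation function (80)
    m = int(round(lag / dt))
    return np.mean(f[:N - m] * f[m:])

assert abs(K(0.0) - var) < 0.1 * var   # K_f(0) = <f^2>, Eq. (81)
assert abs(K(5 * tau_c)) < 0.1 * var   # nearly uncorrelated at tau >> tau_c
```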

Hence, if we are only interested in time differences τ much longer than τ_c, which is typically very short, we may approximate K_f(τ) well with a delta function of the time difference. Let us take it in the following form, convenient for later discussion:
$$ K_f(\tau) = \hbar^2 D_\varphi\,\delta(\tau), \tag{7.82}$$

where D is a positive constant called the phase diffusion coefficient. The origin of this term stems from the very similar effect of classical diffusion of Brownian particles in a highly viscous medium. Indeed, the particle’s velocity in such a medium is approximately proportional to the external force. Hence, if the random hits of a particle by the medium’s molecules may be described by a force that obeys a law similar to Eq. (82), the velocity (along any Cartesian coordinate) is also delta-correlated: v( t) 

,

0

v( t' ) v( t" )  2 D ( t' t" ).

(7.83)

Now we can integrate the kinematic relation ẋ = v to calculate the particle's displacement from its initial position during a time interval [0, t], and its variance:
$$ x(t) - x(0) = \int_0^t v(t')\,dt', \tag{7.84}$$
$$ \left\langle\left[x(t)-x(0)\right]^2\right\rangle = \left\langle\int_0^t v(t')\,dt'\int_0^t v(t'')\,dt''\right\rangle = \int_0^t\!dt'\!\int_0^t\!dt''\,\langle v(t')v(t'')\rangle = \int_0^t\!dt'\!\int_0^t\!dt''\,2D\,\delta(t'-t'') = 2Dt. \tag{7.85}$$

This is the famous law of diffusion, showing that the r.m.s. deviation of the particle from the initial point grows with time as (2 Dt)1/2, where the constant D is called the diffusion coefficient.
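This law is easy to reproduce by direct simulation, replacing the delta-correlated velocity (83) on a discrete time grid with independent Gaussian displacements of variance 2D dt; a Python sketch, with arbitrary parameter values:

```python
import numpy as np

rng = np.random.default_rng(7)
D, dt, n_steps, n_traj = 0.5, 0.01, 200, 20_000

# Discrete stand-in for the delta-correlated velocity (83): over each step dt,
# the displacement v*dt is an independent Gaussian with variance 2*D*dt
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_traj, n_steps))
x = np.cumsum(steps, axis=1)              # Eq. (84): x(t) - x(0)

t = dt * np.arange(1, n_steps + 1)
var_sim = x.var(axis=0)                   # ensemble variance at each time

assert np.all(np.abs(var_sim / (2 * D * t) - 1) < 0.1)   # Eq. (85): variance = 2 D t
```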

Returning to the diffusion of the quantum-mechanical phase, with Eq. (82) the last double integral in Eq. (79) yields 2D_φ dt, so that the statistical average of Eq. (78) is
$$ \langle w_{12}(dt)\rangle = w_{12}(0)\exp\left\{-\frac{2ic_z}{\hbar}dt\right\}\left(1 - 2D_\varphi\,dt\right). \tag{7.86}$$

Applying this formula to sequential time intervals,
$$ \langle w_{12}(2\,dt)\rangle = \langle w_{12}(dt)\rangle\exp\left\{-\frac{2ic_z}{\hbar}dt\right\}\left(1-2D_\varphi\,dt\right) = w_{12}(0)\exp\left\{-\frac{2ic_z}{\hbar}2\,dt\right\}\left(1-2D_\varphi\,dt\right)^2, \tag{7.87}$$


etc., so that for a finite time t = N dt, in the limit N → ∞ and dt → 0 (at fixed t), we get
$$ \langle w_{12}(t)\rangle = w_{12}(0)\exp\left\{-\frac{2ic_z}{\hbar}t\right\}\lim_{N\to\infty}\left(1 - 2D_\varphi\frac{t}{N}\right)^N. \tag{7.88}$$

By the definition of the natural logarithm base e,29 this limit is just exp{−2D_φt}, so that, finally:
$$ \langle w_{12}(t)\rangle = w_{12}(0)\exp\left\{-\frac{2ic_z}{\hbar}t\right\}\exp\{-2D_\varphi t\} \equiv w_{12}(0)\exp\left\{-\frac{2ic_z}{\hbar}t\right\}\exp\left\{-\frac{t}{T_2}\right\}. \tag{7.89}$$

So, due to the coupling to the environment, the off-diagonal elements of the density matrix decay with the dephasing time T₂ ≡ 1/2D_φ, providing a natural evolution from the density matrix (22) of a pure state to the diagonal matrix (23), with the same probabilities W_{1,2}, describing a fully dephased (incoherent) classical mixture.30

This simple model offers a very clear look at the nature of decoherence: the random "force" f(t), exerted by the environment, "shakes" the energy difference between the two eigenstates of the system, and hence the instantaneous velocity 2(c_z + f)/ħ of their mutual phase shift φ(t) – cf. Eq. (22). Due to the randomness of the force, φ(t) performs a random walk around the trigonometric circle, so that the average of its trigonometric functions exp{±iφ} over time gradually tends to zero, killing the off-diagonal elements of the density matrix. Our analysis, however, has left open two important issues:

(i) Is this approach valid for a quantum description of a typical environment?

(ii) If yes, what is, physically, the D_φ that was formally defined by Eq. (82)?
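This phase random walk is easy to simulate directly: according to Eqs. (78) and (82), the phase increments over sequential steps dt are independent Gaussians with the variance 4D_φ dt, so the ensemble-averaged coherence should decay as exp{−2D_φt} = exp{−t/T₂}. A Python sketch, with arbitrary parameter values:

```python
import numpy as np

rng = np.random.default_rng(3)
D_phi, dt, n_steps, n_traj = 1.0, 0.004, 250, 20_000

# Phase increments d(phi) = (2/hbar) f dt: with the delta-correlated f of Eq. (82),
# each increment is an independent Gaussian with variance 4 * D_phi * dt
dphi = rng.normal(0.0, np.sqrt(4 * D_phi * dt), size=(n_traj, n_steps))
phi = np.cumsum(dphi, axis=1)

t = dt * np.arange(1, n_steps + 1)
coherence = np.abs(np.mean(np.exp(-1j * phi), axis=0))  # |<w_12(t)>| / |w_12(0)|

T2 = 1 / (2 * D_phi)                                    # dephasing time, Eq. (89)
assert np.all(np.abs(coherence - np.exp(-t / T2)) < 0.03)
```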

7.4. Fluctuation-dissipation theorem

Similar questions may be asked about a more general situation, when the Hamiltonian Ĥ_s of the system of interest (s), in the composite Hamiltonian (68), is not specified at all, but the interaction between that system and its environment still has a bilinear form similar to Eqs. (70) and (6.130):
$$ \hat H_{\rm int} = -\hat F\{\lambda\}\,\hat x, \tag{7.90}$$

int

where x is some observable of our system s – say, its generalized coordinate or generalized momentum.

It may look incredible that in this very general situation one still can make a very simple and powerful statement about the statistical properties of the generalized force F, under only two (interrelated) conditions – which are satisfied in a huge number of cases of interest:

(i) the coupling of system s of interest to its environment e is weak – in the sense that the perturbation theory (see Chapter 6) is applicable, and

29 See, e.g., MA Eq. (1.2a) with n = −N/2D_φt.

30 Note that this result is valid only if the approximation (82) may be applied at a time interval dt which, in turn, should be much smaller than the T₂ in Eq. (88), i.e. if the dephasing time is much longer than the environment's correlation time τ_c. This requirement may always be satisfied by making the coupling to the environment sufficiently weak. In addition, in typical environments, τ_c is very short. For example, in the original Brownian motion experiments with a-few-μm pollen grains in water, it is of the order of the average interval between sequential molecular impacts, of the order of 10⁻²¹ s.


(ii) the environment may be considered as staying in thermodynamic equilibrium, with a certain temperature T, regardless of the process in the system of interest.31

This famous statement is called the fluctuation-dissipation theorem (FDT).32 Due to the importance of this fundamental result, let me derive it.33 Since by writing Eq. (68) we treat the whole system (s + e) as a Hamiltonian one, we may use the Heisenberg equation (4.199) to write
$$ i\hbar\dot{\hat F} = [\hat F, \hat H] = [\hat F, \hat H_e], \tag{7.91}$$
because, as was discussed in the last section, the operator F̂ commutes with both Ĥ_s and x̂. Generally, very little may be done with this equation, because the time evolution of the environment's Hamiltonian depends, in turn, on that of the force. This is where the perturbation theory becomes indispensable. Let us decompose the force operator into the following sum:

very little may be done with this equation, because the time evolution of the environment’s Hamiltonian depends, in turn, on that of the force. This is where the perturbation theory becomes indispensable. Let us decompose the force operator into the following sum:

ˆ

F

ˆ~

ˆ~

ˆ

F F( t), with F( t)  0 ,

(7.92)

where (here and on, until further notice) the sign $\langle\ldots\rangle$ means the statistical averaging over the environment alone, i.e. over an ensemble with absolutely similar evolutions of the system s, but random states of its environment.34 From the point of view of the system s, the first term of the sum (still an operator!) describes the average response of the environment to the system dynamics (possibly, including such irreversible effects as friction), and has to be calculated with a proper account of their interaction – as we will do later in this section. On the other hand, the last term in Eq. (92) represents random fluctuations of the environment, which exist even in the absence of the system s. Hence, in the first non-zero approximation in the interaction strength, the fluctuation part may be calculated ignoring the interaction, i.e. treating the environment as being in thermodynamic equilibrium:

$$i\hbar \dot{\tilde{\hat F}} = \big[\tilde{\hat F}, \hat H_e\big].\tag{7.93}$$

31 The most frequent example of the violation of this condition is the environment's overheating by the energy flow from system s. Let me leave it to the reader to estimate the overheating of a standard physical laboratory room by a typical dissipative quantum process – the emission of an optical photon by an atom. (Hint: it is extremely small.)

Since in this approximation the environment's Hamiltonian does not have an explicit dependence on time, the solution of this equation may be written by combining Eqs. (4.190) and (4.175):

32 The FDT was first derived by Herbert Callen and Theodore Allen Welton in 1951, on the background of an earlier derivation of its classical limit by Harry Nyquist in 1928.

33 The FDT may be proved in several ways that are shorter than the one given below – see, e.g., either the proof in SM Secs. 5.5 and 5.6 (based on H. Nyquist's arguments), or the original paper by H. Callen and T. Welton, Phys. Rev. 83, 34 (1951) – wonderful in its clarity. The longer approach I will describe here, besides giving the important Green-Kubo formula (109) as a byproduct, is a very useful exercise in operator manipulation and the perturbation theory in its integral form – different from the differential forms used in Chapter 6. If the reader is not interested in this exercise, they may skip the derivation and jump straight to the result expressed by Eq. (134), which uses the notions defined by Eqs. (114) and (123).

34 For usual ("ergodic") environments, without intrinsic long-term memories, this statistical averaging over an ensemble of environments is equivalent to averaging over intermediate times – much longer than the correlation time $\tau_c$ of the environment, but still much shorter than the characteristic times of evolution of the system under analysis, such as the dephasing time $T_2$ and the energy relaxation time $T_1$ – both still to be calculated.


$$\tilde{\hat F}(t) = \exp\Big\{\frac{i}{\hbar}\hat H_e t\Big\}\,\tilde{\hat F}(0)\,\exp\Big\{-\frac{i}{\hbar}\hat H_e t\Big\}.\tag{7.94}$$

Let us use this relation to calculate the correlation function of the fluctuations $\tilde F(t)$, defined similarly to Eq. (80), but taking care of the order of the time arguments (very soon we will see why):

$$\big\langle \tilde{\hat F}(t)\tilde{\hat F}(t')\big\rangle = \Big\langle \exp\Big\{\frac{i}{\hbar}\hat H_e t\Big\}\hat F(0)\exp\Big\{-\frac{i}{\hbar}\hat H_e t\Big\}\exp\Big\{\frac{i}{\hbar}\hat H_e t'\Big\}\hat F(0)\exp\Big\{-\frac{i}{\hbar}\hat H_e t'\Big\}\Big\rangle.\tag{7.95}$$

(Here, for brevity of notation, the thermal equilibrium of the environment is just implied.) We may calculate this expectation value in any basis, and the best choice for it is evident: in the environment's stationary-state basis, the density operator of the environment, its Hamiltonian, and hence the exponents in Eq. (95) are all represented by diagonal matrices. Using Eq. (5), the correlation function becomes

$$\begin{aligned}
\big\langle \tilde{\hat F}(t)\tilde{\hat F}(t')\big\rangle
&= {\rm Tr}\Big(\hat w\,e^{i\hat H_e t/\hbar}\hat F(0)\,e^{-i\hat H_e t/\hbar}\,e^{i\hat H_e t'/\hbar}\hat F(0)\,e^{-i\hat H_e t'/\hbar}\Big)\\
&= \sum_n \Big(\hat w\,e^{i\hat H_e t/\hbar}\hat F(0)\,e^{-i\hat H_e t/\hbar}\,e^{i\hat H_e t'/\hbar}\hat F(0)\,e^{-i\hat H_e t'/\hbar}\Big)_{nn}\\
&= \sum_{n,n'} W_n\, e^{iE_n t/\hbar}F_{nn'}\,e^{-iE_{n'}t/\hbar}\,e^{iE_{n'}t'/\hbar}F_{n'n}\,e^{-iE_n t'/\hbar}
= \sum_{n,n'} W_n \big|F_{nn'}\big|^2 \exp\Big\{\frac{i}{\hbar}\big(E_n - E_{n'}\big)(t-t')\Big\}.
\end{aligned}\tag{7.96}$$

Here $W_n$ are the Gibbs distribution probabilities given by Eq. (24), with the environment's temperature T, and $F_{nn'} \equiv F_{nn'}(0)$ are the Schrödinger-picture matrix elements of the interaction force operator.
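Eq. (96) is straightforward to verify numerically. The sketch below is no part of the derivation: it builds an arbitrary four-level toy "environment" in its own eigenbasis (random spectrum, temperature, and Hermitian coupling operator, in units with ℏ = 1), and checks that explicit Heisenberg evolution of the force operator reproduces the double sum.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
N, hbar, kT = 4, 1.0, 0.7                 # toy 4-level environment, units with hbar = 1
E = np.sort(rng.uniform(0.0, 2.0, N))     # eigenenergies E_n of H_e (arbitrary)
F0 = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
F0 = (F0 + F0.conj().T) / 2               # Hermitian force operator in the H_e eigenbasis
W = np.exp(-E / kT); W /= W.sum()         # Gibbs probabilities W_n of Eq. (24)

def K_heisenberg(t, tp):
    """<F(t) F(t')>, with F(t) evolved explicitly as in Eqs. (94)-(95)."""
    U = lambda s: expm(1j * np.diag(E) * s / hbar)
    Ft, Ftp = U(t) @ F0 @ U(-t), U(tp) @ F0 @ U(-tp)
    return sum(W[n] * (Ft @ Ftp)[n, n] for n in range(N))

def K_spectral(tau):
    """The double sum of Eq. (96), with Etilde = E_n - E_n'."""
    Et = E[:, None] - E[None, :]
    return np.sum(W[:, None] * np.abs(F0)**2 * np.exp(1j * Et * tau / hbar))

assert np.isclose(K_heisenberg(0.8, 0.3), K_spectral(0.8 - 0.3))
```

Note that the result indeed depends on the two times only through their difference, as Eq. (96) states.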

We see that though the correlator (96) is a function of the difference $\tau \equiv t - t'$ only (as it should be for fluctuations in a macroscopically stationary system), it may depend on the order of its arguments. Let us therefore mark this particular correlation function with the upper index "+",

$$K_F^+(\tau) \equiv \big\langle \tilde{\hat F}(t)\tilde{\hat F}(t')\big\rangle = \sum_{n,n'} W_n \big|F_{nn'}\big|^2 \exp\Big\{\frac{i\tilde E\tau}{\hbar}\Big\}, \qquad\text{where}\quad \tilde E \equiv E_n - E_{n'},\tag{7.97}$$

while marking its counterpart, with the times t and t' swapped, with the upper index "–":

$$K_F^-(\tau) \equiv K_F^+(-\tau) = \big\langle \tilde{\hat F}(t')\tilde{\hat F}(t)\big\rangle = \sum_{n,n'} W_n \big|F_{nn'}\big|^2 \exp\Big\{-\frac{i\tilde E\tau}{\hbar}\Big\}.\tag{7.98}$$

So, in contrast with classical processes, in quantum mechanics the correlation function of the fluctuations $\tilde F$ is not necessarily time-symmetric:

$$K_F^+(\tau) - K_F^-(\tau) = \big\langle \tilde{\hat F}(t)\tilde{\hat F}(t')\big\rangle - \big\langle \tilde{\hat F}(t')\tilde{\hat F}(t)\big\rangle = 2i\sum_{n,n'} W_n \big|F_{nn'}\big|^2 \sin\frac{\tilde E\tau}{\hbar} \neq 0,\tag{7.99}$$

so that $\tilde{\hat F}(t)$ gives one more example of a Heisenberg-picture operator whose "values", taken at different moments of time, generally do not commute – see Footnote 49 in Chapter 4. (A good sanity check here is that at $\tau = 0$, i.e. at $t = t'$, the difference (99) between $K_F^+$ and $K_F^-$ vanishes.)
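The properties expressed by Eqs. (97)-(99) can be illustrated with the same kind of toy model (again, an arbitrary random spectrum and coupling matrix, ℏ = 1; this sketch is not part of the book's argument):

```python
import numpy as np

rng = np.random.default_rng(2)
N, hbar, kT = 5, 1.0, 0.5
E = rng.uniform(0.0, 3.0, N)                       # arbitrary environment eigenenergies
F = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
F = (F + F.conj().T) / 2                           # Hermitian coupling operator
W = np.exp(-E / kT); W /= W.sum()                  # Gibbs probabilities
Et = E[:, None] - E[None, :]                       # Etilde = E_n - E_n'

Kp = lambda tau: np.sum(W[:, None] * np.abs(F)**2 * np.exp( 1j * Et * tau / hbar))  # Eq. (97)
Km = lambda tau: np.sum(W[:, None] * np.abs(F)**2 * np.exp(-1j * Et * tau / hbar))  # Eq. (98)

tau = 0.7
assert np.isclose(Km(tau), Kp(-tau))               # Eq. (98): K-(tau) = K+(-tau)
diff = Kp(tau) - Km(tau)                           # Eq. (99): the time asymmetry
assert np.isclose(diff, 2j * np.sum(W[:, None] * np.abs(F)**2 * np.sin(Et * tau / hbar)))
assert abs(diff) > 1e-6                            # generally nonzero at tau != 0...
assert np.isclose(Kp(0.0), Km(0.0))                # ...but vanishing at tau = 0
```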


Now let us return to the force operator’s decomposition (92), and calculate its first (average) component. To do that, let us write the formal solution of Eq. (91) as follows:

$$\hat F(t) = \frac{1}{i\hbar}\int_{-\infty}^{t}\big[\hat F(t'), \hat H_e(t')\big]\,dt'.\tag{7.100}$$

On the right-hand side of this relation, we still cannot treat the Hamiltonian of the environment as an unperturbed (equilibrium) one, even if the effect of our system ( s) on the environment is very weak, because this would give zero statistical average of the force F( t). Hence, we should make one more step of our perturbative treatment, taking into account the effect of the force on the environment. To do this, let us use Eqs. (68) and (90) to write the (so far, exact) Heisenberg equation of motion for the environment’s Hamiltonian,

$$i\hbar \dot{\hat H}_e = \big[\hat H_e, \hat H\big] = -\hat x\,\big[\hat H_e, \hat F\big],\tag{7.101}$$

and its formal solution, similar to Eq. (100), but for time t' rather than t:

$$\hat H_e(t') = -\frac{1}{i\hbar}\int_{-\infty}^{t'} \hat x(t'')\,\big[\hat H_e(t''), \hat F(t'')\big]\,dt''.\tag{7.102}$$

Plugging this equality into the right-hand side of Eq. (100), and averaging the result (again, over the environment only!), we get

$$\big\langle \hat F(t)\big\rangle = \frac{1}{\hbar^2}\int_{-\infty}^{t} dt'\int_{-\infty}^{t'} dt''\,\hat x(t'')\,\Big\langle \big[\hat F(t'), \big[\hat H_e(t''), \hat F(t'')\big]\big]\Big\rangle.\tag{7.103}$$

This is still an exact result, but now it is ready for an approximate treatment, implemented by averaging in its right-hand side over the unperturbed (thermal-equilibrium) state of the environment.

This may be done absolutely similarly to that in Eq. (96), at the last step using Eq. (94):

F ˆ t' , H ˆ

ˆ

,

 Tr w F , H F

e t" F t" 

   t'   e t" 

 Tr w  

F t' H F

 F F

H  H F

F

 F

H F

e t"

t'   t" e

e t"   t'

t" e t' 

  W F t' E F t" F t' F t" E E F t" F t' F t" E F t"

n nn'   n'

n'n



nn'   n' n

n

n

nn'

n'n  nn'   n' n'n 

n, n'

~

~

iE t' t"

(7.104)

2

  W E F

n

nn'

exp

 

.

c.c .

n, n'





Now, if we try to integrate each term of this sum, as Eq. (103) seems to require, we will see that the lower-limit substitution (at $t', t'' \to -\infty$) is uncertain because the exponents oscillate without decay. This mathematical difficulty may be overcome by the following physical reasoning. As illustrated by the example considered in the previous section, coupling to a disordered environment makes the "memory horizon" of the system of our interest (s) finite: its current state does not depend on its history beyond a certain time scale.35

35 Actually, this is true for virtually any real physical system – in contrast to idealized models such as a dissipation-free oscillator that swings for ever and ever with the same amplitude and phase, thus "remembering" the initial conditions.

As a result, the function under the integrals of Eq. (103), i.e. the sum (104), should self-average at a certain finite time. A simplistic technique for expressing this fact mathematically is just dropping the lower-limit substitution; this would give the correct result for Eq. (103). However, a better (mathematically more acceptable) trick is to first multiply the functions under the integrals by, respectively, $\exp\{-\varepsilon(t - t')\}$ and $\exp\{-\varepsilon(t' - t'')\}$, where $\varepsilon$ is a very small positive constant, then carry out the integration, and after that follow the limit $\varepsilon \to 0$. The physical justification of this procedure may be provided by saying that the system's behavior should not be affected if its interaction with the environment was not kept constant but rather turned on gradually – say, exponentially with an infinitesimal rate $\varepsilon$. With this modification, Eq. (103) becomes

$$\big\langle \hat F(t)\big\rangle = -\frac{1}{\hbar^2}\sum_{n,n'} W_n \tilde E\,\big|F_{nn'}\big|^2 \lim_{\varepsilon\to 0}\int_{-\infty}^{t} dt'\int_{-\infty}^{t'} dt''\,\hat x(t'')\,e^{\varepsilon(t''-t)}\left[\exp\Big\{\frac{i\tilde E(t'-t'')}{\hbar}\Big\} + {\rm c.c.}\right].\tag{7.105}$$

This double integration is over the area shaded in Fig. 6, which makes it obvious that the order of integration may be changed to the opposite one as

$$\int_{-\infty}^{t} dt'\int_{-\infty}^{t'} dt''\,(\ldots) = \int_{-\infty}^{t} dt''\int_{t''}^{t} dt'\,(\ldots) = \int_{-\infty}^{t} dt''\int_{0}^{t-t''} d\tau'\,(\ldots),\tag{7.106}$$

where $\tau' \equiv t - t'$, and $\tau \equiv t - t''$.

Fig. 7.6. The 2D integration area in Eqs. (105) and (106).
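The swap of the integration order in Eq. (106) may be checked on any test function decaying toward $t', t'' \to -\infty$. Below, a quick numerical sketch with the arbitrary choice $f(t', t'') = e^{t'+t''}$ and $t = 0$ (for which both sides equal $e^{2t}/2 = 1/2$ exactly), with the infinite lower limits replaced by a deep finite cutoff:

```python
import numpy as np
from scipy.integrate import dblquad

t, cut = 0.0, -20.0                       # cutoff standing in for the -infinity limit
f = lambda tp, tpp: np.exp(tp + tpp)      # arbitrary test function, decaying at -infinity

# left side of (106): t' from -inf to t, then t'' from -inf to t'
lhs, _ = dblquad(lambda tpp, tp: f(tp, tpp), cut, t, lambda tp: cut, lambda tp: tp)
# right side of (106): t'' from -inf to t, then t' from t'' to t
rhs, _ = dblquad(lambda tp, tpp: f(tp, tpp), cut, t, lambda tpp: tpp, lambda tpp: t)

assert np.isclose(lhs, rhs, atol=1e-8)
assert np.isclose(lhs, 0.5, atol=1e-6)    # exact value e^{2t}/2 at t = 0
```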

As a result, Eq. (105) may be rewritten as a single integral,

Average environment's response:

$$\big\langle \hat F(t)\big\rangle = \int_{-\infty}^{t} G(t-t'')\,\hat x(t'')\,dt'' = \int_{0}^{\infty} G(\tau)\,\hat x(t-\tau)\,d\tau,\tag{7.107}$$

whose kernel,

$$\begin{aligned}
G(\tau) &= -\frac{1}{\hbar^2}\sum_{n,n'} W_n \tilde E\,\big|F_{nn'}\big|^2 \lim_{\varepsilon\to 0}\int_{0}^{\tau}\left[\exp\Big\{\frac{i\tilde E(\tau-\tau')}{\hbar}\Big\} + {\rm c.c.}\right]e^{-\varepsilon\tau}\,d\tau'\\
&= -\frac{2}{\hbar}\lim_{\varepsilon\to 0}\sum_{n,n'} W_n \big|F_{nn'}\big|^2 \sin\frac{\tilde E\tau}{\hbar}\;e^{-\varepsilon\tau}
= -\frac{2}{\hbar}\sum_{n,n'} W_n \big|F_{nn'}\big|^2 \sin\frac{\tilde E\tau}{\hbar},
\end{aligned}\tag{7.108}$$

does not depend on the particular law of evolution of the system ( s) under study, i.e. provides a general characterization of its coupling to the environment.

In Eq. (107) we may readily recognize the most general form of the linear response of a system (in our case, the environment), taking into account the causality principle, where G() is the response function (also called the “temporal Green’s function”) of the environment. Now comparing Eq. (108) with Eq. (99), we get a wonderfully simple universal relation,


Green-Kubo formula:

$$\Big\langle\big[\tilde{\hat F}(\tau), \tilde{\hat F}(0)\big]\Big\rangle = -i\hbar\,G(\tau),\tag{7.109}$$

that emphasizes once again the quantum nature of the correlation function's time asymmetry. (This relation, called the Green-Kubo (or just "Kubo") formula after the works by Melville Green (1954) and Ryogo Kubo (1957), does not come up in the easier derivations of the FDT, mentioned in the beginning of this section.)
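The Green-Kubo relation (109) may also be checked directly against the spectral sums, again on an arbitrary random toy environment with ℏ = 1 (a sketch only, not part of the derivation):

```python
import numpy as np

rng = np.random.default_rng(3)
N, hbar, kT = 5, 1.0, 0.8
E = rng.uniform(0.0, 2.0, N)                     # arbitrary environment spectrum
F = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
F = (F + F.conj().T) / 2                         # Hermitian coupling operator
W = np.exp(-E / kT); W /= W.sum()                # Gibbs probabilities
Et = E[:, None] - E[None, :]                     # Etilde = E_n - E_n'

def G(tau):
    """Response kernel of Eq. (108)."""
    return -(2 / hbar) * np.sum(W[:, None] * np.abs(F)**2 * np.sin(Et * tau / hbar))

def commutator_avg(tau):
    """<[F~(tau), F~(0)]>, computed directly in the H_e eigenbasis."""
    Ftau = np.exp(1j * Et * tau / hbar) * F      # Heisenberg matrix elements F_nn'(tau)
    C = Ftau @ F - F @ Ftau
    return np.sum(W * np.diag(C))

for tau in (0.3, 1.1, 2.4):
    assert np.isclose(commutator_avg(tau), -1j * hbar * G(tau))   # Eq. (109)
```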

However, for us the relation between the function $G(\tau)$ and the force operator's anti-commutator,

$$\Big\langle\big\{\tilde{\hat F}(t), \tilde{\hat F}(t-\tau)\big\}\Big\rangle \equiv \Big\langle \tilde{\hat F}(t)\tilde{\hat F}(t-\tau) + \tilde{\hat F}(t-\tau)\tilde{\hat F}(t)\Big\rangle = K_F^+(\tau) + K_F^-(\tau),\tag{7.110}$$

is much more important, for the following reason. Eqs. (97)-(98) show that the so-called symmetrized correlation function,

$$K_F(\tau) \equiv \frac{K_F^+(\tau) + K_F^-(\tau)}{2} = \frac{1}{2}\Big\langle\big\{\tilde{\hat F}(\tau), \tilde{\hat F}(0)\big\}\Big\rangle = \lim_{\varepsilon\to 0}\sum_{n,n'} W_n \big|F_{nn'}\big|^2 \cos\frac{\tilde E\tau}{\hbar}\;e^{-\varepsilon|\tau|} = \sum_{n,n'} W_n \big|F_{nn'}\big|^2 \cos\frac{\tilde E\tau}{\hbar},\tag{7.111}$$

which is an even function of the time difference $\tau$, looks very similar to the response function (108), "only" with another trigonometric function under the sum, and a constant front factor.36 This similarity may be used to obtain a direct algebraic relation between the Fourier images of these two functions of $\tau$.

Indeed, the function (111) may be represented as the Fourier integral37





K ( ) 

i

,

(7.112)

F

S ()



e

d  2

F

S ()cos d

F



0

with the reciprocal transform





$$S_F(\omega) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} K_F(\tau)\,e^{i\omega\tau}\,d\tau = \frac{1}{\pi}\int_{0}^{\infty} K_F(\tau)\cos\omega\tau\;d\tau,\tag{7.113}$$

of the symmetrized spectral density of the variable F, defined as

Symmetrized spectral density:

$$S_F(\omega)\,\delta(\omega-\omega') = \frac{1}{2}\Big\langle \hat F_\omega \hat F^\dagger_{\omega'} + \hat F^\dagger_{\omega'}\hat F_\omega\Big\rangle \equiv \frac{1}{2}\Big\langle\big\{\hat F_\omega, \hat F^\dagger_{\omega'}\big\}\Big\rangle,\tag{7.114}$$

where the function $\hat F_\omega$ (also a Heisenberg operator rather than a c-number!) is defined as





$$\hat F_\omega \equiv \frac{1}{2\pi}\int_{-\infty}^{+\infty} \tilde{\hat F}(t)\,e^{i\omega t}\,dt, \qquad\text{so that}\quad \tilde{\hat F}(t) = \int_{-\infty}^{+\infty} \hat F_\omega\,e^{-i\omega t}\,d\omega.\tag{7.115}$$

The physical meaning of the function $S_F(\omega)$ becomes clear if we write Eq. (112) for the particular case $\tau = 0$:

36 For the heroic reader who has suffered through the calculations up to this point: our conceptual work is done! What remains is just some simple math to bring the relation between Eqs. (108) and (111) to an explicit form.

37 Due to their practical importance, and certain mathematical issues of their justification for random functions, Eqs. (112)-(113) have their own grand name, the Wiener-Khinchin theorem, though, the math rigor aside, they are just a straightforward corollary of the standard Fourier integral transform (115).






$$K_F(0) = \Big\langle \tilde{\hat F}^2\Big\rangle = \int_{-\infty}^{+\infty} S_F(\omega)\,d\omega = 2\int_{0}^{\infty} S_F(\omega)\,d\omega.\tag{7.116}$$

This formula implies that if we pass the function $\tilde F(t)$ through a linear filter cutting from its frequency spectrum a narrow band $\Delta\omega$ of physical (positive) frequencies, then the variance $\langle \tilde F_f^2\rangle$ of the filtered signal $F_f(t)$ would be equal to $2S_F(\omega)\Delta\omega$ – hence the name "spectral density".38
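The Fourier pair (112)-(113) is easy to test on any model correlation function with a known image. Below, a sketch (not related to the specific sums of this section) with the arbitrary choice $K_F(\tau) = e^{-\Gamma|\tau|}$, whose spectral density is the Lorentzian $S_F(\omega) = (\Gamma/\pi)/(\omega^2 + \Gamma^2)$:

```python
import numpy as np
from scipy.integrate import quad

Gamma = 1.3                                         # arbitrary decay rate of the model K_F
K = lambda tau: np.exp(-Gamma * abs(tau))           # model correlation function
S = lambda w: (Gamma / np.pi) / (w**2 + Gamma**2)   # its known Lorentzian image

# Eq. (113): S_F(w) = (1/pi) int_0^inf K_F(tau) cos(w tau) dtau
S_113 = lambda w: quad(lambda tau: K(tau) * np.cos(w * tau), 0, np.inf)[0] / np.pi
# Eq. (112): K_F(tau) = 2 int_0^inf S_F(w) cos(w tau) dw  (QAWF routine for the slow decay)
K_112 = lambda tau: 2 * quad(S, 0, np.inf, weight='cos', wvar=tau)[0]

for w in (0.0, 0.5, 2.0):
    assert np.isclose(S_113(w), S(w), atol=1e-8)
assert np.isclose(K_112(0.7), K(0.7), atol=1e-6)
```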

Let us use Eqs. (111) and (113) to calculate the spectral density of the fluctuations $\tilde F(t)$ in our model, using the same $\varepsilon$-trick as at the derivation of Eq. (108), to quench the upper-limit substitution:

$$\begin{aligned}
S_F(\omega) &= \frac{1}{2\pi}\sum_{n,n'} W_n \big|F_{nn'}\big|^2 \lim_{\varepsilon\to 0}\int_{0}^{\infty} \cos\frac{\tilde E\tau}{\hbar}\,\big(e^{i\omega\tau} + e^{-i\omega\tau}\big)\,e^{-\varepsilon\tau}\,d\tau\\
&= \frac{1}{4\pi}\sum_{n,n'} W_n \big|F_{nn'}\big|^2 \lim_{\varepsilon\to 0}\int_{0}^{\infty}\left[\exp\Big\{\frac{i\tilde E\tau}{\hbar}\Big\} + {\rm c.c.}\right]\big(e^{i\omega\tau} + e^{-i\omega\tau}\big)\,e^{-\varepsilon\tau}\,d\tau\\
&= \frac{1}{4\pi}\sum_{n,n'} W_n \big|F_{nn'}\big|^2 \lim_{\varepsilon\to 0}\left[\frac{1}{i(\tilde E/\hbar + \omega)+\varepsilon} + \frac{1}{i(\tilde E/\hbar - \omega)+\varepsilon} + {\rm c.c.}\right].
\end{aligned}\tag{7.117}$$
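The last step of Eq. (117) rests on the elementary integral $\int_0^\infty \cos(a\tau)\cos(\omega\tau)\,e^{-\varepsilon\tau}d\tau = \tfrac{1}{2}\big[\varepsilon/(\varepsilon^2+(a+\omega)^2) + \varepsilon/(\varepsilon^2+(a-\omega)^2)\big]$ (with $a \equiv \tilde E/\hbar$), whose real part becomes a pair of nascent delta functions of $(a \pm \omega)$ as $\varepsilon \to 0$. A quick numerical confirmation, with arbitrary values of $a$, $\omega$, and $\varepsilon$:

```python
import numpy as np
from scipy.integrate import quad

a, w, eps = 1.7, 0.9, 1e-3   # a stands for Etilde/hbar; eps is the convergence factor

# cos(a t) cos(w t) = (1/2)[cos((a+w)t) + cos((a-w)t)]; integrate each Fourier
# piece of the decaying exponential with scipy's QAWF (weight='cos') routine
I = lambda b: quad(lambda t: np.exp(-eps * t), 0, np.inf, weight='cos', wvar=b)[0]
num = 0.5 * (I(a + w) + I(a - w))

closed = 0.5 * (eps / (eps**2 + (a + w)**2) + eps / (eps**2 + (a - w)**2))
assert np.isclose(num, closed, rtol=1e-4)
```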

Now it is a convenient time to recall that each of the two summations here is over the eigenenergies of the environment, whose spectrum is virtually continuous because of its large size, so that we may transform each sum into an integral – just as this was done in Sec. 6.6:

... 

dn

...

...  E dE

,

(7.118)

n

n

n

where ( E)  dn/ dE is the environment’s density of states at a given energy. This transformation yields 1

2

1

1

S

(7.119)

F   

lim

dE W ( E )

n

n

En dE ( E )

  

F

n'

n'

nn'

2

0

i ~

E /     i  ~

E  

.

/

  

Since the expression inside the square brackets depends only on a specific linear combination of the two energies, namely on $\tilde E \equiv E_n - E_{n'}$, it is convenient to introduce also another, linearly-independent combination of the energies, for example, the average energy $E \equiv (E_n + E_{n'})/2$, so that the state energies may be represented as

$$E_n = E + \frac{\tilde E}{2}, \qquad E_{n'} = E - \frac{\tilde E}{2}.\tag{7.120}$$

With this notation, Eq. (119) becomes

$$S_F(\omega) = \frac{1}{4\pi}\lim_{\varepsilon\to 0}\int dE \int d\tilde E\;\rho\Big(E + \frac{\tilde E}{2}\Big)\,W\Big(E + \frac{\tilde E}{2}\Big)\,\rho\Big(E - \frac{\tilde E}{2}\Big)\,\big|F_{nn'}\big|^2$$