$$\delta A\,\delta B \;\geq\; \left|\left\langle \hat{\tilde A}\hat{\tilde B}\right\rangle\right|. \tag{4.148}$$
Actually, this is already an uncertainty relation, even "better" (stronger) than its standard form (140); moreover, it is more convenient in some cases. To prove Eq. (140), we need a couple more steps. First, let us notice that the operator product participating in Eq. (148) may be recast as

$$\hat{\tilde A}\hat{\tilde B} = \frac{1}{2}\left\{\hat{\tilde A},\hat{\tilde B}\right\} - \frac{i}{2}\hat C, \qquad\text{where}\quad \hat C \equiv i\left[\hat{\tilde A},\hat{\tilde B}\right]. \tag{4.149}$$
Any anticommutator of Hermitian operators, including that in Eq. (149), is a Hermitian operator, and its eigenvalues are purely real, so that its expectation value (in any state) is also purely real. On the other hand, the commutator part of Eq. (149) is just
$$\hat C \equiv i\left[\hat{\tilde A},\hat{\tilde B}\right] = i\left(\hat A - \langle A\rangle\right)\left(\hat B - \langle B\rangle\right) - i\left(\hat B - \langle B\rangle\right)\left(\hat A - \langle A\rangle\right) = i\left(\hat A\hat B - \hat B\hat A\right) \equiv i\left[\hat A,\hat B\right]. \tag{4.150}$$
Second, according to Eqs. (52) and (65), the Hermitian conjugate of any product of Hermitian operators Â and B̂ is just the product of these operators swapped. Using this fact, we may write
$$\hat C^\dagger = \left(i\left[\hat A,\hat B\right]\right)^\dagger = -i\left(\hat A\hat B\right)^\dagger + i\left(\hat B\hat A\right)^\dagger = -i\,\hat B\hat A + i\,\hat A\hat B = i\left[\hat A,\hat B\right] = \hat C, \tag{4.151}$$
so that the operator Ĉ is also Hermitian, i.e. its eigenvalues are also real, and thus its expectation value is purely real as well. As a result, the square of the expectation value of the operator product (149) may be represented as
$$\left|\left\langle\hat{\tilde A}\hat{\tilde B}\right\rangle\right|^2 = \frac{1}{4}\left\langle\left\{\hat{\tilde A},\hat{\tilde B}\right\}\right\rangle^2 + \frac{1}{4}\left\langle\hat C\right\rangle^2. \tag{4.152}$$
Since the first term on the right-hand side of this equality cannot be negative, we may write

$$\left|\left\langle\hat{\tilde A}\hat{\tilde B}\right\rangle\right|^2 \;\geq\; \frac{1}{4}\left\langle\hat C\right\rangle^2 = \frac{1}{4}\left\langle i\left[\hat A,\hat B\right]\right\rangle^2, \tag{4.153}$$
and hence continue Eq. (148) as

$$\delta A\,\delta B \;\geq\; \left|\left\langle\hat{\tilde A}\hat{\tilde B}\right\rangle\right| \;\geq\; \frac{1}{2}\left|\left\langle\left[\hat A,\hat B\right]\right\rangle\right|, \tag{4.154}$$

thus proving Eq. (140).
For the particular case of the operators x̂ and p̂_x (or a similar pair of operators for another Cartesian coordinate), we may readily combine Eq. (140) with Eq. (2.14b) to prove the original Heisenberg's uncertainty relation (2.13). For the spin-½ operators defined by Eqs. (116)-(117), it is very simple (and highly recommended to the reader) to show that
$$\left[\hat S_j,\hat S_{j'}\right] = i\hbar\,\varepsilon_{jj'j''}\,\hat S_{j''}, \qquad\text{i.e.}\quad \left[\hat\sigma_j,\hat\sigma_{j'}\right] = 2i\,\varepsilon_{jj'j''}\,\hat\sigma_{j''}, \tag{4.155}$$
(spin-½ commutation relations)
where ε_{jj'j''} is the Levi-Civita permutation symbol – see, e.g., MA Eq. (13.2). As a result, the uncertainty relations (140) for all Cartesian components of spin-½ systems are similar, for example
$$\delta S_x\,\delta S_y \;\geq\; \frac{\hbar}{2}\left|\langle S_z\rangle\right|, \qquad\text{etc.} \tag{4.156}$$
(spin-½ uncertainty relations)
In particular, as we already know, in the ↑ state the right-hand side of this relation equals (ℏ/2)² > 0, so that neither of the uncertainties δS_x, δS_y can equal zero. As a reminder, our direct calculation earlier in this section has shown that each of these uncertainties is equal to ℏ/2, i.e. their product is equal to the lowest value allowed by the uncertainty relation (156) – just as the Gaussian wave packets (2.16) provide the lowest possible value of the product δx·δp_x allowed by the Heisenberg relation (2.13).
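For readers who like to cross-check such algebra numerically, here is a minimal sketch (not part of the original text) that verifies the saturation of Eq. (156) in the ↑ state, using the 2×2 spin matrices (117). NumPy is assumed, ℏ is set to 1, and the helper names avg and delta are arbitrary illustrative choices.

```python
# Numerical check of the spin-1/2 uncertainty relation (156) in the "up" state.
import numpy as np

hbar = 1.0
sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = hbar / 2 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

up = np.array([1, 0], dtype=complex)                   # the "spin-up" state
avg = lambda op, psi: (psi.conj() @ op @ psi).real     # expectation value
delta = lambda op, psi: np.sqrt(avg(op @ op, psi) - avg(op, psi) ** 2)

lhs = delta(sx, up) * delta(sy, up)                    # equals (hbar/2)^2
rhs = hbar / 2 * abs(avg(sz, up))                      # equals (hbar/2)^2
print(lhs, rhs, lhs >= rhs - 1e-12)                    # the bound is saturated
```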
4.6. Quantum dynamics: Three pictures
So far in this chapter, I shied away from the discussion of the system’s dynamics, implying that the bra- and ket-vectors were just their “snapshots” at a certain instant t. Now we are sufficiently prepared to examine their evolution in time. One of the most beautiful features of quantum mechanics is that this evolution may be described using any of three alternative “pictures”, giving exactly the same final results for the expectation values of all observables.
From the standpoint of our wave-mechanics experience, the Schrödinger picture is the most natural one. In this picture, the operators corresponding to time-independent observables (e.g., to the Hamiltonian function H of an isolated system) are also constant in time, while the bra- and ket-vectors evolve in time as
$$\langle\alpha(t)| = \langle\alpha(t_0)|\,\hat u^\dagger(t,t_0), \qquad |\alpha(t)\rangle = \hat u(t,t_0)\,|\alpha(t_0)\rangle. \tag{4.157a}$$
Here û(t, t₀) is the time-evolution operator, which obeys the following differential equation:

$$i\hbar\frac{\partial\hat u}{\partial t} = \hat H\,\hat u, \tag{4.157b}$$

where Ĥ is the Hamiltonian operator of the system – which is always Hermitian, Ĥ† = Ĥ – and t₀ is the initial moment of time. (Note that Eqs. (157) remain valid even if the Hamiltonian depends on time explicitly.) Differentiating the second of Eqs. (157a) over time t, and then using Eq. (157b), we can merge these two relations into a single equation, without explicit use of the time-evolution operator:
$$i\hbar\frac{\partial}{\partial t}\left|\alpha(t)\right\rangle = \hat H\left|\alpha(t)\right\rangle, \tag{4.158}$$
(Schrödinger equation)
which is frequently more convenient. (However, for some purposes the notion of the time-evolution operator, together with Eq. (157b), is useful – as we will see in a minute.) While Eq. (158) is a very natural generalization of the wave-mechanical equation (1.25), and is also frequently called the Schrödinger equation,30 it still should be considered as a new, more general postulate, which finds its final justification (as is usual in physics) in the agreement of its corollaries with experiment – more exactly, in the absence of a single credible contradiction to an experiment.
Starting the discussion of Eq. (158), let us first consider the case of a time-independent Hamiltonian, whose eigenstates |a_n⟩ and eigenvalues E_n obey Eq. (68) for this operator:31

$$\hat H\,|a_n\rangle = E_n\,|a_n\rangle, \tag{4.159}$$
30 Moreover, we will be able to derive Eq. (1.25) from Eq. (158) – see below.
31 I have switched the state index notation from j to n, which was used for numbering stationary states in Chapter 1, to emphasize the special role played by the stationary states |a_n⟩ in quantum dynamics.
and hence are also time-independent. (Similarly to the wavefunctions ψ_n defined by Eq. (1.60), the kets |a_n⟩ are called the stationary states of the system.) Let us use Eqs. (158)-(159) to calculate the law of time evolution of the expansion coefficients α_n (i.e. the probability amplitudes) defined by Eq. (118), in the stationary-state basis:

$$\frac{d}{dt}\alpha_n(t) \equiv \frac{d}{dt}\langle a_n|\alpha(t)\rangle = \langle a_n|\,\frac{1}{i\hbar}\hat H\,|\alpha(t)\rangle = \frac{E_n}{i\hbar}\,\langle a_n|\alpha(t)\rangle \equiv \frac{E_n}{i\hbar}\,\alpha_n(t). \tag{4.160}$$
This is the same simple equation as Eq. (1.61), and its integration yields a similar result – cf. Eq. (1.62), just with the initial time t₀ rather than 0:

$$\alpha_n(t) = \alpha_n(t_0)\exp\left\{-\frac{i}{\hbar}E_n\left(t - t_0\right)\right\}. \tag{4.161}$$
(time evolution of probability amplitudes)
In order to illustrate how this result works, let us consider the dynamics of a spin-½ in a time-independent, uniform external magnetic field B. To construct the system’s Hamiltonian, we may apply the correspondence principle to the classical expression for the energy of a magnetic moment m in the external magnetic field B, 32
$$U = -\mathbf{m}\cdot\mathbf{B}. \tag{4.162}$$
In quantum mechanics, the operator corresponding to the moment m is given by Eq. (115) (suggested by W. Pauli), so that the spin-field interaction is described by the so-called Pauli Hamiltonian, which may be, due to Eqs. (116)-(117), represented in several equivalent forms:
$$\hat H = -\hat{\mathbf{m}}\cdot\mathbf{B} = -\gamma\,\hat{\mathbf{S}}\cdot\mathbf{B} = -\gamma\,\frac{\hbar}{2}\,\hat{\boldsymbol\sigma}\cdot\mathbf{B}. \tag{4.163a}$$
(Pauli Hamiltonian: operator)
If the z-axis is aligned with the field’s direction, this expression is reduced to

$$\hat H = -\gamma\mathcal{B}\,\hat S_z = -\gamma\mathcal{B}\,\frac{\hbar}{2}\,\hat\sigma_z. \tag{4.163b}$$
According to Eq. (117), in the z-basis of the spin states ↑ and ↓, the matrix of the operator (163b) is

$$\text{H} = -\frac{\gamma\mathcal{B}\hbar}{2}\,\sigma_z \equiv -\frac{\hbar\Omega}{2}\,\sigma_z, \qquad\text{where}\quad \Omega \equiv \gamma\mathcal{B}. \tag{4.164}$$
(Pauli Hamiltonian: z-basis matrix)
The constant Ω so defined coincides with the classical frequency of the precession, about the z-axis, of an axially-symmetric rigid body (the so-called symmetric top), with an angular momentum S and the magnetic moment m = γS, induced by the external torque τ = m×B.33 (For an electron, with its negative gyromagnetic ratio γ_e = –g_e e/2m_e, neglecting the tiny difference of the g_e-factor from 2, we get

$$\Omega_e = \frac{e\mathcal{B}}{m_e}, \tag{4.165}$$

so that according to Eq. (3.48), this frequency coincides with the electron’s cyclotron frequency ω_c.)

32 See, e.g., EM Eq. (5.100). As a reminder, we have already used this expression for the derivation of Eq. (3).
33 See, e.g., CM Sec. 4.5, in particular Eq. (4.72), and EM Sec. 5.5, in particular Eq. (5.114) and its discussion.
In order to apply the general Eq. (161) to this case, we need to find the eigenstates |a_n⟩ and eigenenergies E_n of our Hamiltonian. However, with our (smart :-) choice of the z-axis, the Hamiltonian matrix is already diagonal:
$$\text{H} = -\frac{\hbar\Omega}{2}\,\sigma_z = -\frac{\hbar\Omega}{2}\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}, \tag{4.166}$$
meaning that the states ↑ and ↓ are the eigenstates of this system, with the eigenenergies, respectively,

$$E_\uparrow = -\frac{\hbar\Omega}{2} \qquad\text{and}\qquad E_\downarrow = +\frac{\hbar\Omega}{2}. \tag{4.167}$$
(spin-½ in magnetic field: eigenenergies)
Note that their difference,

$$\Delta E \equiv E_\downarrow - E_\uparrow = \hbar\Omega \equiv \hbar\gamma\mathcal{B}, \tag{4.168}$$

corresponds to the classical energy 2m𝓑 of flipping a magnetic dipole with the moment’s magnitude m = γℏ/2, oriented along the direction of the field B. Note also that if the product γ𝓑 is positive, then so is Ω, so that E_↑ is negative, while E_↓ is positive. This is in agreement with the classical picture of a magnetic dipole m having negative potential energy when it is aligned with the external magnetic field B – see Eq. (162) again.
So, for the time evolution of the probability amplitudes of these states, Eq. (161) immediately yields the following expressions:
$$\alpha_\uparrow(t) = \alpha_\uparrow(0)\exp\left\{+\frac{i\Omega t}{2}\right\}, \qquad \alpha_\downarrow(t) = \alpha_\downarrow(0)\exp\left\{-\frac{i\Omega t}{2}\right\}, \tag{4.169}$$
allowing a ready calculation of the time evolution of the expectation values of any observable. In particular, we can calculate the expectation value of Sz as a function of time by applying Eq. (130) to the (arbitrary) time moment t:
$$\langle S_z\rangle(t) = \frac{\hbar}{2}\left[\alpha_\uparrow^*(t)\,\alpha_\uparrow(t) - \alpha_\downarrow^*(t)\,\alpha_\downarrow(t)\right] = \frac{\hbar}{2}\left[\alpha_\uparrow^*(0)\,\alpha_\uparrow(0) - \alpha_\downarrow^*(0)\,\alpha_\downarrow(0)\right] = \langle S_z\rangle(0). \tag{4.170}$$
Thus the expectation value of the spin component parallel to the applied magnetic field remains constant in time, regardless of the initial state of the system. However, this is not true for the components perpendicular to the field. For example, Eq. (132), applied to the moment t, gives
$$\langle S_x\rangle(t) = \frac{\hbar}{2}\left[\alpha_\uparrow^*(t)\,\alpha_\downarrow(t) + \alpha_\downarrow^*(t)\,\alpha_\uparrow(t)\right] = \frac{\hbar}{2}\left[\alpha_\uparrow^*(0)\,\alpha_\downarrow(0)\,e^{-i\Omega t} + \alpha_\downarrow^*(0)\,\alpha_\uparrow(0)\,e^{+i\Omega t}\right]. \tag{4.171}$$
Clearly, this expression describes sinusoidal oscillations with frequency (164). The amplitude and the phase of these oscillations depend on initial conditions. Indeed, solving Eqs. (132)-(133) for the probability amplitude products, we get the following relations:
$$\alpha_\uparrow^*(t)\,\alpha_\downarrow(t) = \frac{1}{\hbar}\left[\langle S_x\rangle(t) + i\langle S_y\rangle(t)\right], \qquad \alpha_\downarrow^*(t)\,\alpha_\uparrow(t) = \frac{1}{\hbar}\left[\langle S_x\rangle(t) - i\langle S_y\rangle(t)\right], \tag{4.172}$$

valid for any time t. Plugging their values for t = 0 into Eq. (171), we get
$$\langle S_x\rangle(t) = \frac{1}{2}\left[\langle S_x\rangle(0) + i\langle S_y\rangle(0)\right]e^{-i\Omega t} + \frac{1}{2}\left[\langle S_x\rangle(0) - i\langle S_y\rangle(0)\right]e^{+i\Omega t} = \langle S_x\rangle(0)\cos\Omega t + \langle S_y\rangle(0)\sin\Omega t. \tag{4.173}$$
An absolutely similar calculation using Eq. (133) gives

$$\langle S_y\rangle(t) = \langle S_y\rangle(0)\cos\Omega t - \langle S_x\rangle(0)\sin\Omega t. \tag{4.174}$$
These formulas show, for example, that if at the moment t = 0 the spin’s state was ↑ (or ↓), i.e. ⟨S_x⟩(0) = ⟨S_y⟩(0) = 0, then the oscillation amplitudes of both “lateral” components of the spin vanish. On the other hand, if the spin was initially in the state →, i.e. had the definite, largest possible value of S_x, equal to ℏ/2 (in classics, we would say “the spin-½ was oriented in the x-direction”), then both expectation values ⟨S_x⟩ and ⟨S_y⟩ oscillate in time34 with this amplitude, and with the phase shift π/2 between them.
So, the quantum-mechanical results for the expectation values of the Cartesian components of spin-½ are indistinguishable from the classical results for the precession, with the angular velocity ω = –γB,35 of a symmetric top with the angular momentum of magnitude S = ℏ/2, about the field’s direction (our axis z), under the effect of the external torque τ = m×B exerted by the field B on the magnetic moment m = γS. Note, however, that the classical language does not describe the large quantum-mechanical uncertainties of the components, obeying Eqs. (156), which are absent in the classical picture – at least when it starts from a definite orientation of the angular momentum vector. Also, as we have seen in Sec. 3.5, the component L_z of the angular momentum at the orbital motion of particles is always a multiple of ℏ – see, e.g., Eq. (3.139). As a result, the angular momentum of a spin-½ particle, with S_z = ±ℏ/2, cannot be explained by any summation of orbital angular momenta of its hypothetical components, i.e. by any internal rotation of the particle about its axis.
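As an illustrative cross-check (not part of the original text), the following sketch propagates the amplitudes of an initial → state by Eq. (169) and compares the resulting expectation values with the precession formulas (173)-(174). NumPy is assumed, and ℏ = Ω = 1 are arbitrary illustrative values.

```python
# Schrodinger-picture spin precession: Eqs. (169)-(174) checked numerically.
import numpy as np

hbar, Omega = 1.0, 1.0
a_up0, a_dn0 = 1 / np.sqrt(2), 1 / np.sqrt(2)    # the -> state: <S_x>(0) = hbar/2

for t in np.linspace(0.0, 10.0, 5):
    a_up = a_up0 * np.exp(+1j * Omega * t / 2)   # Eq. (169)
    a_dn = a_dn0 * np.exp(-1j * Omega * t / 2)
    Sx = hbar / 2 * (a_up.conjugate() * a_dn + a_dn.conjugate() * a_up).real
    Sy = hbar / 2 * (1j * (a_dn.conjugate() * a_up - a_up.conjugate() * a_dn)).real
    # Eqs. (173)-(174) with <S_x>(0) = hbar/2, <S_y>(0) = 0:
    Sx_ref = hbar / 2 * np.cos(Omega * t)
    Sy_ref = -hbar / 2 * np.sin(Omega * t)
    assert abs(Sx - Sx_ref) < 1e-12 and abs(Sy - Sy_ref) < 1e-12
print("Eqs. (169)-(174) agree")
```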
After this illustration, let us return to the discussion of the general Schrödinger equation (157b) and prove the following fascinating fact: it is possible to write the general solution of this operator equation. In the easiest case when the Hamiltonian is time-independent, this solution is an exact analog of Eq. (161),
$$\hat u(t,t_0) = \exp\left\{-\frac{i}{\hbar}\hat H\left(t - t_0\right)\right\}, \qquad \hat u^\dagger(t,t_0) = \exp\left\{+\frac{i}{\hbar}\hat H\left(t - t_0\right)\right\}. \tag{4.175}$$
To start its proof we should, first of all, understand what a function (in this particular case, the exponent) of an operator means. In the operator (and matrix) algebra, such nonlinear functions are defined by their Taylor expansions; in particular, Eq. (175) means that
34 This is one more (hopefully, redundant :-) illustration of the difference between the averaging over the statistical ensemble and that over time: in Eqs. (170), (173)-(174), and also in quite a few relations below, only the former averaging has been performed, so the results are still functions of time.
35 Note that according to this relation, the gyromagnetic ratio γ may be interpreted just as the angular frequency of the spin precession per unit magnetic field – hence the name. In particular, for electrons, γ_e ≈ 1.761×10¹¹ s⁻¹T⁻¹; for protons, the ratio is much smaller, γ_p = g_p e/2m_p ≈ 2.675×10⁸ s⁻¹T⁻¹, mostly because of their larger mass m_p, at a g-factor of the same order as for the electron: g_p ≈ 5.586. For heavier spin-½ particles, e.g., atomic nuclei with such spin, the values of γ are correspondingly smaller – e.g., about 8.681×10⁶ s⁻¹T⁻¹ for the ⁵⁷Fe nucleus.
$$\hat u(t,t_0) = \hat I + \sum_{k=1}^{\infty}\frac{1}{k!}\left[-\frac{i}{\hbar}\hat H\left(t-t_0\right)\right]^k = \hat I + \frac{1}{1!}\left(-\frac{i}{\hbar}\right)\hat H\left(t-t_0\right) + \frac{1}{2!}\left(-\frac{i}{\hbar}\right)^2\hat H^2\left(t-t_0\right)^2 + \frac{1}{3!}\left(-\frac{i}{\hbar}\right)^3\hat H^3\left(t-t_0\right)^3 + \ldots, \tag{4.176}$$
where Ĥ² ≡ ĤĤ, Ĥ³ ≡ ĤĤĤ, etc. Working with such series of operator products is not as hard as one could imagine, due to their regular structure. For example, let us differentiate both sides of Eq. (176) over t, at constant t₀, at the last stage using this equality again – backward:
$$\frac{\partial}{\partial t}\hat u(t,t_0) = \frac{1}{1!}\left(-\frac{i}{\hbar}\right)\hat H + \frac{1}{2!}\left(-\frac{i}{\hbar}\right)^2\hat H^2\,2\left(t-t_0\right) + \frac{1}{3!}\left(-\frac{i}{\hbar}\right)^3\hat H^3\,3\left(t-t_0\right)^2 + \ldots = -\frac{i}{\hbar}\hat H\left[\hat I + \frac{1}{1!}\left(-\frac{i}{\hbar}\right)\hat H\left(t-t_0\right) + \frac{1}{2!}\left(-\frac{i}{\hbar}\right)^2\hat H^2\left(t-t_0\right)^2 + \ldots\right] = -\frac{i}{\hbar}\hat H\,\hat u(t,t_0), \tag{4.177}$$
so that the differential equation (157b) is indeed satisfied. On the other hand, Eq. (175) also satisfies the initial condition

$$\hat u(t_0,t_0) = \hat u^\dagger(t_0,t_0) = \hat I \tag{4.178}$$

that immediately follows from the definition (157a) of the evolution operator. Thus, Eq. (175) indeed gives the (unique) solution for the time-evolution operator – in the Schrödinger picture.
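The following minimal sketch (an addition, not from the original text) illustrates Eqs. (175)-(177) numerically: the Taylor series (176) is summed term by term and compared with a library matrix exponential, and the result is checked against Eq. (157b). NumPy and SciPy are assumed, ℏ = 1, and the Hamiltonian matrix is an arbitrary Hermitian example.

```python
# Operator exponential as the solution of Eq. (157b), cf. Eqs. (175)-(177).
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.3], [0.3, -0.5]], dtype=complex)   # any Hermitian matrix
t, t0 = 2.0, 0.0

u_exact = expm(-1j * H * (t - t0) / hbar)

u_series, term = np.eye(2, dtype=complex), np.eye(2, dtype=complex)
for k in range(1, 40):                      # partial sums of the series (176)
    term = term @ (-1j * H * (t - t0) / hbar) / k
    u_series += term
print(np.allclose(u_series, u_exact))       # True

# Check of Eq. (157b), i*hbar du/dt = H u, via a small finite difference:
dt = 1e-6
du_dt = (expm(-1j * H * (t + dt) / hbar) - expm(-1j * H * (t - dt) / hbar)) / (2 * dt)
print(np.allclose(1j * hbar * du_dt, H @ u_exact, atol=1e-6))
```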
Now let us allow the operator Ĥ to be a function of time, but with the condition that its “values” (in fact, operators) at different instants commute with each other:

$$\left[\hat H(t'),\hat H(t'')\right] = 0, \qquad\text{for any } t',\ t''. \tag{4.179}$$
(An important non-trivial example of such a Hamiltonian is the time-dependent part of the Hamiltonian of a particle, due to the effect of a classical, time-dependent, but position-independent force F(t):

$$\hat H_F = -\mathbf{F}(t)\cdot\hat{\mathbf{r}}. \tag{4.180}$$

Indeed, the radius vector’s operator r̂ does not depend explicitly on time and hence commutes with itself, as well as with the c-numbers F(t') and F(t'').) In this case, it is sufficient to replace, in all the above formulas, the product Ĥ(t – t₀) with the corresponding integral over time; in particular, Eq. (175) is generalized as
$$\hat u(t,t_0) = \exp\left\{-\frac{i}{\hbar}\int_{t_0}^{t}\hat H(t')\,dt'\right\}. \tag{4.181}$$
(evolution operator: explicit expression)
This replacement means that the first form of Eq. (176) should be replaced with

$$\hat u(t,t_0) = \hat I + \sum_{k=1}^{\infty}\frac{1}{k!}\left[-\frac{i}{\hbar}\int_{t_0}^{t}\hat H(t')\,dt'\right]^k = \hat I + \sum_{k=1}^{\infty}\frac{1}{k!}\left(-\frac{i}{\hbar}\right)^k\int_{t_0}^{t}dt_1\int_{t_0}^{t}dt_2\ldots\int_{t_0}^{t}dt_k\,\hat H(t_1)\,\hat H(t_2)\ldots\hat H(t_k). \tag{4.182}$$
The proof that Eq. (182) satisfies Eq. (157b) is absolutely similar to the one carried out above.
We may now use Eq. (181) to show that the time-evolution operator remains unitary at any moment, even for a time-dependent Hamiltonian, if it satisfies Eq. (179). Indeed, Eq. (181) yields
$$\hat u^\dagger(t,t_0)\,\hat u(t,t_0) = \exp\left\{+\frac{i}{\hbar}\int_{t_0}^{t}\hat H(t')\,dt'\right\}\exp\left\{-\frac{i}{\hbar}\int_{t_0}^{t}\hat H(t'')\,dt''\right\}. \tag{4.183}$$
Since each of these exponents may be represented with the Taylor series (182), and, thanks to Eq. (179), different components of these sums may be swapped at will, the expression (183) may be manipulated exactly as the product of c-number exponents, for example rewritten as
$$\hat u^\dagger(t,t_0)\,\hat u(t,t_0) = \exp\left\{\frac{i}{\hbar}\int_{t_0}^{t}\hat H(t')\,dt' - \frac{i}{\hbar}\int_{t_0}^{t}\hat H(t'')\,dt''\right\} = \hat I. \tag{4.184}$$
This property ensures, in particular, that the system state’s normalization does not depend on time:

$$\langle\alpha(t)|\alpha(t)\rangle = \langle\alpha(t_0)|\,\hat u^\dagger(t,t_0)\,\hat u(t,t_0)\,|\alpha(t_0)\rangle = \langle\alpha(t_0)|\alpha(t_0)\rangle. \tag{4.185}$$
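As an illustrative addition (not from the original text), the sketch below checks Eqs. (181) and (184) for a time-dependent Hamiltonian H(t) = f(t)·H_c that satisfies the commutation condition (179): the exponential of the time integral is compared with a step-by-step integration of Eq. (157b), and unitarity is verified. NumPy/SciPy are assumed, ℏ = 1, and f(t), H_c are arbitrary examples.

```python
# Check of Eq. (181) for a self-commuting time-dependent Hamiltonian.
import numpy as np
from scipy.linalg import expm

hbar = 1.0
Hc = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)   # fixed Hermitian matrix
f = lambda t: 1.0 + 0.5 * np.cos(t)                      # arbitrary c-number function

t0, t, N = 0.0, 3.0, 5000
dt = (t - t0) / N
mid = t0 + (np.arange(N) + 0.5) * dt                     # midpoint time grid

# Eq. (181): u = exp{-(i/hbar) * integral of H(t') dt'}
u_181 = expm(-1j * np.sum(f(mid)) * dt * Hc / hbar)

# Brute-force integration of Eq. (157b), i*hbar du/dt = H(t) u:
u = np.eye(2, dtype=complex)
for tm in mid:
    u = expm(-1j * f(tm) * Hc * dt / hbar) @ u
print(np.allclose(u, u_181, atol=1e-6))                   # True
print(np.allclose(u.conj().T @ u, np.eye(2), atol=1e-9))  # unitarity, Eq. (184)
```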
The most difficult cases for the explicit solution of Eq. (158) are those where Eq. (179) is violated.36 It may be proven that in these cases the integral limits in the last form of Eq. (182) should be truncated (with the compensating factors 1/k! dropped), giving the so-called Dyson series

$$\hat u(t,t_0) = \hat I + \sum_{k=1}^{\infty}\left(-\frac{i}{\hbar}\right)^k\int_{t_0}^{t}dt_1\int_{t_0}^{t_1}dt_2\ldots\int_{t_0}^{t_{k-1}}dt_k\,\hat H(t_1)\,\hat H(t_2)\ldots\hat H(t_k). \tag{4.186}$$
Since we would not have time/space to use this relation in our course, I will skip its proof.37
Let me now return to the general discussion of quantum dynamics to outline its alternative, Heisenberg picture. For its introduction, let us recall that according to Eq. (125), in quantum mechanics the expectation value of any observable A is a long bracket. Let us explore an even more general form of such bracket:
$$\langle\alpha|\hat A|\beta\rangle. \tag{4.187}$$
(In some applications, the states α and β may be different.) As was discussed above, in the Schrödinger picture the bra- and ket-vectors of the states evolve in time, while the operators of observables remain time-independent (if they do not explicitly depend on time), so that Eq. (187), applied to a moment t, may be represented as
$$\langle\alpha(t)|\,\hat A_S\,|\beta(t)\rangle, \tag{4.188}$$
where the index “S” is added to emphasize the Schrödinger picture. Let us apply the evolution law (157a) to the bra- and ket-vectors in this expression:
36 We will run into such situations in Chapter 7, but will not need to apply Eq. (186) there.
37 It may be found, for example, in Chapter 5 of J. Sakurai’s textbook – see References.
$$\langle\alpha(t)|\,\hat A_S\,|\beta(t)\rangle = \langle\alpha(t_0)|\,\hat u^\dagger(t,t_0)\,\hat A_S\,\hat u(t,t_0)\,|\beta(t_0)\rangle. \tag{4.189}$$
This equality means that if we form a long bracket with bra- and ket-vectors of the initial-time states, together with the following time-dependent Heisenberg operator 38
$$\hat A_H(t) \equiv \hat u^\dagger(t,t_0)\,\hat A_S\,\hat u(t,t_0) \equiv \hat u^\dagger(t,t_0)\,\hat A_H(t_0)\,\hat u(t,t_0), \tag{4.190}$$
(Heisenberg operator)
all experimentally measurable results will remain the same as in the Schrödinger picture:

$$\langle\alpha|\hat A|\beta\rangle(t) = \langle\alpha(t_0)|\,\hat A_H(t,t_0)\,|\beta(t_0)\rangle. \tag{4.191}$$
(Heisenberg picture)
For full clarity, let us see how the Heisenberg picture works for the same simple (but very important!) problem of the spin-½ precession in a z-oriented magnetic field, described (in the z-basis) by the Hamiltonian matrix (164). In that basis, Eq. (157b) for the time-evolution operator becomes
$$i\hbar\frac{\partial}{\partial t}\begin{pmatrix}u_{11} & u_{12}\\ u_{21} & u_{22}\end{pmatrix} = -\frac{\hbar\Omega}{2}\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}\begin{pmatrix}u_{11} & u_{12}\\ u_{21} & u_{22}\end{pmatrix} = -\frac{\hbar\Omega}{2}\begin{pmatrix}u_{11} & u_{12}\\ -u_{21} & -u_{22}\end{pmatrix}. \tag{4.192}$$
We see that in this simple case the differential equations for different matrix elements of the evolution operator matrix are decoupled, and readily solvable, using the universal initial conditions (178):39
$$\text{u}(t,0) = \begin{pmatrix}e^{i\Omega t/2} & 0\\ 0 & e^{-i\Omega t/2}\end{pmatrix} \equiv \text{I}\cos\frac{\Omega t}{2} + i\,\sigma_z\sin\frac{\Omega t}{2}. \tag{4.193}$$
Now let us use them in Eq. (190) to calculate the Heisenberg-picture operators of spin components – still in the z-basis. Dropping the index “H” for the notation brevity (the Heisenberg-picture operators are clearly marked by their dependence on time anyway), we get
$$\text{S}_x(t) = \text{u}^\dagger(t,0)\,\text{S}_x(0)\,\text{u}(t,0) = \frac{\hbar}{2}\,\text{u}^\dagger(t,0)\,\sigma_x\,\text{u}(t,0) = \frac{\hbar}{2}\begin{pmatrix}e^{-i\Omega t/2} & 0\\ 0 & e^{i\Omega t/2}\end{pmatrix}\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}\begin{pmatrix}e^{i\Omega t/2} & 0\\ 0 & e^{-i\Omega t/2}\end{pmatrix} = \frac{\hbar}{2}\begin{pmatrix}0 & e^{-i\Omega t}\\ e^{i\Omega t} & 0\end{pmatrix} \equiv \frac{\hbar}{2}\left(\sigma_x\cos\Omega t + \sigma_y\sin\Omega t\right) \equiv \text{S}_x(0)\cos\Omega t + \text{S}_y(0)\sin\Omega t. \tag{4.194}$$
Absolutely similar calculations of the other spin components yield
38 Note that this strict relation is similar in structure to the first of the symbolic Eqs. (94), with the state bases { v}
and { u} loosely associated with the time moments, respectively, t and t 0.
39 We could of course use this solution, together with Eq. (157), to obtain all the above results for this system within the Schrödinger picture. In our simple case, the use of Eqs. (161) for this purpose was more straightforward, but in some cases, e.g., for some time-dependent Hamiltonians, an explicit calculation of the time-evolution matrix may be the best (or even only practicable) way to proceed.
$$\text{S}_y(t) = \frac{\hbar}{2}\begin{pmatrix}0 & -i\,e^{-i\Omega t}\\ i\,e^{i\Omega t} & 0\end{pmatrix} \equiv \frac{\hbar}{2}\left(\sigma_y\cos\Omega t - \sigma_x\sin\Omega t\right) \equiv \text{S}_y(0)\cos\Omega t - \text{S}_x(0)\sin\Omega t, \tag{4.195}$$

$$\text{S}_z(t) = \frac{\hbar}{2}\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix} = \frac{\hbar}{2}\,\sigma_z \equiv \text{S}_z(0). \tag{4.196}$$
One practical advantage of these formulas is that they describe the system’s evolution for arbitrary initial conditions, thus making the analysis of initial state effects very simple. Indeed, since in the Heisenberg picture the expectation values of observables are calculated using Eq. (191) (with β = α), with time-independent bra- and ket-vectors, such averaging of Eqs. (194)-(196) immediately returns us to Eqs. (170), (173), and (174), which were obtained above in the Schrödinger picture. Moreover, these equations for the Heisenberg operators formally coincide with the classical equations of the torque-induced precession for c-number variables. (Below we will see that the same exact correspondence is valid for the Heisenberg picture of the orbital motion.)
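For a quick numerical confirmation of this statement (an addition, not from the original text), the sketch below builds the evolution matrix (193) and checks the Heisenberg-picture spin matrices (194)-(196). NumPy is assumed, and ℏ = Ω = 1, t = 0.7 are arbitrary illustrative values.

```python
# Heisenberg-picture spin matrices, Eqs. (193)-(196), checked numerically.
import numpy as np

hbar, Omega, t = 1.0, 1.0, 0.7
sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = hbar / 2 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

u = np.diag([np.exp(1j * Omega * t / 2), np.exp(-1j * Omega * t / 2)])  # Eq. (193)
heis = lambda s: u.conj().T @ s @ u                                     # Eq. (190)

print(np.allclose(heis(sx), sx * np.cos(Omega * t) + sy * np.sin(Omega * t)))  # (194)
print(np.allclose(heis(sy), sy * np.cos(Omega * t) - sx * np.sin(Omega * t)))  # (195)
print(np.allclose(heis(sz), sz))                                               # (196)
```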
In order to see that the last fact is by no means a coincidence, let us combine Eqs. (157b) and (190) to form an explicit differential equation of the Heisenberg operator’s evolution. For that, let us differentiate Eq. (190) over time:
$$\frac{d\hat A_H}{dt} = \frac{\partial\hat u^\dagger}{\partial t}\,\hat A_S\,\hat u + \hat u^\dagger\,\frac{\partial\hat A_S}{\partial t}\,\hat u + \hat u^\dagger\,\hat A_S\,\frac{\partial\hat u}{\partial t}. \tag{4.197}$$
Plugging in the derivatives of the time-evolution operator from Eq. (157b) and its Hermitian conjugate, and multiplying both sides of the equation by iℏ, we get
$$i\hbar\frac{d\hat A_H}{dt} = -\hat u^\dagger\,\hat H\,\hat A_S\,\hat u + i\hbar\,\hat u^\dagger\,\frac{\partial\hat A_S}{\partial t}\,\hat u + \hat u^\dagger\,\hat A_S\,\hat H\,\hat u. \tag{4.198a}$$
If for the Schrödinger-picture’s Hamiltonian the condition similar to Eq. (179) is satisfied, then, according to Eqs. (177) or (182), the Hamiltonian commutes with the time-evolution operator and its Hermitian conjugate, and may be swapped with any of them.40 Hence, we may rewrite Eq. (198a) as
$$i\hbar\frac{d\hat A_H}{dt} = -\hat H\,\hat u^\dagger\,\hat A_S\,\hat u + i\hbar\,\hat u^\dagger\,\frac{\partial\hat A_S}{\partial t}\,\hat u + \hat u^\dagger\,\hat A_S\,\hat u\,\hat H = i\hbar\,\hat u^\dagger\,\frac{\partial\hat A_S}{\partial t}\,\hat u + \left[\hat u^\dagger\,\hat A_S\,\hat u,\ \hat H\right]. \tag{4.198b}$$
Now using the definition (190) again, for both terms on the right-hand side, we may write

$$i\hbar\frac{d\hat A_H}{dt} = i\hbar\left(\frac{\partial\hat A}{\partial t}\right)_H + \left[\hat A_H,\hat H\right]. \tag{4.199}$$
(Heisenberg equation of motion)

This is the so-called Heisenberg equation of motion.
Let us see how this equation looks for the same problem of the spin-½ precession in a z-oriented, time-independent magnetic field, described in the z-basis by the Hamiltonian matrix (164), which does not depend on time. In this basis, Eq. (199) for the vector operator of spin reads41
40 Due to the same reason, Ĥ_H ≡ û†Ĥ_S û = Ĥ_S; this is why the Hamiltonian operator’s index may be dropped in Eqs. (198)-(199).
$$i\hbar\frac{d}{dt}\begin{pmatrix}S_{11} & S_{12}\\ S_{21} & S_{22}\end{pmatrix} = \left[\begin{pmatrix}S_{11} & S_{12}\\ S_{21} & S_{22}\end{pmatrix},\ -\frac{\hbar\Omega}{2}\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}\right] = \hbar\Omega\begin{pmatrix}0 & S_{12}\\ -S_{21} & 0\end{pmatrix}. \tag{4.200}$$
Once again, the equations for different matrix elements are decoupled, and their solution is elementary:

$$S_{11}(t) = S_{11}(0) = \text{const}, \quad S_{22}(t) = S_{22}(0) = \text{const}, \quad S_{12}(t) = S_{12}(0)\,e^{-i\Omega t}, \quad S_{21}(t) = S_{21}(0)\,e^{+i\Omega t}. \tag{4.201}$$
According to Eq. (190), the initial values of the Heisenberg-picture matrix elements are just the Schrödinger-picture ones, so that using Eq. (117) we may rewrite this solution in either of two forms:
$$\mathbf{S}(t) = \frac{\hbar}{2}\left[\mathbf{n}_x\begin{pmatrix}0 & e^{-i\Omega t}\\ e^{i\Omega t} & 0\end{pmatrix} + \mathbf{n}_y\begin{pmatrix}0 & -i\,e^{-i\Omega t}\\ i\,e^{i\Omega t} & 0\end{pmatrix} + \mathbf{n}_z\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}\right] = \frac{\hbar}{2}\begin{pmatrix}\mathbf{n}_z & \mathbf{n}_-\,e^{-i\Omega t}\\ \mathbf{n}_+\,e^{i\Omega t} & -\mathbf{n}_z\end{pmatrix}, \qquad\text{where}\quad \mathbf{n}_\pm \equiv \mathbf{n}_x \pm i\,\mathbf{n}_y, \tag{4.202}$$

with n_x, n_y, and n_z being the unit vectors of the corresponding Cartesian axes.
The simplicity of the last expression is spectacular. (Remember, it covers any initial conditions and all three spatial components of spin!) On the other hand, for some purposes the previous form may be more convenient; in particular, its Cartesian components give our earlier results (194)-(196).42
One of the advantages of the Heisenberg picture is that it provides a clearer link between classical and quantum mechanics, found by P. Dirac. Indeed, analytical classical mechanics may be used to derive the following equation of time evolution of an arbitrary function A(q_j, p_j, t) of the generalized coordinates q_j and momenta p_j of the system, and time t:43
$$\frac{dA}{dt} = \frac{\partial A}{\partial t} + \left\{H, A\right\}_P, \tag{4.203}$$
where H is the classical Hamiltonian function of the system, and {..,..} is the so-called Poisson bracket, defined, for two arbitrary functions A(q_j, p_j, t) and B(q_j, p_j, t), as

$$\left\{A, B\right\}_P \equiv \sum_j\left(\frac{\partial A}{\partial p_j}\frac{\partial B}{\partial q_j} - \frac{\partial A}{\partial q_j}\frac{\partial B}{\partial p_j}\right). \tag{4.204}$$
(Poisson bracket)
Comparing Eq. (203) with Eq. (199), we see that the correspondence between the classical and quantum mechanics (in the Heisenberg picture) is provided by the following symbolic relation
41 Using the commutation relations (155), this equation may be readily generalized to the case of an arbitrary magnetic field B( t) and an arbitrary state basis – the exercise highly recommended to the reader.
42 Note that the “values” of the same Heisenberg operator at different moments of time may or may not commute. For example, consider a free 1D particle, with the time-independent Hamiltonian Ĥ = p̂²/2m. In this case, Eq. (199) yields the following equations: iℏ dx̂/dt = [x̂, Ĥ] = iℏp̂/m and iℏ dp̂/dt = [p̂, Ĥ] = 0, with simple solutions (similar to those for the classical motion): p̂(t) = const = p̂(0) and x̂(t) = x̂(0) + p̂(0)t/m, so that [x̂(0), x̂(t)] = [x̂(0), p̂(0)]t/m = [x̂, p̂]_S t/m = iℏt/m ≠ 0 for t ≠ 0.
43 See, e.g., CM Eq. (10.17). The notation there does not use the subscript “P” that is employed in Eqs. (203)-
(205) to distinguish the classical Poisson bracket (204) from the quantum anticommutator (34).
$$\left\{A, B\right\}_P \;\longleftrightarrow\; \frac{i}{\hbar}\left[\hat A,\hat B\right]. \tag{4.205}$$
(classical vs. quantum mechanics)
This relation may be used, in particular, for finding appropriate operators for the system’s observables, if their form is not immediately evident from the correspondence principle.
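As a small symbolic illustration (an addition, not from the original text) of the classical side of this correspondence, the sketch below evaluates the Poisson brackets {H, x}_P and {H, p}_P of Eqs. (203)-(204) for a 1D harmonic oscillator and recovers Hamilton's equations. SymPy is assumed; the oscillator parameters are arbitrary symbols.

```python
# Classical Eqs. (203)-(204) for a 1D harmonic oscillator, checked with SymPy.
import sympy as sp

x, p, m, w = sp.symbols('x p m w', real=True, positive=True)
H = p**2 / (2 * m) + m * w**2 * x**2 / 2

# Poisson bracket in the convention of Eq. (204): {A,B} = dA/dp dB/dx - dA/dx dB/dp
poisson = lambda A, B: sp.diff(A, p) * sp.diff(B, x) - sp.diff(A, x) * sp.diff(B, p)

print(sp.simplify(poisson(H, x)))   # p/m        = dx/dt, cf. Eq. (203)
print(sp.simplify(poisson(H, p)))   # -m*w**2*x  = dp/dt
```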
Finally, let us discuss one more alternative picture of quantum dynamics. It is also attributed to Dirac, and is called either the “Dirac picture”, or (more frequently) the interaction picture. The last name stems from the fact that this picture is very useful for the perturbative (approximate) approaches to systems whose Hamiltonians may be partitioned into two parts,

$$\hat H = \hat H_0 + \hat H_{\rm int}, \tag{4.206}$$
where Ĥ₀ is the sum of relatively simple Hamiltonians of the component subsystems, while the second term in Eq. (206) represents their weak interaction. (Note, however, that all relations in the balance of this section are exact and not directly based on the interaction weakness.) In this case, it is natural to consider, together with the full operator û(t, t₀) of the system’s evolution, which obeys Eq. (157b), a similarly defined unitary operator û₀(t, t₀) of evolution of the “unperturbed system” described by the Hamiltonian Ĥ₀ alone:
$$i\hbar\frac{\partial\hat u_0}{\partial t} = \hat H_0\,\hat u_0, \tag{4.207}$$
and also the following interaction evolution operator:

$$\hat u_I \equiv \hat u_0^\dagger\,\hat u. \tag{4.208}$$
(interaction evolution operator)
The motivation for these definitions becomes clearer if we insert the reciprocal relation,

$$\hat u = \hat u_0\,\hat u_0^\dagger\,\hat u \equiv \hat u_0\,\hat u_I, \tag{4.209}$$
and its Hermitian conjugate,

$$\hat u^\dagger = \hat u^\dagger\,\hat u_0\,\hat u_0^\dagger \equiv \hat u_I^\dagger\,\hat u_0^\dagger, \tag{4.210}$$
into the basic Eq. (189):

$$\langle\alpha(t_0)|\,\hat u^\dagger(t,t_0)\,\hat A_S\,\hat u(t,t_0)\,|\beta(t_0)\rangle = \langle\alpha(t_0)|\,\hat u_I^\dagger(t,t_0)\,\hat u_0^\dagger(t,t_0)\,\hat A_S\,\hat u_0(t,t_0)\,\hat u_I(t,t_0)\,|\beta(t_0)\rangle. \tag{4.211}$$
This relation shows that any long bracket (187), i.e. any experimentally verifiable result of quantum mechanics, may be expressed as

$$\langle\alpha|\hat A|\beta\rangle(t) = \langle\alpha_I(t)|\,\hat A_I(t)\,|\beta_I(t)\rangle, \tag{4.212}$$
if we assume that both the state vectors and the operators depend on time, with the state vectors evolving only due to the interaction operator û_I:

$$\langle\alpha_I(t)| = \langle\alpha(t_0)|\,\hat u_I^\dagger(t,t_0), \qquad |\beta_I(t)\rangle = \hat u_I(t,t_0)\,|\beta(t_0)\rangle, \tag{4.213}$$
(interaction picture: state vectors)
while the operators’ evolution is governed by the unperturbed operator û₀:

$$\hat A_I(t) \equiv \hat u_0^\dagger(t,t_0)\,\hat A_S\,\hat u_0(t,t_0). \tag{4.214}$$
(interaction picture: operators)
These relations describe the interaction picture of quantum dynamics. Let me defer an example of its use until the perturbative analysis of open quantum systems in Sec. 7.6, and end this section with a proof that the interaction evolution operator (208) satisfies the following natural equation:

$$i\hbar\frac{\partial\hat u_I}{\partial t} = \hat H_I\,\hat u_I, \tag{4.215}$$
where Ĥ_I is the interaction Hamiltonian formed from Ĥ_int in accordance with the same rule (214):

$$\hat H_I(t) \equiv \hat u_0^\dagger(t,t_0)\,\hat H_{\rm int}\,\hat u_0(t,t_0). \tag{4.216}$$
The proof is very straightforward: first using the definition (208), and then Eq. (157b) and the Hermitian conjugate of Eq. (207), we may write

$$i\hbar\frac{\partial\hat u_I}{\partial t} = i\hbar\frac{\partial\hat u_0^\dagger}{\partial t}\,\hat u + \hat u_0^\dagger\,i\hbar\frac{\partial\hat u}{\partial t} = -\hat H_0\,\hat u_0^\dagger\,\hat u + \hat u_0^\dagger\left(\hat H_0 + \hat H_{\rm int}\right)\hat u = \left(\hat u_0^\dagger\,\hat H_0 - \hat H_0\,\hat u_0^\dagger\right)\hat u + \hat u_0^\dagger\,\hat H_{\rm int}\,\hat u. \tag{4.217}$$
Since û₀† may be represented as an integral of an exponent of Ĥ₀ over time (similar to Eq. (181) relating û and Ĥ), these operators commute, so that the parentheses in the last form of Eq. (217) vanish. Now plugging û from the last form of Eq. (209), we get the equation
$$i\hbar\frac{\partial\hat u_I}{\partial t} = \hat u_0^\dagger\,\hat H_{\rm int}\,\hat u_0\,\hat u_I \equiv \left(\hat u_0^\dagger\,\hat H_{\rm int}\,\hat u_0\right)\hat u_I, \tag{4.218}$$
which is clearly equivalent to the combination of Eqs. (215) and (216).
As Eq. (215) shows, if the energy scale of the interaction Ĥ_int is much smaller than that of the background Hamiltonian Ĥ₀, the interaction evolution operators û_I and û_I†, and hence the state vectors (213), evolve relatively slowly, without fast background oscillations. This is very convenient for the perturbative approaches to complex interacting systems, in particular to the “open” quantum systems that weakly interact with their environment – see Sec. 7.6.
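Before moving on, here is an illustrative two-level check (an addition, not from the original text) of Eqs. (207)-(216): the interaction evolution operator û_I = û₀†û is compared with a direct step-by-step integration of Eq. (215). NumPy/SciPy are assumed, ℏ = 1, and the particular choice Ĥ₀ = –(ℏΩ/2)σ_z, Ĥ_int = (ℏω₁/2)σ_x is just an example.

```python
# Interaction picture for a two-level system: Eqs. (208), (215)-(216).
import numpy as np
from scipy.linalg import expm

hbar, Omega, w1, T, N = 1.0, 1.0, 0.1, 5.0, 4000
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H0 = -hbar * Omega / 2 * sz
Hint = hbar * w1 / 2 * sx
dt = T / N

u = expm(-1j * (H0 + Hint) * T / hbar)      # full evolution operator, Eq. (175)
uI = np.eye(2, dtype=complex)               # integrate Eq. (215) step by step
for k in range(N):
    t_mid = (k + 0.5) * dt
    u0_mid = expm(-1j * H0 * t_mid / hbar)
    HI = u0_mid.conj().T @ Hint @ u0_mid    # interaction Hamiltonian, Eq. (216)
    uI = expm(-1j * HI * dt / hbar) @ uI

u0 = expm(-1j * H0 * T / hbar)
print(np.allclose(u0.conj().T @ u, uI, atol=1e-4))   # Eq. (208): u_I = u0^dag u
```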
4.7. Coordinate and momentum representations
Now let me show that in application to the orbital motion of a particle, the bra-ket formalism naturally reduces to the notions and postulates of wave mechanics, which were discussed in Chapter 1.
For that, we first have to modify some of the above formulas for the case of a basis with a continuous spectrum of eigenvalues. In that case, it is more appropriate to replace discrete indices, such as j, j’, etc.
broadly used above, with the corresponding eigenvalue – just as it was done earlier for functions of the wave vector – see, e.g., Eqs. (1.88), (2.20), etc. For example, the key Eq. (68), defining the eigenkets and eigenvalues of an operator, may be conveniently rewritten in the form
$$\hat A\,|a_A\rangle = A\,|a_A\rangle. \tag{4.219}$$
More substantially, all sums over such continuous eigenstate sets should be replaced with integrals. For example, for a full and orthonormal set of the continuous eigenstates |a_A⟩, the closure relation (44) should be replaced with

$$\int dA\,|a_A\rangle\langle a_A| = \hat I, \tag{4.220}$$
(continuous spectrum: closure relation)
where the integral is over the whole interval of possible eigenvalues of the observable A.44 Applying this relation to the ket-vector of an arbitrary state α, we get the following replacement of Eq. (37):
$$|\alpha\rangle = \hat I\,|\alpha\rangle = \int dA\,|a_A\rangle\langle a_A|\alpha\rangle. \tag{4.221}$$
For the particular case when |α⟩ = |a_{A'}⟩, this relation requires that

$$\langle a_A|a_{A'}\rangle = \delta(A - A'); \tag{4.222}$$
(continuous spectrum: state orthonormality)
this formula replaces the orthonormality condition (38).
According to Eq. (221), in the continuous case the bracket ⟨a_A|α⟩ still plays the role of the probability amplitude, i.e. a complex c-number whose modulus squared determines the state a_A’s probability – see the last form of Eq. (120). However, for a continuous observable, the probability of finding the system exactly in a particular state is infinitesimal; instead, we should speak about the probability dW = w(A) dA of finding the observable within a small interval dA << A near the value A, with the probability density w(A) ∝ |⟨a_A|α⟩|². The coefficient in this relation may be found by making a similar change from summation to integration in the normalization condition (121):
aA2. The coefficient in this relation may be found by making a similar change from the summation to integration in the normalization condition (121):
dA a
a
.
1
(4.223)
A
A
Since the total probability of the system to be in some state, ∫w(A) dA, should equal 1, this means that

$$w(A) = \langle\alpha|a_A\rangle\langle a_A|\alpha\rangle \equiv \left|\langle a_A|\alpha\rangle\right|^2. \tag{4.224}$$
(continuous spectrum: probability density)
Now let us see how we can calculate the expectation values of continuous observables, i.e. their ensemble averages. If we speak about the same observable A whose eigenstates are used as the continuous basis (or any compatible observable), everything is simple. Indeed, inserting Eq. (224) into the general statistical relation

$$\langle A\rangle = \int w(A)\,A\,dA, \tag{4.225}$$
which is just the obvious continuous version of Eq. (1.37), we get

$$\langle A\rangle = \int dA\,\langle\alpha|a_A\rangle\,A\,\langle a_A|\alpha\rangle. \tag{4.226}$$
Inserting a delta function to represent this expression as a formally double integral,

$$\langle A\rangle = \int dA\int dA'\,\langle\alpha|a_A\rangle\,A\,\delta(A - A')\,\langle a_{A'}|\alpha\rangle, \tag{4.227}$$
and using the continuous-spectrum version of Eq. (98),
44 The generalization to cases when the eigenvalue spectrum consists of both a continuum interval plus some set of discrete values, is straightforward, though leads to somewhat cumbersome formulas.
$$\langle a_A|\hat A|a_{A'}\rangle = A\,\delta(A - A'), \tag{4.228}$$
we may write

$$\langle A\rangle = \int dA\int dA'\,\langle\alpha|a_A\rangle\langle a_A|\hat A|a_{A'}\rangle\langle a_{A'}|\alpha\rangle = \langle\alpha|\hat A|\alpha\rangle, \tag{4.229}$$
so that Eq. (4.125) remains valid in the continuous-spectrum case without any changes.
The situation is a bit more complicated for the expectation values of an operator that does not commute with the basis-generating operator, because its matrix in that basis may not be diagonal. We will consider (and overcome :-) this technical difficulty very soon, but otherwise we are ready for a discussion of the relation between the bra-ket formalism and the wave mechanics. (For the notation simplicity I will discuss its 1D version; its generalization to 2D and 3D cases is straightforward.) Let us postulate the (intuitively almost evident) existence of a quantum state basis, whose ket-vectors will be called |x⟩, corresponding to a certain definite value x of the particle’s coordinate. Writing the following trivial identity:
$$x\,|x\rangle = x\,|x\rangle, \tag{4.230}$$
and comparing this relation with Eq. (219), we see that they do not contradict each other if we assume that x on the left-hand side of this relation is the (Hermitian) operator x̂ of the particle’s coordinate, whose action on a ket- (or bra-) vector is just its multiplication by the c-number x. (This looks like a proof, but is actually a separate, independent postulate, no matter how plausible.) Hence we may consider the vectors |x⟩ as the eigenstates of the operator x̂. Let me hope that the reader will excuse me if I do not pursue here a strict proof that this set is full and orthogonal,45 so that we may apply to it Eq. (222):

$$\langle x|x'\rangle = \delta(x - x'). \tag{4.231}$$
Using this basis is called the coordinate representation – the term which was already used at the end of Sec. 1.1, but without explanation.
In the basis of the x-states, the inner product ⟨a_A|α(t)⟩ becomes ⟨x|α(t)⟩, and Eq. (224) takes the following form:

$$w(x,t) = \langle\alpha(t)|x\rangle\langle x|\alpha(t)\rangle \equiv \langle x|\alpha(t)\rangle^*\,\langle x|\alpha(t)\rangle. \tag{4.232}$$
Comparing this formula with the basic postulate (1.22) of wave mechanics, we see that they coincide if the wavefunction of a time-dependent state is identified with that short bracket:46
$$\Psi(x,t) = \langle x|\alpha(t)\rangle. \tag{4.233}$$
(wavefunction as inner product)
This key formula provides the desired connection between the bra-ket formalism and the wave mechanics, and should not be too surprising for the (thoughtful :-) reader. Indeed, Eq. (45) shows that any inner product of two state vectors describing two states is a measure of their coincidence – just as the scalar product of two geometric vectors is; the orthonormality condition (38) is a particular manifestation of this fact. In this language, the particular value (233) of a wavefunction at some

45 Such a proof is rather involved mathematically, but physically this fact should be evident.
46 I do not quite like expressions like ⟨x|Ψ⟩ used in some papers and even textbooks. Of course, one is free to replace α with any other letter (Ψ included) to denote a quantum state, but then it is better not to use the same letter to denote the wavefunction, i.e. an inner product of two state vectors, to avoid confusion.
point x and moment t characterizes “how much of a particular coordinate x” the state contains at time t. (Of course, this informal language is too crude to reflect the fact that Ψ(x, t) is a complex function, which has not only a modulus but also an argument – the quantum-mechanical phase.)

Now let us rewrite the most important formulas of the bra-ket formalism in the wave-mechanics notation. Inner-multiplying both parts of Eq. (219) by the bra-vector ⟨x|, and then inserting into the left-hand side of that relation the identity operator in the form (220) for the coordinate x', we get
A x a
ˆ
,
(4.234)
A
A
i.e., using the wavefunction’s definition (233),

$$\int dx'\,\langle x|\hat A|x'\rangle\,\Psi_A(x') = A\,\Psi_A(x), \tag{4.235}$$

where, for the notation brevity, the time dependence of the wavefunction is just implied (with the capital Ψ serving as a reminder of this fact), and will be restored when needed.
For a general operator, we would have to stop here, because if it does not commute with the coordinate operator, its matrix in the x-basis is not diagonal, and the integral on the left-hand side of Eq. (235) cannot be worked out explicitly. However, virtually all quantum-mechanical operators discussed in this course47 are (space-) local: they depend on only one spatial coordinate, say x. For such operators, the left-hand side of Eq. (235) may be further transformed as
$$\int\langle x|\hat A|x'\rangle\,\Psi(x')\,dx' = \int\langle x|x'\rangle\,\hat A\,\Psi(x')\,dx' = \int\hat A\,\delta(x - x')\,\Psi(x')\,dx' = \hat A\,\Psi(x). \tag{4.236}$$
The first step in this transformation may appear as elementary as the last two, with the ket-vector |x'⟩ swapped with the operator depending only on x; however, due to the delta-functional character of the bracket (231), this step is, in fact, an additional postulate, so that the second equality in Eq. (236) essentially defines the coordinate representation of the local operator, whose explicit form still needs to be determined.
Let us consider, for example, the 1D version of the Hamiltonian (1.41),

$$\hat H = \frac{\hat p_x^2}{2m} + U(\hat x), \tag{4.237}$$
which was the basis of all our discussions in Chapter 2. Its potential-energy part U (which may be time-dependent as well) commutes with the operator x ˆ , i.e. its matrix in the x-basis has to be diagonal. For such an operator, the transformation (236) is indeed trivial, and its coordinate representation is given merely by the c-number function U( x).
The situation with the momentum operator p̂_x (and hence with the kinetic energy p̂_x²/2m), which does not commute with x̂, is less evident. Let me show that its coordinate representation is given by the 1D version of Eq. (1.26), if we postulate that the commutation relation (2.14),

$$\left[\hat x,\hat p_x\right] = i\hbar\,\hat I, \qquad\text{i.e.}\quad \hat x\,\hat p_x - \hat p_x\,\hat x = i\hbar\,\hat I, \tag{4.238}$$

47 The only substantial exception is the statistical operator ŵ(x, x'), to be discussed in Chapter 7.
is valid in any representation.48 For that, let us consider the following matrix element, ⟨x| x̂p̂_x − p̂_x x̂ |x'⟩.
On one hand, we may use Eq. (238), and then Eq. (231), to write
$$\langle x|\,\hat x\hat p_x - \hat p_x\hat x\,|x'\rangle = \langle x|\,i\hbar\hat I\,|x'\rangle = i\hbar\,\langle x|x'\rangle = i\hbar\,\delta(x - x'). \tag{4.239}$$
On the other hand, since x̂|x'⟩ = x'|x'⟩ and ⟨x|x̂ = ⟨x|x, we may represent the same matrix element as

$$\langle x|\,\hat x\hat p_x - \hat p_x\hat x\,|x'\rangle = \left(x - x'\right)\langle x|\hat p_x|x'\rangle. \tag{4.240}$$
Comparing Eqs. (239) and (240), we get

$$\langle x|\hat p_x|x'\rangle = i\hbar\,\frac{\delta(x - x')}{x - x'}. \tag{4.241}$$
As it follows from the definition of the delta function,49 all expressions involving it acquire final sense only at their integration, in our current case, at that described by Eq. (236). Plugging Eq. (241) into the left-hand side of that relation, we get

$$\int\langle x|\hat p_x|x'\rangle\,\Psi(x')\,dx' = i\hbar\int\frac{\delta(x - x')}{x - x'}\,\Psi(x')\,dx'. \tag{4.242}$$
Since the integral on the right-hand side is contributed only by an infinitesimal vicinity of the point x' = x, we may calculate it by expanding the continuous wavefunction Ψ(x') into the Taylor series in small (x' – x), and keeping only the two leading terms of the series, so that Eq. (242) is reduced to

$$\int\langle x|\hat p_x|x'\rangle\,\Psi(x')\,dx' = i\hbar\left[\Psi(x)\int\frac{\delta(x - x')}{x - x'}\,dx' + \left.\frac{d\Psi}{dx'}\right|_{x'=x}\int\frac{x' - x}{x - x'}\,\delta(x - x')\,dx'\right]. \tag{4.243}$$
Since the delta function may be always understood as an even function of its argument, in our case (x – x'), the first term on the right-hand side is proportional to an integral of an odd function in symmetric limits and is equal to zero, so that we get50
$$\int\langle x|\hat p_x|x'\rangle\,\Psi(x')\,dx' = -i\hbar\,\frac{\partial\Psi}{\partial x}. \tag{4.244}$$
Comparing this expression with the right-hand side of Eq. (236), we see that in the coordinate representation we indeed get the 1D version of Eq. (1.26), which was used so much in Chapter 2,51
$$\hat p_x = -i\hbar\,\frac{\partial}{\partial x}. \tag{4.245}$$
48 Another possible approach to the wave mechanics axiomatics is to derive Eq. (238) by postulating the form, T̂_X = exp{−ip̂_x X/ℏ}, of the operator that shifts any wavefunction by the distance X along the axis x. In my approach, this expression will be derived when we need it (in Sec. 5.5), while Eq. (238) is postulated.
49 If necessary, please revisit MA Sec. 14.
50 One more useful expression of this type, which may be proved similarly, is ∂δ(x – x')/∂x = –∂δ(x – x')/∂x'.
51 This means, in particular, that in the sense of Eq. (236), the operator of differentiation is local, despite the fact that its action on a function f may be interpreted as the limit of the fraction Δf/Δx, involving two points. (In some axiomatic systems, local operators are defined as arbitrary polynomials of functions and their derivatives.)
It is straightforward to show (and is virtually evident) that the coordinate representation of any operator function f(p̂_x) is

$$f\!\left(-i\hbar\,\frac{\partial}{\partial x}\right). \tag{4.246}$$
In particular, this pertains to the kinetic energy operator in Eq. (237), so the coordinate representation of this Hamiltonian also takes the very familiar form:

$$\hat H = \frac{1}{2m}\left(-i\hbar\,\frac{\partial}{\partial x}\right)^2 + U(x,t) = -\frac{\hbar^2}{2m}\,\frac{\partial^2}{\partial x^2} + U(x,t). \tag{4.247}$$
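To close the section with a concrete illustration (an addition, not from the original text), the following sketch applies the coordinate-representation momentum operator (245), discretized by central differences on a grid, to a Gaussian wave packet and compares the result with the analytical derivative. NumPy is assumed; the packet parameters k0 and w, and ℏ = 1, are arbitrary illustrative values.

```python
# Discretized check of Eq. (245): p = -i*hbar*d/dx acting on a Gaussian packet.
import numpy as np

hbar, k0, w = 1.0, 3.0, 1.0
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
psi = np.exp(1j * k0 * x - x**2 / (2 * w**2))

p_psi = -1j * hbar * np.gradient(psi, dx)                # central differences
p_psi_exact = -1j * hbar * (1j * k0 - x / w**2) * psi    # analytic derivative
print(np.max(np.abs(p_psi - p_psi_exact)) < 1e-3)        # True
```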