Essential Graduate Physics by Konstantin K. Likharev
CM: Classical Mechanics

$$\dot q = p, \qquad \dot p = -2\delta\, p - \omega_0^2 \sin q + f_0 \cos\psi, \qquad \dot\psi = \omega. \tag{9.10}$$

Figure 4 shows several results of a numerical solution of Eq. (10).10 In all cases, the parameters δ, ω₀, and f₀ are fixed, while the external frequency ω is gradually changed. For the case shown on the top two panels, the system still tends to a stable periodic solution, with a very low content of higher harmonics. If the external force frequency is reduced by just a few percent, the 3rd subharmonic may be excited. (This effect has already been discussed in Sec. 5.8 – see, e.g., Fig. 5.15.) The next row shows that just a small further reduction of the frequency ω leads to a new tripling of the period, i.e. the generation of a complex waveform with the 9th subharmonic. Finally (see the bottom panels of Fig. 4), even a minor further change of ω leads to oscillations without any visible period, i.e. to chaos.

In order to trace this transition, a direct inspection of the oscillation waveforms q(t) is not very convenient, and trajectories on the phase plane [q, p] also become messy if plotted for many periods of the external frequency. In situations like this, the Poincaré (or "stroboscopic") plane, already discussed in Sec. 5.6, is much more useful. As a reminder, this is essentially just the phase plane [q, p], but with the points highlighted only once per period, e.g., at ψ = 2πn, with n = 1, 2, … On this plane, periodic oscillations of frequency ω are represented by just one fixed point – see, e.g., the top panel in the right column of Fig. 4. The 3rd subharmonic generation, shown on the next panel, means a tripling of the oscillation period, and is represented by the splitting of the fixed point into three. It is evident that this transition is similar to the period-doubling bifurcation in the logistic map, apart from the fact (already discussed in Sec. 5.8) that in systems with an antisymmetric nonlinearity, such as the pendulum (10), the 3rd subharmonic is easier to excite. From this point, the 9th subharmonic generation (shown on the 3rd panel of Fig. 4), i.e. one more splitting of the points on the Poincaré plane, may be understood as one more step on the Feigenbaum-like route to chaos – see the bottom panel of that figure.

9 See, e.g., Chapters 2-4 in H. Schuster and W. Just, Deterministic Chaos, 4th ed., Wiley-VCH, 2005, or Chapters 8-9 in J. Thompson and H. Stewart, Nonlinear Dynamics and Chaos, 2nd ed., Wiley, 2002.

10 In the actual simulation, a small term εq, with ε << 1, has been added to the left-hand side of this equation. This term slightly tames the trend of the solution to spread along the q-axis, and makes the presentation of results easier, without affecting the system's dynamics too much.
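For the curious reader, the gist of such a numerical experiment may be sketched in a few lines of Python. This is only an illustrative reconstruction, not the code behind Fig. 4: the weak drive amplitude f₀/ω₀² = 0.1 (chosen to land safely in the simple period-1 regime), the initial condition, the step counts, and the omission of the small εq taming term mentioned in footnote 10 are all this sketch's assumptions.

```python
import math

# Stroboscopic (Poincare-plane) sampling of the driven pendulum of Eq. (10),
#   dq/dt = p,  dp/dt = -2*delta*p - sin(q) + f0*cos(omega*t),
# written in units where omega_0 = 1. The state is recorded once per drive period.

def deriv(t, s, delta, f0, omega):
    q, p = s
    return [p, -2.0*delta*p - math.sin(q) + f0*math.cos(omega*t)]

def rk4_step(t, s, dt, delta, f0, omega):
    """One classical 4th-order Runge-Kutta step."""
    k1 = deriv(t, s, delta, f0, omega)
    k2 = deriv(t + dt/2, [a + dt/2*b for a, b in zip(s, k1)], delta, f0, omega)
    k3 = deriv(t + dt/2, [a + dt/2*b for a, b in zip(s, k2)], delta, f0, omega)
    k4 = deriv(t + dt,   [a + dt*b   for a, b in zip(s, k3)], delta, f0, omega)
    return [a + dt/6*(b + 2*c + 2*d + e)
            for a, b, c, d, e in zip(s, k1, k2, k3, k4)]

def poincare(delta, f0, omega, n_periods=300, skip=200, steps_per_period=200):
    """Integrate over n_periods drive periods; return the [q, p] strobe points
    collected after the first `skip` periods (the initial transient)."""
    s, t = [0.1, 0.0], 0.0
    dt = (2.0*math.pi/omega) / steps_per_period
    points = []
    for n in range(n_periods):
        for _ in range(steps_per_period):
            s = rk4_step(t, s, dt, delta, f0, omega)
            t += dt
        if n >= skip:
            points.append(tuple(s))
    return points

# A weak drive gives a period-1 attractor: all strobe points essentially coincide,
# i.e. the process is represented by a single fixed point on the Poincare plane.
pts = poincare(delta=0.1, f0=0.1, omega=0.8)
spread = max(abs(a[0] - b[0]) + abs(a[1] - b[1]) for a in pts for b in pts)
```

At the stronger drive of Fig. 4 (f₀/ω₀² = 1), gradually decreasing ω makes the strobe points split into 3, then 9, and then scatter chaotically.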


Fig. 9.4. Oscillations in a pendulum with weak damping, δ/ω₀ = 0.1, driven by a sinusoidal external force with a fixed effective amplitude f₀/ω₀² = 1, and several close values of the frequency ω (listed on the panels: ω/ω₀ = 0.81, 0.74, 0.72, and 0.717). Left column: the oscillation waveforms q(t), recorded after certain initial transient intervals. Right column: representations of the same processes on the Poincaré plane of the variables [q, p], with the q-axis turned vertically, for the convenience of comparison with the left panels.


So, the transition to chaos in dynamic systems may be at least qualitatively similar to that in 1D maps, with a law similar to Eq. (6) for the critical values of some parameter of the system (in Fig. 4, the frequency ω), though with a system-specific value of the coefficient δ. Moreover, we may consider the first two differential equations of the system (10) as a 2D map that relates the vector {qₙ₊₁, pₙ₊₁} of the coordinate and momentum, measured at ψ = 2π(n + 1), with the previous value {qₙ, pₙ} of that vector, reached at ψ = 2πn.

Unfortunately, this similarity also implies that the deterministic chaos in dynamic systems is at least as complex, and as little understood, as that in maps. For example, Fig. 5 shows (a part of) the phase diagram of the externally-driven pendulum, with the red bar marking the route to chaos traced in Fig. 4, and shading/hatching styles marking different oscillation regimes. One can see that the pattern is at least as complex as that shown in Figs. 2 and 3, and, apart from a few features,11 is equally unpredictable from the form of the equation.

Fig. 9.5. The phase diagram of an externally-driven pendulum with weak damping (δ/ω₀ = 0.1), on the plane of the normalized force amplitude f₀/ω₀² (vertical axis) and frequency ω/ω₀ (horizontal axis). The regions of oscillations with the basic period are not shaded; the notation for other regions is as follows. Dotted: subharmonic generation; cross-hatched: chaos; hatched: either chaos or the basic period (depending on the initial conditions); hatch-dotted: either the basic period or subharmonics. Solid lines show the boundaries of single-regime regions, while dashed lines are the boundaries of the regions where several types of motion are possible. (Figure courtesy of V. Kornev.)

Are there any valuable general results concerning the deterministic chaos in dynamic systems?

The most important (though almost evident) result is that this phenomenon is impossible in any system described by one or two first-order differential equations with time-independent right-hand sides.

Indeed, let us start with a single equation

$$\dot q = f(q), \tag{9.11}$$

where f(q) is any single-valued function. This equation may be directly integrated to give

$$\int^{q} \frac{dq'}{f(q')} = t + \text{const}, \tag{9.12}$$

showing that the relation between q and t is unique and hence does not leave any place for chaos.
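This uniqueness is easy to watch numerically: wherever f ≠ 0, the solution q(t) is monotone, so the trajectory can never revisit its past values. A minimal check, using the arbitrary (this sketch's) example f(q) = 1 + q², for which Eq. (12) gives t = arctan q, i.e. q = tan t:

```python
import math

def f(q):               # an arbitrary single-valued right-hand side of Eq. (11)
    return 1.0 + q*q

q, dt, samples = 0.0, 1e-4, []
for _ in range(12000):  # integrate dq/dt = f(q) up to t = 1.2 by RK4
    k1 = f(q)
    k2 = f(q + dt/2*k1)
    k3 = f(q + dt/2*k2)
    k4 = f(q + dt*k3)
    q += dt/6*(k1 + 2*k2 + 2*k3 + k4)
    samples.append(q)

# q(t) is strictly monotone and matches the exact solution q = tan(t)
```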

11 In some cases, it is possible to predict a parameter region where chaos cannot happen, due to the lack of any instability-amplification mechanism. Unfortunately, typically the analytically predicted boundaries of such a region form a rather loose envelope of the actual (numerically simulated) chaotic regions.


Next, let us explore a system of two such equations:

$$\dot q_1 = f_1(q_1, q_2), \qquad \dot q_2 = f_2(q_1, q_2). \tag{9.13}$$

Consider its phase plane shown schematically in Fig. 6. In a “usual” system, the trajectories approach either some fixed point (Fig. 6a) describing static equilibrium, or a limit cycle (Fig. 6b) describing periodic oscillations. (Both notions are united by the term attractor because they “attract” trajectories launched from various initial conditions.) On the other hand, phase plane trajectories of a chaotic system of equations that describe physical variables (which cannot be infinite), should be confined to a limited phase plane area, and simultaneously cannot start repeating each other. (This topology is frequently called the strange attractor.) For that, the 2D trajectories need to cross – see, e.g., point A in Fig. 6c.

Fig. 9.6. Attractors in dynamical systems: (a) a fixed point, (b) a limit cycle, and (c) a strange attractor. (On each panel, the horizontal and vertical axes are q₁ and q₂; point A on panel (c) marks a crossing of the 2D trajectory.)

However, in the case described by Eqs. (13), such a crossing is clearly impossible, because according to these equations, the tangent of a phase plane trajectory is a unique function of the coordinates {q₁, q₂}:

$$\frac{dq_1}{dq_2} = \frac{f_1(q_1, q_2)}{f_2(q_1, q_2)}. \tag{9.14}$$

Thus, in this case, the deterministic chaos is impossible.12 It becomes, however, readily possible if the right-hand sides of a system similar to Eq. (13) depend either on other variables of the system or on time. For example, if we consider the first two differential equations of the system (10), then in the case f₀ = 0 they have the structure of the system (13), and hence chaos is impossible – even at δ < 0, when (as we know from Sec. 5.4) the system allows self-excitation of oscillations, leading to a limit-cycle attractor. However, if f₀ ≠ 0, this argument does not work any longer, and (as we have already seen) the system may have a strange attractor – which is, for dynamic systems, a synonym for the deterministic chaos. Thus, chaos is only possible in autonomous dynamic systems described by three or more differential equations of the first order.13

12 A mathematically strict formulation of this statement is called the Poincaré-Bendixson theorem, which was proved by Ivar Bendixson in 1901.

13 Since a typical dynamic system with one degree of freedom is described by two such equations, the number of first-order equations describing a dynamic system is sometimes called the number of its half-degrees of freedom.

This notion is very useful and popular in statistical mechanics – see, e.g., SM Sec. 2.2 and on.


9.3. Chaos in Hamiltonian systems

The last conclusion is of course valid for Hamiltonian systems, which are just a particular type of dynamic systems. However, one may wonder whether these systems, which feature at least one first integral of motion, H = const, and hence are more "ordered" than the systems discussed above, can exhibit chaos at all. The answer is yes, because such systems still can have mechanisms for the exponential growth of a small initial perturbation.

As the simplest way to show it, let us consider the so-called mathematical billiard, i.e. a system with a ballistic particle (a "ball") moving freely by inertia on a horizontal plane surface ("table") limited by rigid walls. In this idealized model of the usual game of billiards, the ball's velocity v is conserved when it moves on the table, and when it runs into a wall, the ball is elastically reflected from it as from a mirror,14 with the reversal of the sign of the normal velocity vₙ and the conservation of the tangential velocity vτ, and hence without any loss of its kinetic (and hence the full) energy

$$E = H = T = \frac{m}{2}v^2 = \frac{m}{2}\left(v_n^2 + v_\tau^2\right). \tag{9.15}$$

This model, while being a legitimate 2D dynamic system,15 allows geometric analyses for several simple table shapes. The simplest of them is a rectangular billiard of area a×b (Fig. 7), whose analysis may be readily carried out just by the replacement of each ball reflection event with the mirror reflection of the table in that wall – see the dashed lines on panel (a).

Fig. 9.7. Ball motion on a rectangular billiard table of sides a and b, at (a) a commensurate and (b) an incommensurate launch angle, from the launch point O.

Such an analysis (left for the reader's pleasure :-) shows that if the tangent of the ball launching angle φ is commensurate with the side length ratio:

$$\tan\varphi = \frac{m}{n}\frac{b}{a}, \tag{9.16}$$

where n and m are non-negative integers without common integer multipliers, the ball returns exactly to the launch point O, after bouncing m times from each wall of length a, and n times from each wall of length b. (Red lines in Fig. 7a show an example of such a trajectory for n = m = 1, while blue lines, for m = 3, n = 1.) The larger is the sum (m + n), the more complex is the corresponding closed "orbit".
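The mirror-unfolding argument behind Eq. (16) is easy to verify numerically: unfolding the reflections turns the trajectory into a straight line, and folding it back onto the table is a simple triangle-wave map. (The table sides, launch point, and the integers n, m below are arbitrary illustration values, not taken from the text.)

```python
import math

def fold(u, L):
    """Fold the 'unfolded' (mirror-imaged) coordinate u back onto the table side [0, L]."""
    u = u % (2.0*L)
    return u if u <= L else 2.0*L - u

a, b = 1.3, 0.7        # table side lengths (arbitrary)
x0, y0 = 0.3, 0.2      # launch point O (arbitrary)
n, m = 1, 3            # commensurability integers of Eq. (16)

# Launch velocity with tan(phi) = (m*b)/(n*a); the overall speed only rescales time.
vx, vy = n*a, m*b

# After advancing by 2*n*a horizontally and 2*m*b vertically (t = 2 in these units),
# the straight unfolded trajectory folds back exactly onto the launch point:
x_T = fold(x0 + vx*2.0, a)
y_T = fold(y0 + vy*2.0, b)

# An incommensurate direction (tan(phi) rescaled by sqrt(2)) does not return:
y_T_irr = fold(y0 + vy*math.sqrt(2.0)*2.0, b)
```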

14 A more scientific-sounding name for such a reflection is specular – from the Latin word “speculum” meaning a metallic mirror.

15 Indeed, it is fully described by the following Lagrangian function: L = mv²/2 – U(ρ), with U(ρ) = 0 for the 2D radius vectors ρ belonging to the table area, and U(ρ) = +∞ outside the area.


Finally, if (n + m) → ∞, i.e. tan φ and b/a are incommensurate (meaning that their ratio is an irrational number), the trajectory covers all of the table area, and the ball never returns exactly to the launch point. Still, this is not genuine chaos. Indeed, a small shift of the launch point O shifts all the trajectory fragments by the same displacement. Moreover, at any time t, each of the Cartesian components vⱼ(t) of the ball's velocity (with the coordinate axes parallel to the table sides) may take only two values, ±vⱼ(0), and hence may vary only as much as the initial velocity is being changed.

In 1963, i.e. well before E. Lorenz’s work, Yakov Sinai showed that the situation changes completely if an additional wall, in the shape of a circle, is inserted into the rectangular billiard (Fig. 8).

For most initial conditions, the ball's trajectory eventually runs into the circle (see the red line on panel (a) as an example), and its further evolution becomes essentially chaotic. Indeed, let us consider the ball's reflection from the circle-shaped wall – Fig. 8b. Due to the conservation of the tangential velocity, and the sign change of the normal velocity component, the reflection obeys a simple law: φᵣ = φᵢ. Figure 8b shows that as a result, the magnitude of a small difference Δφ between the angles of two close trajectories (as measured in the lab system) doubles at each reflection from the curved wall. This means that the small deviation grows along the ball trajectory as

$$\left|\Delta\varphi_N\right| \sim \left|\Delta\varphi_0\right| 2^N \equiv \left|\Delta\varphi_0\right| e^{N \ln 2}, \tag{9.17}$$

where N is the number of reflections from the convex wall.16 As we already know, such exponential divergence of trajectories, with a positive Lyapunov exponent, is the main feature of deterministic chaos.17
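The exponential divergence (17) can be watched directly in a crude numerical model of a Sinai table: a unit square with a central circular scatterer. All particular values below (the scatterer radius R = 0.2, the launch point and angle, the run time) are this sketch's assumptions; the time-stepped collision handling is deliberately simple rather than exact.

```python
import math

R, CX, CY = 0.2, 0.5, 0.5    # scatterer radius and center (assumed values)

def step(x, y, vx, vy, dt, scatterer):
    """One time step in the unit-square table: free flight plus specular
    reflections off the straight walls and, optionally, off the central circle."""
    x, y = x + vx*dt, y + vy*dt
    if x < 0.0: x, vx = -x, -vx          # mirror reflections off the
    if x > 1.0: x, vx = 2.0 - x, -vx     # four straight walls
    if y < 0.0: y, vy = -y, -vy
    if y > 1.0: y, vy = 2.0 - y, -vy
    if scatterer:
        dx, dy = x - CX, y - CY
        r = math.hypot(dx, dy)
        if r < R:                         # hit the circle: reverse v_n, keep v_tau,
            nx, ny = dx/r, dy/r           # and put the ball back onto the wall
            vn = vx*nx + vy*ny
            vx, vy = vx - 2.0*vn*nx, vy - 2.0*vn*ny
            x, y = CX + nx*R, CY + ny*R
    return x, y, vx, vy

def max_separation(scatterer, dphi=1e-7, T=200.0, dt=1e-3):
    """Largest distance reached between two trajectories launched from the same
    point at angles phi and phi + dphi."""
    phi = 0.7
    s1 = (0.05, 0.05, math.cos(phi), math.sin(phi))
    s2 = (0.05, 0.05, math.cos(phi + dphi), math.sin(phi + dphi))
    sep = 0.0
    for _ in range(int(T/dt)):
        s1 = step(*s1, dt, scatterer)
        s2 = step(*s2, dt, scatterer)
        sep = max(sep, math.hypot(s1[0] - s2[0], s1[1] - s2[1]))
    return sep

sep_plain = max_separation(scatterer=False)  # rectangular table: linear growth only
sep_sinai = max_separation(scatterer=True)   # Sinai table: exponential divergence
```

In the plain rectangle, the separation stays of the order of T·Δφ; with the scatterer present, it blows up to the table size, in line with Eq. (17).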

Fig. 9.8. (a) Motion on a Sinai billiard table, and (b) the mechanism of the exponential divergence of close trajectories. (On panel (b): φᵢ and φᵣ are the incidence and reflection angles at the circular wall of radius R centered at point O; Δφ is the small angle between two close trajectories.)

The most important new feature of the dynamic chaos in Hamiltonian systems is its dependence on initial conditions. (In the systems discussed in the previous two sections, which lack the integrals of motion, the initial conditions are rapidly "forgotten", and the chaos is usually characterized after an initial transient period – see, e.g., Fig. 4.) Indeed, even a Sinai billiard allows periodic motion, along closed orbits, under certain initial conditions – see the blue and green lines in Fig. 8a as examples. Thus the chaos "depth" in such systems may be characterized by the "fraction"18 of the phase space of initial parameters (for a 2D billiard, of the 3D space of the initial values of x, y, and φ) resulting in chaotic trajectories.

16 Superficially, Eq. (17) is also valid for a plane wall, but as was discussed above, a billiard with such walls features a full correlation between sequential reflections, so that the angle φ always returns to its initial value. In a Sinai billiard, such a correlation disappears. Concave walls may also make a billiard chaotic; a famous example is the stadium billiard, suggested by Leonid Bunimovich in 1974, with two straight, parallel walls connecting two semi-circular, concave walls. Another example, which allows a straightforward analysis (first carried out by Martin Gutzwiller in the 1980s), is the so-called Hadamard billiard: an infinite (or rectangular) table with a non-horizontal surface of negative curvature.

17 Curved-wall billiards are also a convenient platform for studies of quantum properties of classically chaotic systems (for their conceptual discussion, see QM Sec. 3.5), in particular, of the features called "quantum scars" – see, e.g., the spectacular numerical simulation results by E. Heller, Phys. Rev. Lett. 53, 1515 (1984).

This conclusion is also valid for Hamiltonian systems that are met in physics much more frequently than exotic billiards, for example, coupled nonlinear oscillators without damping. Perhaps the earliest and the most popular example is the so-called Hénon-Heiles system,19 which may be described by the following Lagrangian function:

$$L = \frac{m_1}{2}\left(\dot q_1^2 - \omega_1^2 q_1^2\right) + \frac{m_2}{2}\left(\dot q_2^2 - \omega_2^2 q_2^2\right) - \varepsilon\, q_2\left(q_1^2 - \frac{q_2^2}{3}\right). \tag{9.18}$$

It is straightforward to use this function to derive the corresponding Lagrange equations of motion of the Hénon-Heiles system,

$$m_1\ddot q_1 + m_1\omega_1^2 q_1 = -2\varepsilon\, q_1 q_2, \qquad m_2\ddot q_2 + m_2\omega_2^2 q_2 = -\varepsilon\left(q_1^2 - q_2^2\right), \tag{9.19}$$

and find their first integral of motion (physically, the energy conservation law):

$$H = E = \frac{m_1}{2}\left(\dot q_1^2 + \omega_1^2 q_1^2\right) + \frac{m_2}{2}\left(\dot q_2^2 + \omega_2^2 q_2^2\right) + \varepsilon\, q_2\left(q_1^2 - \frac{q_2^2}{3}\right) = \text{const}. \tag{9.20}$$

In the context of our discussions in Chapters 5 and 6, Eqs. (19) may be readily interpreted as those describing two oscillators, with small-oscillation frequencies ω₁ and ω₂, coupled only by the quadratic terms on the right-hand sides of the equations. This means that as the oscillation amplitudes A₁,₂, and hence the total energy E of the system, are close to zero, the oscillator subsystems are virtually independent, each performing sinusoidal oscillations at its own frequency. This observation suggests a convenient way to depict the system's motion.20 Let us consider a Poincaré plane for one of the oscillators (say, with the coordinate q₂), similar to that discussed in Sec. 2 above, with the only difference that (because of the absence of an explicit function of time in the system's equations), the trajectory on the phase plane [q₂, q̇₂] is highlighted at the moments when q₁ = 0.

Let us start from the limit A₁,₂ → 0, when the oscillations of q₂ are virtually sinusoidal. As we already know (see Fig. 5.9 and its discussion), if the representation point highlighting was perfectly synchronous with the frequency ω₂ of the oscillations, there would be only one point on the Poincaré plane – see, e.g., the right top panel of Fig. 4. However, at the q₁-initiated highlighting, there is no such synchronism, so that each period, a different point of the elliptical (at the proper scaling of the velocity, circular) trajectory is highlighted, so that the resulting points, for certain initial conditions, reside on a circle of radius A₂. If we now vary the initial conditions, i.e. redistribute the initial energy between the oscillators, but keep the total energy E constant, on the Poincaré plane we get a set of ellipses.

18 Actually, the quantitative characterization of the fraction is not trivial, because it may have fractal dimensionality. Unfortunately, due to lack of time, I have to refer the reader interested in this issue to special literature, e.g., the monograph by B. Mandelbrot (cited above) and references therein.

19 It was first studied in 1964 by M. Hénon and C. Heiles as a simple model of star rotation about a galactic center. Most studies of this equation have been carried out for the following particular case: m₂ = 2m₁, m₁ω₁² = m₂ω₂². In this case, by introducing suitably normalized variables x ∝ q₁, y ∝ q₂, and τ ≡ ω₁t, it is possible to rewrite Eqs. (19) in a parameter-free form. All the results shown in Fig. 9 below are for this case.

20 Generally, the system has a trajectory in 4D space, e.g., that of coordinates q₁,₂ and their time derivatives, although the first integral of motion (20) means that for each fixed energy E, the motion is limited to a 3D subspace. Still, this is one dimension too many for a convenient representation of the motion.

Now, if the initial energy is increased, the nonlinear interaction of the oscillations starts to deform these ellipses, also causing their crossings – see, e.g., the top left panel of Fig. 9. Still, below a certain threshold value of E, all Poincaré points belonging to a certain initial condition sit on a single closed contour. Moreover, these contours may be calculated approximately, but with pretty good accuracy, using a straightforward generalization of the method discussed in Sec. 5.2.21
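The construction described above can be sketched numerically. The fragment below (an assumed illustration, not the original computation) integrates the Hénon-Heiles system in the commonly used dimensionless form x″ = −x − 2xy, y″ = −y − x² + y², which conserves E = (ẋ² + ẏ²)/2 + (x² + y²)/2 + x²y − y³/3, and collects the [y, ẏ] Poincaré points at the upward crossings of x = 0, at the low energy e = 1/12 where the motion is still regular:

```python
import math

def deriv(s):
    """The dimensionless Henon-Heiles equations as four first-order ODEs."""
    x, y, vx, vy = s
    return (vx, vy, -x - 2.0*x*y, -y - x*x + y*y)

def rk4(s, dt):
    k1 = deriv(s)
    k2 = deriv(tuple(a + dt/2*b for a, b in zip(s, k1)))
    k3 = deriv(tuple(a + dt/2*b for a, b in zip(s, k2)))
    k4 = deriv(tuple(a + dt*b for a, b in zip(s, k3)))
    return tuple(a + dt/6*(b + 2.0*c + 2.0*d + e)
                 for a, b, c, d, e in zip(s, k1, k2, k3, k4))

def energy(s):
    x, y, vx, vy = s
    return (vx*vx + vy*vy)/2 + (x*x + y*y)/2 + x*x*y - y**3/3

E, y0 = 1.0/12.0, 0.1                        # assumed initial condition on the section
vx0 = math.sqrt(2.0*E - y0*y0 + 2.0*y0**3/3) # fixes the energy, with x0 = vy0 = 0
s, dt = (0.0, y0, vx0, 0.0), 0.005
section = []                                 # the [y, y'] Poincare points
for _ in range(40000):                       # integrate up to t = 200
    s_prev, s = s, rk4(s, dt)
    if s_prev[0] < 0.0 <= s[0] and s[2] > 0.0:   # upward crossing of x = 0
        section.append((s[1], s[3]))
drift = abs(energy(s) - E)
```

Raising e toward 1/6, and scanning the initial conditions, reproduces the scattered-point behavior of the top right and bottom panels of Fig. 9.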

Fig. 9.9. Poincaré planes of the Hénon-Heiles system (19), in the notation y ∝ q₂, for three values of the dimensionless energy e ≡ E/E₀ (e = 1/12, 1/8, and 1/6), with E₀ ≡ m₁ω₁²/2. Adapted from M. Hénon and C. Heiles, The Astron. J. 69, 73 (1964). © AAS, reproduced with permission.

However, starting from some value of energy, certain initial conditions lead to sequences of points scattered over parts of the Poincaré plane with a nonzero area – see the top right panel of Fig. 9. This means that the corresponding oscillations q₂(t) do not repeat from one (quasi-) period to the next one – cf. Fig. 4 for the dissipative, forced pendulum. This is chaos.22 However, some other initial conditions still lead to closed contours. This feature is similar to that in Sinai billiards, and is typical for Hamiltonian systems. As the energy is increased, larger and larger parts of the Poincaré plane correspond to the chaotic motion, signifying deeper and deeper chaos – see the bottom panel of Fig. 9.

21 See, e.g., M. Berry, in: S. Jorna (ed.), Topics in Nonlinear Dynamics, AIP Conf. Proc. No. 46, AIP, 1978, pp. 16-120.

22 This fact complies with the necessary condition of chaos, discussed at the end of Sec. 2, because Eqs. (19) may be rewritten as a system of four differential equations of the first order.

9.4. Chaos and turbulence

This extremely short section consists of essentially just one statement, extending the discussion in Sec. 8.5. The (re-)discovery of the deterministic chaos in systems with just a few degrees of freedom in the 1960s changed the tone of the debates concerning the origins of turbulence very considerably. At first, an extreme point of view that equated the notions of chaos and turbulence became the debate's favorite.23 However, after the initial excitement, a significant role of the Richardson-style energy-cascade mechanisms, involving many degrees of freedom, was rediscovered and could not be ignored any longer. To the best knowledge of this author, who is a very distant albeit interested observer of that field, most experimental and numerical-simulation data carry features of both mechanisms, so that the debate continues.24 Due to the age difference, most readers of these notes have much better chances than the author to see where this discussion will eventually lead.25

9.5. Exercise problems

9.1. Generalize the reasoning of Sec. 1 to an arbitrary 1D map qₙ₊₁ = f(qₙ), with a function f(q) differentiable at all points of interest. In particular, derive the condition of stability of an N-point limit cycle q⁽¹⁾ → q⁽²⁾ → … → q⁽ᴺ⁾ → q⁽¹⁾ → …

9.2. Use the stability condition derived in the previous problem, to analyze the possibility of the deterministic chaos in the so-called tent map, with

$$f(q) = \begin{cases} rq, & \text{for } 0 \le q \le 1/2, \\ r(1 - q), & \text{for } 1/2 \le q \le 1, \end{cases} \qquad \text{with } 0 < r \le 2.$$

9.3. Find the conditions of existence and stability of fixed points of the so-called standard circle map:

$$q_{n+1} = q_n + \Delta + \frac{K}{2\pi}\sin 2\pi q_n,$$

where qₙ are real numbers defined modulo 1 (i.e. with qₙ + 1 identified with qₙ), while Δ and K are constant parameters. Discuss the relevance of the result for the phase locking of self-oscillators – see, e.g., Sec. 5.4.

23 An important milestone in that way was the work by S. Newhouse et al., Comm. Math. Phys. 64, 35 (1978), who proved the existence of a strange attractor in a rather abstract model of fluid flow.

24 See, e.g., U. Frisch, Turbulence: The Legacy of A. N. Kolmogorov, Cambridge U. Press, 1996.

25 The reader interested in the deterministic chaos as such may like to have a look at a very popular book by S.

Strogatz, Nonlinear Dynamics and Chaos, Westview, 2001.


9.4. Find the conditions of existence and stability of fixed points of the so-called Hénon map:26

$$q_{n+1} = 1 - a q_n^2 + p_n, \qquad p_{n+1} = b q_n, \qquad \text{with } a, b > 0.$$

9.5. Is the deterministic chaos possible in our “testbed” problem shown in Fig. 2.1? What if an additional periodic external force is applied to the bead? Explain your answers.

26 This map, first explored by M. Hénon in 1976 (for a particular set of constants a and b), has played an important historic role in the study of strange attractors.
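To get a feel for why this map became famous, one may iterate it at the parameter values of Hénon's original 1976 paper, a = 1.4, b = 0.3 (the starting point and the transient length below are this sketch's arbitrary choices): the orbit remains bounded yet never settles onto a short periodic cycle, tracing out the strange attractor.

```python
# Iterate the Henon map of Problem 9.4: q_{n+1} = 1 - a*q_n^2 + p_n, p_{n+1} = b*q_n,
# for the parameter values of the original 1976 paper.
a, b = 1.4, 0.3
q, p = 0.0, 0.0
orbit = []
for n in range(10100):
    q, p = 1.0 - a*q*q + p, b*q     # simultaneous update: both use the old q
    if n >= 100:                    # discard the initial transient
        orbit.append((q, p))
qs = [q for q, _ in orbit]
```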


Chapter 10. A Bit More of Analytical Mechanics

This concluding chapter reviews two alternative approaches to analytical mechanics, whose major value is a closer parallel to quantum mechanics in general and its quasiclassical (WKB) approximation in particular. One of them, the Hamiltonian formalism, is also convenient for the derivation of an important asymptotic result, the adiabatic invariance, for classical systems with slowly changing parameters.

10.1. Hamilton equations

Throughout this course, we have seen how analytical mechanics, in its Lagrangian form, is invaluable for solving various particular problems of classical mechanics. Now let us discuss several alternative formulations1 that may not be much more useful for this purpose, but shed additional light on possible extensions of classical mechanics, most importantly to quantum mechanics.

As was already discussed in Sec. 2.3, the partial derivative pⱼ ≡ ∂L/∂q̇ⱼ participating in the Lagrange equation (2.19),

$$\frac{d}{dt}\frac{\partial L}{\partial \dot q_j} - \frac{\partial L}{\partial q_j} = 0, \tag{10.1}$$

may be considered as the generalized momentum corresponding to the generalized coordinate qⱼ, and the full set of these momenta may be used to define the Hamiltonian function (2.32):

$$H = \sum_j p_j \dot q_j - L. \tag{10.2}$$

Now let us rewrite the full differential of this function2 in the following form:

$$dH = d\Big(\sum_j p_j \dot q_j - L\Big) = \sum_j \big[d(p_j)\,\dot q_j + p_j\, d(\dot q_j)\big] - dL = \sum_j \big[d(p_j)\,\dot q_j + p_j\, d(\dot q_j)\big] - \bigg[\frac{\partial L}{\partial t}dt + \sum_j \Big(\frac{\partial L}{\partial q_j}dq_j + \frac{\partial L}{\partial \dot q_j}d\dot q_j\Big)\bigg]. \tag{10.3}$$

According to the definition of the generalized momentum, the second terms of each sum over j in the last expression cancel each other, while according to the Lagrange equation (1), the derivative ∂L/∂qⱼ is equal to ṗⱼ, so that

$$dH = -\frac{\partial L}{\partial t}dt + \sum_j \big(\dot q_j\, dp_j - \dot p_j\, dq_j\big). \tag{10.4}$$

So far, this is just a universal identity. Now comes the main trick of Hamilton's approach: let us consider H as a function of the following independent arguments: time t, the generalized coordinates qⱼ,

1 Due to not only William Rowan Hamilton (1805-1865), but also Carl Gustav Jacob Jacobi (1804-1851).

So far, this is just a universal identity. Now comes the main trick of Hamilton’s approach: let us consider H as a function of the following independent arguments: time t, the generalized coordinates qj, 1 Due to not only William Rowan Hamilton (1805-1865), but also Carl Gustav Jacob Jacobi (1804-1851).

2 Actually, this differential was already spelled out (but partly and implicitly) in Sec. 2.3 – see Eqs. (2.33)-(2.35).

© K. Likharev


and the generalized momenta pⱼ – rather than the generalized velocities q̇ⱼ as in the Lagrangian formalism. With this new commitment, the general "chain rule" of differentiation of a function of several arguments gives

$$dH = \frac{\partial H}{\partial t}dt + \sum_j \Big(\frac{\partial H}{\partial q_j}dq_j + \frac{\partial H}{\partial p_j}dp_j\Big), \tag{10.5}$$

where dt, dqⱼ, and dpⱼ are independent differentials. Since Eq. (5) should be valid for any choice of these argument differentials, it should hold in particular if they correspond to the real law of motion, for which Eq. (4) is valid as well. The comparison of Eqs. (4) and (5) gives us three relations:

$$\frac{\partial H}{\partial t} = -\frac{\partial L}{\partial t}, \tag{10.6}$$

$$\dot q_j = \frac{\partial H}{\partial p_j}, \qquad \dot p_j = -\frac{\partial H}{\partial q_j}. \tag{10.7}$$

Comparing the first of them with Eq. (2.35), we see that

$$\frac{dH}{dt} = \frac{\partial H}{\partial t}, \tag{10.8}$$

meaning that the function H(t, qⱼ, pⱼ) can change in time only via its explicit dependence on t. The two equations (7) are even more substantial: provided that such a function H(t, qⱼ, pⱼ) has been calculated, they give us two first-order differential equations (called the Hamilton equations) for the time evolution of the generalized coordinate and generalized momentum of each degree of freedom of the system.3

Let us have a look at these equations for the simplest case of a system with one degree of freedom, with the Lagrangian function (3.3):

$$L = \frac{m_{\text{ef}}}{2}\dot q^2 - U_{\text{ef}}(q, t). \tag{10.9}$$

In this case, p ≡ ∂L/∂q̇ = m_ef q̇, and H = pq̇ − L = m_ef q̇²/2 + U_ef(q, t). To honor our new commitment, we need to express the Hamiltonian function explicitly via t, q, and p (rather than q̇). From the above expression for p, we immediately have q̇ = p/m_ef; plugging this expression back into Eq. (9), we get

$$H = \frac{p^2}{2m_{\text{ef}}} + U_{\text{ef}}(q, t). \tag{10.10}$$

Now we can spell out Eqs. (7) for this particular case:

$$\dot q = \frac{\partial H}{\partial p} = \frac{p}{m_{\text{ef}}}, \tag{10.11}$$

$$\dot p = -\frac{\partial H}{\partial q} = -\frac{\partial U_{\text{ef}}}{\partial q}. \tag{10.12}$$

3 Of course, the right-hand side of each equation (7) may include coordinates and momenta of other degrees of freedom as well, so that the equations of motion for different j are generally coupled.


While the first of these equations just repeats the definition of the generalized momentum corresponding to the coordinate q, the second one gives the equation of momentum’s change.

Differentiating Eq. (11) over time, and plugging Eq. (12) into the result, we get:

$$\ddot q = \frac{\dot p}{m_{\text{ef}}} = -\frac{1}{m_{\text{ef}}}\frac{\partial U_{\text{ef}}}{\partial q}. \tag{10.13}$$

So, we have returned to the same equation (3.4) that had been derived from the Lagrangian approach.4

Thus, the Hamiltonian formalism does not give much help for the solution of this problem – and indeed most problems of classical mechanics. (This is why its discussion had been postponed until the very end of this course.) Moreover, since the Hamiltonian function H( t, qj, pj) does not include generalized velocities explicitly, the phenomenological introduction of dissipation in this approach is less straightforward than that in the Lagrangian equations, whose precursor form (2.17) is valid for dissipative forces as well. However, the Hamilton equations (7), which treat the generalized coordinates and momenta in a manifestly symmetric way, are heuristically fruitful – besides being very appealing aesthetically. This is especially true in the cases where these arguments participate in H in a similar way.

For example, in the very important case of a dissipation-free linear ("harmonic") oscillator, for which U_ef = κ_ef q²/2, Eq. (10) gives the symmetric form

$$H = \frac{p^2}{2m_{\text{ef}}} + \frac{\kappa_{\text{ef}}\, q^2}{2} = \frac{p^2}{2m_{\text{ef}}} + \frac{m_{\text{ef}}\,\omega_0^2\, q^2}{2}, \qquad \text{where } \omega_0^2 \equiv \frac{\kappa_{\text{ef}}}{m_{\text{ef}}}. \tag{10.14}$$

The Hamilton equations (7) for this system preserve that symmetry; this is especially evident if we introduce the normalized momentum 𝒫 ≡ p/(m_ef ω₀) (already used in Secs. 5.6 and 9.2):

$$\frac{dq}{dt} = \omega_0\, \mathcal{P}, \qquad \frac{d\mathcal{P}}{dt} = -\omega_0\, q. \tag{10.15}$$
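The symmetric form (15) lends itself to a quick numerical sanity check: the phase point (q, 𝒫) should rotate uniformly along the circle q² + 𝒫² = const. A minimal sketch (the value of ω₀ and the initial condition below are arbitrary):

```python
import math

def rk4_step(q, P, omega0, dt):
    """One RK4 step for Eqs. (15): dq/dt = omega0*P, dP/dt = -omega0*q."""
    def deriv(q, P):
        return omega0*P, -omega0*q
    k1 = deriv(q, P)
    k2 = deriv(q + dt/2*k1[0], P + dt/2*k1[1])
    k3 = deriv(q + dt/2*k2[0], P + dt/2*k2[1])
    k4 = deriv(q + dt*k3[0], P + dt*k3[1])
    return (q + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            P + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

omega0, dt, n_steps = 1.7, 1e-3, 5000
q, P = 1.0, 0.0
for _ in range(n_steps):
    q, P = rk4_step(q, P, omega0, dt)
t = n_steps*dt          # t = 5.0

# Exact solution for this initial condition: q = cos(omega0*t), P = -sin(omega0*t),
# with q^2 + P^2 (proportional to H of Eq. (14)) staying constant.
```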

More practically, the Hamilton approach gives additional tools for the search for the integrals of motion. To see that, let us consider the full time derivative of an arbitrary function f(t, qⱼ, pⱼ):

$$\frac{df}{dt} = \frac{\partial f}{\partial t} + \sum_j \Big(\frac{\partial f}{\partial q_j}\dot q_j + \frac{\partial f}{\partial p_j}\dot p_j\Big). \tag{10.16}$$

Plugging in q and p

j

 from the Hamilton equations (7), we get

j

Dynamics

df

f

H

f

H

f

 

f

of arbitrary

variable

 

  H, f .

(10.17)

dt

t

j

p

q

q

p

t

j

j

j

j

The last term on the right-hand side of this expression is the so-called Poisson bracket,5 and is defined, for two arbitrary functions f(t, qⱼ, pⱼ) and g(t, qⱼ, pⱼ), as

4 The reader is strongly encouraged to perform a similar check for a few more problems, for example those listed at the end of the chapter, to get a better feeling of how the Hamiltonian formalism works.

5 Named after Siméon Denis Poisson (1781-1840), of the Poisson equation and the Poisson statistical distribution fame.


$$\{g, f\} \equiv \sum_j \Big(\frac{\partial g}{\partial p_j}\frac{\partial f}{\partial q_j} - \frac{\partial f}{\partial p_j}\frac{\partial g}{\partial q_j}\Big). \tag{10.18}$$

From this definition, one can readily verify that besides the evident relations {f, f} = 0 and {f, g} = −{g, f}, the Poisson brackets obey the following important Jacobi identity:

$$\{f, \{g, h\}\} + \{g, \{h, f\}\} + \{h, \{f, g\}\} = 0. \tag{10.19}$$

Now let us use these relations for a search for integrals of motion. First, Eq. (17) shows that if a function $f$ does not depend on time explicitly, and

$$ \{H, f\} = 0, \qquad (10.20) $$

then $df/dt = 0$, i.e. that function is an integral of motion. Moreover, it turns out that if we already know two integrals of motion, say $f$ and $g$, then the following function,

$$ F \equiv \{f, g\}, \qquad (10.21) $$

is also an integral of motion – the so-called Poisson theorem. In order to prove it, we may use the Jacobi identity (19) with $h = H$. Next, using Eq. (17) to express the Poisson brackets $\{g, H\}$, $\{H, f\}$, and $\{H, \{f, g\}\} \equiv \{H, F\}$ via the full and partial time derivatives of the functions $f$, $g$, and $F$, we get

$$ \left\{f, \frac{\partial g}{\partial t} - \frac{dg}{dt}\right\} + \left\{g, \frac{df}{dt} - \frac{\partial f}{\partial t}\right\} + \frac{dF}{dt} - \frac{\partial F}{\partial t} = 0, \qquad (10.22) $$

so that if $f$ and $g$ are indeed integrals of motion, i.e., $df/dt = dg/dt = 0$, then

$$ \frac{dF}{dt} = \frac{\partial F}{\partial t} + \left\{g, \frac{\partial f}{\partial t}\right\} - \left\{f, \frac{\partial g}{\partial t}\right\} \equiv \frac{\partial F}{\partial t} - \left\{\frac{\partial f}{\partial t}, g\right\} - \left\{f, \frac{\partial g}{\partial t}\right\}. \qquad (10.23) $$

Plugging Eq. (21) into the first term of the right-hand side of this equation, and differentiating it by parts, we get dF/ dt = 0, i.e. F is indeed an integral of motion as well.
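The bracket algebra above is easy to verify symbolically. The sketch below (using sympy; the two degrees of freedom and the test functions f, g, h are arbitrary illustrative choices) implements the definition (18) and checks the antisymmetry, the Jacobi identity (19), and the fundamental brackets of the coordinates and momenta:

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
coords, moms = [q1, q2], [p1, p2]

def pb(g, f):
    """Poisson bracket {g, f} in the sign convention of Eq. (18)."""
    return sum(sp.diff(g, p) * sp.diff(f, q) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in zip(coords, moms))

# Arbitrary illustrative functions of the coordinates and momenta:
f = q1 * p2 - q2 * p1
g = p1**2 + p2**2
h = q1**2 + q2**2

assert sp.simplify(pb(f, f)) == 0                 # {f, f} = 0
assert sp.simplify(pb(f, g) + pb(g, f)) == 0      # antisymmetry

# Jacobi identity, Eq. (19):
jacobi = pb(f, pb(g, h)) + pb(g, pb(h, f)) + pb(h, pb(f, g))
assert sp.simplify(jacobi) == 0

# Fundamental brackets: in this sign convention, {q_j, p_j'} = -delta_jj'
print(pb(q1, p1), pb(q1, p2), pb(q1, q2))  # -1 0 0
```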

Finally, one more important role of the Hamilton formalism is that it allows one to trace the close formal connection between classical and quantum mechanics. Indeed, using Eq. (18) to calculate the Poisson brackets of the generalized coordinates and momenta, we readily get

$$ \{q_j, q_{j'}\} = 0, \qquad \{p_j, p_{j'}\} = 0, \qquad \{q_j, p_{j'}\} = -\delta_{jj'}. \qquad (10.24) $$

In quantum mechanics, the operators of these variables ("observables") obey commutation relations6

$$ [\hat{q}_j, \hat{q}_{j'}] = 0, \qquad [\hat{p}_j, \hat{p}_{j'}] = 0, \qquad [\hat{q}_j, \hat{p}_{j'}] = i\hbar\,\delta_{jj'}, \qquad (10.25) $$

where the definition of the commutator, $[\hat{g}, \hat{f}] \equiv \hat{g}\hat{f} - \hat{f}\hat{g}$, is to a certain extent7 similar to that (18) of

the Poisson bracket. We see that the classical relations (24) are similar to the quantum-mechanical relations (25) if the following parallel is made:

6 See, e.g., QM Sec. 2.1.


CM ↔ QM relation:

$$ \{g, f\} \;\longleftrightarrow\; \frac{i}{\hbar}\,[\hat{g}, \hat{f}]. \qquad (10.26) $$

This analogy extends well beyond Eqs. (24)-(25). For example, making the replacement (26) in Eq. (17), we get

$$ \frac{d\hat{f}}{dt} = \frac{\partial \hat{f}}{\partial t} + \frac{i}{\hbar}\,[\hat{H}, \hat{f}], \qquad \text{i.e.} \qquad i\hbar\,\frac{d\hat{f}}{dt} = i\hbar\,\frac{\partial \hat{f}}{\partial t} + [\hat{f}, \hat{H}], \qquad (10.27) $$

which is the correct equation of operator evolution in the Heisenberg picture of quantum mechanics.8

The parallel (26) may give important clues in the search for the proper quantum-mechanical operator of a given observable – which is not always elementary.

10.2. Adiabatic invariance

One more application of the Hamiltonian formalism in classical mechanics is the solution of the following problem.9 Earlier in the course, we already studied some effects of the time variation of parameters of a single oscillator (Sec. 5.5) and of coupled oscillators (Sec. 6.5). However, those discussions were focused on the case when the parameter variation speed is comparable with the own oscillation frequency (or frequencies) of the system. Another practically important case is when some system's parameter (let us call it $\lambda$) is changed much more slowly (adiabatically10):

$$ \left|\frac{\dot{\lambda}}{\lambda}\right| \ll \frac{1}{\mathcal{T}}, \qquad (10.28) $$

where $\mathcal{T}$ is a typical period of oscillations in the system. Let us consider a 1D system whose Hamiltonian $H(q, p, \lambda)$ depends on time only via such a slow evolution of the parameter $\lambda = \lambda(t)$, and whose initial energy restricts the system's motion to a finite coordinate interval – see, e.g., Fig. 3.2c.

Then, as we know from Sec. 3.3, if the parameter $\lambda$ is constant, the system performs a periodic (though not necessarily sinusoidal) motion back and forth along the $q$-axis, or, in a different language, along a closed trajectory on the phase plane $[q, p]$ – see Fig. 1.11 According to Eq. (8), in this case, $H$ is constant along the trajectory. (To distinguish this particular value of $H$ from the Hamiltonian function as such, I will call it $E$, implying that this constant coincides with the full mechanical energy $E$ – as it does for the Hamiltonian (10), though this assumption is not necessary for the calculation made below.) The oscillation period $\mathcal{T}$ may be calculated as a contour integral along this closed trajectory:

7 There is, of course, a conceptual difference between the "usual" products of the function derivatives participating in the Poisson brackets, and the operator "products" (meaning their sequential action on a state vector) forming the commutator.

8 See, e.g., QM Sec. 4.6.

9 Various aspects of this problem and its quantum-mechanical extensions were first discussed by L. Le Cornu (1895), Lord Rayleigh (1902), H. Lorentz (1911), P. Ehrenfest (1916), and M. Born and V. Fock (1928).

10 This term is also used in thermodynamics and statistical mechanics, where it implies not only a slow parameter variation (if any) but also thermal insulation of the system – see, e.g., SM Sec. 1.3. Evidently, the latter condition is irrelevant in our current context.

11 As a reminder, we discussed such phase-plane representations in Chapter 5 – see, e.g., Figs. 5.5, 5.9, and 5.16.


$$ \mathcal{T} = \int_0^{\mathcal{T}} dt = \oint \frac{dt}{dq}\,dq = \oint \frac{dq}{\dot{q}}. \qquad (10.29) $$

Using the first of the Hamilton equations (7), we may represent this integral as

$$ \mathcal{T} = \oint \frac{dq}{\partial H/\partial p}. \qquad (10.30) $$

At each given point $q$, $H = E$ is a function of $p$ alone, so that we may flip the partial derivative in the denominator just as if it were a full derivative, and rewrite Eq. (30) as

$$ \mathcal{T} = \oint \frac{\partial p}{\partial E}\,dq. \qquad (10.31) $$

For the particular Hamiltonian (10), this relation is immediately reduced to Eq. (3.27), now in the form of a contour integral:

$$ \mathcal{T} = \left(\frac{m_{\text{ef}}}{2}\right)^{1/2} \oint \frac{dq}{\left[E - U_{\text{ef}}(q)\right]^{1/2}}. \qquad (10.32) $$
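Eq. (32) lends itself to direct numerical evaluation. A sketch for the harmonic potential (all parameter values below are illustrative), where the result may be compared with the familiar $\mathcal{T} = 2\pi(m_{\text{ef}}/\kappa_{\text{ef}})^{1/2}$; the inverse-square-root singularities at the turning points are integrable, and the adaptive quadrature copes with them:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

m_ef, kappa_ef, E = 1.3, 2.7, 1.0      # illustrative parameter values
U = lambda q: 0.5 * kappa_ef * q**2    # U_ef(q) = kappa_ef * q^2 / 2

# Turning point: the positive root of E - U(q) = 0
A = brentq(lambda q: E - U(q), 0.0, 10.0)

# Eq. (32): the closed contour traverses the interval [-A, A] twice
I, _ = quad(lambda q: 1.0 / np.sqrt(E - U(q)), -A, A)
T = np.sqrt(m_ef / 2.0) * 2.0 * I

T_exact = 2.0 * np.pi * np.sqrt(m_ef / kappa_ef)  # 2*pi/omega_0
print(T, T_exact)  # agree to quadrature accuracy
```

The same few lines work for an anharmonic $U(q)$ as well, once the turning points are found numerically.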

[Figure: two closed contours, $H(p, q, \lambda) = E$ and $H(p, q, \lambda) = E'$, on the phase plane $[q, p]$]

Fig. 10.1. Phase-plane representation of periodic oscillations of a 1D Hamiltonian system, for two values of energy (schematically).

Naively, it may seem that these formulas may also be used to find the change of the motion period when the parameter $\lambda$ is changed adiabatically, for example, by plugging the given functions $m_{\text{ef}}(\lambda)$ and $U_{\text{ef}}(q, \lambda)$ into Eq. (32). However, there is no guarantee that the energy $E$ in that integral would stay constant as the parameter changes, and indeed we will see below that this is not necessarily the case. Even more interestingly, in the most important case of the harmonic oscillator ($U_{\text{ef}} = \kappa_{\text{ef}} q^2/2$), whose oscillation period $\mathcal{T}$ does not depend on $E$ (see Eq. (3.29) and its discussion), its variation in the adiabatic limit (28) may be readily predicted: $\mathcal{T}(\lambda) = 2\pi/\omega_0(\lambda) = 2\pi\left[m_{\text{ef}}(\lambda)/\kappa_{\text{ef}}(\lambda)\right]^{1/2}$, but the dependence of the oscillation energy $E$ (and hence of the oscillation amplitude) on $\lambda$ is not immediately obvious.

In order to address this issue, let us use Eq. (8) (with $E = H$) to represent the rate of the energy change with $\lambda(t)$, i.e. in time, as

$$ \frac{dE}{dt} = \frac{\partial H}{\partial t} = \frac{\partial H}{\partial \lambda}\,\frac{d\lambda}{dt}. \qquad (10.33) $$

Since we are interested in a very slow (adiabatic) time evolution of the energy, we can average Eq. (33) over fast oscillations in the system, for example over one oscillation period $\mathcal{T}$, treating $d\lambda/dt$ as a constant during this averaging. (This is the most critical point of this argumentation, because at any non-vanishing rate of parameter change the oscillations are, strictly speaking, non-periodic.12) The averaging yields

$$ \overline{\frac{dE}{dt}} = \frac{d\lambda}{dt}\,\overline{\frac{\partial H}{\partial \lambda}} = \frac{d\lambda}{dt}\,\frac{1}{\mathcal{T}} \int_0^{\mathcal{T}} \frac{\partial H}{\partial \lambda}\,dt. \qquad (10.34) $$

Transforming this time integral into a contour one, just as we did at the transition from Eq. (29) to Eq. (30), and then using Eq. (31) for $\mathcal{T}$, we get

$$ \overline{\frac{dE}{dt}} = \frac{d\lambda}{dt}\, \frac{\displaystyle\oint \frac{\partial H/\partial \lambda}{\partial H/\partial p}\,dq}{\displaystyle\oint \frac{\partial p}{\partial E}\,dq}. \qquad (10.35) $$

At each point $q$ of the contour, $H$ is a function of not only $\lambda$ but also of $p$, which may be $\lambda$-dependent as well, so that if $E$ is fixed, the partial differentiation of the relation $E = H$ over $\lambda$ yields

$$ \frac{\partial H}{\partial \lambda} + \frac{\partial H}{\partial p}\,\frac{\partial p}{\partial \lambda} = 0, \qquad \text{i.e.} \qquad \frac{\partial H/\partial \lambda}{\partial H/\partial p} = -\frac{\partial p}{\partial \lambda}. \qquad (10.36) $$

Plugging the last relation into Eq. (35), we get

$$ \overline{\frac{dE}{dt}} = -\frac{d\lambda}{dt}\, \frac{\displaystyle\oint \frac{\partial p}{\partial \lambda}\,dq}{\displaystyle\oint \frac{\partial p}{\partial E}\,dq}. \qquad (10.37) $$

Since the left-hand side of Eq. (37) and the derivative $d\lambda/dt$ do not depend on $q$, we may move them into the integrals over $q$ as constants, and rewrite Eq. (37) as

$$ \oint \left( \frac{\partial p}{\partial E}\,\overline{\frac{dE}{dt}} + \frac{\partial p}{\partial \lambda}\,\frac{d\lambda}{dt} \right) dq = 0. \qquad (10.38) $$

Now let us consider the following integral over the same phase-plane contour (the action variable):

$$ J \equiv \frac{1}{2\pi} \oint p\,dq, \qquad (10.39) $$

called the action variable. Just to understand its physical sense, let us calculate $J$ for a harmonic oscillator (14). As we know very well from Chapter 5, for such an oscillator, $q = A\cos\Phi$, $p = -m_{\text{ef}}\omega_0 A\sin\Phi$ (with $\Phi = \omega_0 t + \text{const}$), so that $J$ may be easily expressed either via the oscillations' amplitude $A$, or via their energy $E = H = m_{\text{ef}}\omega_0^2 A^2/2$:

$$ J = \frac{1}{2\pi} \oint p\,dq = \frac{1}{2\pi} \int_0^{2\pi} m_{\text{ef}}\,\omega_0 A^2 \sin^2\!\Phi\; d\Phi = \frac{m_{\text{ef}}\,\omega_0}{2}\,A^2 = \frac{E}{\omega_0}. \qquad (10.40) $$
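The result (40) may be verified by discretizing the contour integral (39) directly (the parameter values below are illustrative):

```python
import numpy as np

m_ef, omega0, A = 1.0, 2.0, 0.7   # illustrative oscillator parameters

# One full period of the harmonic oscillation, parametrized by the phase Phi:
Phi = np.linspace(0.0, 2.0 * np.pi, 200001)
q = A * np.cos(Phi)
p = -m_ef * omega0 * A * np.sin(Phi)

# Eq. (39): J = (1/2pi) * closed contour integral of p dq (trapezoidal rule)
J = np.sum(0.5 * (p[:-1] + p[1:]) * np.diff(q)) / (2.0 * np.pi)

E = m_ef * omega0**2 * A**2 / 2.0
print(J, E / omega0)  # both ~0.49, confirming Eq. (40)
```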

12 Because of the implied nature of this conjecture (which is very close to the assumptions made at the derivation of the reduced equations in Sec. 5.3), new, more strict (but also much more cumbersome) proofs of the final Eq. (42) are still being offered in literature – see, e.g., C. Wells and S. Siklos, Eur. J. Phys. 28, 105 (2007) and/or A. Lobo et al., Eur. J. Phys. 33, 1063 (2012).


Returning to a general system with an adiabatically changed parameter $\lambda$, let us use the definition (39) of $J$ to calculate its time derivative, again taking into account that at each point $q$ of the trajectory, $p$ is a function of $E$ and $\lambda$:

$$ \frac{dJ}{dt} = \frac{1}{2\pi} \oint \frac{dp}{dt}\,dq = \frac{1}{2\pi} \oint \left( \frac{\partial p}{\partial E}\,\frac{dE}{dt} + \frac{\partial p}{\partial \lambda}\,\frac{d\lambda}{dt} \right) dq. \qquad (10.41) $$

Within the accuracy of our approximation, in which the contour integrals (38) and (41) are calculated along a closed trajectory, the factor $dE/dt$ is indistinguishable from its time average, and these integrals coincide, so that the result (38) is applicable to Eq. (41) as well. Hence, we have finally arrived at a very important result: at a slow parameter variation, $dJ/dt = 0$, i.e. the action variable remains constant:

$$ J = \text{const}. \qquad (10.42) $$

This is the famous adiabatic invariance.13 In particular, according to Eq. (40), in a harmonic oscillator the energy of oscillations changes in proportion to its own (slowly changed) frequency.
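This invariance is easy to observe numerically. The sketch below (all parameter values are illustrative, chosen to satisfy the adiabatic condition (28)) integrates a harmonic oscillator whose frequency is slowly ramped up by 50%: the energy grows, while the ratio $E/\omega$, i.e. the action variable (40), stays nearly constant:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Oscillator with a slowly drifting frequency: omega(t) = omega0*(1 + eps*t);
# eps/omega0 << 1 keeps the drift adiabatic. Values are illustrative.
m, omega0, eps = 1.0, 10.0, 0.01
omega = lambda t: omega0 * (1.0 + eps * t)

def rhs(t, y):
    q, p = y
    return [p / m, -m * omega(t)**2 * q]

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

for t in (0.0, 25.0, 50.0):
    q, p = sol.sol(t)
    E = p**2 / (2 * m) + m * omega(t)**2 * q**2 / 2
    print(t, E / omega(t))  # J = E/omega stays nearly constant while E grows ~50%
```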

Before moving on, let me briefly note that the adiabatic invariance is not the only application of the action variable $J$. Since the initial choice of generalized coordinates and velocities (and hence the generalized momenta) in analytical mechanics is arbitrary (see Sec. 2.1), it is almost evident that $J$ may be taken for a new generalized momentum corresponding to a certain new generalized coordinate $\Theta$,14 and that the pair $\{J, \Theta\}$ should satisfy the Hamilton equations (7), in particular,

$$ \frac{d\Theta}{dt} = \frac{\partial H}{\partial J}. \qquad (10.43) $$

Following the commitment of Sec. 1 (made there for the "old" arguments $q_j$, $p_j$), before the differentiation on the right-hand side of Eq. (43), $H$ should be expressed as a function (besides $t$) of the "new" arguments $J$ and $\Theta$. For time-independent Hamiltonian systems, $H$ is uniquely defined by $J$ – see, e.g., Eq. (40). Hence, in this case, the right-hand side of Eq. (43) does not depend on either $t$ or $\Theta$, so that according to that equation, $\Theta$ (called the angle variable) is a linear function of time:

$$ \Theta = \frac{\partial H}{\partial J}\,t + \text{const}. \qquad (10.44) $$

For a harmonic oscillator, according to Eq. (40), the derivative $\partial H/\partial J = \partial E/\partial J$ is just $\omega_0 \equiv 2\pi/\mathcal{T}$, so that $\Theta = \omega_0 t + \text{const}$, i.e. it is just the full phase $\Phi$ that was repeatedly used in this course – especially in Chapter 5. It may be shown that a more general form of this relation,

$$ \frac{\partial H}{\partial J} = \frac{2\pi}{\mathcal{T}}, \qquad (10.45) $$

13 For certain particular oscillators, e.g., a point pendulum, Eq. (42) may be also proved directly – an exercise highly recommended to the reader.

14 This, again, is a plausible argument but not a strict proof. Indeed, though, according to its definition (39), J is nothing more than a sum of several (formally, an infinite number of) values of the momentum p, they are not independent, but have to be selected on the same closed trajectory on the phase plane. For more mathematical rigor, the reader is referred to Sec. 45 of Mechanics by Landau and Lifshitz (which was repeatedly cited above), which discusses the general rules of the so-called canonical transformations from one set of Hamiltonian arguments to another one – say from {p, q} to {J, Θ}.


is valid for an arbitrary system described by Eq. (10). Thus, Eq. (44) becomes

$$ \Theta = 2\pi\,\frac{t}{\mathcal{T}} + \text{const}. \qquad (10.46) $$

This means that for an arbitrary (nonlinear) 1D oscillator, the angle variable $\Theta$ is a convenient generalization of the full phase $\Phi$. For this reason, the variables $J$ and $\Theta$ present a convenient tool for the discussion of certain fine points of the dynamics of strongly nonlinear oscillators – for whose discussion I, unfortunately, do not have time/space.15

10.3. The Hamilton principle

Now let me show that the Lagrange equations of motion, which were derived in Sec. 2.1 from the Newton laws, may also be obtained from the so-called Hamilton principle,16 namely the condition of a minimum (or rather an extremum) of the following integral, called the action:

$$ S = \int_{t_{\text{ini}}}^{t_{\text{fin}}} L\,dt, \qquad (10.47) $$

where t ini and t fin are, respectively, the initial and final moments of time, at which all generalized coordinates and velocities are considered fixed (not varied) – see Fig. 2.

[Figure: the actual motion and a virtual motion of the coordinates $q_j, \dot{q}_j$ between the fixed time points $t_{\text{ini}}$ and $t_{\text{fin}}$]

Fig. 10.2. Deriving the Hamilton principle.

The proof of that statement is rather simple. Considering, similarly to Sec. 2.1, a possible virtual variation of the motion, described by infinitesimal deviations $\{\delta q_j(t), \delta\dot{q}_j(t)\}$ from the real motion, the necessary condition for $S$ to be minimal is the Hamilton principle:

$$ \delta S = \int_{t_{\text{ini}}}^{t_{\text{fin}}} \delta L\,dt = 0, \qquad (10.48) $$

where $\delta S$ and $\delta L$ are the variations of the action and the Lagrange function, corresponding to the set $\{\delta q_j(t), \delta\dot{q}_j(t)\}$. As has been already discussed in Sec. 2.1, we can use the operation of variation just

15 An interested reader may be referred, for example, to Chapter 6 in J. Jose and E. Saletan, Classical Dynamics, Cambridge U. Press, 1998.

16 It is also called the “principle of least action”. (This name may be fairer in the context of a long history of the development of the principle, starting from its simpler particular forms, which includes the names of P. de Fermat, P. Maupertuis, L. Euler, and J.-L. Lagrange.)


as the usual differentiation (but at a fixed time, see Fig. 2), swapping these two operations if needed – see Fig. 2.3 and its discussion. Thus, we may write

$$ \delta L = \sum_j \left( \frac{\partial L}{\partial q_j}\,\delta q_j + \frac{\partial L}{\partial \dot{q}_j}\,\delta\dot{q}_j \right) = \sum_j \left( \frac{\partial L}{\partial q_j}\,\delta q_j + \frac{\partial L}{\partial \dot{q}_j}\,\frac{d}{dt}\,\delta q_j \right). \qquad (10.49) $$

After plugging the last expression into Eq. (48), we can integrate the second term by parts:

$$ \delta S = \int_{t_{\text{ini}}}^{t_{\text{fin}}} \sum_j \frac{\partial L}{\partial q_j}\,\delta q_j\,dt + \int_{t_{\text{ini}}}^{t_{\text{fin}}} \sum_j \frac{\partial L}{\partial \dot{q}_j}\,\frac{d(\delta q_j)}{dt}\,dt = \int_{t_{\text{ini}}}^{t_{\text{fin}}} \sum_j \frac{\partial L}{\partial q_j}\,\delta q_j\,dt + \left. \sum_j \frac{\partial L}{\partial \dot{q}_j}\,\delta q_j \right|_{t_{\text{ini}}}^{t_{\text{fin}}} - \int_{t_{\text{ini}}}^{t_{\text{fin}}} \sum_j \delta q_j\, d\!\left( \frac{\partial L}{\partial \dot{q}_j} \right) = 0. \qquad (10.50) $$

Since the generalized coordinates in the initial and final points are considered fixed (not affected by the variation), all $\delta q_j(t_{\text{ini}})$ and $\delta q_j(t_{\text{fin}})$ vanish, so that the second term in the last form of Eq. (50) vanishes as well. Now multiplying and dividing the last term of that expression by $dt$, we finally get

$$ \delta S = \int_{t_{\text{ini}}}^{t_{\text{fin}}} \sum_j \frac{\partial L}{\partial q_j}\,\delta q_j\,dt - \int_{t_{\text{ini}}}^{t_{\text{fin}}} \sum_j \delta q_j\,\frac{d}{dt}\!\left( \frac{\partial L}{\partial \dot{q}_j} \right) dt = \int_{t_{\text{ini}}}^{t_{\text{fin}}} \sum_j \left[ \frac{\partial L}{\partial q_j} - \frac{d}{dt}\!\left( \frac{\partial L}{\partial \dot{q}_j} \right) \right] \delta q_j\,dt = 0. \qquad (10.51) $$

This relation should hold for an arbitrary set of functions  qj( t), and for any time interval, and this is only possible if the expressions in the square brackets equal zero for all j, giving us the set of the Lagrange equations (2.19). So, the Hamilton principle indeed gives the Lagrange equations of motion.

It is fascinating to see how the Hamilton principle works for particular cases. As a very simple example, let us consider the usual 1D linear oscillator, with the Lagrangian function used so many times before in this course:

$$ L = \frac{m}{2}\,\dot{q}^2 - \frac{m\omega_0^2}{2}\,q^2. \qquad (10.52) $$

As we know very well, the Lagrange equations of motion for this $L$ are exactly satisfied by any sinusoidal function with the frequency $\omega_0$, in particular by a symmetric function of time,

$$ q_{\text{e}}(t) = A\cos\omega_0 t, \qquad \text{so that} \qquad \dot{q}_{\text{e}}(t) = -A\omega_0 \sin\omega_0 t. \qquad (10.53) $$

On a limited time interval, say $0 \leq \omega_0 t \leq +\pi/2$, this function is rather smooth and may be well approximated by another simple, reasonably selected function of time, for example

$$ q_{\text{a}}(t) = A\left(1 - \mu t^2\right), \qquad \text{so that} \qquad \dot{q}_{\text{a}}(t) = -2A\mu t, \qquad (10.54) $$

provided that the parameter $\mu$ is also selected reasonably. Let us take $\mu = (2\omega_0/\pi)^2$, so that the approximate function $q_{\text{a}}(t)$ coincides with the exact function $q_{\text{e}}(t)$ at both ends of our time interval (Fig. 3):

$$ q_{\text{a}}(t_{\text{ini}}) = q_{\text{e}}(t_{\text{ini}}) = A, \qquad q_{\text{a}}(t_{\text{fin}}) = q_{\text{e}}(t_{\text{fin}}) = 0, \qquad \text{where } t_{\text{ini}} = 0, \quad t_{\text{fin}} = \frac{\pi}{2\omega_0}, \qquad (10.55) $$

and check which of them the Hamilton principle “prefers”, i.e. which function gives the least action.


[Figure: plots of $q_{\text{e}}(t)$ and $q_{\text{a}}(t)$, normalized to $A$, over the time interval considered]

Fig. 10.3. Plots of the functions q(t) given by Eqs. (53) and (54).

An elementary calculation of the action (47) corresponding to these two functions yields

$$ S_{\text{e}} = \left( \frac{\pi}{8} - \frac{\pi}{8} \right) m A^2 \omega_0 = 0, \qquad S_{\text{a}} = \left( \frac{4}{3\pi} - \frac{2\pi}{15} \right) m A^2 \omega_0 \approx (0.4244 - 0.4189)\, m A^2 \omega_0 > 0, \qquad (10.56) $$

with the first terms in all the parentheses coming from the time integrals of the kinetic energy, and the second terms, from those of the potential energy.

This result shows, first, that the exact function of time, for which these two contributions exactly cancel,17 is indeed “preferable” for minimizing the action. Second, for the approximate function, the two contributions to the action are rather close to the exact ones, and hence almost cancel each other, signaling that this approximation is very reasonable. It is evident that in some cases when the exact analytical solution of the equations of motion cannot be found, the minimization of S by adjusting one or more free parameters, incorporated into a guessed “trial” function, may be used to find a reasonable approximation for the actual law of motion.18
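The numbers in Eq. (56) are easy to reproduce by direct quadrature of the action (47), and the same few lines may serve as a minimal template for such a trial-function minimization (the unit values of m, ω₀, and A are an illustrative choice, so S comes out in units of mA²ω₀):

```python
import numpy as np
from scipy.integrate import quad

m, omega0, A = 1.0, 1.0, 1.0           # units chosen so S is in units of m*A^2*omega0
t_fin = np.pi / (2.0 * omega0)
mu = (2.0 * omega0 / np.pi)**2          # the parameter value chosen in Eq. (55)

def L(q, qdot):                         # the Lagrangian function (52)
    return 0.5 * m * qdot**2 - 0.5 * m * omega0**2 * q**2

# Exact solution (53) and trial function (54):
S_exact, _ = quad(lambda t: L(A * np.cos(omega0 * t),
                              -A * omega0 * np.sin(omega0 * t)), 0.0, t_fin)
S_trial, _ = quad(lambda t: L(A * (1.0 - mu * t**2),
                              -2.0 * A * mu * t), 0.0, t_fin)

print(S_exact)  # ~0: the kinetic and potential contributions cancel
print(S_trial)  # ~0.0055 > S_exact, as in Eq. (56)
```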

It is also very useful to make the notion of the action $S$, defined by Eq. (47), more transparent by calculating it for the simple case of a single particle moving in a potential field that conserves its energy $E = T + U$. In this case, the Lagrangian function $L = T - U$ may be represented as

$$ L = T - U = 2T - (T + U) = 2T - E = m v^2 - E, \qquad (10.57) $$

with a time-independent $E$, so that

$$ S = \int L\,dt = \int m v^2\,dt - Et + \text{const}. \qquad (10.58) $$

Recasting the expression under the remaining integral as $m v \cdot v\,dt = \mathbf{p} \cdot (d\mathbf{r}/dt)\,dt = \mathbf{p} \cdot d\mathbf{r}$, we finally get

$$ S = \int \mathbf{p} \cdot d\mathbf{r} - Et + \text{const} \equiv S_0 - Et + \text{const}, \qquad (10.59) $$

17 Such cancellation, i.e. the equality S = 0, is of course not the general requirement; it is specific only for this particular example, with a specific choice of the arbitrary constant in the potential energy of the system.

18 This is essentially a classical analog of the variational method of quantum mechanics – see, e.g., QM Sec. 2.9.


where the time-independent integral

$$ S_0 \equiv \int \mathbf{p} \cdot d\mathbf{r} \qquad (10.60) $$

is frequently called the abbreviated action.19

This expression may be used to establish one more important connection between the classical and quantum mechanics – now in its Schrödinger picture. Indeed, in the quasiclassical (WKB) approximation of that picture,20 a particle of fixed energy $E$ is described by a de Broglie wave

$$ \Psi(\mathbf{r}, t) \propto \exp\left\{ i \left( \int \mathbf{k} \cdot d\mathbf{r} - \omega t + \text{const} \right) \right\}, \qquad (10.61) $$

where the wave vector $\mathbf{k}$ is proportional to the particle's momentum (which is possibly a slow function of $\mathbf{r}$), and the frequency $\omega$, to its energy:

$$ \mathbf{k} = \frac{\mathbf{p}}{\hbar}, \qquad \omega = \frac{E}{\hbar}. \qquad (10.62) $$

Plugging these expressions into Eq. (61) and comparing the result with Eq. (59), we see that the WKB wavefunction may be represented as

$$ \Psi \propto \exp\{ iS/\hbar \}. \qquad (10.63) $$

Hence the Hamilton principle (48) means that the total phase of the quasiclassical wavefunction should be minimal along the particle's real trajectory. But this is exactly the so-called eikonal minimum principle, well known from optics (though it is valid for any other waves as well), where it serves to define the ray paths in the geometric optics limit – similar to the WKB approximation. Thus, the ratio $S/\hbar$ may be considered just as the eikonal, i.e. the total phase accumulation, of the de Broglie waves.21

Now, comparing Eq. (60) with Eq. (39), we see that the action variable $J$ is just the change of the abbreviated action $S_0$ along a single phase-plane contour, divided by $2\pi$. This means, in particular, that in the WKB approximation, $J$ (in units of $\hbar$) is the number of de Broglie waves along the classical trajectory of a particle, i.e. an integer value of the corresponding quantum number. If the system's parameters are changed slowly, the quantum number has to stay integer, and hence $J$ cannot change, giving a quantum-mechanical interpretation of the adiabatic invariance. The reader should agree that this is really fascinating: a fact of classical mechanics may be "derived" (or at least understood) more easily from the quantum mechanics' standpoint. (As a reminder, we have run into a similarly pleasant surprise at our discussion of the non-degenerate parametric excitation in Sec. 6.7.)

19 Comparing Eq. (59) with the Hamilton principle (48), we see that if the variational trajectories are limited to those of only one (actual) energy E, the real motion corresponds to the minimum of not only S but S 0 as well. This fact is called the Maupertuis principle. (Historically, this result rather than Eq. (48), was called the “principle of least action”, and some authors still use this terminology, so the reader’s caution is advised.) 20 See, e.g., QM Sec. 3.1.

21 Indeed, Eq. (63) was the starting point for R. Feynman’s development of his path-integral formulation of quantum mechanics – see, e.g., QM Sec. 5.3.


10.4. The Hamilton-Jacobi equation

The action $S$, defined by Eq. (47), may be used for one more analytical formulation of classical mechanics. For that, we need to make one more, different commitment: $S$ has to be considered as a function of the following independent arguments: the final time point $t_{\text{fin}}$ (which I will, for brevity, denote as $t$ in this section), and the set of generalized coordinates (but not of the generalized velocities!) at that point. This is the Hamilton-Jacobi action:

$$ S = \int_{t_{\text{ini}}}^{t} L\,dt = S\big(t, q_j(t)\big). \qquad (10.64) $$

Let us calculate the variation of this (from the variational point of view, new!) function, resulting from an arbitrary combination of variations of the final values $q_j(t)$ of the coordinates, while keeping $t$ fixed. Formally, this may be done by repeating the variational calculations described by Eqs. (49)-(51), besides that now the variations $\delta q_j$ at the final point ($t$) do not necessarily equal zero. As a result, we get

$$ \delta S = \left. \sum_j \frac{\partial L}{\partial \dot{q}_j}\,\delta q_j \right|_{t} + \int_{t_{\text{ini}}}^{t} \sum_j \left[ \frac{\partial L}{\partial q_j} - \frac{d}{dt}\!\left( \frac{\partial L}{\partial \dot{q}_j} \right) \right] \delta q_j\,dt. \qquad (10.65) $$

For the motion along the real trajectory, i.e. satisfying the Lagrange equations (2.19), the second term of this expression equals zero. Hence Eq. (65) shows that, for (any) fixed time t,

S

L