Essential Graduate Physics by Konstantin K. Likharev


$$q_{\text{free}}(t) = \left[c_+ \cos\omega_0' t + c_- \sin\omega_0' t\right] e^{-\delta t} \equiv A_0\, e^{-\delta t} \cos(\omega_0' t - \varphi_0), \qquad \omega_0' \equiv \left(\omega_0^2 - \delta^2\right)^{1/2}. \tag{5.9}$$

The result shows that, besides a certain correction to the free oscillation frequency (which is very small in the most interesting low-damping limit, δ << ω₀), the energy dissipation leads to an exponential decay of the oscillation amplitude with the time constant τ = 1/δ:

4 Here Eq. (5) is treated as a phenomenological model, but in statistical mechanics, such a dissipative term may be derived as an average force exerted upon a system by its environment, at very general assumptions. As will be discussed in detail later in this series (QM Chapter 7 and SM Chapter 5), due to the numerous degrees of freedom of a typical environment (think about the molecules of air surrounding a macroscopic pendulum), its force also has a random component; as a result, the dissipation is fundamentally related to fluctuations. The latter effect may be neglected (as it is in this course) only if the oscillator's energy E is much higher than the energy scale of its random fluctuations – in the thermal equilibrium at temperature T, the larger of k_BT and ℏω₀/2.

5 Systems with high damping (δ > ω₀) can hardly be called oscillators, and though they are used in engineering and physics experiments (e.g., for shock and sound isolation), due to the lack of time/space, for their detailed discussion I have to refer the interested reader to special literature – see, e.g., C. Harris and A. Piersol, Shock and Vibration Handbook, 5th ed., McGraw Hill, 2002. Let me only note that according to Eq. (8), the dynamics of systems with very high damping (δ >> ω₀) has two very different time scales: a relatively short "momentum relaxation time" 1/|λ₋| ≈ 1/2δ = m/η, and a much longer "coordinate relaxation time" 1/|λ₊| ≈ 2δ/ω₀² = η/κ.

Chapter 5

Page 2 of 38

Essential Graduate Physics

CM: Classical Mechanics

Decaying free oscillations:
$$A(t) = A_0\, e^{-t/\tau}, \qquad \text{where } \tau = \frac{1}{\delta} = \frac{2m}{\eta}. \tag{5.10}$$
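Eqs. (9)-(10) are easy to cross-check numerically. The sketch below (an illustrative pure-Python fragment, not part of the original text; the parameter values are arbitrary) integrates the free-oscillation equation q̈ + 2δq̇ + ω₀²q = 0 with a 4th-order Runge-Kutta scheme and compares the result with the analytical solution e^{−δt}cos ω₀′t:

```python
import math

omega0, delta = 1.0, 0.05                      # natural frequency and damping coefficient
omega0p = math.sqrt(omega0**2 - delta**2)      # shifted frequency omega_0' of Eq. (9)

def deriv(q, v):
    # q'' + 2*delta*q' + omega0^2*q = 0, rewritten as a 1st-order system
    return v, -2.0*delta*v - omega0**2*q

def rk4_step(q, v, dt):
    k1q, k1v = deriv(q, v)
    k2q, k2v = deriv(q + 0.5*dt*k1q, v + 0.5*dt*k1v)
    k3q, k3v = deriv(q + 0.5*dt*k2q, v + 0.5*dt*k2v)
    k4q, k4v = deriv(q + dt*k3q, v + dt*k3v)
    return (q + dt*(k1q + 2*k2q + 2*k3q + k4q)/6.0,
            v + dt*(k1v + 2*k2v + 2*k3v + k4v)/6.0)

# initial conditions matching q(t) = exp(-delta*t)*cos(omega0p*t)
q, v, t, dt = 1.0, -delta, 0.0, 0.001
max_err = 0.0
while t < 40.0:
    q, v = rk4_step(q, v, dt)
    t += dt
    exact = math.exp(-delta*t) * math.cos(omega0p*t)
    max_err = max(max_err, abs(q - exact))
```

The agreement confirms both the decay constant τ = 1/δ and the shifted frequency ω₀′.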

A very popular dimensionless measure of damping is the so-called quality factor Q (or just the Q-factor), which is defined as ω₀/2δ and may be rewritten in several other useful forms:

$$Q \equiv \frac{\omega_0}{2\delta} = \frac{m\omega_0}{\eta} = \frac{(\kappa m)^{1/2}}{\eta} = \frac{\omega_0 \tau}{2} = \pi\frac{\tau}{T}, \tag{5.11}$$

where T  2 π/0 is the oscillation period in the absence of damping – see Eq. (3.29). Since the oscillation energy E is proportional to A 2, i.e. decays as exp{-2 t/}, i.e. with the time constant /2, the last form of Eq. (11) may be used to rewrite the Q-factor in one more form:

$$Q = -\omega_0 \frac{E}{\dot E} = \omega_0 \frac{E}{P}, \tag{5.12}$$

where P is the energy dissipation rate. (Other practical ways to measure Q will be discussed below.) The range of Q-factors of important oscillators is very broad, all the way from Q ~ 10 for a human leg (with relaxed muscles), to Q ~ 10⁴ for the quartz crystals used in electronic clocks and watches, all the way up to Q ~ 10¹² for carefully designed microwave cavities with superconducting walls.
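All the forms collected in Eq. (11) follow from the definitions δ ≡ η/2m and ω₀² ≡ (κ/m)^{1/2} squared; a short numerical check (an illustrative fragment, with arbitrary values of m, η, and κ) confirms that they coincide:

```python
import math

m, eta, kappa = 2.0, 0.3, 5.0          # arbitrary mass, drag and spring constants
omega0 = math.sqrt(kappa / m)          # natural frequency
delta = eta / (2.0 * m)                # damping coefficient
tau = 1.0 / delta                      # amplitude decay time, Eq. (10)
T = 2.0 * math.pi / omega0             # oscillation period

# the five forms of the Q-factor listed in Eq. (11)
forms = [omega0 / (2.0 * delta),
         m * omega0 / eta,
         math.sqrt(kappa * m) / eta,
         omega0 * tau / 2.0,
         math.pi * tau / T]
```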

In contrast to the decaying free oscillations, forced oscillations, induced by an external force F(t), may maintain their amplitude (and hence energy) indefinitely, even at non-zero damping. This process may be described using a still linear but now inhomogeneous differential equation

$$m\ddot q + \eta\dot q + \kappa q = F(t), \tag{5.13a}$$

or, more usually, the following generalization of Eq. (6b):

Forced oscillator with damping:
$$\ddot q + 2\delta\dot q + \omega_0^2 q = f(t), \qquad \text{where } f(t) \equiv F(t)/m. \tag{5.13b}$$

For a mechanical linear, dissipative 1D oscillator (6), under the effect of an additional external force F(t), Eq. (13a) is just an expression of the 2nd Newton law. However, according to Eq. (1.41), Eq. (13) is valid for any dissipative, linear⁶ 1D system whose Gibbs potential energy (1.39) has the form U_G(q, t) = κq²/2 − F(t)q.

The forced-oscillation solutions to Eq. (13) may be analyzed by two mathematically equivalent methods whose relative convenience depends on the character of function f( t).

(i) Frequency domain. Representing the function f( t) as a Fourier sum of sinusoidal harmonics:7

$$f(t) = \sum_\omega f_\omega\, e^{-i\omega t}, \tag{5.14}$$

and using the linearity of Eq. (13), we may represent its general solution as a sum of the decaying free oscillations (9) with the frequency ω₀′, that are independent of the function f(t), and forced oscillations due to each of the Fourier components of the force:⁸

6 This is a very unfortunate, but common jargon, meaning “the system described by linear equations of motion”.

7 Here, in contrast to Eq. (3b), we may drop the operator Re, assuming that f₋ω = f_ω*, so that the imaginary components of the sum compensate for each other.


General solution of Eq. (13):
$$q(t) = q_{\text{free}}(t) + q_{\text{forced}}(t), \qquad q_{\text{forced}}(t) = \sum_\omega a_\omega\, e^{-i\omega t}. \tag{5.15}$$

Plugging Eq. (15) into Eq. (13), and requiring the factors before each e^{−iωt} on both sides to be equal, we get

$$a_\omega = \chi(\omega)\, f_\omega, \tag{5.16}$$

where the complex function χ(ω), in our particular case equal to

$$\chi(\omega) = \frac{1}{(\omega_0^2 - \omega^2) - 2i\omega\delta}, \tag{5.17}$$

is called either the response function or (especially for non-mechanical oscillators) the generalized susceptibility. From here, and Eq. (4), the amplitude of the oscillations under the effect of a sinusoidal force is

Forced oscillations:
$$A \equiv |a_\omega| = |f_\omega|\,|\chi(\omega)|, \qquad \text{with } |\chi(\omega)| = \frac{1}{\left[(\omega_0^2 - \omega^2)^2 + (2\delta\omega)^2\right]^{1/2}}. \tag{5.18}$$

This formula describes, in particular, an increase of the oscillation amplitude A at ω → ω₀ – see the left panel of Fig. 1. At the exact equality of these two frequencies,

$$|\chi(\omega)|_{\omega = \omega_0} = \frac{1}{2\delta\omega_0}, \tag{5.19}$$

so that, according to Eq. (11), the ratio of the response magnitudes at ω = ω₀ and ω = 0 (|χ(ω)|_{ω=0} = 1/ω₀²) is exactly equal to the Q-factor of the oscillator. Thus, the response increase is especially strong in the low-damping limit (δ << ω₀, i.e. Q >> 1); moreover, at Q → ∞ and ω → ω₀, the response diverges. (This mathematical fact is very useful for the methods to be discussed later in this section.) This is the classical description of the famous phenomenon of resonance, so ubiquitous in physics.
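The resonant enhancement described by Eqs. (18)-(19) may be checked directly; the following fragment (an illustrative sketch with arbitrary parameter values) evaluates |χ(ω)| and confirms that the ratio |χ(ω₀)|/|χ(0)| equals the Q-factor:

```python
import math

omega0, delta = 1.0, 0.01
Q = omega0 / (2.0 * delta)             # Eq. (11)

def chi_abs(omega):
    # |chi(omega)| of Eq. (18)
    return 1.0 / math.sqrt((omega0**2 - omega**2)**2 + (2.0*delta*omega)**2)

ratio = chi_abs(omega0) / chi_abs(0.0)   # response at resonance vs static response
```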

[Fig. 5.1. Resonance in the linear oscillator, for several values of Q: the left panel shows the normalized response magnitude |χ(ω)|/|χ(0)|, and the right panel the corresponding phase shift, both as functions of ω/ω₀ (from 0 to 2), for Q = 1, 2, 5, 10, and 100.]

8 In physics, this mathematical property of linear equations is frequently called the linear superposition principle.

Chapter 5

Page 4 of 38

Essential Graduate Physics

CM: Classical Mechanics

Due to the increase of the resonance peak height, its width is inversely proportional to Q.

Quantitatively, in the most interesting low-damping limit, i.e. at Q >> 1, the reciprocal Q-factor gives the normalized value of the so-called full-width at half-maximum (FWHM) of the resonance curve:⁹

$$\frac{\Delta\omega}{\omega_0} = \frac{1}{Q}. \tag{5.20}$$

Indeed, this Δω is defined as the difference (ω₊ − ω₋) between the two values of ω at which the square of the oscillator response function, |χ(ω)|² (which is proportional to the oscillation energy), equals a half of its resonance value (19). In the low-damping limit, these points are very close to ω₀, so that in the linear approximation in |ω − ω₀| << ω₀, we may write (ω₀² − ω²) = −(ω + ω₀)(ω − ω₀) ≈ −2ωξ ≈ −2ω₀ξ, where

$$\xi \equiv \omega - \omega_0 \tag{5.21}$$

is a very convenient parameter called detuning, which will be repeatedly used later in this chapter, and beyond it. In this approximation, the second of Eqs. (18) is reduced to10

$$|\chi(\omega)|^2 \approx \frac{1}{4\omega_0^2}\,\frac{1}{\xi^2 + \delta^2}. \tag{5.22}$$

As a result, the points ω± correspond to ξ² = δ², i.e. ω = ω₀ ± δ = ω₀(1 ± 1/2Q), so that Δω ≡ ω₊ − ω₋ = 2δ = ω₀/Q, thus proving Eq. (20).

(ii) Time domain. Returning to an arbitrary external force f(t), one may argue that Eqs. (9) and (15)-(17) provide a full solution of the forced oscillation problem even in this general case. This is formally correct, but this solution may be very inconvenient if the external force is far from a sinusoidal function of time, especially if it is not periodic at all. In this case, we should first calculate the complex amplitudes f_ω participating in the Fourier sum (14). In the general case of a non-periodic f(t), this sum is actually the Fourier integral,¹¹

$$f(t) = \int_{-\infty}^{+\infty} f_\omega\, e^{-i\omega t}\, d\omega, \tag{5.23}$$

so that f should be calculated using the reciprocal Fourier transform,



1

f

f t' () eit'dt' .

(5.24)

2 

Now we may use Eq. (16) for each Fourier component of the resulting forced oscillations, and rewrite the last of Eqs. (15) as

9 Note that the phase shift φ ≡ arg[χ(ω)] between the oscillations and the external force (see the right panel in Fig. 1) makes its steepest change, by π/2, within the same frequency interval Δω.

10 Such a function of frequency may be met in many branches of science, frequently under special names, including the "Cauchy distribution", "the Lorentz function" (or "Lorentzian line", or "Lorentzian distribution"), "the Breit-Wigner function" (or "the Breit-Wigner distribution"), etc.

11 Let me hope that the reader knows that Eq. (23) may be used for periodic functions as well; in such a case, f_ω is a set of equidistant delta functions. (A reminder of the basic properties of the Dirac δ-function may be found, for example, in MA Sec. 14.)


$$q_{\text{forced}}(t) = \int_{-\infty}^{+\infty} a_\omega\, e^{-i\omega t}\, d\omega = \int_{-\infty}^{+\infty} \chi(\omega)\, f_\omega\, e^{-i\omega t}\, d\omega = \int_{-\infty}^{+\infty} d\omega\, \chi(\omega)\, \frac{1}{2\pi}\int_{-\infty}^{+\infty} dt'\, f(t')\, e^{i\omega(t' - t)}$$
$$= \int_{-\infty}^{+\infty} dt'\, f(t')\left[\frac{1}{2\pi}\int_{-\infty}^{+\infty} d\omega\, \chi(\omega)\, e^{-i\omega(t - t')}\right], \tag{5.25}$$

with the response function χ(ω) given, in our case, by Eq. (17). Besides requiring two integrations, Eq. (25) is conceptually uncomfortable: it seems to indicate that the oscillator's coordinate at time t depends not only on the external force exerted at earlier times t' < t, but also at future times. This would contradict one of the most fundamental principles of physics (and indeed, science as a whole), causality: no effect may precede its cause.

Fortunately, a straightforward calculation (left for the reader’s exercise) shows that the response function (17) satisfies the following rule:12

$$\int_{-\infty}^{+\infty} \chi(\omega)\, e^{-i\omega\tau}\, d\omega = 0, \qquad \text{for } \tau < 0. \tag{5.26}$$

This fact allows the last form of Eq. (25) to be rewritten in either of the following equivalent forms:

Linear system's forced response:
$$q(t) = \int_{-\infty}^{t} f(t')\, G(t - t')\, dt' = \int_0^{\infty} f(t - \tau)\, G(\tau)\, d\tau, \tag{5.27}$$

where G(τ), defined as the Fourier transform of the response function,

Temporal Green's function:
$$G(\tau) \equiv \frac{1}{2\pi}\int_{-\infty}^{+\infty} \chi(\omega)\, e^{-i\omega\tau}\, d\omega, \tag{5.28}$$

is called the ( temporal) Green’s function of the system. According to Eq. (26), G( τ) = 0 for all τ < 0.

While the second form of Eq. (27) is frequently more convenient for calculations, its first form is more suitable for physical interpretation of the Green’s function. Indeed, let us consider the particular case when the force is a delta function

$$f(t) = \delta(t - t'), \qquad \text{with } t' < t, \text{ i.e. } \tau \equiv t - t' > 0, \tag{5.29}$$

representing an ultimately short pulse at the moment t', with a unit "area" ∫f(t)dt. Substituting Eq. (29) into Eq. (27),¹³ we get

$$q(t) = G(t - t'). \tag{5.30}$$

Thus the Green's function G(t − t') is just the oscillator's response, as measured at time t, to a short force pulse of unit "area", exerted at time t'. Hence Eq. (27) expresses the linear superposition principle in the time domain: the full effect of the force f(t) on a linear system is a sum of the effects of short pulses of duration dt' and magnitude f(t'), each with its own "weight" G(t − t') – see Fig. 2.

12 Eq. (26) is true for any linear physical system in which f( t) represents a cause, and q( t) its effect. Following tradition, I discuss the frequency-domain expression of this causality relation (called the Kramers-Kronig relations) in the Classical Electrodynamics part of this lecture series – see EM Sec. 7.2.

13 Technically, for this integration, t’ in Eq. (27) should be temporarily replaced with another letter, say t” .
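This pulse-superposition picture can be illustrated numerically. In the sketch below (not part of the original text; the force profile and all parameter values are arbitrary, and the tolerance is loose because of the crude discretization of the pulse), the Green's function is first obtained as the response to a narrow unit-area pulse, and the response to an unrelated smooth force is then assembled from Eq. (27) as a discrete convolution and compared with a direct integration of Eq. (13b):

```python
import math

omega0, delta = 1.0, 0.15
dt, N = 0.002, 20000                      # time step and number of steps

def integrate(force):
    # RK4 for q'' + 2*delta*q' + omega0^2*q = force(t), starting from rest
    def deriv(q, v, t):
        return v, force(t) - 2.0*delta*v - omega0**2*q
    q = v = 0.0
    out = [0.0]
    for k in range(N):
        t = k*dt
        k1q, k1v = deriv(q, v, t)
        k2q, k2v = deriv(q + 0.5*dt*k1q, v + 0.5*dt*k1v, t + 0.5*dt)
        k3q, k3v = deriv(q + 0.5*dt*k2q, v + 0.5*dt*k2v, t + 0.5*dt)
        k4q, k4v = deriv(q + dt*k3q, v + dt*k3v, t + dt)
        q += dt*(k1q + 2*k2q + 2*k3q + k4q)/6.0
        v += dt*(k1v + 2*k2v + 2*k3v + k4v)/6.0
        out.append(q)
    return out

w = 10*dt                                  # pulse much shorter than the period 2*pi/omega0
G = integrate(lambda t: 1.0/w if t < w else 0.0)   # response to a unit-area pulse

force = lambda t: math.exp(-0.05*t) * math.sin(0.7*t)   # an arbitrary smooth force
direct = integrate(force)

n = 15000                                  # compare at the time t = n*dt
conv = dt * sum(force((n - j)*dt) * G[j] for j in range(n + 1))   # Eq. (27), discretized
err = abs(conv - direct[n])
```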


[Fig. 5.2. A schematic, finite-interval representation of a force f(t) as a sum of short pulses at all times t' < t, and their contributions to the linear system's response q(t), as given by Eq. (27).]

This picture may be used for the calculation of Green’s function for our particular system.

Indeed, Eqs. (29)-(30) mean that G(τ) is just the solution of the differential equation of motion of the system, in our case, Eq. (13), with the replacement t → τ, and a δ-functional right-hand side:

$$\frac{d^2 G(\tau)}{d\tau^2} + 2\delta\frac{dG(\tau)}{d\tau} + \omega_0^2\, G(\tau) = \delta(\tau). \tag{5.31}$$

Since Eq. (27) describes only the second term in Eq. (15), i.e. only the forced, rather than free oscillations, we have to exclude the latter by solving Eq. (31) with zero initial conditions:

$$G(-0) = \frac{dG}{d\tau}(-0) = 0, \tag{5.32}$$

where τ = −0 means the instant immediately preceding τ = 0.

This problem may be simplified even further. Let us integrate both sides of Eq. (31) over an infinitesimal interval including the origin, e.g. [−dτ/2, +dτ/2], and then follow the limit dτ → 0. Since the Green's function has to be continuous because of its physical sense as the (generalized) coordinate, all terms on the left-hand side but the first one vanish, while the first term yields dG/dτ|₊₀ − dG/dτ|₋₀. Due to the second of Eqs. (32), the last of these two derivatives has to equal zero, while the right-hand side of Eq. (31) yields 1 upon the integration. Thus, the function G(τ) may be calculated for τ > 0 (i.e. for all times when it is different from zero) by solving the homogeneous version of the system's equation of motion for τ > 0, with the following special initial conditions:

$$G(0) = 0, \qquad \frac{dG}{d\tau}(0) = 1. \tag{5.33}$$

This approach gives us a convenient way for the calculation of Green's functions of linear systems. In particular, for the oscillator with not very high damping (δ < ω₀, i.e. Q > ½), imposing the boundary conditions (33) on the homogeneous equation's solution (9), we immediately get

Oscillator's Green's function:
$$G(\tau) = \frac{1}{\omega_0'}\, e^{-\delta\tau}\sin\omega_0'\tau. \tag{5.34}$$
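That Eq. (34) indeed obeys the initial conditions (33), and the homogeneous version of Eq. (31) at τ > 0, may be confirmed by finite differences (an illustrative sketch; the parameter values and the step h are arbitrary):

```python
import math

omega0, delta = 1.0, 0.2
omega0p = math.sqrt(omega0**2 - delta**2)

def G(tau):
    # Eq. (34); recall that G(tau) = 0 for tau < 0
    return math.exp(-delta*tau) * math.sin(omega0p*tau) / omega0p if tau >= 0.0 else 0.0

h = 1e-4
dG0 = (G(h) - G(0.0)) / h              # should be close to 1, per Eq. (33)

residual = 0.0
for tau in (0.5, 1.0, 3.0):            # check the homogeneous ODE at a few tau > 0
    d2 = (G(tau + h) - 2.0*G(tau) + G(tau - h)) / h**2
    d1 = (G(tau + h) - G(tau - h)) / (2.0*h)
    residual = max(residual, abs(d2 + 2.0*delta*d1 + omega0**2 * G(tau)))
```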

(The same result may be obtained directly from Eq. (28) with the response function χ(ω) given by Eq. (17). This way is, however, a little bit more cumbersome, and is left for the reader's exercise.) Relations (27) and (34) provide a very convenient recipe for solving many forced oscillations problems. As a very simple example, let us calculate the transient process in an oscillator under the effect of a constant force being turned on at t = 0, i.e. proportional to the theta-function of time:

 ,

0

for t  ,

0

f ( t)  f t

(5.35)

0    

f , for t  ,

0

0

provided that at t < 0 the oscillator was at rest, so that in Eq. (15), q_free(t) ≡ 0. Then the second form of Eq. (27), together with Eq. (34), yield

$$q(t) = \int_0^{\infty} f(t - \tau)\, G(\tau)\, d\tau = \frac{f_0}{\omega_0'}\int_0^{t} e^{-\delta\tau}\sin\omega_0'\tau\, d\tau. \tag{5.36}$$

The simplest way to work out such integrals is to represent the sine function under it as the imaginary part of exp{iω₀′τ}, and merge the two exponents, getting

$$q(t) = \frac{f_0}{\omega_0'}\,\text{Im}\int_0^{t} e^{(i\omega_0' - \delta)\tau}\, d\tau = \frac{F_0}{\kappa}\left[1 - e^{-\delta t}\left(\cos\omega_0' t + \frac{\delta}{\omega_0'}\sin\omega_0' t\right)\right]. \tag{5.37}$$

This result, plotted in Fig. 3, is rather natural: it describes nothing more than the transient from the initial position q = 0 to the new equilibrium position q₀ = f₀/ω₀² = F₀/κ, accompanied by decaying oscillations. For this particular simple function f(t), the same result might also be obtained by introducing a new variable q̃(t) ≡ q(t) − q₀ and solving the resulting homogeneous equation for q̃ (with the appropriate initial condition q̃(0) = −q₀). However, for more complicated functions f(t), the Green's function approach is irreplaceable.
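Equation (37) may be verified against a direct numerical integration of Eq. (13b) with the step force (35) (an illustrative sketch; the parameter values are arbitrary):

```python
import math

omega0, delta, f0 = 1.0, 0.1, 0.5
omega0p = math.sqrt(omega0**2 - delta**2)

def q_exact(t):
    # Eq. (37): the transient toward the new equilibrium q0 = f0/omega0^2
    return (f0/omega0**2) * (1.0 - math.exp(-delta*t)
            * (math.cos(omega0p*t) + (delta/omega0p)*math.sin(omega0p*t)))

def deriv(q, v):
    # Eq. (13b) with the constant force f0 acting at t > 0
    return v, f0 - 2.0*delta*v - omega0**2*q

q = v = t = 0.0
dt = 0.001
max_err = 0.0
while t < 50.0:
    k1q, k1v = deriv(q, v)
    k2q, k2v = deriv(q + 0.5*dt*k1q, v + 0.5*dt*k1v)
    k3q, k3v = deriv(q + 0.5*dt*k2q, v + 0.5*dt*k2v)
    k4q, k4v = deriv(q + dt*k3q, v + dt*k3v)
    q += dt*(k1q + 2*k2q + 2*k3q + k4q)/6.0
    v += dt*(k1v + 2*k2v + 2*k3v + k4v)/6.0
    t += dt
    max_err = max(max_err, abs(q - q_exact(t)))
```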

[Fig. 5.3. The transient process in a linear oscillator, induced by a step-like force f(t), for the particular case δ/ω₀ = 0.1 (i.e. Q = 5): the ratio q(t)/q₀ approaches 1 through decaying oscillations.]

Note that for any particular linear system, its Green's function should be calculated only once, and then may be repeatedly used in Eq. (27) to calculate the system response to various external forces – either analytically or numerically. This property makes the Green's function approach very popular in many other fields of physics – with the corresponding generalization or re-definition of the function.¹⁴

5.2. Weakly nonlinear oscillations

In comparison with systems discussed in the last section, which are described by linear differential equations with constant coefficients and thus allow a complete and exact analytical solution, oscillations in nonlinear systems (very unfortunately but commonly called nonlinear oscillations) present a complex and, generally, analytically intractable problem. However, much insight on possible

14 See, e.g., Sec. 6.6, and also EM Sec. 2.7 and QM Sec. 2.2.


processes in such systems may be gained from a discussion of an important case of weakly nonlinear systems, which may be explored analytically. An important example of such systems is given by an anharmonic oscillator – a 1D system whose higher terms in the potential’s expansion (3.10) cannot be neglected, but are small and may be accounted for approximately. If, in addition, damping is low (or negligible), and the external harmonic force exerted on the system is not too large, the equation of motion is a slightly modified version of Eq. (13):

Weakly nonlinear oscillator:
$$\ddot q + \omega^2 q = f(t, q, \dot q, \ldots), \tag{5.38}$$

where ω ≈ ω₀ is the anticipated frequency of oscillations (whose choice may be to a certain extent arbitrary – see below), and the right-hand side f is small (say, scales as some small dimensionless parameter ε << 1), and may be considered as a small perturbation.

Since at ε = 0 this equation has the sinusoidal solution given by Eq. (3), one might naïvely think that at a nonzero but small ε, the approximate solution to Eq. (38) should be sought in the form

Perturbative solution:
$$q(t) = q^{(0)} + q^{(1)} + q^{(2)} + \ldots, \qquad \text{where } q^{(n)} \propto \varepsilon^n, \tag{5.39}$$

with q⁽⁰⁾ = A cos(ω₀t − φ) ∝ ε⁰. This is a good example of an apparently impeccable mathematical reasoning that would lead to a very inefficient procedure. Indeed, let us apply it to the problem we already know the exact solution for, namely free oscillations in a linear but damped oscillator, for this occasion assuming the damping to be very low, δ/ω₀ ~ ε << 1. The corresponding equation of motion, Eq. (6), may be represented in form (38) if we take ω = ω₀ and

$$f = -2\delta\dot q, \qquad \text{with } \delta \propto \varepsilon. \tag{5.40}$$

The naïve perturbation theory based on Eq. (39) would allow us to find small corrections, of the order of ε, to the free, non-decaying oscillations A cos(ω₀t − φ). However, we already know from Eq. (9) that the main effect of damping is a gradual decrease of the free oscillation amplitude to zero, i.e. a very large change of the amplitude, though at low damping, δ << ω₀, this decay takes a large time t ~ τ >> 1/ω₀.
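This failure of the naïve expansion is easy to exhibit explicitly: for the damped oscillator, the first-order correction is the secular term q⁽¹⁾ = −δAt cos ω₀t (obtained by solving the resonantly driven equation q̈⁽¹⁾ + ω₀²q⁽¹⁾ = 2δAω₀ sin ω₀t), so the truncated series A cos ω₀t (1 − δt) tracks the exact solution (9) only at t << 1/δ. A numerical sketch (arbitrary parameters, not from the original text):

```python
import math

omega0, delta, A = 1.0, 0.02, 1.0
omega0p = math.sqrt(omega0**2 - delta**2)

def q_exact(t):
    # Eq. (9): the true decaying oscillation
    return A * math.exp(-delta*t) * math.cos(omega0p*t)

def q_naive(t):
    # zeroth order plus the first-order secular correction -delta*A*t*cos(omega0*t)
    return A * math.cos(omega0*t) * (1.0 - delta*t)

err_short = abs(q_naive(1.0) - q_exact(1.0))          # t << 1/delta: good agreement
err_long = abs(q_naive(4.0/delta) - q_exact(4.0/delta))  # t >> 1/delta: breakdown
```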

Hence, if we want our approximate method to be productive (i.e. to work at all time scales, in particular for forced oscillations with stationary amplitude and phase), we need to account for the fact that even a small right-hand side of Eq. (38) may eventually lead to large changes of the oscillation's amplitude A (and sometimes, as we will see below, also of the oscillation's phase φ) at large times, because of the slowly accumulating effects of the small perturbation.¹⁵

This goal may be achieved16 by the account of these slow changes already in the “0th approximation”, i.e. the basic part of the solution in the expansion (39):

15 The same flexible approach is necessary for approximations used in quantum mechanics. The method discussed here is closer in spirit (though not completely identical) to the WKB approximation (see, e.g., QM Sec. 2.4) rather than most perturbative approaches (QM Ch. 6).

16 This approach has a long history and, unfortunately, does not have a commonly accepted name. It had been gradually developed in celestial mechanics, but its application to 1D systems (on which I am focusing) was clearly spelled out only in 1926 by Balthasar van der Pol. So, I will follow several authors who call it the van der Pol method. Note, however, that in optics and quantum mechanics, this method is commonly called the Rotating Wave Approximation (RWA). In math-oriented texts, this approach, and especially its extensions to higher approximations, is usually called either the small parameter method or the asymptotic method. The list of other

0th approximation:
$$q^{(0)} = A(t)\cos[\omega t - \varphi(t)], \qquad \text{with } \dot A, \dot\varphi \to 0 \text{ at } \varepsilon \to 0. \tag{5.41}$$

(It is evident that Eq. (9) is a particular case of this form.) Let me discuss this approach using a simple but representative example of a dissipative (but high- Q) pendulum driven by a weak sinusoidal external force with a nearly-resonant frequency:

$$\ddot q + 2\delta\dot q + \omega_0^2\sin q = f_0\cos\omega t, \tag{5.42}$$

with |ω − ω₀|, δ << ω₀, and the force amplitude f₀ so small that |q| << 1 at all times. From what we know about the forced oscillations from Sec. 1, in this case it is natural to identify ω on the left-hand side of Eq. (38) with the force's frequency. Expanding sin q into the Taylor series in small q, keeping only the first two terms of this expansion, and moving all small terms to the right-hand side, we can rewrite Eq. (42) in the following popular form (38):¹⁷

Duffing equation:
$$\ddot q + \omega^2 q = -2\delta\dot q + 2\xi\omega q + \alpha q^3 + f_0\cos\omega t \equiv f(t, q, \dot q). \tag{5.43}$$

Here α = ω₀²/6 in the case of the pendulum (though the calculations below will be valid for any α), and the second term on the right-hand side was obtained using the approximation already employed in Sec. 1: (ω² − ω₀²)q ≈ 2ω(ω − ω₀)q = 2ωξq, where ξ ≡ ω − ω₀ is the detuning parameter that was already used earlier – see Eq. (21).

Now, following the general recipe expressed by Eqs. (39) and (41), in the 1st approximation in ε we may look for the solution to Eq. (43) in the following form:

$$q(t) = A\cos\Psi + q^{(1)}(t), \qquad \text{where } \Psi \equiv \omega t - \varphi, \quad q^{(1)} \sim \varepsilon. \tag{5.44}$$

Let us plug this solution into both parts of Eq. (43), keeping only the terms of the first order in ε. Thanks to our (smart :-) choice of ω on the left-hand side of that equation, the two zero-order terms in that part cancel each other. Moreover, since each term on the right-hand side of Eq. (43) is already of the order of ε, we may drop q⁽¹⁾ ∝ ε from the substitution into that part at all, because this would give us only terms O(ε²) or higher. As a result, we get the following approximate equation:

$$\ddot q^{(1)} + \omega^2 q^{(1)} = f^{(0)}(t) \equiv -2\delta\frac{d}{dt}(A\cos\Psi) + 2\xi\omega A\cos\Psi + \alpha A^3\cos^3\Psi + f_0\cos\omega t. \tag{5.45}$$

According to Eq. (41), generally, A and φ should be considered (slow) functions of time. However, let us leave the analysis of the transient process and the system's stability until the next section, and use Eq. (45) to find the stationary oscillations in the system, that are established after an initial transient process. For that limited task, we may take A = const, φ = const, so that q⁽⁰⁾ represents sinusoidal oscillations of frequency ω. Sorting the terms on the right-hand side according to their time dependence,¹⁸ we see that it has terms with frequencies ω and 3ω:

scientists credited for the development of this method, its variations, and extensions includes, most notably, N. Krylov, N. Bogolyubov, and Yu. Mitropolsky.

17 This equation is frequently called the Duffing equation (or the equation of the Duffing oscillator), after Georg Duffing who carried out its first (rather incomplete) analysis in 1918.

18 Using the second of Eqs. (44), cos ωt may be rewritten as cos(Ψ + φ) ≡ cosΨ cosφ − sinΨ sinφ. Then using the identity given, for example, by MA Eq. (3.4): cos³Ψ = (3/4)cosΨ + (1/4)cos 3Ψ, we get Eq. (46).


$$f^{(0)} = \left(2\xi\omega A + \frac{3}{4}\alpha A^3 + f_0\cos\varphi\right)\cos\Psi + \left(2\delta\omega A - f_0\sin\varphi\right)\sin\Psi + \frac{1}{4}\alpha A^3\cos 3\Psi. \tag{5.46}$$
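The decomposition (46) may be double-checked by computing the time averages of f⁽⁰⁾sinΨ and f⁽⁰⁾cosΨ over one period, which should equal half of the corresponding quadrature amplitudes (an illustrative sketch; all parameter values are arbitrary):

```python
import math

# arbitrary parameter values for the check
delta, omega, xi, alpha, f0, A, phi = 0.03, 1.0, 0.02, 0.2, 0.1, 0.7, 0.4
N = 20000

avg_sin = avg_cos = 0.0
for k in range(N):
    t = 2.0*math.pi/omega * k / N          # one full period
    Psi = omega*t - phi
    # f^(0) of Eq. (45) with constant A, phi: d/dt(A cos Psi) = -A*omega*sin(Psi)
    f_rhs = (-2.0*delta*(-A*omega*math.sin(Psi)) + 2.0*xi*omega*A*math.cos(Psi)
             + alpha*(A*math.cos(Psi))**3 + f0*math.cos(omega*t))
    avg_sin += f_rhs * math.sin(Psi) / N
    avg_cos += f_rhs * math.cos(Psi) / N

# quadrature amplitudes predicted by Eq. (46)
coeff_cos = 2.0*xi*omega*A + 0.75*alpha*A**3 + f0*math.cos(phi)
coeff_sin = 2.0*delta*omega*A - f0*math.sin(phi)
```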

Now comes the main punch of the van der Pol approach: mathematically, Eq. (45) may be viewed as the equation of oscillations in a linear, dissipation-free harmonic oscillator of frequency ω (not ω₀!) under the action of an external force f⁽⁰⁾(t). In our particular case, this force is given by Eq. (46) and has three terms: two "quadrature" components at that very frequency ω, and the third one of frequency 3ω. As we know from our analysis of this problem in Sec. 1, if any of the first two components is not equal to zero, q⁽¹⁾ grows to infinity – see Eq. (19) with δ = 0. At the same time, by the very structure of the van der Pol approximation, q⁽¹⁾ has to be finite – moreover, small! The only way to avoid these infinitely growing (so-called secular) terms is to require that the amplitudes of both quadrature components of f⁽⁰⁾ with frequency ω are equal to zero:

$$2\xi\omega A + \frac{3}{4}\alpha A^3 + f_0\cos\varphi = 0, \qquad 2\delta\omega A - f_0\sin\varphi = 0. \tag{5.47}$$

These two harmonic balance equations enable us to find both parameters of the forced oscillations: their amplitude A and phase φ. The phase may be readily eliminated from this system (most easily, by expressing sinφ and cosφ from Eqs. (47), and then requiring the sum sin²φ + cos²φ to equal 1), and the solution for A recast in the following implicit but convenient form:

$$A^2 = \frac{f_0^2}{4\omega^2}\,\frac{1}{\xi^2(A) + \delta^2}, \qquad \text{where } \xi(A) \equiv \xi + \frac{3\alpha A^2}{8\omega} \approx \xi + \frac{3\alpha A^2}{8\omega_0}. \tag{5.48}$$

This expression differs from Eq. (22) for the linear resonance in the low-damping limit only by the replacement of the detuning ξ with its effective amplitude-dependent value ξ(A) – or, equivalently, the replacement of the frequency ω₀ of the oscillator with its effective, amplitude-dependent value

$$\omega_0(A) = \omega_0 - \frac{3\alpha A^2}{8\omega_0}. \tag{5.49}$$

The physical meaning of ω₀(A) is simple: this is just the frequency of free oscillations of amplitude A in a similar nonlinear system, but with zero damping.¹⁹ Indeed, for δ = 0 and f₀ = 0 we could repeat our calculations, assuming that ω is an amplitude-dependent eigenfrequency ω₀(A). Then the second of Eqs. (47) is trivially satisfied, while the first of them gives Eq. (49). The implicit relation (48) enables us to draw the curves of this nonlinear resonance just by bending the linear resonance plots (Fig. 1) according to the so-called skeleton curve expressed by Eq. (49). Figure 4 shows the result of this procedure. Note that at small amplitude, ξ(A) → ξ, i.e. we return to the usual, "linear" resonance (22).
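The multivalued character of Eq. (48), i.e. the bending of the resonance curve, may be seen by counting the real roots A of this implicit equation at a fixed drive frequency. The sketch below (an illustrative root scan, with parameters close to those of Fig. 4) finds a single root above the resonance and three roots somewhat below it, where the bent curve folds over:

```python
import math

omega0, delta = 1.0, 0.01
alpha = omega0**2 / 6.0        # the pendulum value used in Fig. 4
f0 = 0.02

def residual(A, omega):
    # implicit Eq. (48): A^2 [xi(A)^2 + delta^2] - f0^2/(4 omega^2) = 0
    xi_A = (omega - omega0) + 3.0*alpha*A**2/(8.0*omega)
    return A**2 * (xi_A**2 + delta**2) - f0**2/(4.0*omega**2)

def amplitudes(omega, A_max=2.0, n=4000):
    # scan for sign changes of the residual, then refine each by bisection
    roots = []
    prev = residual(0.0, omega)
    for k in range(1, n + 1):
        A = A_max * k / n
        cur = residual(A, omega)
        if prev * cur < 0.0:
            lo, hi = A_max*(k - 1)/n, A
            for _ in range(60):
                mid = 0.5*(lo + hi)
                if residual(lo, omega)*residual(mid, omega) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5*(lo + hi))
        prev = cur
    return roots

single = amplitudes(1.05 * omega0)    # above resonance: single-valued response
multi = amplitudes(0.96 * omega0)     # inside the fold: three coexisting amplitudes
```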

To bring our solution to its logical completion, we should still find the first perturbation q⁽¹⁾(t) from what is left of Eq. (45). Since the structure of this equation is similar to Eq. (13) with a force of frequency 3ω and zero damping, we may use Eqs. (16)-(17) to obtain

19 The effect of the pendulum's frequency dependence on its oscillation amplitude was observed as early as 1673 by Christiaan Huygens – who, by the way, had invented the pendulum clock, increasing the timekeeping accuracy by about three orders of magnitude. (He also discovered the largest of Saturn's moons, Titan.)


$$q^{(1)}(t) = -\frac{\alpha A^3}{32\omega^2}\cos 3(\omega t - \varphi). \tag{5.50}$$

Adding this perturbation (note the negative sign!) to the sinusoidal oscillation (41), we see that as the amplitude A of oscillations in a system with α > 0 (e.g., a pendulum) grows, their waveform becomes a bit more "blunt" near the largest deviations from the equilibrium.

[Fig. 5.4. The nonlinear resonance in the Duffing oscillator, as described by Eq. (48), for the particular case α = ω₀²/6, δ/ω₀ = 0.01 (i.e. Q = 50), and several values of the parameter f₀/ω₀², increased by equal steps of 0.005 from 0 to 0.03.]

The same Eq. (50) also enables an estimate of the range of validity of our first approximation: since it has been based on the assumption |q⁽¹⁾| << |q⁽⁰⁾| ≤ A, for this particular problem we have to require αA²/32ω² << 1. For a pendulum (i.e. for α = ω₀²/6), this condition becomes A² << 192. Though numerical coefficients in such strong inequalities should be taken with a grain of salt, the large magnitude of this particular coefficient gives a good hint that the method may give very accurate results even for relatively large oscillations with A ~ 1. In Sec. 7 below, we will see that this is indeed the case.

From the mathematical viewpoint, the next step would be to write the next approximation as

$$q(t) = A\cos\Psi + q^{(1)}(t) + q^{(2)}(t), \qquad q^{(2)} \sim \varepsilon^2, \tag{5.51}$$

and plug it into the Duffing equation (43), which (thanks to our special choice of q⁽⁰⁾ and q⁽¹⁾) would retain only the sum $\ddot q^{(2)} + \omega^2 q^{(2)}$ on its left-hand side. Again, requiring the amplitudes of two quadrature components of the frequency ω on the right-hand side to vanish, we may get second-order corrections to A and φ. Then we may use the remaining part of the equation to calculate q⁽²⁾, and then go after the third-order terms, etc.²⁰ However, for most purposes, the sum q⁽⁰⁾ + q⁽¹⁾, and sometimes even just the crudest approximation q⁽⁰⁾ alone, are completely sufficient. For example, according to Eq. (50), for a simple pendulum swinging as much as between the opposite horizontal positions (A = π/2), the 1st-order correction q⁽¹⁾ is of the order of 0.5%. (Soon beyond this value, completely new dynamic phenomena

20 For a mathematically rigorous treatment of higher approximations, see, e.g., Yu. Mitropolsky and N. Dao, Applied Asymptotic Methods in Nonlinear Oscillations, Springer, 2004. A more layman-level (and, by today's standards, somewhat verbose) discussion of various oscillatory phenomena may be found in the classical text A. Andronov, A. Vitt, and S. Khaikin, Theory of Oscillators, first published in the 1960s and still available online as Dover's republication in 2011.


start – see Sec. 7 below – but they cannot be described by these successive approximations at all.) Due to such reasons, higher approximations are rarely pursued for particular systems.

5.3. Reduced equations

A much more important issue is the stability of the solutions described by Eq. (48). Indeed, Fig. 4 shows that within a certain range of parameters, these equations give three different values for the oscillation amplitude (and phase), and it is important to understand which of them are stable. Since these solutions are not the fixed points in the sense discussed in Sec. 3.2 (each point in Fig. 4 represents a nearly-sinusoidal oscillation), their stability analysis needs a more general approach that would be valid for oscillations with amplitude and phase slowly evolving in time. This approach will also enable the analysis of non-stationary (especially the initial transient) processes, which are of importance for some dynamic systems.

First of all, let us formalize the way the harmonic balance equations, such as Eqs. (47), should be obtained for the general case (38) – rather than for the particular Eq. (43) considered in the last section.

After plugging the 0th approximation (41) into the right-hand side of Eq. (38), we have to require the amplitudes of both quadrature components of frequency ω to vanish. From the standard Fourier analysis, we know that these requirements may be represented as

Harmonic balance equations:

$$\overline{f^{(0)}\sin\Psi} = 0, \qquad \overline{f^{(0)}\cos\Psi} = 0,\tag{5.52}$$

where the top bar means the time averaging – in our current case, over the period 2π/ω of the right-hand side of Eq. (52), with the arguments calculated in the 0th approximation:

$$f^{(0)} \equiv f\big(t, q^{(0)}, \dot q^{(0)}, \ldots\big) = f\big(t, A\cos\Psi, -\omega A\sin\Psi, \ldots\big), \qquad \text{with } \Psi \equiv \omega t - \varphi.\tag{5.53}$$

Now, for a transient process, the contribution of q(0) to the left-hand side of Eq. (38) is no longer zero, because its amplitude and phase may both be slow functions of time – see Eq. (41). Let us calculate this contribution. The exact result would be

$$\ddot q^{(0)} + \omega^2 q^{(0)} = \left[\frac{d^2}{dt^2} + \omega^2\right]A\cos(\omega t - \varphi)
= \big(\ddot A - A\dot\varphi^2 + 2\omega A\dot\varphi\big)\cos(\omega t - \varphi) + \big(A\ddot\varphi + 2\dot A\dot\varphi - 2\omega\dot A\big)\sin(\omega t - \varphi).\tag{5.54}$$

However, in the first approximation in the small parameter, we may neglect the second derivative of A, and also the squares and products of the first derivatives of A and φ (which are all of the second order in that parameter), so that Eq. (54) is reduced to

$$\ddot q^{(0)} + \omega^2 q^{(0)} = 2\omega A\dot\varphi\cos(\omega t - \varphi) - 2\omega\dot A\sin(\omega t - \varphi).\tag{5.55}$$

On the right-hand side of Eq. (53), we can neglect the time derivatives of the amplitude and phase altogether, because this part is already proportional to the small parameter. Hence, in the first order in that parameter, Eq. (38) becomes

$$\ddot q^{(1)} + \omega^2 q^{(1)} = f^{(0)}_{\text{ef}} \equiv f^{(0)} - 2\omega A\dot\varphi\cos\Psi + 2\omega\dot A\sin\Psi.\tag{5.56}$$


Now, applying Eqs. (52) to the function f(0)ef, and taking into account that the time averages of sin²Ψ and cos²Ψ are both equal to ½, while the time average of the product sinΨcosΨ vanishes, we get a pair of so-called reduced equations (alternatively called either "truncated", or "RWA", or "van der Pol" equations) for the time evolution of the amplitude and phase:

Reduced (RWA) equations:

$$\dot A = -\frac{1}{\omega}\,\overline{f^{(0)}\sin\Psi}, \qquad \dot\varphi = \frac{1}{\omega A}\,\overline{f^{(0)}\cos\Psi}.\tag{5.57a}$$

Extending the definition (4) of the complex amplitude of oscillations to their slow evolution in time, a(t) ≡ A(t)exp{iφ(t)}, and differentiating this relation, the two equations (57a) may also be rewritten as either a single equation for a:

$$\dot a = \frac{i}{\omega}\,\overline{f^{(0)} e^{i(\Psi + \varphi)}} \equiv \frac{i}{\omega}\,\overline{f^{(0)} e^{i\omega t}},\tag{5.57b}$$

or two equations for the real and imaginary parts of a(t) = u(t) + iv(t) (the alternative forms of the reduced equations):

$$\dot u = -\frac{1}{\omega}\,\overline{f^{(0)}\sin\omega t}, \qquad \dot v = \frac{1}{\omega}\,\overline{f^{(0)}\cos\omega t}.\tag{5.57c}$$

The first-order harmonic balance equations (52) are evidently just the particular case of the reduced equations (57) for stationary oscillations (Ȧ = φ̇ = 0).21
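As a sanity check, the averages in Eqs. (57a) may be evaluated numerically for a sample right-hand side. The minimal sketch below (plain Python; the pure-damping choice f = −2δq̇ and all parameter values are just an illustration) recovers Ȧ = −δA and φ̇ = 0:

```python
import math

# Evaluate the period averages in the reduced equations (57a) for pure linear
# damping, f = -2*delta*qdot, on the 0th approximation q(0) = A*cos(Psi),
# qdot(0) = -omega*A*sin(Psi).  Expected: Adot = -delta*A, phidot = 0.
delta, omega, A = 0.03, 1.7, 0.5           # arbitrary sample values

N = 10_000
f_sin = f_cos = 0.0
for k in range(N):                          # uniform average over one period of Psi
    psi = 2*math.pi*k/N
    f0 = -2*delta*(-omega*A*math.sin(psi))  # f evaluated on the 0th approximation
    f_sin += f0*math.sin(psi)/N
    f_cos += f0*math.cos(psi)/N

A_dot = -f_sin/omega                        # first of Eqs. (57a)
phi_dot = f_cos/(omega*A)                   # second of Eqs. (57a)
print(A_dot, -delta*A, phi_dot)             # A_dot equals -delta*A = -0.015, phi_dot = 0
```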

Superficially, the system (57a) of two coupled, first-order differential equations may look more complex than the initial, second-order differential equation (38), but actually it is usually much simpler. For example, let us spell it out for the easy case of free oscillations of a linear oscillator with damping.

For that, we may reuse the ready Eq. (46) by taking α = f₀ = 0, thus turning Eqs. (57a) into

$$\dot A = -\frac{1}{\omega}\,\overline{f^{(0)}\sin\Psi} = -\frac{1}{\omega}\,\overline{\big(2\xi\omega A\cos\Psi + 2\delta\omega A\sin\Psi\big)\sin\Psi} = -\delta A,\tag{5.58a}$$

$$\dot\varphi = \frac{1}{\omega A}\,\overline{f^{(0)}\cos\Psi} = \frac{1}{\omega A}\,\overline{\big(2\xi\omega A\cos\Psi + 2\delta\omega A\sin\Psi\big)\cos\Psi} = \xi.\tag{5.58b}$$

The solution of Eq. (58a) gives us the same "envelope" law A(t) = A(0)exp{−δt} as the exact solution (10) of the initial differential equation, while the elementary integration of Eq. (58b) yields φ(t) = ξt + φ(0) ≡ (ω − ω₀)t + φ(0). This means that our approximate solution,

$$q^{(0)}(t) = A(t)\cos\big[\omega t - \varphi(t)\big] = A(0)\,e^{-\delta t}\cos\big[\omega_0 t - \varphi(0)\big],\tag{5.59}$$

agrees with the exact Eq. (9), and misses only the correction (8) of the oscillation frequency. (This correction is of the second order in δ/ω₀, i.e. of the order of δ²/ω₀, and hence is beyond the accuracy of our first approximation.) It is remarkable how nicely the reduced equations recover the proper frequency of free oscillations in this autonomous system, in which the very notion of ω is ambiguous.
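This agreement is easy to verify by direct numerical integration; in the sketch below (a plain-Python RK4 stepper written just for this illustration, with arbitrary parameter values), the slow amplitude of the simulated oscillator follows the reduced-equation envelope A(0)exp{−δt} to within a couple of percent:

```python
import math

# Integrate the damped oscillator  q'' + 2*delta*q' + w0^2 * q = 0  with RK4,
# and compare its envelope with the reduced-equation result A(t) = A(0) e^{-delta*t}.
w0, delta = 1.0, 0.02                 # low damping: delta << w0

def rhs(q, v):
    return v, -2*delta*v - w0**2*q

def rk4(q, v, h):
    k1q, k1v = rhs(q, v)
    k2q, k2v = rhs(q + h/2*k1q, v + h/2*k1v)
    k3q, k3v = rhs(q + h/2*k2q, v + h/2*k2v)
    k4q, k4v = rhs(q + h*k3q, v + h*k3v)
    return (q + h/6*(k1q + 2*k2q + 2*k3q + k4q),
            v + h/6*(k1v + 2*k2v + 2*k3v + k4v))

q, v, h, t = 1.0, 0.0, 0.005, 0.0
while t < 100.0:
    q, v = rk4(q, v, h)
    t += h

A_num = math.hypot(q, v/w0)           # slow amplitude, exact up to O(delta/w0)
A_rwa = math.exp(-delta*t)            # reduced-equation envelope, with A(0) = 1
print(A_num, A_rwa)                   # both close to 0.135
```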

21 One may ask why we cannot stick to just one, most compact, complex-amplitude form (57b) of the reduced equations. The main reason is that when the function f(q, q̇, t) is nonlinear, we cannot replace its real arguments, such as q = A cos(ωt − φ), with their complex-function representations like a exp{−iωt} (as could be done in the linear problems considered in Sec. 5.1), and need to use real variables, such as either {A, φ} or {u, v}, anyway.


The result is different for forced oscillations. For example, for the (generally, nonlinear) Duffing oscillator described by Eq. (43) with f₀ ≠ 0, Eqs. (57a) yield the reduced equations

$$\dot A = -\delta A + \frac{f_0}{2\omega}\sin\varphi, \qquad A\dot\varphi = \xi(A)\,A + \frac{f_0}{2\omega}\cos\varphi,\tag{5.60}$$

which are valid for an arbitrary function ξ(A), provided that this nonlinear detuning remains much smaller than the oscillation frequency. Here (after a transient) the amplitude and phase tend to the stationary states described by Eqs. (47). This means that φ becomes constant, so that q(0) → A cos(ωt − const), i.e. the reduced equations again automatically recover the correct frequency of the solution, in this case equal to the external force's frequency ω.

Note that each stationary oscillation regime, with certain amplitude and phase, corresponds to a fixed point of the reduced equations, so that the stability of those fixed points determines that of the oscillations. In the next three sections, we will carry out such analyses for several simple systems of key importance for physics and engineering.

5.4. Self-oscillations and phase locking

B. van der Pol's motivation for developing his method was the analysis of one more type of oscillatory motion: self-oscillations. Several systems, e.g., electronic rf amplifiers with positive feedback and optical media with quantum level population inversion, provide convenient means for the compensation, and even over-compensation, of the intrinsic energy losses in oscillators. Phenomenologically, this effect may be described as the change of the sign of the damping coefficient δ from positive to negative. Since for small oscillations the equation of motion is still linear, we may use Eq. (9) to describe its general solution. This equation shows that at δ < 0, even infinitesimal deviations from equilibrium (say, due to unavoidable fluctuations) lead to oscillations with an exponentially growing amplitude. Of course, in any real system such growth cannot persist infinitely, and has to be limited by this or that effect – e.g., in the above examples, respectively, by the amplifier's saturation and the quantum level population's exhaustion.

In many cases, the amplitude limitation may be described reasonably well by making the following replacement:

$$2\delta\dot q \;\rightarrow\; 2\delta\dot q + \beta\dot q^3,\tag{5.61}$$

with β > 0. Let us analyze the effects of such nonlinear damping by applying van der Pol's approach22 to the corresponding differential equation:

$$\ddot q + 2\delta\dot q + \beta\dot q^3 + \omega_0^2 q = 0.\tag{5.62}$$

Moving the dissipative and detuning terms to the right-hand side, and taking them for f in the canonical Eq. (38), we can easily calculate the right-hand sides of the reduced equations (57a), getting23

$$\dot A = -\delta(A)\,A, \qquad \text{where } \delta(A) \equiv \delta + \frac{3}{8}\beta\omega^2 A^2,\tag{5.63a}$$

22 In his original work, B. van der Pol considered a very similar equation (frequently called the van der Pol oscillator) that differs from Eq. (62) only by the form of the nonlinear term: q̇³ → q²q̇, and has very similar properties.

23 For that, one needs to use the trigonometric identity sin³Ψ = (3/4)sinΨ − (1/4)sin3Ψ – see, e.g., MA Eq. (3.4).


$$A\dot\varphi = \xi A.\tag{5.63b}$$

The last of these equations has exactly the same form as Eq. (58b) for the case of decaying oscillations, and hence shows that the self-oscillations (if they happen, i.e. if A ≠ 0) have the own frequency ω₀ of the oscillator – cf. Eq. (59). However, Eq. (63a) is more substantive. If the initial damping δ is positive, it has only the trivial fixed point, A₀ = 0 (which describes the oscillator at rest), but if δ is negative, there is also another fixed point,

$$A_1 = 2q_0, \qquad \text{where } q_0 \equiv \left(\frac{-2\delta}{3\beta\omega_0^2}\right)^{1/2}, \qquad \text{for } \delta < 0,\tag{5.64}$$

which describes steady self-oscillations with the non-zero amplitude A₁.
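The RWA prediction (64) may be checked against a brute-force integration of Eq. (62); the sketch below (plain-Python RK4, with arbitrary δ < 0 and β) lets the self-oscillations grow from a tiny seed and compares the steady amplitude with A₁:

```python
import math

# Integrate Eq. (62): q'' + 2*delta*q' + beta*q'^3 + w0^2*q = 0, with delta < 0,
# and compare the steady amplitude with Eq. (64): A1 = (-8*delta/(3*beta*w0^2))^(1/2).
w0, delta, beta = 1.0, -0.02, 0.1

def rhs(q, v):
    return v, -2*delta*v - beta*v**3 - w0**2*q

def rk4(q, v, h):
    k1q, k1v = rhs(q, v)
    k2q, k2v = rhs(q + h/2*k1q, v + h/2*k1v)
    k3q, k3v = rhs(q + h/2*k2q, v + h/2*k2v)
    k4q, k4v = rhs(q + h*k3q, v + h*k3v)
    return (q + h/6*(k1q + 2*k2q + 2*k3q + k4q),
            v + h/6*(k1v + 2*k2v + 2*k3v + k4v))

q, v, h = 1e-3, 0.0, 0.01
for _ in range(60_000):             # t = 600: well past the growth/settling transient
    q, v = rk4(q, v, h)

A_num = math.hypot(q, v/w0)         # slow amplitude of the nearly sinusoidal motion
A_1 = math.sqrt(-8*delta/(3*beta*w0**2))
print(A_num, A_1)                   # A_1 is close to 0.73
```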

To understand which of these points is stable, let us apply the general approach discussed in Sec. 3.2, the linearization of the equations of motion, to Eq. (63a). For the trivial fixed point A₀ = 0, the linearization is reduced to discarding the nonlinear term in the definition of the amplitude-dependent damping δ(A). The resulting linear equation, Ȧ = −δA, evidently shows that the system's equilibrium point A = A₀ = 0 is stable at δ > 0 and unstable at δ < 0. (This self-excitation condition was already discussed above.) On the other hand, the linearization near the non-trivial fixed point A₁ requires a bit more math: in the first order in Ã ≡ A − A₁ → 0, we get

$$\dot{\tilde A} = -\delta(A_1 + \tilde A)\,(A_1 + \tilde A) \approx -\left[\delta + \frac{3}{8}\beta\omega^2\big(A_1^2 + 2A_1\tilde A\big)\right]\big(A_1 + \tilde A\big) \approx -\left[\delta + \frac{3}{8}\beta\omega^2 A_1^2\right]\tilde A - \frac{3}{4}\beta\omega^2 A_1^2\,\tilde A = 2\delta\tilde A,\tag{5.65}$$

where Eq. (64) has been used to eliminate A₁. We see that the fixed point A₁ (and hence the self-oscillation process) is stable as soon as it exists (δ < 0) – similarly to the situation in our "testbed problem" (Fig. 2.1), besides that in our current, dissipative system, the stability is "actual" rather than "orbital" – see Sec. 6 for more on this issue.

Now let us consider another important problem: the effect of an external oscillating force on a self-excited oscillator. If the force is sufficiently small, its effects on the self-excitation condition and the oscillation amplitude are negligible. However, if the frequency ω of such a weak force is close to the own frequency ω₀ of the oscillator, it may lead to phase locking24 – also called "synchronization", though the latter term also has a much broader meaning. In this effect, the oscillation frequency deviates from ω₀, and becomes exactly equal to the external force's frequency ω, within a certain range

$$\omega_0 - \Delta < \omega < \omega_0 + \Delta.\tag{5.66}$$

To prove this fact, and also to calculate the phase-locking range width 2Δ, we may repeat the calculation of the right-hand sides of the reduced equations (57a), adding the term f₀cos ωt to the right-hand side of Eq. (62) – cf. Eqs. (42)-(43). This addition modifies Eqs. (63) as follows:25

$$\dot A = -\delta(A)\,A + \frac{f_0}{2\omega}\sin\varphi,\tag{5.67a}$$

$$A\dot\varphi = \xi A + \frac{f_0}{2\omega}\cos\varphi.\tag{5.67b}$$

24 Apparently, the phase locking was first noticed by the same C. Huygens for pendulum clocks.

25 Actually, this result should be evident, even without calculations, from the comparison of Eqs. (60) and (63).


If the system is self-excited, and the external force is weak, its effect on the oscillation amplitude is small, and in the first approximation in f 0 we can take A to be constant and equal to the value A 1 given by Eq. (64). Plugging this approximation into Eq. (67b), we get a very simple equation26

Phase-locking equation:

$$\dot\varphi = \xi + \Delta\cos\varphi,\tag{5.68}$$

where in our current case

$$\Delta \equiv \frac{f_0}{2\omega A_1}.\tag{5.69}$$

Within the range −Δ < ξ < +Δ, Eq. (68) has two fixed points on each 2π-segment of the variable φ:

$$\varphi_\pm = \pm\cos^{-1}\left(-\frac{\xi}{\Delta}\right) + 2\pi n.\tag{5.70}$$

It is easy to linearize Eq. (68) near each point to analyze their stability in our usual way; however, let me use this case to demonstrate another convenient way to do this in 1D systems, using the so-called phase plane [φ, φ̇] – see Fig. 5, where the red line shows the right-hand side of Eq. (68).

Fig. 5.5. The phase plane of a phase-locked oscillator, for the particular case ξ = Δ/2, f₀ > 0.

Since, according to Eq. (68), positive values of the plotted function correspond to the growth of φ in time and vice versa, we may draw the arrows showing the direction of the phase evolution. From this graphics, it is clear that one of these fixed points (for f₀ > 0, φ₊) is stable, while its counterpart (in this case, φ₋) is unstable. Hence the magnitude of Δ given by Eq. (69) is indeed the phase-locking range (or rather its half) that we wanted to find. Note that the range is proportional to the amplitude of the phase-locking signal – perhaps the most important quantitative feature of this effect.

To complete our simple analysis, based on the assumption of fixed oscillation amplitude, we need to find the condition of its validity. For that, we may linearize Eq. (67a), for the stationary case, near the value A 1, just as we have done in Eq. (65) for the transient process. The stationary result,

$$\tilde A \equiv A - A_1 = -\frac{1}{2\delta}\,\frac{f_0}{2\omega}\sin\varphi \equiv -A_1\frac{\Delta}{2\delta}\sin\varphi,\tag{5.71}$$

shows that our assumption, |Ã| << A₁, and hence the final result (69), are valid if the calculated phase-locking range 2Δ is much smaller than 4|δ|.

26 This equation is ubiquitous in phase-locking system descriptions, including even some digital electronic circuits used for that purpose – at the proper re-definition of the phase difference φ.


5.5. Parametric excitation

In both problems solved in the last section, the stability analysis was easy because it could be carried out for just one slow variable, either the amplitude or the phase. More generally, such an analysis of the reduced equations involves both of these variables. A classical example of such a situation is provided by one important physical effect: the parametric excitation of oscillations. A simple example of such excitation is given by a pendulum with a variable parameter, for example, the suspension length l(t) – see Fig. 6. Experiments27 and numerical simulations show that if the length is changed periodically (modulated) with some frequency 2ω that is close to 2ω₀, and with a sufficiently large depth Δl, the equilibrium position of the pendulum becomes unstable, and it starts oscillating with the frequency ω equal exactly to the half of the modulation frequency – and hence only approximately equal to the average own frequency ω₀ of the oscillator.

Fig. 5.6. Parametric excitation of a pendulum.

For an elementary analysis of this effect, we may consider the simplest case when the oscillations are small. At the lowest point (θ = 0), where the pendulum moves with the highest velocity v_max, the suspension string's tension T is higher than mg by the centripetal force: T_max = mg + mv²_max/l. On the contrary, at the maximum deviation of the pendulum from the equilibrium, the tension is lower than mg, because of the string's tilt: T_min = mg cos θ_max. Using the energy conservation, E = mv²_max/2 = mgl(1 − cos θ_max), we may express these values as T_max = mg + 2E/l and T_min = mg − E/l. Now, if during each oscillation period the string is pulled up slightly by Δl (with Δl << l) at each of its two passages through the lowest point, and is let to go down by the same amount at each of the two points of the maximum deviation, the net work of the external force per period is positive:

$$\Delta W = 2\big(T_{\max} - T_{\min}\big)\Delta l = 6\frac{\Delta l}{l}E,\tag{5.72}$$

and hence increases the oscillator's energy. If the parameter modulation depth Δl is sufficient, this increase may overcompensate the energy drained out by damping during the same period. Quantitatively, Eq. (10) shows that low damping (δ << ω₀) leads to the following energy decrease,

$$|\Delta E| \approx 4\pi\frac{\delta}{\omega_0}E,\tag{5.73}$$

per oscillation period. Comparing Eqs. (72) and (73), we see that the net energy flow into the oscillations is positive, ΔW − |ΔE| > 0, i.e. the oscillation amplitude has to grow, if28

27 The simplest experiments of this kind may be done with the usual playground swings, where moving your body up and down moves the system's c.o.m. position, and hence the effective length l_ef of the support – see Eq. (4.41).

28 Modulation of the pendulum's mass (say, by periodic pumping of water in and out of a suspended bottle) gives a qualitatively similar result. Note, however, that parametric oscillations cannot be excited by modulating every oscillator's parameter – for example, not the oscillator's damping coefficient (at least if it stays positive at all times), because this does not change the system's energy, just the energy drain rate.

$$\frac{\Delta l}{l} > \frac{2\pi}{3}\frac{\delta}{\omega_0} \equiv \frac{\pi}{3Q}.\tag{5.74}$$

Since this result is independent of the oscillation energy E, the growth of energy and amplitude is exponential (until E becomes so large that some of our assumptions fail), so that Eq. (74) is the condition of parametric excitation – in this simple model.
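For a sense of scale, condition (74) ties the threshold modulation depth only to the oscillator's quality factor Q ≡ ω₀/2δ; a one-line evaluation (sample Q values only):

```python
import math

# Threshold pull depth from Eq. (74): (dl/l)_min = pi/(3*Q).
threshold = {Q: math.pi/(3*Q) for Q in (10, 100, 1000)}
print(threshold)   # e.g. Q = 100 requires dl/l of only about 1%
```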

However, this result does not account for a possible difference between the oscillation frequency ω and the eigenfrequency ω₀, and also does not clarify whether the best phase shift between the oscillations and the parameter modulation, assumed in the above calculation, may be sustained automatically. In order to address these issues, we may apply the van der Pol approach to a simple but reasonable model:

$$\ddot q + 2\delta\dot q + \omega_0^2\big(1 + \mu\cos 2\omega t\big)q = 0,\tag{5.75}$$

describing the parametric excitation in a linear oscillator with a sinusoidal modulation of the parameter ω₀²(t). Rewriting this equation in the canonical form (38),

$$\ddot q + \omega^2 q = f(t, q, \dot q) \equiv -2\delta\dot q + 2\xi\omega q - \mu\omega^2 q\cos 2\omega t,\tag{5.76}$$

and assuming that the dimensionless ratios δ/ω and ξ/ω, and the modulation depth μ are all much less than 1, we may use the general Eqs. (57a) to get the following reduced equations:

$$\dot A = -\delta A - \frac{\mu\omega}{4}A\sin 2\varphi, \qquad A\dot\varphi = \xi A - \frac{\mu\omega}{4}A\cos 2\varphi.\tag{5.77}$$

These equations evidently have a fixed point, with A₀ = 0, but its stability analysis (though possible) is not absolutely straightforward, because the phase φ of oscillations is undetermined at that point. In order to avoid this (technical rather than conceptual) difficulty, we may use, instead of the real amplitude and phase of the oscillations, either their complex amplitude a = A exp{iφ}, or its components u and v – see Eqs. (4). Indeed, for our function f, Eq. (57b) gives

$$\dot a = (-\delta + i\xi)\,a - i\frac{\mu\omega}{4}a^*,\tag{5.78}$$

while Eqs. (57c) yield

$$\dot u = -\delta u - \left(\xi + \frac{\mu\omega}{4}\right)v, \qquad \dot v = -\delta v + \left(\xi - \frac{\mu\omega}{4}\right)u.\tag{5.79}$$

We see that in contrast to Eqs. (77), in the "Cartesian coordinates" {u, v} the trivial fixed point A₀ = 0 (i.e. u₀ = v₀ = 0) is absolutely regular. Moreover, Eqs. (78)-(79) are already linear, so they do not require any additional linearization. Thus we may use the same approach as was already used in Secs. 3.2 and 5.1, i.e. look for the solution of Eqs. (79) in the exponential form exp{λt}. However, now


we are dealing with two variables, and should allow them to have, for each value of λ, a certain ratio u/v. For that, we may take the partial solution in the form

$$u = c_u e^{\lambda t}, \qquad v = c_v e^{\lambda t},\tag{5.80}$$

where the constants c_u and c_v are frequently called the distribution coefficients. Plugging this solution into Eqs. (79), we get from them the following system of two linear algebraic equations:

$$(\lambda + \delta)\,c_u + \left(\xi + \frac{\mu\omega}{4}\right)c_v = 0, \qquad -\left(\xi - \frac{\mu\omega}{4}\right)c_u + (\lambda + \delta)\,c_v = 0.\tag{5.81}$$

The characteristic equation of this system, i.e. the condition of compatibility of Eqs. (81),

$$(\lambda + \delta)^2 + \left(\xi + \frac{\mu\omega}{4}\right)\left(\xi - \frac{\mu\omega}{4}\right) = 0, \quad\text{i.e.}\quad (\lambda + \delta)^2 - \left[\left(\frac{\mu\omega}{4}\right)^2 - \xi^2\right] = 0,\tag{5.82}$$

has two roots:

$$\lambda_\pm = -\delta \pm \left[\left(\frac{\mu\omega}{4}\right)^2 - \xi^2\right]^{1/2}.\tag{5.83}$$

Requiring the fixed point to be unstable, Re λ₊ > 0, we get the parametric excitation condition

$$\frac{\mu\omega}{4} > \big(\delta^2 + \xi^2\big)^{1/2}.\tag{5.84}$$

Thus the parametric excitation may indeed happen without any external phase control: the arising oscillations self-adjust their phase to pick up energy from the external source responsible for the periodic parameter variation.
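The threshold (84) can also be probed by integrating the initial Eq. (75) directly; in the sketch below (plain-Python RK4 at exact tuning, ω = ω₀, with an arbitrary δ), a modulation depth twice the threshold value μ_t = 4δ/ω gives exponential growth, while half of it gives decay:

```python
import math

# Integrate Eq. (75): q'' + 2*delta*q' + w0^2*(1 + mu*cos(2*w*t))*q = 0
# at exact tuning (w = w0), for mu above/below the threshold mu_t = 4*delta/w.
w0, delta = 1.0, 0.02                    # threshold: mu_t = 0.08

def growth(mu, t_max=400.0, h=0.005):
    """Amplitude gain over t_max, starting from a tiny seed."""
    q, v, t = 1e-3, 0.0, 0.0
    def rhs(t, q, v):
        return v, -2*delta*v - w0**2*(1 + mu*math.cos(2*w0*t))*q
    while t < t_max:
        k1q, k1v = rhs(t, q, v)
        k2q, k2v = rhs(t + h/2, q + h/2*k1q, v + h/2*k1v)
        k3q, k3v = rhs(t + h/2, q + h/2*k2q, v + h/2*k2v)
        k4q, k4v = rhs(t + h, q + h*k3q, v + h*k3v)
        q += h/6*(k1q + 2*k2q + 2*k3q + k4q)
        v += h/6*(k1v + 2*k2v + 2*k3v + k4v)
        t += h
    return math.hypot(q, v/w0)/1e-3

g_above = growth(mu=0.16)   # above threshold
g_below = growth(mu=0.04)   # below threshold
print(g_above, g_below)     # growth by orders of magnitude vs. decay
```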

Our key result (84) may be compared with two other calculations. First, in the case of negligible damping (δ = 0), Eq. (84) turns into the condition μω/4 > |ξ|. This result may be compared with the well-developed theory of the so-called Mathieu equation, whose canonical form is

$$\frac{d^2 y}{dv^2} + (a - 2b\cos 2v)\,y = 0.\tag{5.85}$$

With the substitutions y → q, v ≡ ωt, a ≡ (ω₀/ω)², and b = −μa/2, this equation is just a particular case of Eq. (75) for δ = 0. In terms of Eq. (85), our result (84) may be re-written just as |b| > |a − 1|, and is supposed to be valid for |b| << 1. The boundaries given by this condition are shown with dashed lines in Fig. 7, together with the numerically calculated29 stability boundaries of the Mathieu equation. One can see that the van der Pol approximation works just fine within its applicability limit (and a bit beyond it :-), though it fails to predict some other important features of the Mathieu equation, such as the existence of higher, narrower regions of parametric excitation (at a ≈ n², i.e. ω ≈ ω₀/n, for all integer n), and some spill-over of the stability region into the lower half-plane a < 0.30 The reason for these failures is the fact that, as can be seen in Fig. 7, these phenomena do not appear in the first approximation in the parameter modulation amplitude μ ∝ b, which is the realm of the reduced equations (79).

29 Such calculations are substantially simplified by the use of the so-called Floquet theorem, which is also the mathematical basis for the discussion of wave propagation in periodic media – see the next chapter.

Fig. 5.7. Stability boundaries of the Mathieu equation (85), as calculated: numerically (solid curves) and using the reduced equations (79) (dashed straight lines). In the regions numbered by various n, the trivial solution y = 0 of the equation is unstable, i.e. its general solution y(v) includes an exponentially growing term.

In the opposite case of non-zero damping but exact tuning (ξ = 0, δ ≠ 0), Eq. (84) becomes

$$\mu > \frac{4\delta}{\omega_0} \equiv \frac{2}{Q}.\tag{5.86}$$

This condition may be compared with Eq. (74) by taking Δl/l = 2μ. The comparison shows that while the structure of these conditions is similar, their numerical coefficients are different by a factor close to 2. The first reason for this difference is that the instant parameter change at the optimal moments of time is more efficient than the smooth, sinusoidal variation described by Eq. (75). Even more significantly, a change of the pendulum's length modulates not only its frequency ω₀ = (g/l)^1/2, as Eq. (75) implies, but also its mechanical impedance Z ∝ (gl)^1/2 – the notion to be discussed in detail in the next chapter. (The analysis of the general case of the simultaneous modulation of ω₀ and Z is left for the reader's exercise.)

To conclude this section, let me summarize the important differences between the excitation of parametric and forced oscillations:

(i) Parametric oscillations completely disappear outside of their excitation range, while the forced oscillations have a non-zero amplitude for any frequency and amplitude of the external force – see Eq. (18).

(ii) While the parametric excitation may be described by linear equations such as Eq. (75), such equations cannot predict a finite oscillation amplitude within the excitation range, even at finite damping. In order to describe stationary parametric oscillations, some nonlinear effects have to be taken into account. (I am leaving analyses of such effects for the reader's exercise.)

30 This region (for b << 1: −b²/2 < a < 0) describes, in particular, the counter-intuitive stability of the so-called Kapitza pendulum – an inverted pendulum with its suspension point oscillated fast in the vertical direction – the effect first observed by Andrew Stephenson in 1908.

One more important feature of parametric oscillations will be discussed in the next section.

5.6. Fixed point classification

The reduced equations (79) give us a good pretext for a brief discussion of an important general topic of dynamics: the classification and stability of the fixed points of a system described by two autonomous, first-order differential equations with time-independent coefficients.31 After their linearization near a fixed point, the equations for the deviations can always be expressed in a form similar to Eq. (79):

$$\dot{\tilde q}_1 = M_{11}\tilde q_1 + M_{12}\tilde q_2, \qquad \dot{\tilde q}_2 = M_{21}\tilde q_1 + M_{22}\tilde q_2,\tag{5.87}$$

where M_jj′ (with j, j′ = 1, 2) are some real scalars, which may be viewed as the elements of a 2×2 matrix M. Looking for an exponential solution of the type (80),

$$\tilde q_1 = c_1 e^{\lambda t}, \qquad \tilde q_2 = c_2 e^{\lambda t},\tag{5.88}$$

we get a general system of two linear equations for the distribution coefficients c₁,₂:

$$(M_{11} - \lambda)\,c_1 + M_{12}\,c_2 = 0, \qquad M_{21}\,c_1 + (M_{22} - \lambda)\,c_2 = 0.\tag{5.89}$$

These equations are consistent if

$$\begin{vmatrix} M_{11} - \lambda & M_{12} \\ M_{21} & M_{22} - \lambda \end{vmatrix} = 0,\tag{5.90}$$

giving us a quadratic characteristic equation:

$$\lambda^2 - \big(M_{11} + M_{22}\big)\lambda + \big(M_{11}M_{22} - M_{12}M_{21}\big) = 0.\tag{5.91}$$

Its solution,32

$$\lambda_\pm = \frac{1}{2}\big(M_{11} + M_{22}\big) \pm \frac{1}{2}\Big[\big(M_{11} - M_{22}\big)^2 + 4M_{12}M_{21}\Big]^{1/2},\tag{5.92}$$

shows that the following situations are possible:

A. The expression under the square root, (M₁₁ − M₂₂)² + 4M₁₂M₂₁, is positive. In this case, both characteristic exponents λ± are real, and we can distinguish three sub-cases:

31 Autonomous systems described by a single, second-order homogeneous differential equation, say F(q, q̇, q̈) = 0, also belong to this class, because we may always treat the generalized velocity q̇ ≡ v as a new variable, and use this definition as one first-order differential equation, while the initial equation, in the form F(q, v, v̇) = 0, as the second first-order equation.

32 In the language of linear algebra, λ± are the eigenvalues, and the corresponding sets of the distribution coefficients [c₁, c₂] are the eigenvectors, of the matrix M with elements M_jj′.


(i) Both λ₊ and λ₋ are negative. As Eqs. (88) show, in this case the deviations q̃ tend to zero at t → ∞, i.e. the fixed point is stable. Because of the generally different magnitudes of the exponents λ±, the process represented on the phase plane [q̃₁, q̃₂] (see Fig. 8a, with the solid arrows, for an example) may be seen as consisting of two stages: first, a faster (with the rate |λ₋| > |λ₊|) relaxation to a linear asymptote,33 and then a slower decline, with the rate |λ₊|, along this line, i.e. at a virtually fixed ratio of the variables. Such a fixed point is called the stable node.

Fig. 5.8. Typical trajectories on the phase plane [q̃₁, q̃₂] near fixed points of different types: (a) node, (b) saddle, (c) focus, and (d) center. The particular matrices M used for the first three panels correspond to Eqs. (81) for the parametric excitation, with δ = ξ and three different values of the ratio μω/4ξ: (a) 1.25, (b) 1.6, and (c) 0.

33 The asymptote direction may be found by plugging the value λ₊ back into Eq. (89) and finding the corresponding ratio c₁/c₂. Note that the separation of the system's evolution into the two stages is conditional, being most vivid in the case of a large difference between the exponents λ₊ and λ₋.


(ii) Both λ₊ and λ₋ are positive. This case of an unstable node differs from the previous one only by the direction of motion along the phase plane trajectories – see the dashed arrows in Fig. 8a. Here the ratio of the variables also soon approaches a constant, now the one corresponding to λ₊ > λ₋.

(iii) Finally, in the case of a saddle (λ₊ > 0, λ₋ < 0), the system's dynamics is different (Fig. 8b): after the rate-λ₋ relaxation to an asymptote, the perturbation starts to grow, with the rate λ₊, along one of two opposite directions. (The direction depends on which side of another straight line, called the separatrix, the system was initially.) So the saddle34 is an unstable fixed point.
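The classification rules of this section are easy to wrap into code; the sketch below (a hypothetical helper written for this illustration; the "focus" and "center" labels anticipate the negative-discriminant case discussed next) labels a fixed point directly from the eigenvalues (92) of its linearization matrix M, and reproduces the first three panels of Fig. 8:

```python
import cmath

# Classify the fixed point of Eqs. (87) from the characteristic exponents (92)
# of the 2x2 matrix M given as [[M11, M12], [M21, M22]].
def classify(M):
    (m11, m12), (m21, m22) = M
    tr = m11 + m22
    disc = (m11 - m22)**2 + 4*m12*m21        # expression under the square root in (92)
    lam_p = (tr + cmath.sqrt(disc))/2
    lam_m = (tr - cmath.sqrt(disc))/2
    if disc >= 0:                            # real exponents: node or saddle
        if lam_p.real > 0 and lam_m.real > 0:
            return "unstable node"
        if lam_p.real < 0 and lam_m.real < 0:
            return "stable node"
        return "saddle"
    if tr == 0:                              # purely imaginary exponents
        return "center"
    return "stable focus" if tr < 0 else "unstable focus"

# Matrices of the form (79)/(81), with delta = xi = 1 and mu*omega/4 = 1.25, 1.6, 0
# (cf. panels a-c of Fig. 8):
print(classify([[-1.0, -2.25], [-0.25, -1.0]]))  # stable node
print(classify([[-1.0, -2.6], [-0.6, -1.0]]))    # saddle
print(classify([[-1.0, -1.0], [1.0, -1.0]]))     # stable focus
```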