Essential Graduate Physics by Konstantin K. Likharev

$$ \frac{d\mathscr{P}}{dA} = \frac{\pi^2}{60}\frac{T^4}{\hbar^3 c^2} \equiv \sigma T_K^4 , \tag{2.89a} $$

(Stefan law)

where $\sigma$ is the Stefan-Boltzmann constant

$$ \sigma \equiv \frac{\pi^2}{60}\frac{k_B^4}{\hbar^3 c^2} \approx 5.67\times 10^{-8}\,\frac{\mathrm{W}}{\mathrm{m^2 K^4}} , \tag{2.89b} $$

(Stefan-Boltzmann constant)

where $T_K \equiv T/k_B$ is the temperature in kelvin.
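As a quick numeric sanity check of Eq. (2.89b), the Stefan-Boltzmann constant may be recomputed from the values of $k_B$, $\hbar$, and $c$ (a minimal sketch in Python; the variable names are mine):

```python
import math

# SI values (exact since the 2019 redefinition of the SI base units)
k_B = 1.380649e-23      # J/K
hbar = 1.054571817e-34  # J*s
c = 2.99792458e8        # m/s

# Eq. (2.89b): sigma = pi^2 k_B^4 / (60 hbar^3 c^2)
sigma = math.pi**2 * k_B**4 / (60 * hbar**3 * c**2)
print(sigma)  # ~5.67e-8 W/(m^2 K^4), the familiar tabulated value
```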

By this point, the thoughtful reader should have an important concern ready: Eq. (84) and hence Eq. (88) are based on Eq. (72) for the average energy of each oscillator, referred to its ground-state energy $\hbar\omega/2$. However, the radiation power should not depend on the energy origin; why have we not included the ground energy of each oscillator into the integration (88), as we did in Eq. (80)? The answer is that usual radiation detectors only measure the difference between the power $\mathscr{P}_{in}$ of the incident radiation (say, that of a blackbody surface with temperature $T$) and their own back-radiation power $\mathscr{P}_{out}$, corresponding to some effective temperature $T_d$ of the detector – see Fig. 10. But however low $T_d$ is, the temperature-independent contribution $\hbar\omega/2$ of the ground-state energy to the back radiation is always there. Hence, the term $\hbar\omega/2$ drops out from the balance, and cannot be detected – at least in this simple way. This is the reason why we had the right to ignore this contribution in Eq. (88) – very fortunately, because it would lead to the integral's divergence at its upper limit. However, let me repeat that the ground-state energy of the electromagnetic field oscillators is physically real – and important – see Sec. 5.5 below.

Fig. 2.10. The power balance at the electromagnetic radiation power measurement. (The figure shows a detector at effective temperature $T_d$ receiving the incident power $\mathscr{P}_{in}$, determined by the radiation spectrum $E(\omega, T)$, and emitting the back-radiation power $\mathscr{P}_{out}$, determined by $E(\omega, T_d)$.)

One more interesting result may be deduced from the free energy F of the electromagnetic radiation, which may be calculated by integration of Eq. (73) over all the modes, with the appropriate weight (83):

49 Note that the heat capacity $C_V \equiv (\partial E/\partial T)_V$, following from Eq. (88), is proportional to $T^3$ at any temperature, and hence does not obey the trend $C_V \to \mathrm{const}$ at $T \to \infty$. This is the result of the unlimited growth, with temperature, of the number of thermally-excited field oscillators with frequencies $\omega$ below $T/\hbar$.

50 Its functional part ($E \propto T^4$) was deduced in 1879 by Joseph Stefan from earlier experiments by John Tyndall.

Theoretically, it was proved in 1884 by L. Boltzmann, using a result derived earlier by Adolfo Bartoli from the Maxwell equations for the electromagnetic field – all well before Max Planck’s work.

Chapter 2

Page 26 of 44

Essential Graduate Physics

SM: Statistical Mechanics

 

$$ F = \sum_k T\ln\left(1-e^{-\hbar\omega_k/T}\right) \to \int_0^\infty T\ln\left(1-e^{-\hbar\omega/T}\right)\frac{dN}{d\omega}\,d\omega = \int_0^\infty T\ln\left(1-e^{-\hbar\omega/T}\right)V\frac{\omega^2}{\pi^2 c^3}\,d\omega . \tag{2.90} $$

Representing $\omega^2 d\omega$ as $d(\omega^3)/3$, we can readily work out this integral by parts, reducing it to a table integral similar to that in Eq. (88), and getting a surprisingly simple result:

$$ F = -V\frac{\pi^2}{45}\frac{T^4}{\hbar^3 c^3} \equiv -\frac{E}{3} . \tag{2.91} $$
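The "table integral" behind Eq. (2.91) is $\int_0^\infty x^2\ln(1-e^{-x})\,dx = -\pi^4/45$ (integration by parts reduces it to $-\frac{1}{3}\int_0^\infty x^3\,dx/(e^x-1)$). A short numeric check of this value, using Simpson's rule (a sketch; the integration interval and grid size are my arbitrary choices):

```python
import math

def integrand(x):
    # x^2 ln(1 - e^{-x}); vanishes at x -> 0 and decays as x^2 e^{-x} at large x
    return x * x * math.log(-math.expm1(-x)) if x > 0 else 0.0

# Composite Simpson's rule on [0, 40]; the tail beyond 40 is negligible
a, b, n = 0.0, 40.0, 100000  # n must be even
h = (b - a) / n
s = integrand(a) + integrand(b)
for i in range(1, n):
    s += (4 if i % 2 else 2) * integrand(a + i * h)
integral = s * h / 3

print(integral, -math.pi**4 / 45)  # both ~ -2.1646
```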

Now we can use the second of the general thermodynamic relations (1.35) to calculate the pressure exerted by the radiation on the walls of the containing volume V:51

$$ P = -\left(\frac{\partial F}{\partial V}\right)_T = \frac{\pi^2}{45}\frac{T^4}{\hbar^3 c^3} = \frac{E}{3V} . \tag{2.92a} $$

Rewritten in the form

$$ PV = \frac{E}{3} , \tag{2.92b} $$

(Photon gas: $PV$ vs. $E$)

this result may be considered as the equation of state of the electromagnetic field, i.e. from the quantum-mechanical point of view, of the photon gas. Note that the equation of state (1.44) of the ideal classical gas may be represented in a similar form, but with a coefficient generally different from Eq. (92).

Indeed, according to the equipartition theorem, for an ideal gas of non-relativistic particles whose internal degrees of freedom are in a fixed (say, ground) state, the temperature-dependent energy is that of the three translational "half-degrees of freedom", $E = 3N(T/2)$. Expressing from here the product $NT = 2E/3$, and plugging it into Eq. (1.44), we get a relation similar to Eq. (92), but with a twice larger factor before $E$. On the other hand, a relativistic treatment of the classical gas shows that Eq. (92) is valid for any gas in the ultra-relativistic limit, $T \gg mc^2$, where $m$ is the rest mass of the gas' particle.

Evidently, photons (i.e. particles with m = 0) satisfy this condition at any energy.52
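To get a feel for the numbers in Eq. (2.92), here is a sketch computing the photon-gas pressure $P = E/3V$ at a given temperature, using the energy density $E/V = \pi^2 T^4/15\hbar^3 c^3$ that follows from Eq. (88) (the function name is mine):

```python
import math

k_B = 1.380649e-23      # J/K
hbar = 1.054571817e-34  # J*s
c = 2.99792458e8        # m/s

def photon_pressure(T_kelvin):
    """Eq. (2.92): P = E/3V, with energy density E/V = pi^2 T^4 / (15 hbar^3 c^3)."""
    T = k_B * T_kelvin  # temperature in energy units, as used in the text
    energy_density = math.pi**2 / 15 * T**4 / (hbar * c)**3
    return energy_density / 3

print(photon_pressure(300))  # ~2e-6 Pa: blackbody radiation pressure is tiny at room temperature
```

Note the steep $T^4$ scaling: doubling the temperature raises the pressure 16-fold.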

Finally, let me note that Eq. (92) allows for the following interesting interpretation. The last of Eqs. (1.60), being applied to Eq. (92), shows that in this particular case the grand thermodynamic potential $\Omega$ equals $(-E/3)$, so that according to Eq. (91), it is equal to $F$. But according to the definition of $\Omega$, i.e. the first of Eqs. (1.60), this means that the chemical potential of the electromagnetic field excitations (photons) vanishes:

$$ \mu = \frac{F - \Omega}{N} = 0 . \tag{2.93} $$

In Sec. 8 below, we will see that the same result follows from the comparison of Eq. (72) and the general Bose-Einstein distribution for arbitrary bosons. So, from the statistical point of view, photons may be considered as bosons with zero chemical potential.

(ii) Specific heat of solids. The heat capacity of solids is readily measurable, and in the early 1900s, its experimentally observed temperature dependence served as an important test for the then-emerging quantum theories.

51 This formula may also be derived from the expression for the forces exerted by the electromagnetic radiation on the walls (see, e.g., EM Sec. 9.8), but the above calculation is much simpler.

52 Note that according to Eqs. (1.44), (88), and (92), the difference between the equations of state of the photon gas and an ideal gas of non-relativistic particles, expressed in the more usual form $P = P(V, T)$, is much more dramatic: $P \propto T^4 V^0$ vs. $P \propto T^1 V^{-1}$.

However, the theoretical calculation of $C_V$ is not simple53 – even for insulators, whose specific heat at realistic temperatures is due to thermally-induced vibrations of their crystal lattice alone.54 Indeed, at relatively low frequencies, a solid may be treated as an elastic continuum. Such a continuum supports three different modes of mechanical waves with the same frequency $\omega$, which all obey linear dispersion laws, $\omega = vk$, but the velocity $v = v_l$ for one of these modes (the longitudinal sound) is higher than that ($v_t$) of the two other modes (the transverse sound).55 At such frequencies, the wave mode density may be described by an evident generalization of Eq. (83):

$$ dN = V\frac{1}{(2\pi)^3}\left(\frac{1}{v_l^3} + \frac{2}{v_t^3}\right)4\pi\omega^2\,d\omega . \tag{2.94a} $$

For what follows, it is convenient to rewrite this relation in a form similar to Eq. (83):

$$ dN = V\frac{3}{(2\pi)^3}\frac{4\pi\omega^2\,d\omega}{v^3} \equiv \frac{3V\omega^2}{2\pi^2 v^3}\,d\omega , \quad \text{with } \frac{1}{v^3} \equiv \frac{1}{3}\left(\frac{1}{v_l^3} + \frac{2}{v_t^3}\right) . \tag{2.94b} $$

However, the basic wave theory shows56 that as the frequency $\omega$ of a sound wave in a periodic structure is increased so that its half-wavelength $\pi/k$ approaches the crystal period $d$, the dispersion law $\omega(k)$ becomes nonlinear before the frequency reaches its maximum at $k = \pi/d$. To make things even more complex, 3D crystals are generally anisotropic, so that the dispersion law is different in different directions of the wave propagation. As a result, the exact statistics of thermally excited sound waves, and hence the heat capacity of crystals, is rather complex and specific for each particular crystal type.

In 1912, P. Debye suggested an approximate theory of the specific heat's temperature dependence, which is in a surprisingly good agreement with experiment for many insulators, including polycrystalline and amorphous materials. In his model, the linear (acoustic) dispersion law $\omega = vk$, with the effective sound velocity $v$ defined by the second of Eqs. (94b), is assumed to be exact all the way up to some cutoff frequency $\omega_D$, the same for all three wave modes. This Debye frequency may be defined by the requirement that the total number of acoustic modes, calculated within this model from Eq. (94b),

$$ N = V\frac{3}{(2\pi)^3}\frac{4\pi}{v^3}\int_0^{\omega_D}\omega^2\,d\omega = V\frac{\omega_D^3}{2\pi^2 v^3} , \tag{2.95} $$

is equal to the universal number N = 3 nV of the degrees of freedom (and hence of independent oscillation modes) in a 3D system of nV elastically coupled particles, where n is the atomic density of the crystal, i.e. the number of atoms per unit volume.57 For this model, Eq. (72) immediately yields the following expression for the average energy and specific heat (in thermal equilibrium at temperature T ):

$$ E = V\frac{3}{(2\pi)^3}\frac{4\pi}{v^3}\int_0^{\omega_D}\frac{\hbar\omega^3\,d\omega}{e^{\hbar\omega/T}-1} \equiv 3nVT\,D(x)\Big|_{x=T_D/T} , \tag{2.96} $$

53 Due to the rather low thermal expansion of solids, the difference between their $C_V$ and $C_P$ is small.

54 In good conductors (e.g., metals), specific heat is contributed (and at low temperatures, dominated) by free electrons – see Sec. 3.3 below.

55 See, e.g., CM Sec. 7.7.

56 See, e.g., CM Sec. 6.3, in particular Fig. 6.5 and its discussion.

57 See, e.g., CM Sec. 6.2.


$$ c_V \equiv \frac{C_V}{nV} = \frac{1}{nV}\left(\frac{\partial E}{\partial T}\right)_V = 3\left[D(x) - x\frac{dD(x)}{dx}\right]_{x=T_D/T} , \tag{2.97} $$

(Debye law)

where $T_D \equiv \hbar\omega_D$ is called the Debye temperature,58 and

$$ D(x) \equiv \frac{3}{x^3}\int_0^x \frac{\xi^3\,d\xi}{e^\xi - 1} \to \begin{cases} 1, & \text{for } x \to 0, \\ \pi^4/5x^3, & \text{for } x \to \infty, \end{cases} \tag{2.98} $$

is the Debye function. Red lines in Fig. 11 show the temperature dependence of the specific heat $c_V$ (per particle) within the Debye model. At high temperatures, it approaches a constant value of three, corresponding to the energy $E = 3nVT$, in agreement with the equipartition theorem for each of three degrees of freedom (i.e. six half-degrees of freedom) of each mode. (This value of $c_V$ is known as the Dulong-Petit law.) In the opposite limit of low temperatures, the specific heat is much smaller:

$$ c_V = \frac{12\pi^4}{5}\left(\frac{T}{T_D}\right)^3 \ll 1 , \tag{2.99} $$

reflecting the reduction of the number of excited phonons with $\hbar\omega < T$ as the temperature is decreased.
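Eqs. (2.97)-(2.99) are easy to evaluate numerically. Differentiating Eq. (2.98) gives $dD/dx = -3D(x)/x + 3/(e^x-1)$, so that Eq. (2.97) becomes $c_V = 3[4D(x) - 3x/(e^x-1)]$ with $x = T_D/T$. A sketch (the function names and the quadrature grid are my choices):

```python
import math

def debye_D(x, n=2000):
    """Debye function of Eq. (2.98), by Simpson's rule; the integrand -> xi^2 as xi -> 0."""
    if x == 0:
        return 1.0
    f = lambda xi: xi**3 / math.expm1(xi) if xi > 0 else 0.0
    h = x / n
    s = f(0) + f(x) + sum((4 if i % 2 else 2) * f(i * h) for i in range(1, n))
    return (3 / x**3) * (s * h / 3)

def c_V(t):
    """Eq. (2.97), per particle; t = T/T_D."""
    x = 1 / t
    dD = -3 * debye_D(x) / x + 3 / math.expm1(x)  # dD/dx from the Leibniz rule
    return 3 * (debye_D(x) - x * dD)

print(c_V(2.0))   # close to 3: the Dulong-Petit value
print(c_V(0.05))  # matches the T^3 law (2.99): (12 pi^4 / 5) t^3
```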


Fig. 2.11. The specific heat as a function of temperature in the Debye (red lines) and Einstein (blue lines) models.

As a historic curiosity, P. Debye's work followed one by A. Einstein, who had suggested (in 1907) a simpler model of crystal vibrations. In his model, all $3nV$ independent oscillatory modes of the $nV$ atoms of the crystal have approximately the same frequency, say $\omega_E$, and Eq. (72) immediately yields

$$ E = 3nV\frac{\hbar\omega_E}{e^{\hbar\omega_E/T} - 1} , \tag{2.100} $$

so that the specific heat is functionally similar to Eq. (75):

58 In the SI units, the Debye temperature $T_D$ is of the order of a few hundred K for most simple solids (e.g., ~430 K for aluminum and ~340 K for copper), with somewhat lower values for crystals with heavy atoms (~105 K for lead), and reaches its highest value ~2200 K for diamond, with its relatively light atoms and very stiff lattice.


$$ c_V \equiv \frac{1}{nV}\left(\frac{\partial E}{\partial T}\right)_V = 3\left(\frac{\hbar\omega_E}{2T}\right)^2\frac{1}{\sinh^2(\hbar\omega_E/2T)} . \tag{2.101} $$

This dependence $c_V(T)$ is shown with blue lines in Fig. 11 (assuming, for the sake of simplicity, that $\hbar\omega_E = T_D$). At high temperatures, this result does satisfy the universal Dulong-Petit law ($c_V = 3$), but for $T \ll T_D$, Einstein's model predicts a much faster (exponential) drop of the specific heat as the temperature is reduced. (The difference between the Debye and Einstein models is not too spectacular on the linear scale, but in the log-log plot, shown on the right panel of Fig. 11, it is rather dramatic.59) The Debye model is in a much better agreement with experimental data for simple, monoatomic crystals, thus confirming the conceptual correctness of his wave-based approach.
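The exponential low-temperature drop of Einstein's Eq. (2.101), versus the $T^3$ law (2.99) of the Debye model, is easy to see numerically (a sketch, with $\hbar\omega_E = T_D$ as in Fig. 11 and $t \equiv T/T_D$; the function names are mine):

```python
import math

def c_einstein(t):
    """Eq. (2.101), per particle, with hbar*omega_E = T_D and t = T/T_D."""
    y = 1 / (2 * t)  # hbar*omega_E / 2T
    return 3 * (y / math.sinh(y))**2

def c_debye_low_T(t):
    """The low-temperature Debye law, Eq. (2.99)."""
    return 12 * math.pi**4 / 5 * t**3

print(c_einstein(5.0))                      # ~3: both models obey Dulong-Petit at high T
print(c_einstein(0.1), c_debye_low_T(0.1))  # Einstein's value is already far below Debye's
```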

Note, however, that when a genius such as Albert Einstein makes an error, there is usually some deep and important background under it. Indeed, crystals with the basic cell consisting of atoms of two or more types (such as NaCl, etc.) feature two or more separate branches of the dispersion law $\omega(k)$ – see, e.g., Fig. 12. While the lower, "acoustic" branch is virtually similar to those for monoatomic crystals and may be approximated by the Debye model, $\omega = vk$, reasonably well, the upper ("optical"60) branch does not approach $\omega = 0$ at any $k$. Moreover, for large values of the atomic mass ratio $r$, the optical branches are almost flat, with virtually $k$-independent frequencies $\omega_0$, which correspond to simple oscillations of each light atom between its heavy neighbors. For thermal excitations of such oscillations, and their contribution to the specific heat, Einstein's model (with $\omega_E = \omega_0$) gives a very good approximation, so that for such solids, the specific heat may be well described by a sum of the Debye and Einstein laws (97) and (101), with appropriate weights.

Fig. 2.12. The dispersion relation $\omega(k)$ (in arbitrary units, linear scale) for mechanical waves in a simple 1D model of a solid, with similar interparticle distances $d$ but alternating particle masses, plotted for a particular mass ratio $r = 5$ – see CM Chapter 6. The lower curve is the "acoustic" branch, and the upper one, the "optical" branch.

2.7. Grand canonical ensemble and distribution

As we have seen, the Gibbs distribution is a very convenient way to calculate the statistical and thermodynamic properties of systems with a fixed number $N$ of particles. However, for systems in which $N$ may vary, another distribution is preferable for applications. Several examples of such situations (as

59 This is why there is the following general "rule of thumb" in quantitative sciences: if you plot your data on a linear rather than log scale, you had better have a good excuse ready. (An example of a valid excuse: the variable you are plotting changes its sign within the range you want to exhibit.)

60 This term stems from the fact that at $k \to 0$, the mechanical waves corresponding to these branches have phase velocities $v_{ph} \equiv \omega(k)/k$ that are much higher than that of the acoustic waves, and may approach the speed of light.

As a result, these waves can strongly interact with electromagnetic (practically, optical) waves of the same frequency, while acoustic waves cannot.


well as the basic thermodynamics of such systems) have already been discussed in Sec. 1.5. Perhaps even more importantly, statistical distributions for systems with variable N are also applicable to some ensembles of independent particles in certain single-particle states even if the number of the particles is fixed – see the next section.

With this motivation, let us consider what is called the grand canonical ensemble (Fig. 13). It is similar to the canonical ensemble discussed in Sec. 4 (see Fig. 6) in all aspects except that now the system under study and the heat bath (in this case more often called the environment) may exchange not only heat but also particles. In this ensemble, all environments are in both the thermal and chemical equilibrium, with their temperatures $T$ and chemical potentials $\mu$ the same for all members.

Fig. 2.13. A member of the grand canonical ensemble. (The figure shows the system under study, with energies $E_{m,N}$, exchanging heat $dQ$, $dS$ and particles $dN$ with its environment, both at temperature $T$ and chemical potential $\mu$.)

Let us assume that the system of interest is also in the chemical and thermal equilibrium with its environment. Then using exactly the same arguments as in Sec. 4 (including the specification of microcanonical sub-ensembles with fixed $E_\Sigma$ and $N_\Sigma$), we may generalize Eq. (55), taking into account that the entropy $S_{env}$ of the environment is now a function of not only its energy $E_{env} = E_\Sigma - E_{m,N}$,61 but also of the number of particles $N_{env} = N_\Sigma - N$, with $E_\Sigma$ and $N_\Sigma$ fixed:

$$ \ln W_{m,N} = \ln M + \ln g_{env}(E_\Sigma - E_{m,N}, N_\Sigma - N) \equiv S_{env}(E_\Sigma - E_{m,N}, N_\Sigma - N) + \text{const} \approx S_{env}\big|_{E_\Sigma, N_\Sigma} - \left.\frac{\partial S_{env}}{\partial E_{env}}\right|_{E_\Sigma, N_\Sigma} E_{m,N} - \left.\frac{\partial S_{env}}{\partial N_{env}}\right|_{E_\Sigma, N_\Sigma} N + \text{const}. \tag{2.102} $$

To simplify this relation, let us rewrite Eq. (1.52) in the following equivalent form:

$$ dS = \frac{1}{T}dE + \frac{P}{T}dV - \frac{\mu}{T}dN . \tag{2.103} $$

Hence, if the entropy S of a system is expressed as a function of E, V, and N, then

$$ \left(\frac{\partial S}{\partial E}\right)_{V,N} = \frac{1}{T} , \quad \left(\frac{\partial S}{\partial V}\right)_{E,N} = \frac{P}{T} , \quad \left(\frac{\partial S}{\partial N}\right)_{E,V} = -\frac{\mu}{T} . \tag{2.104} $$

Applying the first one and the last one of these relations to the last form of Eq. (102), and using the equality of the temperatures $T$ and chemical potentials $\mu$ of the system under study and its environment at equilibrium (as was discussed in Sec. 1.5), we get

61 The additional index in the new notation Em,N for the energy of the system of interest reflects the fact that its spectrum is generally dependent on the number N of particles in it.


$$ \ln W_{m,N} = S_{env}(E_\Sigma, N_\Sigma) - \frac{1}{T}E_{m,N} + \frac{\mu}{T}N + \text{const} . \tag{2.105} $$

Again, exactly as at the derivation of the Gibbs distribution in Sec. 4, we may argue that since $E_{m,N}$, $T$, and $\mu$ do not depend on the choice of the environment's size, i.e. on $E_\Sigma$ and $N_\Sigma$, the probability $W_{m,N}$ for a system to have $N$ particles and be in the $m$-th quantum state in the whole grand canonical ensemble should also obey Eq. (105). As a result, we get the so-called grand canonical distribution:

$$ W_{m,N} = \frac{1}{Z_G}\exp\left\{\frac{\mu N - E_{m,N}}{T}\right\} . \tag{2.106} $$

(Grand canonical distribution)

Just as in the case of the Gibbs distribution, the constant $Z_G$ (most often called the grand statistical sum, but sometimes the "grand partition function") should be determined from the probability normalization condition, now with the summation of probabilities $W_{m,N}$ over all possible values of both $m$ and $N$:

$$ Z_G = \sum_{m,N}\exp\left\{\frac{\mu N - E_{m,N}}{T}\right\} . \tag{2.107} $$

(Grand canonical sum)

Now, using the general Eq. (29) to calculate the entropy for the distribution (106) (exactly like we did it for the canonical ensemble), we get the following expression,

$$ S = -\sum_{m,N}W_{m,N}\ln W_{m,N} = \frac{\langle E\rangle}{T} - \mu\frac{\langle N\rangle}{T} + \ln Z_G , \tag{2.108} $$

which is evidently a generalization of Eq. (62).62 We see that now the grand thermodynamic potential $\Omega$ (rather than the free energy $F$) may be expressed directly via the normalization coefficient $Z_G$:

$$ \Omega \equiv F - \mu\langle N\rangle = \langle E\rangle - TS - \mu\langle N\rangle = -T\ln Z_G = -T\ln\sum_{m,N}\exp\left\{\frac{\mu N - E_{m,N}}{T}\right\} . \tag{2.109} $$

($\Omega$ from $Z_G$)

Finally, solving the last equality for $Z_G$, and plugging the result back into Eq. (106), we can rewrite the grand canonical distribution in the form

$$ W_{m,N} = \exp\left\{\frac{\Omega + \mu N - E_{m,N}}{T}\right\} , \tag{2.110} $$

similar to Eq. (65) for the Gibbs distribution. Indeed, in the particular case when the number $N$ of particles is fixed, $\langle N\rangle = N$, so that $\Omega + \mu N = \Omega + \mu\langle N\rangle \equiv F$, Eq. (110) is reduced to Eq. (65).

2.8. Systems of independent particles

Now let us apply the general statistical distributions discussed above to a simple but very important case when the system we are considering consists of many similar particles whose explicit ("direct") interaction is negligible. As a result, each particular energy value $E_{m,N}$ of such a system may

62 The average number of particles $\langle N\rangle$ is exactly what was called $N$ in thermodynamics (see Chapter 1), but I keep this explicit notation here to make a clear distinction between this average value of the variable, and its particular values participating in Eqs. (102)-(110).

Chapter 2

Page 32 of 44

Essential Graduate Physics

SM: Statistical Mechanics

be represented as a sum of energies $\varepsilon_k$ of the particles, where the index $k$ numbers single-particle states – rather than those of the whole system, as the index $m$ does.

Let us start with the classical limit. In classical mechanics, the energy quantization effects are negligible, i.e. there is a formally infinite number of quantum states k within each finite energy interval.

However, it is convenient to keep, for the time being, the discrete-state language, with the understanding that the average number $\langle N_k\rangle$ of particles in each of these states, usually called the state occupancy, is very small. In this case, we may apply the Gibbs distribution to the canonical ensemble of single particles, and hence use it with the substitution $E_m \to \varepsilon_k$, so that Eq. (58) becomes

$$ \langle N_k\rangle = c\,\exp\left\{-\frac{\varepsilon_k}{T}\right\} , \tag{2.111} $$

(Boltzmann distribution)

where the constant c should be found from the normalization condition:

$$ \sum_k \langle N_k\rangle = 1 . \tag{2.112} $$

This is the famous Boltzmann distribution.63 Despite its formal similarity to the Gibbs distribution (58), let me emphasize the conceptual difference between these two important formulas. The Gibbs distribution describes the probability to find the whole system in one of its states with energy $E_m$, and it is always valid – more exactly, for a canonical ensemble of systems in thermodynamic equilibrium. On the other hand, the Boltzmann distribution describes the occupancy of an energy level of a single particle, and, as we will see in just a minute, is valid for quantum particles only in the classical limit $\langle N_k\rangle \ll 1$, even if they do not interact directly.
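As a minimal illustration of Eqs. (2.111)-(2.112), here is a sketch normalizing the single-particle occupancies of a few discrete levels (energies and temperature in the same arbitrary units; all names are mine):

```python
import math

def boltzmann_occupancies(energies, T):
    """Eq. (2.111): <N_k> = c exp(-eps_k/T), with c fixed by the normalization (2.112)."""
    weights = [math.exp(-eps / T) for eps in energies]
    c = 1 / sum(weights)
    return [c * w for w in weights]

occ = boltzmann_occupancies([0.0, 1.0, 2.0], T=1.0)
print(occ)       # the ground state is the most occupied
print(sum(occ))  # -> 1.0, as required by Eq. (2.112)
```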

The last fact may be surprising, because it may seem that as soon as particles of the system are independent, nothing prevents us from using the Gibbs distribution to derive Eq. (111), regardless of the value of  Nk . This is indeed true if the particles are distinguishable, i.e. may be distinguished from each other – say by their fixed spatial positions, or by the states of certain internal degrees of freedom (say, spin), or by any other “pencil mark”. However, it is an experimental fact that elementary particles of each particular type (say, electrons) are identical to each other, i.e. cannot be “pencil-marked”.64 For such particles we have to be more careful: even if they do not interact explicitly, there is still some implicit dependence in their behavior, which is especially evident for the so-called fermions (elementary particles with semi-integer spin): they obey the Pauli exclusion principle that forbids two identical particles to be in the same quantum state, even if they do not interact explicitly.65

63 The distribution was first suggested in 1877 by L. Boltzmann. For the particular case when $\varepsilon$ is the kinetic energy of a free classical particle (and hence has a continuous spectrum), it is reduced to the Maxwell distribution (see Sec. 3.1 below), which was derived earlier – in 1860.

64 This invites a natural question: what particles are “elementary enough” for their identity? For example, protons and neutrons have an internal structure, in some sense consisting of quarks and gluons; can they be considered elementary? Next, if protons and neutrons are elementary, are atoms? molecules? What about really large molecules (such as proteins)? viruses? The general answer to these questions, given by quantum mechanics (or rather experiment :-), is that any particles/systems, no matter how large and complex they are, are identical if they not only have the same internal structure but also are exactly in the same internal quantum state – for example, in the ground state of all their internal degrees of freedom.

65 For a more detailed discussion of this issue, see, e.g., QM Sec. 8.1.

Chapter 2

Page 33 of 44

Essential Graduate Physics

SM: Statistical Mechanics

Note that the term “the same quantum state” carries a heavy meaning load here. For example, if two particles are confined to stay at different spatial positions (say, reliably locked in different boxes), they are distinguishable even if they are internally identical. Thus the Pauli principle, as well as other particle identity effects such as the Bose-Einstein condensation to be discussed in the next chapter, are important only when identical particles may move in the same spatial region. To emphasize this fact, it is common to use, instead of “identical”, a more precise (though grammatically rather unpleasant) adjective indistinguishable.

In order to take these effects into account, let us examine statistical properties of a system of many non-interacting but indistinguishable particles (at the first stage of calculation, either fermions or bosons) in equilibrium, applying the grand canonical distribution (109) to a very unusual grand canonical ensemble: a subset of particles in the same quantum state k (Fig. 14).

Fig. 2.14. The grand canonical ensemble of particles in the same quantum state with energy $\varepsilon_k$ – schematically. (The figure shows the single-particle energy levels and the particles, numbered $1, 2, \ldots, j, \ldots$, occupying the selected state.)

In this ensemble, the role of the environment may be played just by the set of particles in all other states $k' \neq k$, because due to infinitesimal interactions, the particles may gradually change their states. In the resulting equilibrium, the chemical potential $\mu$ and temperature $T$ of the system should not depend on the state number $k$, though the grand thermodynamic potential $\Omega_k$ of the chosen particle subset may. Replacing $N$ with $N_k$ – the particular (not average!) number of particles in the selected $k$-th state, and the particular energy value $E_{m,N}$ with $\varepsilon_k N_k$, we reduce the final form of Eq. (109) to

$$ \Omega_k = -T\ln\sum_{N_k}\exp\left\{\frac{\mu N_k - \varepsilon_k N_k}{T}\right\} \equiv -T\ln\sum_{N_k}\left[\exp\left\{\frac{\mu - \varepsilon_k}{T}\right\}\right]^{N_k} , \tag{2.113} $$

where the summation should be carried out over all possible values of Nk. For the final calculation of this sum, the elementary particle type is essential.

On one hand, for fermions, obeying the Pauli principle, the numbers $N_k$ in Eq. (113) may take only two values, either 0 (the state $k$ is unoccupied) or 1 (the state is occupied), and the summation gives

$$ \Omega_k = -T\ln\sum_{N_k=0,1}\left[\exp\left\{\frac{\mu - \varepsilon_k}{T}\right\}\right]^{N_k} = -T\ln\left[1 + \exp\left\{\frac{\mu - \varepsilon_k}{T}\right\}\right] . \tag{2.114} $$

Now the state occupancy may be calculated from the last of Eqs. (1.62) – in this case, with the (average) $N$ replaced with $\langle N_k\rangle$:

$$ \langle N_k\rangle = -\left(\frac{\partial\Omega_k}{\partial\mu}\right)_{T,V} = \frac{1}{e^{(\varepsilon_k - \mu)/T} + 1} . \tag{2.115} $$

(Fermi-Dirac distribution)

Chapter 2

Page 34 of 44

Essential Graduate Physics

SM: Statistical Mechanics

This is the famous Fermi-Dirac distribution, derived in 1926 independently by Enrico Fermi and Paul Dirac.
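The chain leading to Eq. (2.115) can be checked numerically: $\langle N_k\rangle = -(\partial\Omega_k/\partial\mu)_{T,V}$, with $\Omega_k$ taken from Eq. (2.114), reproduces the Fermi-Dirac occupancy. A sketch, using a central finite difference for the derivative (the parameter values are arbitrary, in matching energy units):

```python
import math

def omega_fermi(eps, mu, T):
    """Eq. (2.114): Omega_k = -T ln[1 + exp((mu - eps)/T)]."""
    return -T * math.log(1 + math.exp((mu - eps) / T))

def n_fermi_dirac(eps, mu, T):
    """Eq. (2.115)."""
    return 1 / (math.exp((eps - mu) / T) + 1)

eps, mu, T, h = 1.0, 0.3, 0.5, 1e-6
numeric = -(omega_fermi(eps, mu + h, T) - omega_fermi(eps, mu - h, T)) / (2 * h)
print(numeric, n_fermi_dirac(eps, mu, T))  # the two values agree
```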

On the other hand, bosons do not obey the Pauli principle, and for them the numbers $N_k$ can take any non-negative integer values. In this case, Eq. (113) turns into the following equality:

$$ \Omega_k = -T\ln\sum_{N_k=0}^{\infty}\left[\exp\left\{\frac{\mu - \varepsilon_k}{T}\right\}\right]^{N_k} \equiv -T\ln\sum_{N_k=0}^{\infty}\lambda^{N_k} , \quad \text{with } \lambda \equiv \exp\left\{\frac{\mu - \varepsilon_k}{T}\right\} . \tag{2.116} $$

This sum is just the usual geometric series, which converges if $\lambda < 1$, giving

$$ \Omega_k = -T\ln\frac{1}{1-\lambda} \equiv T\ln\left[1 - \exp\left\{\frac{\mu - \varepsilon_k}{T}\right\}\right] , \quad \text{for } \mu < \varepsilon_k . \tag{2.117} $$
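The geometric-series summation leading from Eq. (2.116) to Eq. (2.117) is easy to verify directly: for $\mu < \varepsilon_k$, the truncated sum converges quickly to the closed form. A sketch (arbitrary parameter values; the function names are mine):

```python
import math

def omega_bose_closed(eps, mu, T):
    """Eq. (2.117): Omega_k = T ln[1 - exp((mu - eps)/T)], valid for mu < eps."""
    return T * math.log(1 - math.exp((mu - eps) / T))

def omega_bose_series(eps, mu, T, n_max):
    """Eq. (2.116), with the sum over N_k truncated at N_k = n_max."""
    lam = math.exp((mu - eps) / T)  # lambda < 1 for mu < eps
    return -T * math.log(sum(lam**n for n in range(n_max + 1)))

eps, mu, T = 1.0, 0.0, 0.5
print(omega_bose_series(eps, mu, T, 50), omega_bose_closed(eps, mu, T))  # agree closely
```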

In this case, the average occupancy, again calculated using Eq. (1.62) with $N$ replaced with $\langle N_k\rangle$, obeys the Bose-Einstein distribution,

$$ \langle N_k\rangle = -\left(\frac{\partial\Omega_k}{\partial\mu}\right)_{T,V} = \frac{1}{e^{(\varepsilon_k - \mu)/T} - 1} , \quad \text{for } \mu < \varepsilon_k , \tag{2.118} $$

(Bose-Einstein distribution)

which was derived in 1924 by Satyendra Nath Bose (for the particular case $\mu = 0$) and generalized in 1925 by Albert Einstein for an arbitrary chemical potential. In particular, comparing Eq. (118) with Eq. (72), we see that the harmonic oscillator's excitations,66 each with energy $\hbar\omega$, may be considered as bosons, with the chemical potential equal to zero. As a reminder, we have already obtained this equality ($\mu = 0$) in a different way – see Eq. (93). Its physical interpretation is that the oscillator excitations may be created inside the system, so that there is no energy cost $\mu$ of moving them into the system under consideration from its environment.

The simple form of Eqs. (115) and (118), and their similarity (besides "only" the difference of the signs before the unity in their denominators), is one of the most beautiful results of physics. This similarity, however, should not disguise the fact that the energy dependences of the occupancies $\langle N_k\rangle$ given by these two formulas are very different – see their linear and semi-log plots in Fig. 15. In the Fermi-Dirac statistics, the level occupancy is not only finite, but below 1 at any energy, while in the Bose-Einstein statistics it may be above 1, and diverges at $\varepsilon_k \to \mu$. However, as the temperature is increased, it eventually becomes much larger than the difference $(\varepsilon_k - \mu)$. In this limit, $\langle N_k\rangle \ll 1$, both quantum distributions coincide with each other, as well as with the classical Boltzmann distribution (111) with $c = \exp\{\mu/T\}$:

$$ \langle N_k\rangle \to \exp\left\{\frac{\mu - \varepsilon_k}{T}\right\} , \quad \text{for } \langle N_k\rangle \to 0 . \tag{2.119} $$

(Boltzmann distribution: identical particles)

This distribution (also shown in Fig. 15) may be, therefore, understood also as the high-temperature limit for indistinguishable particles of both sorts.
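The convergence of the three distributions at $\langle N_k\rangle \ll 1$ is easy to tabulate (a sketch; $x \equiv (\varepsilon_k - \mu)/T$ as on the horizontal axis of Fig. 15):

```python
import math

def n_fermi(x):  # Eq. (2.115)
    return 1 / (math.exp(x) + 1)

def n_bose(x):   # Eq. (2.118); requires x > 0, i.e. mu < eps_k
    return 1 / (math.exp(x) - 1)

def n_boltzmann(x):  # the classical limit, Eq. (2.119)
    return math.exp(-x)

for x in (0.5, 3.0, 8.0):
    print(x, n_fermi(x), n_bose(x), n_boltzmann(x))
# at x = 8 all three agree to better than 0.1%: the classical limit <N_k> << 1
```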

66 As the reader certainly knows, for the electromagnetic field oscillators, such excitations are called photons; for mechanical oscillation modes, phonons. It is important, however, not to confuse these mode excitations with the oscillators as such, and be very careful in prescribing to them certain spatial locations – see, e.g., QM Sec. 9.1.


Fig. 2.15. The Fermi-Dirac (blue line), Bose-Einstein (red line), and Boltzmann (dashed line) distributions for indistinguishable quantum particles, as functions of $(\varepsilon_k - \mu)/T$, on linear and semi-log scales. (The last distribution is valid only asymptotically, at $\langle N_k\rangle \ll 1$.)

A natural question now is how to find the chemical potential $\mu$ participating in Eqs. (115), (118), and (119). In the grand canonical ensemble as such (Fig. 13), with the number of particles variable, the value of $\mu$ is imposed by the system's environment. However, both the Fermi-Dirac and Bose-Einstein distributions are also approximately applicable (in thermal equilibrium) to systems with a fixed but very large number $N$ of particles. In these conditions, the role of the environment for some subset of $N' \ll N$ particles is essentially played by the remaining $N - N'$ particles. In this case, $\mu$ may be found by the calculation of $\langle N\rangle$ from the corresponding probability distribution, and then requiring it to be equal to the genuine number of particles in the system. In the next section, we will perform such calculations for several particular systems.

For that and other applications, it will be convenient for us to have ready formulas for the entropy $S$ of a general (i.e. not necessarily equilibrium) state of systems of independent Fermi or Bose particles, expressed not as a function of $W_m$ of the whole system, as in Eq. (29), but via the occupancy numbers $\langle N_k\rangle$. For that, let us consider an ensemble of composite systems, each consisting of $M \gg 1$ similar but distinct component systems, numbered by index $m = 1, 2, \ldots, M$, with independent (i.e. not directly interacting) particles. We will assume that though in each of the $M$ component systems the number $N_k^{(m)}$ of particles in their $k$-th quantum state may be different (Fig. 16), their total number $N_k^{(\Sigma)}$ in the composite system is fixed. As a result, the total energy of the composite system is fixed as well,

$$ N_k^{(\Sigma)} = \sum_{m=1}^{M}N_k^{(m)} = \text{const} , \qquad E_k^{(\Sigma)} = \sum_{m=1}^{M}N_k^{(m)}\varepsilon_k = N_k^{(\Sigma)}\varepsilon_k = \text{const} , \tag{2.120} $$

so that an ensemble of many such composite systems (with the same $\varepsilon_k$), in equilibrium, is microcanonical.


Fig. 2.16. A composite system of $N_k^{(\Sigma)}$ particles in the $k$-th quantum state, distributed between $M$ component systems; the numbers of particles in the state in the component systems are $N_k^{(1)}, N_k^{(2)}, \ldots, N_k^{(m)}, \ldots, N_k^{(M)}$.

According to Eq. (24a), the average entropy S_k per component system in this microcanonical ensemble may be calculated as

$$ S_k = \lim_{M\to\infty} \frac{\ln M_k}{M}, \tag{2.121}$$

where M_k is the number of possible different ways such a composite system (with fixed N_k^{(Σ)}) may be implemented. Let us start the calculation of M_k for Fermi particles – for which the Pauli principle is valid. Here the level occupancies N_k^{(m)} may be only equal to either 0 or 1, so that the distribution problem is solvable only if N_k^{(Σ)} ≤ M, and is evidently equivalent to the choice of N_k^{(Σ)} balls (in arbitrary order) from the total number of M distinct balls. Comparing this formulation with the definition of the binomial coefficient,67 we immediately get

$$ M_k = \binom{M}{N_k^{(\Sigma)}} \equiv \frac{M!}{\bigl(M - N_k^{(\Sigma)}\bigr)!\; N_k^{(\Sigma)}!}. \tag{2.122}$$

From here, using the Stirling formula (again, in its simplest form (27)), we get

Fermions: entropy

$$ S_k = -\langle N_k\rangle \ln\langle N_k\rangle - \bigl(1 - \langle N_k\rangle\bigr)\ln\bigl(1 - \langle N_k\rangle\bigr), \tag{2.123}$$

where

$$ \langle N_k\rangle \equiv \lim_{M\to\infty} \frac{N_k^{(\Sigma)}}{M} \tag{2.124}$$

is exactly the average occupancy of the k-th single-particle state in each system, which was discussed earlier in this section. Since for a Fermi system, ⟨N_k⟩ is always somewhere between 0 and 1, its entropy (123) is always positive.
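Eq. (2.123) is easy to verify numerically. The short Python sketch below (with arbitrary illustrative values of ⟨N_k⟩) checks that the Fermi entropy is positive on the whole interval 0 < ⟨N_k⟩ < 1, symmetric under the particle-hole exchange ⟨N_k⟩ → 1 – ⟨N_k⟩, and maximal (equal to ln 2) at half filling:

```python
import math

def fermi_entropy(n):
    """Entropy per single-particle state for fermions, Eq. (2.123)."""
    return -n * math.log(n) - (1.0 - n) * math.log(1.0 - n)

# S_k is positive everywhere on (0, 1), symmetric under n -> 1 - n,
# and maximal at half filling, where the state is "most uncertain":
print(fermi_entropy(0.5))                      # ln 2 = 0.693...
print(fermi_entropy(0.1), fermi_entropy(0.9))  # equal values, both > 0
```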

In the Bose case, where the Pauli principle is not valid, the number N_k^{(m)} of particles on the k-th energy level in each of the systems is an arbitrary (non-negative) integer. Let us consider the N_k^{(Σ)} particles and the (M – 1) partitions (shown by vertical lines in Fig. 16) between the M systems as (M – 1 + N_k^{(Σ)}) mathematical objects ordered along one axis. Each specific location of the partitions evidently fixes all N_k^{(m)}. Hence M_k may be calculated as the number of possible ways to distribute the (M – 1) indistinguishable partitions among these (M – 1 + N_k^{(Σ)}) ordered objects, i.e. as the following binomial coefficient:68

67 See, e.g., MA Eq. (2.2).

68 See also MA Eq. (2.4).




$$ M_k = \binom{M - 1 + N_k^{(\Sigma)}}{M - 1} \equiv \frac{\bigl(M - 1 + N_k^{(\Sigma)}\bigr)!}{(M - 1)!\; N_k^{(\Sigma)}!}. \tag{2.125}$$

Applying the Stirling formula (27) again, we get the following result,

Bosons: entropy

$$ S_k = -\langle N_k\rangle \ln\langle N_k\rangle + \bigl(1 + \langle N_k\rangle\bigr)\ln\bigl(1 + \langle N_k\rangle\bigr), \tag{2.126}$$

which again differs from the Fermi case (123) “only” by the signs in the second term, and is valid for any positive ⟨N_k⟩.

Expressions (123) and (126) are valid for an arbitrary (possibly non-equilibrium) case; they may be also used for an alternative derivation of the Fermi-Dirac (115) and Bose-Einstein (118) distributions, which are valid only in equilibrium. For that, we may use the method of Lagrange multipliers, requiring (just like it was done in Sec. 2) the total entropy of a system of N independent, similar particles,

$$ S = \sum_k S_k, \tag{2.127}$$

considered as a function of the state occupancies ⟨N_k⟩, to attain its maximum, under the conditions of the fixed total number of particles N and total energy E:

$$ \sum_k \langle N_k\rangle = N = \text{const}, \qquad \sum_k \langle N_k\rangle\, \varepsilon_k = E = \text{const}. \tag{2.128}$$

The completion of this calculation is left for the reader's exercise.
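The key point of that exercise may be checked numerically: at the constrained entropy maximum, the Lagrange-multiplier condition reduces to dS_k/d⟨N_k⟩ = (ε_k – μ)/T for every state k, and this condition is indeed satisfied by the Fermi-Dirac occupancy (115). A sketch, with arbitrary illustrative values of ε_k, μ, and T (temperature in energy units, as everywhere in this book):

```python
import math

def fermi_entropy(n):
    # Eq. (2.123)
    return -n * math.log(n) - (1 - n) * math.log(1 - n)

def fd_occupancy(eps, mu, T):
    # Fermi-Dirac distribution, Eq. (2.115)
    return 1.0 / (math.exp((eps - mu) / T) + 1.0)

# Lagrange condition at the constrained maximum of S:
# dS_k/d<N_k> must equal (eps_k - mu)/T at <N_k> given by Eq. (115).
eps, mu, T = 1.3, 0.4, 0.25            # illustrative values
n = fd_occupancy(eps, mu, T)
h = 1e-7
dS = (fermi_entropy(n + h) - fermi_entropy(n - h)) / (2 * h)  # numerical dS/dn
print(dS, (eps - mu) / T)              # the two numbers coincide
```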

In the classical limit, when the average occupancies ⟨N_k⟩ of all states are small, the Fermi and Bose expressions for S_k tend to the same limit

Boltzmann entropy

$$ S_k = -\langle N_k\rangle \ln\langle N_k\rangle, \qquad \text{for } \langle N_k\rangle \ll 1. \tag{2.129}$$

This expression, frequently referred to as the Boltzmann (or “classical”) entropy, might be also obtained, for arbitrary ⟨N_k⟩, directly from the functionally similar Eq. (29), by considering an ensemble of systems, each consisting of just one classical particle, so that E_m → ε_k and W_m → ⟨N_k⟩. Let me emphasize again that for indistinguishable particles, such identification is generally (i.e. at ⟨N_k⟩ ~ 1) illegitimate even if the particles do not interact explicitly. As we will see in the next chapter, indistinguishability may affect the statistical properties of identical particles even in the classical limit.
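The common classical limit (2.129) may also be checked numerically. Note that the convergence of both (2.123) and (2.126) to –⟨N_k⟩ln⟨N_k⟩ is only logarithmically fast, because the discarded terms are of the order of ⟨N_k⟩ itself, relatively smaller than the kept term only by the factor ln(1/⟨N_k⟩). A sketch:

```python
import math

def s_fermi(n):      # Eq. (2.123)
    return -n * math.log(n) - (1 - n) * math.log(1 - n)

def s_bose(n):       # Eq. (2.126)
    return -n * math.log(n) + (1 + n) * math.log(1 + n)

def s_boltzmann(n):  # Eq. (2.129), the classical limit
    return -n * math.log(n)

# Both ratios approach 1 (from above) as <N_k> -> 0, though slowly:
for n in (1e-2, 1e-5, 1e-8):
    print(n, s_fermi(n) / s_boltzmann(n), s_bose(n) / s_boltzmann(n))
```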

2.9. Exercise problems

2.1. A famous example of macroscopic irreversibility was suggested in 1907 by P. Ehrenfest.

Two dogs share 2N >> 1 fleas. Each flea may jump onto another dog, and the rate Γ of such events (i.e.

the probability of jumping per unit time) does not depend either on time or on the location of other fleas.

Find the time evolution of the average number of fleas on a dog, and of the flea-related part of the total dogs’ entropy (at arbitrary initial conditions), and prove that the entropy can only grow.69

69 This is essentially a simpler (and funnier :-) version of the particle scattering model used by L. Boltzmann to prove his famous H-theorem (1872). Besides the historic significance of that theorem, the model used in it (see Sec. 6.2 below) is just as cartoonish, and not more general.
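The relaxation described in Problem 2.1 is easy to watch in a toy Monte Carlo sketch (the parameters N, Γ, and the time step below are arbitrary illustrative choices): starting from the most non-equilibrium state, the flea number on each dog settles near the average value N, with small residual fluctuations:

```python
import random

# Toy Monte Carlo for the Ehrenfest "dogs and fleas" model:
# 2N fleas; in a small time step dt, each flea jumps to the other
# dog with probability gamma*dt (illustrative parameters).
random.seed(1)
N, gamma, dt, steps = 500, 1.0, 0.01, 4000
n1 = 2 * N                       # start with all fleas on dog 1
for _ in range(steps):
    jumps_12 = sum(random.random() < gamma * dt for _ in range(n1))
    jumps_21 = sum(random.random() < gamma * dt for _ in range(2 * N - n1))
    n1 += jumps_21 - jumps_12
print(n1)                        # fluctuates around the equilibrium value N
```

The deterministic part of this dynamics, d⟨n₁⟩/dt = 2Γ(N – ⟨n₁⟩), relaxes with the time constant 1/2Γ, much shorter than the total simulated time here.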


2.2. Use the microcanonical distribution to calculate thermodynamic properties (including the entropy, all relevant thermodynamic potentials, and the heat capacity) of a two-level system in thermodynamic equilibrium with its environment, at temperature T that is comparable with the energy gap Δ. For each variable, sketch its temperature dependence, and find its asymptotic values (or trends) in the low-temperature and high-temperature limits.

Hint: The two-level system is any quantum system with just two different stationary states, whose energies (say, E₀ and E₁) are separated by a gap Δ ≡ E₁ – E₀. Its most popular (but by no means the only!) example is the spin-½ of a particle, e.g., an electron, in an external magnetic field.70

2.3. Solve the previous problem using the Gibbs distribution. Also, calculate the probabilities of the energy level occupation, and give physical interpretations of your results, in both temperature limits.
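A quick numerical sketch of the Gibbs-distribution route of Problem 2.3 (in the units of this book, with temperature in energy units, and with an illustrative Δ = 1): counting the energy from E₀, the average energy is E = Δ/(e^{Δ/T} + 1), vanishing at T << Δ and tending to Δ/2 at T >> Δ, while the heat capacity C = dE/dT vanishes in both limits, with a peak (the so-called Schottky anomaly) at T ~ Δ:

```python
import math

def avg_energy(delta, T):
    # Gibbs distribution over the two levels E0 = 0 and E1 = delta
    return delta / (math.exp(delta / T) + 1.0)

def heat_capacity(delta, T, dT=1e-5):
    # C = dE/dT, here by a numerical derivative
    return (avg_energy(delta, T + dT) - avg_energy(delta, T - dT)) / (2 * dT)

delta = 1.0
print(avg_energy(delta, 0.05))     # T << delta: E -> 0 (ground state only)
print(avg_energy(delta, 50.0))     # T >> delta: E -> delta/2 (equal populations)
print(heat_capacity(delta, 0.42))  # near the peak of the Schottky anomaly
```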

2.4. Calculate the low-field magnetic susceptibility χ of a quantum spin-½ particle with a gyromagnetic ratio γ, in thermal equilibrium with an environment at temperature T, neglecting its orbital motion. Compare the result with that for a classical spontaneous magnetic dipole m of a fixed magnitude m₀, free to change its direction in space.

Hint: The low-field magnetic susceptibility of a single particle is defined71 as

$$ \chi \equiv \left.\frac{\partial \langle m_z\rangle}{\partial H}\right|_{H = 0}, $$

where the z-axis is aligned with the direction of the external magnetic field H.

2.5. Calculate the low-field magnetic susceptibility of a particle with an arbitrary (either integer or half-integer) spin s, neglecting its orbital motion. Compare the result with the solution of the previous problem.

Hint: Quantum mechanics72 tells us that the Cartesian component m_z of the magnetic moment of such a particle, in the direction of the applied field, has (2s + 1) stationary values:

$$ m_z = \gamma\hbar m_s, \qquad \text{with } m_s = -s,\, -s+1,\, \ldots,\, s-1,\, s, $$

where γ is the gyromagnetic ratio of the particle, and ℏ is Planck's constant.

2.6.* Analyze the possibility of using a system of non-interacting spin-½ particles, placed into a strong, controllable external magnetic field, for refrigeration.

2.7. The rudimentary “zipper” model of DNA replication is a chain of N links, numbered 1, 2, …, n, n+1, …, N, that may be either open or closed – see the figure on the right. Opening a link increases the system's energy by Δ > 0; a link may change its state (either open or closed) only if all links to the left of it are

70 See, e.g., QM Secs. 4.6 and 5.1, for example, Eq. (4.167).

71 This “atomic” (or “molecular”) susceptibility should be distinguished from the “volumic” susceptibility χ_m ≡ ∂M_z/∂H, where M is the magnetization, i.e. the magnetic moment of a unit volume of a system – see, e.g., EM Eq. (5.111). For a uniform medium with n ≡ N/V non-interacting dipoles per unit volume, χ_m = nχ.

72 See, e.g., QM Sec. 5.7, in particular Eq. (5.169).


open, while those on the right of it, are closed. Calculate the average number of open links at thermal equilibrium, and analyze its temperature dependence, especially for the case N >> 1.
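The left-to-right constraint of the zipper model makes its state count simple: the allowed configurations are labeled by the single number n = 0, 1, …, N of open links (all of them the leftmost ones), with energy nΔ, so the statistical sum is a finite geometric series. A numerical sketch with arbitrary illustrative parameters:

```python
import math

# "Zipper" model: states are labeled by the number n of open links
# (necessarily the n leftmost ones), with energy n*delta.
def avg_open_links(N, delta, T):
    weights = [math.exp(-n * delta / T) for n in range(N + 1)]
    Z = sum(weights)                               # statistical sum
    return sum(n * w for n, w in enumerate(weights)) / Z

print(avg_open_links(1000, 1.0, 0.2))   # T << delta: almost all links closed
print(avg_open_links(1000, 1.0, 1e5))   # T >> delta: <n> -> N/2 (near-uniform)
```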

2.8. Use the microcanonical distribution to calculate the average entropy, energy, and pressure of a classical particle of mass m, with no internal degrees of freedom, free to move in volume V, at temperature T.

Hint: Try to make a more accurate calculation than has been done in Sec. 2.2 for the system of N harmonic oscillators. For that, you will need to know the volume V_d of a d-dimensional hypersphere of the unit radius. To avoid being too cruel, I am giving it to you:

$$ V_d = \frac{\pi^{d/2}}{\Gamma(d/2 + 1)}, $$

where Γ(ξ) is the gamma function.73

2.9. Solve the previous problem starting from the Gibbs distribution.

2.10. Calculate the average energy, entropy, free energy, and the equation of state of a classical 2D particle (without internal degrees of freedom), free to move within area A, at temperature T, starting from:

(i) the microcanonical distribution, and

(ii) the Gibbs distribution.

Hint: For the equation of state, make the appropriate modification of the notion of pressure.

2.11. A quantum particle of mass m is confined to free motion along a 1D segment of length a.

Using any approach you like, calculate the average force the particle exerts on the “walls” (ends) of such

“1D potential well” in thermal equilibrium, and analyze its temperature dependence, focusing on the low-temperature and high-temperature limits.

Hint: You may consider the series

$$ \sum_{n=1}^{\infty} \exp\{-\lambda n^2\} $$

a known function of λ.74

2.12.* Rotational properties of diatomic molecules (such as N2, CO, etc.) may be reasonably well described by the so-called dumbbell model: two point particles, of masses m₁ and m₂, with a fixed distance d between them. Ignoring the translational motion of the molecule as a whole, use this model to calculate its heat capacity, and spell out the result in the limits of low and high temperatures. Discuss whether your solution is valid for the so-called homonuclear molecules, consisting of two similar atoms, such as H2, O2, N2, etc.

2.13. Calculate the heat capacity of a heteronuclear diatomic molecule, using the simple model described in the previous problem, but now assuming that the rotation is confined to one plane.75

73 For its definition and main properties, see, e.g., MA Eqs. (6.6)-(6.9).

74 It may be reduced to the so-called elliptic theta-function θ₃(z, τ) for the particular case z = 0 – see, e.g., Sec. 16.27 in the Abramowitz-Stegun handbook cited in MA Sec. 16(ii). However, you do not need that (or any other) handbook to solve this problem.

75 This is a reasonable model of the constraints imposed on small atomic groups (e.g., ligands) by their atomic environment inside some large molecules.


2.14. A classical, rigid, strongly elongated body (such as a thin needle) is free to rotate about its center of mass, and is in thermal equilibrium with its environment. Are the angular velocity vector ω and the angular momentum vector L, on average, directed along the elongation axis of the body, or normal to it?

2.15. Two similar classical electric dipoles, of a fixed magnitude d, are separated by a fixed distance r. Assuming that each dipole moment d may take any spatial direction and that the system is in thermal equilibrium, write the general expressions for its statistical sum Z, average interaction energy E, heat capacity C, and entropy S, and calculate them explicitly in the high-temperature limit.

2.16. A classical 1D particle of mass m, residing in the potential well

$$ U(x) = \alpha |x|^{\nu}, \qquad \text{with } \nu > 0, $$

is in thermal equilibrium with its environment, at temperature T. Calculate the average values of its potential energy ⟨U⟩ and the full energy ⟨E⟩, using two approaches:

(i) directly from the Gibbs distribution, and

(ii) using the virial theorem of classical mechanics.76

2.17. For a thermally-equilibrium ensemble of slightly anharmonic classical 1D oscillators, with mass m and potential energy

$$ U(x) = \frac{\kappa}{2} x^2 + \alpha x^3, $$

with a small coefficient α, calculate ⟨x⟩ in the first approximation in low temperature T.

2.18.* A small conductor (in this context, usually called the single-electron island) is placed between two conducting electrodes, with voltage V applied between them. The gap between one of the electrodes and the island is so narrow that electrons may tunnel quantum-mechanically through this gap (the “weak tunnel junction”) – see the figure on the right, where C and C₀ are the capacitances of the two gaps, and Q = –ne is the island's charge. Calculate the average charge of the island as a function of V at temperature T.

Hint: The quantum-mechanical tunneling of an electron through a weak junction77 between two macroscopic conductors, and their subsequent energy relaxation, may be considered as a single inelastic (energy-dissipating) event, so that the only energy relevant for the thermal equilibrium of the system is its electrostatic potential energy.

2.19. An LC circuit (see the figure on the right) is in thermodynamic equilibrium with its environment. Calculate the r.m.s. fluctuation δV ≡ ⟨V²⟩^{1/2}

76 See, e.g., CM Problem 1.12.

77 In this particular context, the adjective “weak” denotes a junction with the tunneling transparency so low that the tunneling electron's wavefunction loses its quantum-mechanical coherence before the electron has a chance to tunnel back. In a typical junction of a macroscopic area this condition is fulfilled if its effective resistance is much higher than the quantum unit of resistance (see, e.g., QM Sec. 3.2), R_Q ≡ πℏ/2e² ≈ 6.5 kΩ.


of the voltage across it, for an arbitrary ratio T/ℏω, where ω = (LC)^{–1/2} is the resonance frequency of this “tank circuit”.

2.20. Derive Eq. (92) from simplistic arguments, representing the blackbody radiation as an ideal gas of photons treated as classical ultra-relativistic particles. What do similar arguments give for an ideal gas of classical but non-relativistic particles?

2.21. Calculate the enthalpy, the entropy, and the Gibbs energy of blackbody electromagnetic radiation with temperature T inside volume V, and then use these results to find the law of temperature and pressure drop at an adiabatic expansion.

2.22. As was mentioned in Sec. 6(i), the relation between the temperatures T of the visible Sun’s surface and that ( T o) of the Earth’s surface follows from the balance of the thermal radiation they emit. Prove that the experimentally observed relation indeed follows, with good precision, from a simple model in which the surfaces radiate as perfect black bodies with constant temperatures.

Hint: You may pick up the experimental values you need from any (reliable :-) source.

2.23. If a surface is not perfectly radiation-absorbing (“black”), the electromagnetic power of its thermal radiation differs from the Planck radiation law by a frequency-dependent factor ε < 1, called the emissivity. Prove that such a surface reflects the (1 – ε) fraction of the incident radiation.

2.24. If two black surfaces, facing each other, have different temperatures T₁ > T₂ (see the figure on the right), then according to the Stefan radiation law (89), there is a net flow of thermal radiation, from the warmer surface to the colder one:

$$ \frac{P_{\text{net}}}{A} = \sigma \left(T_1^4 - T_2^4\right). $$

For many applications, notably including most low-temperature experiments, this flow is detrimental. One way to suppress it is to reduce the emissivity ε (for its definition, see the previous problem) of both surfaces – say, by covering them with shiny metallic films. An alternative way toward the same goal is to place, between the surfaces, a thin layer (usually called the thermal shield), with a low emissivity of both surfaces – see the dashed line in the figure above. Assuming that the emissivity is the same in both cases, find out which way is more efficient.

2.25. Two parallel, well-conducting plates of area A are separated by a free-space gap of a constant thickness t << A^{1/2}. Calculate the energy of the thermally-induced electromagnetic field inside the gap at thermal equilibrium with temperature T in the range

$$ \frac{\hbar c}{A^{1/2}} \ll T \ll \frac{\hbar c}{t}. $$

Does the field push the plates apart?

2.26. Use the Debye theory to estimate the specific heat of aluminum at room temperature (say, 300 K), and express the result in the following popular units:


(i) eV/K per atom,

(ii) J/K per mole, and

(iii) J/K per gram.

Compare the last number with the experimental value (from a reliable book or online source).

2.27. Low-temperature specific heat of some solids has a considerable contribution from thermal excitation of spin waves, whose dispersion law scales as ω ∝ k² at ω → 0.78 Neglecting anisotropy, calculate the temperature dependence of this contribution to C_V at low temperatures, and discuss conditions of its experimental observation.

Hint: Just as the photons and phonons discussed in Sec. 2.6, the quantum excitations of spin waves (called magnons) may be considered as non-interacting bosonic quasiparticles with zero chemical potential, whose statistics obeys Eq. (2.72).

2.28. Derive a general expression for the specific heat of a very long, straight chain of similar particles of mass m, confined to move only in the direction of the chain, and elastically interacting with effective spring constants κ – see the figure on the right. Spell out the result in the limits of very low and very high temperatures.

Hint: You may like to use the following integral:79

$$ \int_0^{\infty} \frac{\xi^2\, d\xi}{\sinh^2 \xi} = \frac{\pi^2}{6}. $$

2.29. Calculate the r.m.s. thermal fluctuation of the middle point of a uniform guitar string of length l, stretched by force 𝒯, at temperature T. Evaluate your result for l = 0.7 m, 𝒯 = 10³ N, and room temperature.

Hint: You may like to use the following series:

$$ 1 + \frac{1}{3^2} + \frac{1}{5^2} + \ldots \equiv \sum_{m=0}^{\infty} \frac{1}{(2m+1)^2} = \frac{\pi^2}{8}. $$

2.30. Use the general Eq. (123) to re-derive the Fermi-Dirac distribution (115) for a system in equilibrium.

2.31. Each of two identical particles, not interacting directly, may be in any of two quantum states, with single-particle energies  equal to 0 and . Write down the statistical sum Z of the system, and use it to calculate its average total energy E at temperature T, for the cases when the particles are: (i) distinguishable (say, by their positions);

(ii) indistinguishable fermions;

(iii) indistinguishable bosons.

Analyze and interpret the temperature dependence of E for each case, assuming that  > 0.

2.32. Calculate the chemical potential of a system of N >> 1 independent fermions, kept at a fixed temperature T, if each particle has two non-degenerate energy levels separated by the gap Δ.

78 Note that the same dispersion law is typical for bending waves in thin elastic rods – see, e.g., CM Sec. 7.8.

79 It may be reduced, via integration by parts, to the table integral MA Eq. (6.8d) with n = 1.


Chapter 3. Ideal and Not-So-Ideal Gases

In this chapter, the general principles of thermodynamics and statistics, discussed in the previous two chapters, are applied to examine the basic physical properties of gases, i.e. collections of identical particles (for example, atoms or molecules) that are free to move inside a certain volume, either not interacting or weakly interacting with each other. We will see that due to the quantum statistics, properties of even the simplest, so-called ideal gases, with negligible direct interactions between particles, may be highly nontrivial.

3.1. Ideal classical gas

Direct interactions of typical atoms and molecules are well localized, i.e. rapidly decreasing with the distance r between them, and becoming negligible at a certain distance r₀. In a gas of N particles inside volume V, the average distance r_ave between the particles is (V/N)^{1/3}. As a result, if the gas density n ≡ N/V = (r_ave)^{–3} is much lower than r₀^{–3}, i.e. if nr₀³ << 1, the chance for its particles to approach each other and interact is rather small. The model in which such direct interactions are completely ignored is called the ideal gas.

Let us start with a classical ideal gas, which may be defined as the ideal gas in whose behavior the quantum effects are also negligible. As was discussed in Sec. 2.8, the condition of that is to have the average occupancy of each quantum state low:

$$ \langle N_k \rangle \ll 1. \tag{3.1}$$

It may seem that we have already found all properties of such a system, in particular the equilibrium occupancy of its states – see Eq. (2.111):

$$ \langle N_k \rangle = \text{const} \times \exp\left\{-\frac{\varepsilon_k}{T}\right\}. \tag{3.2}$$

In some sense this is true, but we still need, first, to see what exactly Eq. (2) means for the gas, a system with an essentially continuous energy spectrum, and, second, to show that, rather surprisingly, the particles’ indistinguishability affects some properties of even classical gases.

The first of these tasks is evidently easiest for a gas out of any external fields, and with no internal degrees of freedom.1 In this case, ε_k is just the kinetic energy of the particle, which is an isotropic and parabolic function of p:

$$ \varepsilon_k = \frac{p^2}{2m} = \frac{p_x^2 + p_y^2 + p_z^2}{2m}. \tag{3.3}$$

Now we have to use two facts from other fields of physics, hopefully well known to the reader. First, in quantum mechanics, the linear momentum p is associated with the wavevector k of the de Broglie wave,

1 In more realistic cases when particles do have internal degrees of freedom, but they are all in a certain (say, ground) quantum state, Eq. (3) is valid as well, with ε_k referred to the internal ground-state energy. The effect of thermal excitation of the internal degrees of freedom will be briefly discussed at the end of this section.

© K. Likharev


p = ℏk. Second, the eigenvalues of k for any waves (including the de Broglie waves) in free space are uniformly distributed in the momentum space, with a constant density of states, given by Eq. (2.82):

$$ \frac{dN_{\text{states}}}{d^3k} = \frac{gV}{(2\pi)^3}, \qquad \text{i.e.} \qquad \frac{dN_{\text{states}}}{d^3p} = \frac{gV}{(2\pi\hbar)^3}, \tag{3.4}$$

where g is the degeneracy of particle's internal states (for example, for all spin-½ particles, the spin degeneracy g = 2s + 1 = 2). Even regardless of the exact proportionality coefficient between dN_states and d³p, the very fact that this coefficient does not depend on p means that the probability dW to find the particle in a small region d³p = dp₁dp₂dp₃ of the momentum space is proportional to the right-hand side of Eq. (2), with ε_k given by Eq. (3):

Maxwell distribution

$$ dW = C \exp\left\{-\frac{p^2}{2mT}\right\} d^3p \equiv C \exp\left\{-\frac{p_1^2 + p_2^2 + p_3^2}{2mT}\right\} dp_1\, dp_2\, dp_3. \tag{3.5}$$

This is the famous Maxwell distribution.2 The normalization constant C may be readily found from the last form of Eq. (5), by requiring the integral of dW over all the momentum space to equal 1.

Indeed, the integral is evidently a product of three similar 1D integrals over each Cartesian component p_j of the momentum (j = 1, 2, 3), which may be readily reduced to the well-known dimensionless Gaussian integral,3 so that we get

$$ C = \left[\,\int_{-\infty}^{+\infty} \exp\left\{-\frac{p_j^2}{2mT}\right\} dp_j\right]^{-3} = \left[(2mT)^{1/2} \int_{-\infty}^{+\infty} e^{-\xi^2}\, d\xi\right]^{-3} = \left(2\pi mT\right)^{-3/2}. \tag{3.6}$$

As a sanity check, let us use the Maxwell distribution to calculate the average energy corresponding to each half-degree of freedom:

$$ \left\langle \frac{p_j^2}{2m} \right\rangle = \int \frac{p_j^2}{2m}\, dW = C^{1/3} \int_{-\infty}^{+\infty} \frac{p_j^2}{2m} \exp\left\{-\frac{p_j^2}{2mT}\right\} dp_j = \frac{T}{\pi^{1/2}} \int_{-\infty}^{+\infty} \xi^2 e^{-\xi^2}\, d\xi. \tag{3.7}$$

The last, dimensionless integral equals π^{1/2}/2,4 so that, finally,

$$ \left\langle \frac{p_j^2}{2m} \right\rangle = \left\langle \frac{m v_j^2}{2} \right\rangle = \frac{T}{2}. \tag{3.8}$$

2 This formula had been suggested by J. C. Maxwell as early as 1860, i.e. well before the Boltzmann and Gibbs distributions were developed. Note also that the term “Maxwell distribution” is often associated with the distribution of the particle momentum (or velocity) magnitude,

$$ dW = 4\pi C p^2 \exp\left\{-\frac{p^2}{2mT}\right\} dp = 4\pi C m^3 v^2 \exp\left\{-\frac{mv^2}{2T}\right\} dv, \qquad \text{with } 0 \le p, v < \infty, $$

which immediately follows from the first form of Eq. (5), combined with the expression d³p = 4πp²dp due to the spherical symmetry of the distribution in the momentum/velocity space.

3 See, e.g., MA Eq. (6.9b).

4 See, e.g., MA Eq. (6.9c).
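Eqs. (6)-(8) are easy to check by direct numerical integration of the 1D Maxwell weight. In the sketch below, m and T are arbitrary dimensionless illustrative values, and the momentum cutoff P just has to be much larger than (2mT)^{1/2}:

```python
import math

# Numerical check of Eqs. (3.6)-(3.8): with the 1D weight exp(-p^2/2mT),
# the normalization integral equals (2*pi*m*T)^(1/2), and the average of
# p^2/2m equals T/2 (equipartition). Illustrative units: m, T of order 1.
m, T = 3.0, 1.7
dp, P = 1e-3, 60.0                 # integration step; cutoff P >> (2mT)^(1/2)
ps = [-P + dp * i for i in range(int(2 * P / dp) + 1)]
w = [math.exp(-p * p / (2 * m * T)) for p in ps]
norm = sum(w) * dp                 # -> (2*pi*m*T)^(1/2), cf. Eq. (3.6)
avg = sum(p * p / (2 * m) * wi for p, wi in zip(ps, w)) * dp / norm
print(norm, math.sqrt(2 * math.pi * m * T))  # nearly equal
print(avg, T / 2)                            # nearly equal
```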


This result is (fortunately :-) in agreement with the equipartition theorem (2.48). It also means that the r.m.s. velocity of each particle is

$$ \langle v^2 \rangle^{1/2} = \left[\sum_{j=1}^{3} \langle v_j^2 \rangle\right]^{1/2} = 3^{1/2} \langle v_j^2 \rangle^{1/2} = \left(\frac{3T}{m}\right)^{1/2}. \tag{3.9}$$

For a typical gas (say, for N₂, the air's main component), with m ≈ 28m_p ≈ 4.7×10⁻²⁶ kg, this velocity, at room temperature (T = k_B T_K ≈ k_B × 300 K ≈ 4.1×10⁻²¹ J), is about 500 m/s, comparable with the sound velocity in the same gas – and with the muzzle velocity of a typical handgun bullet. Still, it is measurable using even the simple table-top equipment (say, a set of two concentric, rapidly rotating cylinders with a thin slit collimating an atomic beam emitted at the axis) that was available at the end of the 19th century. Experiments using such equipment gave convincing early confirmations of the Maxwell distribution.
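The quoted estimate of Eq. (9) for N₂ at room temperature may be reproduced in a couple of lines (SI units; the molecular mass is approximated as 28 proton masses, as in the text):

```python
import math

# r.m.s. velocity of an N2 molecule at room temperature, Eq. (3.9), in SI units
k_B = 1.380649e-23        # Boltzmann constant, J/K
m_p = 1.67262192e-27      # proton mass, kg
m = 28 * m_p              # N2 molecule, ~4.7e-26 kg
T = k_B * 300             # room temperature in energy units, ~4.1e-21 J
v_rms = math.sqrt(3 * T / m)
print(v_rms)              # about 5e2 m/s
```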

This is all very simple (isn't it?), but actually the thermodynamic properties of a classical gas, especially its entropy, are more intricate. To show that, let us apply the Gibbs distribution to a gas portion consisting of N particles, rather than just one of them. If the particles are exactly similar, the eigenenergy spectrum {ε_k} of each of them is also exactly the same, and each value E_m of the total energy is just the sum of particular energies ε_{k(l)} of the particles, where k(l), with l = 1, 2, …, N, is the number of the energy level on which the l-th particle resides. Moreover, since the gas is classical, ⟨N_k⟩ << 1, the probability of having two or more particles in any state may be ignored. As a result, we can use Eq. (2.59) to write

$$ Z = \sum_m \exp\left\{-\frac{E_m}{T}\right\} = \sum_{k(1)} \sum_{k(2)} \cdots \sum_{k(N)} \exp\left\{-\frac{1}{T} \sum_l \varepsilon_{k(l)}\right\}, \tag{3.10}$$

where the summation has to be carried over all possible states of each particle. Since the summation over each set { k( l)} concerns only one of the operands of the product of exponents under the sum, it is tempting to complete the calculation as follows:

$$ Z \to Z_{\text{dist}} = \left[\sum_{k(1)} \exp\left\{-\frac{\varepsilon_{k(1)}}{T}\right\}\right] \left[\sum_{k(2)} \exp\left\{-\frac{\varepsilon_{k(2)}}{T}\right\}\right] \cdots \left[\sum_{k(N)} \exp\left\{-\frac{\varepsilon_{k(N)}}{T}\right\}\right] = \left[\sum_k \exp\left\{-\frac{\varepsilon_k}{T}\right\}\right]^N, \tag{3.11}$$

where the final summation is over all states of one particle. This formula is indeed valid for distinguishable particles.5 However, if the particles are indistinguishable (again, meaning that they are internally identical and free to move within the same spatial region), Eq. (11) has to be modified by what is called the correct Boltzmann counting:

Correct Boltzmann counting

$$ Z = \frac{1}{N!} \left[\sum_k \exp\left\{-\frac{\varepsilon_k}{T}\right\}\right]^N, \tag{3.12}$$

which considers all quantum states different only by particle permutations as the same state.

5 Since, by our initial assumption, each particle belongs to the same portion of gas, i.e. cannot be distinguished from others by its spatial position, this requires some internal “pencil mark” for each particle – for example, a specific structure or a specific quantum state of its internal degrees of freedom.
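The effect of the 1/N! factor is easy to see on a toy example. Below, two particles are placed on a ladder of K equidistant levels (arbitrary illustrative units); the exact statistical sum over unordered two-particle states is compared with Eq. (3.11) and with Eq. (3.12). The residual difference between (3.12) and the exact count comes from doubly-occupied states, and is small exactly when ⟨N_k⟩ << 1:

```python
import math

# "Correct Boltzmann counting" for N = 2 particles on K equidistant levels
K, T = 200, 20.0                          # illustrative parameters
eps = [0.1 * k for k in range(K)]
boltz = [math.exp(-e / T) for e in eps]

Z1 = sum(boltz)                           # single-particle statistical sum
Z_dist = Z1 ** 2                          # Eq. (3.11): distinguishable particles
Z_corr = Z_dist / math.factorial(2)       # Eq. (3.12): divide by N! = 2

# Exact sum over unordered two-particle states (indistinguishable particles):
Z_exact = sum(boltz[i] * boltz[j] for i in range(K) for j in range(i, K))
print(Z_corr / Z_exact)  # close to 1: the N! fix works when <N_k> << 1
```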


This expression is valid for any set {ε_k} of eigenenergies. Now let us use it for the translational 3D motion of free particles, taking into account that the fundamental relation (4) implies the following rule for the replacement of a sum over quantum states of such motion with an integral:6

$$ \sum_k \ldots \to \int \ldots\, dN_{\text{states}} = \frac{gV}{(2\pi)^3} \int \ldots\, d^3k = \frac{gV}{(2\pi\hbar)^3} \int \ldots\, d^3p. \tag{3.13}$$

In application to Eq. (12), this rule yields

$$ Z = \frac{1}{N!} \left[\frac{gV}{(2\pi\hbar)^3} \left(\int_{-\infty}^{+\infty} \exp\left\{-\frac{p_j^2}{2mT}\right\} dp_j\right)^3\right]^N. \tag{3.14}$$

The integral in the square brackets is the same one as in Eq. (6), i.e. is equal to (2πmT)^{1/2}, so that finally

$$ Z = \frac{1}{N!} \left[\frac{gV}{(2\pi\hbar)^3} (2\pi mT)^{3/2}\right]^N = \frac{1}{N!} \left[gV \left(\frac{mT}{2\pi\hbar^2}\right)^{3/2}\right]^N. \tag{3.15}$$

Now, assuming that N >> 1,7 and applying the Stirling formula, we can calculate the gas' free energy:

$$ F = -T \ln Z = -NT \ln\frac{V}{N} + Nf(T), \tag{3.16a}$$

with