Tor Kjellsson, Stockholm University
Chapter 4

4.15
Q. A hydrogen atom starts out in the following linear combination of the stationary states $n,l,m = 2,1,1$ and $n,l,m = 2,1,-1$:
$$\Psi(\mathbf r,0) = \frac{1}{\sqrt 2}\left(\psi_{211} + \psi_{21-1}\right). \qquad (1)$$

a)
Q. Construct $\Psi(\mathbf r,t)$ and simplify it as much as you can.

Sol:
The normalized hydrogen wave functions are:
$$\psi_{nlm} = \sqrt{\left(\frac{2}{na}\right)^3 \frac{(n-l-1)!}{2n\,[(n+l)!]^3}}\; e^{-r/na}\left(\frac{2r}{na}\right)^l L^{2l+1}_{n-l-1}(2r/na)\, Y_l^m(\theta,\phi) \qquad (2)$$
where $L^{2l+1}_{n-l-1}(x)$ and $Y_l^m(\theta,\phi)$ are the Laguerre polynomials and the spherical harmonics. In Griffiths (second edition) you find the formal definitions in chapters 4.2 and 4.1 respectively. A few examples are also listed on p.139 and p.153, to which we refer the reader.

What we do now is simply to insert our two sets of $\{n,l,m\}$ into eq. (2):
$$\psi_{211} = \sqrt{\left(\frac{2}{2a}\right)^3 \frac{(2-1-1)!}{2\cdot 2\,[(2+1)!]^3}}\; e^{-r/2a}\left(\frac{2r}{2a}\right)^1 L^{2\cdot 1+1}_{2-1-1}(2r/2a)\, Y_1^1(\theta,\phi)$$
$$\psi_{211} = \sqrt{\frac{1}{a^3}\frac{1}{4\cdot 6^3}}\; e^{-r/2a}\,\frac{r}{a}\, L^3_0(r/a)\, Y_1^1(\theta,\phi).$$
Now we look up the Laguerre polynomial on p.153 and the spherical harmonic on p.139 that we need:
$$L^3_0(r/a) = 6 \qquad\text{and}\qquad Y_1^1(\theta,\phi) = -\sqrt{\frac{3}{8\pi}}\,\sin\theta\, e^{i\phi}.$$
Thus we have arrived at the following expression:
$$\psi_{211} = -\sqrt{\frac{1}{64\pi a^5}}\; r\,e^{-r/2a}\sin\theta\, e^{i\phi}. \qquad (3)$$
The same procedure for $n,l,m = 2,1,-1$ gives:
$$\psi_{21-1} = \sqrt{\frac{1}{a^3}\frac{1}{4\cdot 6^3}}\; e^{-r/2a}\,\frac{r}{a}\, L^3_0(r/a)\, Y_1^{-1}(\theta,\phi).$$
Looking up the new spherical harmonic on p.139 we obtain:
$$\psi_{21-1} = \sqrt{\frac{1}{64\pi a^5}}\; r\,e^{-r/2a}\sin\theta\, e^{-i\phi}$$
and thus:
$$\Psi(\mathbf r,0) = \frac{1}{\sqrt 2}\left(\psi_{211}+\psi_{21-1}\right) = \frac{1}{\sqrt 2}\left(-\sqrt{\frac{1}{64\pi a^5}}\, re^{-r/2a}\sin\theta\, e^{i\phi} + \sqrt{\frac{1}{64\pi a^5}}\, re^{-r/2a}\sin\theta\, e^{-i\phi}\right)$$
$$\Psi(\mathbf r,0) = -\sqrt{\frac{1}{2\cdot 64\pi a^5}}\, re^{-r/2a}\sin\theta\left(e^{i\phi}-e^{-i\phi}\right)$$
$$\Psi(\mathbf r,0) = -i\sqrt{\frac{1}{32\pi a^5}}\, re^{-r/2a}\sin\theta\,\sin\phi,$$
and to get the full (time-dependent) wave function we just multiply by the usual exponential:
$$\Psi(\mathbf r,t) = -i\sqrt{\frac{1}{32\pi a^5}}\, re^{-r/2a}\sin\theta\,\sin\phi\; e^{-iE_2 t/\hbar}$$
which contains the time dependence and the energy of the state.

b)
Q. Find the expectation value of the potential energy, $\langle V\rangle$. Does it depend on $t$? Give both the formula and the actual number, in electron volts.

Sol:
The expectation value is, as always, given by:
$$\langle V\rangle = \langle\Psi|\hat V|\Psi\rangle$$
and when the states are functions this becomes the integral (evaluated over all space):
$$\langle V\rangle = \int \Psi^*\hat V\,\Psi\, d\mathbf r = \int \Psi^* V\,\Psi\, d\mathbf r \qquad (4)$$
where we could drop the hat on $\hat V$ because it is just a function. Also, note the bold integration variable — this denotes a volume integral, which we use because we are working in $\mathbb R^3$.

Now we simplify the integrand:
$$\Psi^*\hat V\Psi = \left[i\sqrt{\frac{1}{32\pi a^5}}\, re^{-r/2a}\sin\theta\sin\phi\, e^{iE_2t/\hbar}\right] V(r)\left[-i\sqrt{\frac{1}{32\pi a^5}}\, re^{-r/2a}\sin\theta\sin\phi\, e^{-iE_2t/\hbar}\right] = \frac{1}{32\pi a^5}\, r^2 e^{-r/a}\sin^2\theta\,\sin^2\phi\; V(r).$$
Insert this into eq. (4) together with the potential $V(r) = -\dfrac{e^2}{4\pi\varepsilon_0 r}$:
$$\langle V\rangle = \frac{-1}{32\pi a^5}\int r^2 e^{-r/a}\sin^2\theta\,\sin^2\phi\;\frac{e^2}{4\pi\varepsilon_0 r}\, d\mathbf r.$$
Note that $\mathbf r = (x,y,z)$ is given in the Cartesian coordinate system while our functions are expressed in spherical coordinates. Making the transition to spherical coordinates in the integral, $d\mathbf r = dx\,dy\,dz = r^2\sin\theta\, dr\, d\theta\, d\phi$, we get:
$$\langle V\rangle = -\frac{e^2}{128\, a^5\pi^2\varepsilon_0}\int_0^\infty r^3 e^{-r/a}\, dr \int_0^\pi \sin^3\theta\, d\theta \int_0^{2\pi}\sin^2\phi\, d\phi.$$
The last two integrals are evaluated easily, but the first one may be unfamiliar to many of you. You can evaluate it with the formula
$$\int_0^\infty r^n e^{-r/a}\, dr = n!\, a^{n+1},$$
which you can find in Physics Handbook under the Gamma function. Evaluating the three integrals we get:
$$\langle V\rangle = -\frac{e^2}{128\, a^5\pi^2\varepsilon_0}\cdot 6a^4\cdot\frac{4}{3}\cdot\pi = -\frac{e^2}{16\pi\varepsilon_0 a}.$$
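As a quick sanity check (an addition, not part of the original solution), the three separated integrals can be evaluated numerically. The sketch below assumes scipy is available and sets the Bohr radius $a=1$, so the result comes out in units of $e^2/(4\pi\varepsilon_0 a)$:

```python
import numpy as np
from scipy.integrate import quad

a = 1.0  # Bohr radius set to 1; energies then come out in units of e^2/(4*pi*eps0*a)

# the three separated integrals appearing in <V>
I_r, _ = quad(lambda r: r**3 * np.exp(-r / a), 0, np.inf)   # expect 6*a^4
I_th, _ = quad(lambda th: np.sin(th)**3, 0, np.pi)          # expect 4/3
I_ph, _ = quad(lambda ph: np.sin(ph)**2, 0, 2 * np.pi)      # expect pi

# prefactor -e^2/(128 a^5 pi^2 eps0) = -(1/(32 pi a^5)) * e^2/(4 pi eps0)
V_avg = -1.0 / (32 * np.pi * a**5) * I_r * I_th * I_ph
print(I_r, I_th, I_ph)   # ~6.0  ~1.3333  ~3.1416
print(V_avg)             # ~ -0.25, i.e. <V> = -e^2/(16 pi eps0 a)
```

The product comes out to $-1/4$ in these units, i.e. $\langle V\rangle = -e^2/(16\pi\varepsilon_0 a)$, in agreement with the result above.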
Here $a$ is the Bohr radius, given by
$$a \equiv \frac{4\pi\varepsilon_0\hbar^2}{me^2} \qquad (5)$$
so inserting this into our expectation value of the potential energy we obtain:
$$\langle V\rangle = -\frac{e^2}{16\pi\varepsilon_0}\cdot\frac{me^2}{4\pi\varepsilon_0\hbar^2} = -\frac{1}{2}\cdot\frac{m}{2\hbar^2}\left(\frac{e^2}{4\pi\varepsilon_0}\right)^2 = \frac{1}{2}E_1 = -6.8\ \text{eV}$$
which is, as we see, independent of $t$.

4.18
Q. The raising and lowering operators change the value of $m$ by one unit:
$$L_\pm f_l^m = (A_l^m)\, f_l^{m\pm 1} \qquad (6)$$
where $A_l^m$ is some constant. What is this constant if the eigenfunctions $f_l^m$ are to be normalized?
Hint: First show that $L_\mp$ is the hermitian conjugate of $L_\pm$ (since $L_x$ and $L_y$ are observables you may assume that they are hermitian, but prove it if you like). Then use
$$L^2 = L_\pm L_\mp + L_z^2 \mp \hbar L_z \qquad (7)$$
to get the answer:
$$A_l^m = \hbar\sqrt{l(l+1)-m(m\pm 1)} = \hbar\sqrt{(l\mp m)(l\pm m+1)}. \qquad (8)$$

Sol:
Recall that $L_\pm = L_x \pm iL_y$. Following the hint we first show that $(L_+)^\dagger = L_-$ by letting $L_+$ act on a test function $g$:
$$\langle g|L_+ g\rangle = \langle g|(L_x+iL_y)g\rangle = \langle g|L_x g\rangle + \langle g|iL_y g\rangle. \qquad (9)$$
Since $L_x$ and $L_y$ are observables they are hermitian. It thus follows that:
$$\langle g|L_+ g\rangle = \langle g|L_x g\rangle + \langle g|iL_y g\rangle = \langle L_x g|g\rangle + \langle -iL_y g|g\rangle = \langle L_- g|g\rangle. \qquad (10)$$
(You proved in problem 3.5 that you must complex conjugate a complex constant when you move it from the ket to the bra.)

Continuing to follow the hint we now use eq. (7). Recall that to extract a scalar from an operator we must compute some sort of inner product, so we compute $\langle f_l^m|L^2 f_l^m\rangle$:
$$\underbrace{\langle f_l^m|L^2 f_l^m\rangle}_{(1)} = \underbrace{\langle f_l^m|L_\pm L_\mp f_l^m\rangle}_{(2)} + \underbrace{\langle f_l^m|L_z^2 f_l^m\rangle}_{(3)} + \underbrace{\langle f_l^m|\mp\hbar L_z f_l^m\rangle}_{(4)}. \qquad (11)$$
Now we evaluate the terms.

(1): Recall that $f_l^m$ are eigenfunctions of $L^2$:
$$L^2 f_l^m = \hbar^2 l(l+1) f_l^m \implies \langle f_l^m|L^2 f_l^m\rangle = \hbar^2 l(l+1). \qquad (12)$$

(2): Here we use $(L_+)^\dagger = L_-$:
$$\langle f_l^m|L_\pm L_\mp f_l^m\rangle = \langle L_\mp f_l^m|L_\mp f_l^m\rangle = \langle A_l^m f_l^{m\mp 1}|A_l^m f_l^{m\mp 1}\rangle = |A_l^m|^2\underbrace{\langle f_l^{m\mp 1}|f_l^{m\mp 1}\rangle}_{1} = |A_l^m|^2. \qquad (13)$$

(3): Recall that $f_l^m$ are eigenfunctions of $L_z$:
$$L_z f_l^m = \hbar m f_l^m \implies \langle f_l^m|L_z f_l^m\rangle = \hbar m. \qquad (14)$$
We use this to show that:
$$\langle f_l^m|L_z^2 f_l^m\rangle = \langle f_l^m|L_z\,\hbar m f_l^m\rangle = \langle f_l^m|\hbar m\cdot\hbar m f_l^m\rangle = (\hbar m)^2\underbrace{\langle f_l^m|f_l^m\rangle}_{1} = (\hbar m)^2. \qquad (15)$$

(4): From eq. (14) we get:
$$\langle f_l^m|\mp\hbar L_z f_l^m\rangle = \mp\hbar^2 m. \qquad (16)$$

Inserting the results (1)–(4) into eq. (11) now gives:
$$\hbar^2 l(l+1) = |A_l^m|^2 + (\hbar m)^2 \mp \hbar^2 m \qquad (17)$$
$$|A_l^m| = \hbar\sqrt{l(l+1)-m^2\pm m} = \hbar\sqrt{l(l+1)-m(m\mp 1)}. \qquad (18)$$
The last algebraic step in eq. (8) is left out. A more important remark is that eq. (18) does not determine the phase of $A_l^m$. The phases introduced by the ladder operators have no physical significance, so we are free to set them to 0. In more advanced problems than we consider in this book, however, you must be careful with the choice of phase. Also note that in eq. (11), term (2), we used $L_\mp$ rather than $L_\pm$; therefore eq. (18) has the signs inverted relative to eq. (8).
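The algebraic step left out above, $l(l+1)-m(m\pm 1)=(l\mp m)(l\pm m+1)$, is easy to confirm symbolically. A minimal sketch with sympy (an addition, not part of the original solution):

```python
import sympy as sp

l, m = sp.symbols('l m')

# the omitted step between eq. (18) and eq. (8)
print(sp.expand(l*(l + 1) - m*(m + 1) - (l - m)*(l + m + 1)))   # 0
print(sp.expand(l*(l + 1) - m*(m - 1) - (l + m)*(l - m + 1)))   # 0

# numeric spot check: l = 2, m = 1, raising operator
print(sp.sqrt(2*3 - 1*2), sp.sqrt((2 - 1)*(2 + 1 + 1)))         # 2 and 2 (times hbar)
```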
4.19
Starting from the canonical commutation relations for position and momentum,
$$[r_i,p_j] = i\hbar\delta_{ij},\qquad [r_i,r_j]=[p_i,p_j]=0, \qquad (19)$$
where $(r_1,r_2,r_3)=(x,y,z)$ and $(p_1,p_2,p_3)=(p_x,p_y,p_z)$ ...

a)
Q. ... work out the following commutators:
$$[L_z,x]=i\hbar y,\qquad [L_z,y]=-i\hbar x,\qquad [L_z,z]=0 \qquad (20)$$
$$[L_z,p_x]=i\hbar p_y,\qquad [L_z,p_y]=-i\hbar p_x,\qquad [L_z,p_z]=0 \qquad (21)$$

Sol:
Insert $L_z = xp_y - yp_x$ in each commutator and do the algebra. We start by doing this in detail for $[L_z,x]$:
$$[L_z,x] = [xp_y - yp_x,\, x] = [xp_y,x] - [yp_x,x]. \qquad (22)$$
Focus on one commutator at a time:
$$[xp_y,x] = x[p_y,x] + [x,x]p_y = 0 + 0 \qquad (23)$$
where we applied:
1) $[AB,C] = A[B,C] + [A,C]B$,
2) eq. (19) for $[p_y,x]=0$,
3) $[x,x]=0$ for obvious reasons.
Turning to the other commutator:
$$[yp_x,x] = y[p_x,x] + [y,x]p_x = -i\hbar y + 0 \qquad (24)$$
where we used eq. (19). Inserting the results from eqs. (23), (24) into eq. (22) gives:
$$[L_z,x] = [xp_y,x]-[yp_x,x] = 0-(-i\hbar y) = i\hbar y.$$
Similarly we get:
$$[L_z,y] = [xp_y-yp_x,\,y] = x\underbrace{[p_y,y]}_{-i\hbar} + \underbrace{[x,y]}_{0}p_y - y\underbrace{[p_x,y]}_{0} - \underbrace{[y,y]}_{0}p_x = -i\hbar x.$$
For the remaining commutators $[L_z,z]$, $[L_z,p_x]$, $[L_z,p_y]$, $[L_z,p_z]$ we do the same steps; from now on we only mark the non-zero commutators with underbraces:
$$[L_z,z] = [xp_y-yp_x,\,z] = x[p_y,z]+[x,z]p_y - y[p_x,z]-[y,z]p_x = 0$$
$$[L_z,p_x] = [xp_y-yp_x,\,p_x] = x[p_y,p_x]+\underbrace{[x,p_x]}_{i\hbar}p_y - y[p_x,p_x]-[y,p_x]p_x = i\hbar p_y$$
$$[L_z,p_y] = [xp_y-yp_x,\,p_y] = x[p_y,p_y]+[x,p_y]p_y - y[p_x,p_y]-\underbrace{[y,p_y]}_{i\hbar}p_x = -i\hbar p_x$$
$$[L_z,p_z] = [xp_y-yp_x,\,p_z] = x[p_y,p_z]+[x,p_z]p_y - y[p_x,p_z]-[y,p_z]p_x = 0$$

b)
Q. Use the results from the previous problem to obtain $[L_z,L_x]=i\hbar L_y$ directly from:
$$L_x = yp_z - zp_y,\qquad L_y = zp_x - xp_z,\qquad L_z = xp_y - yp_x. \qquad (25)$$
Sol:
Insert the definition of $L_x$:
$$[L_z,L_x] = [L_z,\,yp_z - zp_y] = [L_z,yp_z] - [L_z,zp_y] = -[yp_z,L_z] + [zp_y,L_z]$$
$$= -\Big(y\underbrace{[p_z,L_z]}_{0} + \underbrace{[y,L_z]}_{i\hbar x}p_z\Big) + \Big(z\underbrace{[p_y,L_z]}_{i\hbar p_x} + \underbrace{[z,L_z]}_{0}p_y\Big) = -i\hbar\, x p_z + i\hbar\, z p_x = i\hbar L_y.$$

c)
Q. Evaluate the commutators $[L_z,\,x^2+y^2+z^2]$ and $[L_z,\,p_x^2+p_y^2+p_z^2]$.
Sol:
$$[L_z,\,x^2+y^2+z^2] = [L_z,x^2]+[L_z,y^2]+[L_z,z^2] = -\left([x^2,L_z]+[y^2,L_z]+[z^2,L_z]\right)$$
$$= -\Big(x\underbrace{[x,L_z]}_{-i\hbar y}+\underbrace{[x,L_z]}_{-i\hbar y}x + y\underbrace{[y,L_z]}_{i\hbar x}+\underbrace{[y,L_z]}_{i\hbar x}y + z\underbrace{[z,L_z]}_{0}+\underbrace{[z,L_z]}_{0}z\Big) = 0$$
$$[L_z,\,p_x^2+p_y^2+p_z^2] = -\Big(p_x\underbrace{[p_x,L_z]}_{-i\hbar p_y}+\underbrace{[p_x,L_z]}_{-i\hbar p_y}p_x + p_y\underbrace{[p_y,L_z]}_{i\hbar p_x}+\underbrace{[p_y,L_z]}_{i\hbar p_x}p_y + p_z\underbrace{[p_z,L_z]}_{0}+\underbrace{[p_z,L_z]}_{0}p_z\Big) = 0.$$

d)
Q. Show that the Hamiltonian $H = p^2/(2m)+V$ commutes with all three components of $\mathbf L$, provided that $V$ depends only on $r$. (Thus $H$, $L^2$ and $L_z$ are mutually compatible observables.)
Sol:
$$[H,\mathbf L] = \frac{1}{2m}[p^2,\mathbf L] + [V,\mathbf L].$$
In the previous problem we proved that $[L_z,\, p_x^2+p_y^2+p_z^2]=0$. The same holds for $L_x$ and $L_y$ — just relabel the coordinates in the previous calculation and you will see it. Since all components of $\mathbf L$ commute with $p^2$ it follows that $[\mathbf L,\, p_x^2+p_y^2+p_z^2] = 0$.

Now we show that $[V,\mathbf L]=0$. Assume, as we are asked to, that the potential is a function of $r = \sqrt{x^2+y^2+z^2}$ only. Since $V$ depends only on $r$ it is useful to see what $\mathbf L$ looks like in spherical coordinates:
$$\mathbf L = \mathbf r\times\mathbf p = \frac{\hbar}{i}\,\mathbf r\times\nabla,\qquad \nabla = \left(\frac{\partial}{\partial r},\ \frac{1}{r}\frac{\partial}{\partial\theta},\ \frac{1}{r\sin\theta}\frac{\partial}{\partial\phi}\right) \qquad (26,\,27)$$
$$\mathbf L = \frac{\hbar}{i}\begin{vmatrix}\hat r & \hat\theta & \hat\phi\\[2pt] r & 0 & 0\\[2pt] \dfrac{\partial}{\partial r} & \dfrac{1}{r}\dfrac{\partial}{\partial\theta} & \dfrac{1}{r\sin\theta}\dfrac{\partial}{\partial\phi}\end{vmatrix} = \frac{\hbar}{i}\left(\hat\phi\,\frac{\partial}{\partial\theta} - \hat\theta\,\frac{1}{\sin\theta}\frac{\partial}{\partial\phi}\right). \qquad (28)$$
Note that all terms in eq. (28) commute with any function $V(r)$: the only $r$-dependent factor in the determinant is the scalar $1/r$, which certainly commutes with $V(r)$, and the remaining derivatives act only on $\theta$ and $\phi$. This shows that $[\mathbf L, V(r)]=0$ and thus also that:
$$[H,\mathbf L] = \frac{1}{2m}[p^2,\mathbf L] + [V,\mathbf L] = 0. \qquad (29)$$
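For readers who want to double-check part a), here is a small symbolic sketch (an addition, not part of the original solution). It lets the operators act on an arbitrary test function $g(x,y,z)$ and evaluates two of the commutators explicitly; the others can be checked the same way:

```python
import sympy as sp

x, y, z, hbar = sp.symbols('x y z hbar')
g = sp.Function('g')(x, y, z)                     # arbitrary test function

px = lambda f: -sp.I * hbar * sp.diff(f, x)       # momentum operators in position space
py = lambda f: -sp.I * hbar * sp.diff(f, y)
Lz = lambda f: x * py(f) - y * px(f)              # L_z = x p_y - y p_x

# [L_z, x] g = L_z(x g) - x L_z(g); expect i*hbar*y*g
print(sp.simplify(Lz(x * g) - x * Lz(g)))                        # I*hbar*y*g(x, y, z)

# [L_z, p_x] g; expect i*hbar*p_y g, so the difference below should vanish
print(sp.simplify(Lz(px(g)) - px(Lz(g)) - sp.I * hbar * py(g)))  # 0
```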
4.21
Q. Derive the following equation:
$$L_+L_- = -\hbar^2\left(\frac{\partial^2}{\partial\theta^2} + \cot\theta\,\frac{\partial}{\partial\theta} + \cot^2\theta\,\frac{\partial^2}{\partial\phi^2} + i\frac{\partial}{\partial\phi}\right) \qquad (30)$$
using the definition of the raising/lowering operators:
$$L_\pm = \pm\hbar\, e^{\pm i\phi}\left(\frac{\partial}{\partial\theta} \pm i\cot\theta\,\frac{\partial}{\partial\phi}\right). \qquad (31)$$

Sol:
We will now work with the explicit form of the operators, and whenever you do that it is a good idea to use a test function. Our test function will be called $g(\mathbf r)$, and from eq. (31) we get:
$$L_+L_-\, g(\mathbf r) = \hbar e^{i\phi}\left(\frac{\partial}{\partial\theta} + i\cot\theta\frac{\partial}{\partial\phi}\right)\left[-\hbar e^{-i\phi}\left(\frac{\partial}{\partial\theta} - i\cot\theta\frac{\partial}{\partial\phi}\right)\right]g(\mathbf r).$$
First we let $L_-$ act on the test function, and then $L_+$ acts on $L_-g(\mathbf r)$:
$$L_+L_-\, g(\mathbf r) = \hbar e^{i\phi}\Bigg[\underbrace{\frac{\partial}{\partial\theta}\!\left(-\hbar e^{-i\phi}\Big(\frac{\partial g}{\partial\theta} - i\cot\theta\frac{\partial g}{\partial\phi}\Big)\right)}_{A} + i\cot\theta\underbrace{\frac{\partial}{\partial\phi}\!\left(-\hbar e^{-i\phi}\Big(\frac{\partial g}{\partial\theta} - i\cot\theta\frac{\partial g}{\partial\phi}\Big)\right)}_{B}\Bigg]$$
where we have named two parts to work on separately in order to avoid algebraic errors.

A:
$$A = -\hbar e^{-i\phi}\frac{\partial^2 g}{\partial\theta^2} + \hbar e^{-i\phi}\, i\frac{\partial}{\partial\theta}\!\left(\cot\theta\frac{\partial g}{\partial\phi}\right) = -\hbar e^{-i\phi}\left(\frac{\partial^2 g}{\partial\theta^2} + \frac{i}{\sin^2\theta}\frac{\partial g}{\partial\phi} - i\cot\theta\frac{\partial^2 g}{\partial\theta\partial\phi}\right). \qquad (32)$$

B:
$$B = -\hbar\frac{\partial}{\partial\phi}\!\left(e^{-i\phi}\frac{\partial g}{\partial\theta}\right) + i\hbar\cot\theta\frac{\partial}{\partial\phi}\!\left(e^{-i\phi}\frac{\partial g}{\partial\phi}\right) = \hbar e^{-i\phi}\left(i\frac{\partial g}{\partial\theta} - \frac{\partial^2 g}{\partial\phi\partial\theta} + \cot\theta\frac{\partial g}{\partial\phi} + i\cot\theta\frac{\partial^2 g}{\partial\phi^2}\right). \qquad (33)$$

From the expression above,
$$L_+L_-\, g(\mathbf r) = \hbar e^{i\phi}\left[A + i\cot\theta\, B\right], \qquad (34)$$
we see that we need to work out $A + i\cot\theta\, B$:
$$A = -\hbar e^{-i\phi}\left(\frac{\partial^2 g}{\partial\theta^2} - i\cot\theta\frac{\partial^2 g}{\partial\theta\partial\phi} + \frac{i}{\sin^2\theta}\frac{\partial g}{\partial\phi}\right)$$
$$i\cot\theta\, B = i\cot\theta\cdot\hbar e^{-i\phi}\left(i\frac{\partial g}{\partial\theta} - \frac{\partial^2 g}{\partial\phi\partial\theta} + \cot\theta\frac{\partial g}{\partial\phi} + i\cot\theta\frac{\partial^2 g}{\partial\phi^2}\right).$$
A standard assumption about functions in physics is that partial derivatives with respect to different variables commute, so $\partial^2 g/\partial\theta\partial\phi = \partial^2 g/\partial\phi\partial\theta$. When we add the two expressions the mixed-derivative terms therefore cancel, and using $\cot^2\theta - 1/\sin^2\theta = -1$ we get:
$$A+i\cot\theta\, B = -\hbar e^{-i\phi}\left(\frac{\partial^2 g}{\partial\theta^2} + i\frac{\partial g}{\partial\phi} + \cot\theta\frac{\partial g}{\partial\theta} + \cot^2\theta\frac{\partial^2 g}{\partial\phi^2}\right).$$
Now we are (finally!) done:
$$L_+L_-\, g(\mathbf r) = \hbar e^{i\phi}\left(A + i\cot\theta\, B\right) = -\hbar^2\left(\frac{\partial^2}{\partial\theta^2} + i\frac{\partial}{\partial\phi} + \cot\theta\frac{\partial}{\partial\theta} + \cot^2\theta\frac{\partial^2}{\partial\phi^2}\right)g(\mathbf r)$$
$$L_+L_- = -\hbar^2\left(\frac{\partial^2}{\partial\theta^2} + i\frac{\partial}{\partial\phi} + \cot\theta\frac{\partial}{\partial\theta} + \cot^2\theta\frac{\partial^2}{\partial\phi^2}\right). \qquad (35)$$

b)
Q. Use the following three equations:
$$L_+L_- = -\hbar^2\left(\frac{\partial^2}{\partial\theta^2} + \cot\theta\frac{\partial}{\partial\theta} + \cot^2\theta\frac{\partial^2}{\partial\phi^2} + i\frac{\partial}{\partial\phi}\right) \qquad (36)$$
$$L_z = \frac{\hbar}{i}\frac{\partial}{\partial\phi} \qquad (37)$$
$$L^2 = L_\pm L_\mp + L_z^2 \mp \hbar L_z \qquad (38)$$
to derive:
$$L^2 = -\hbar^2\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\!\left(\sin\theta\frac{\partial}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2}\right]. \qquad (39)$$

Sol:
We let the "upper" version of eq. (38) act on a test function $g(\mathbf r)$:
$$L^2 g(\mathbf r) = L_+L_- g(\mathbf r) + L_z^2 g(\mathbf r) - \hbar L_z g(\mathbf r). \qquad (40)$$
We know the first term from the tedious calculation in the previous part. The last two terms follow from eq. (37):
$$L_z g(\mathbf r) = \frac{\hbar}{i}\frac{\partial g}{\partial\phi} \qquad (41)$$
$$L_z\big(L_z g(\mathbf r)\big) = \frac{\hbar}{i}\frac{\partial}{\partial\phi}\!\left(\frac{\hbar}{i}\frac{\partial g}{\partial\phi}\right) = -\hbar^2\frac{\partial^2 g}{\partial\phi^2}. \qquad (42)$$
Inserting this into eq. (40):
$$L^2 g = -\hbar^2\left(\frac{\partial^2}{\partial\theta^2} + i\frac{\partial}{\partial\phi} + \cot\theta\frac{\partial}{\partial\theta} + \cot^2\theta\frac{\partial^2}{\partial\phi^2}\right)g - \hbar^2\frac{\partial^2 g}{\partial\phi^2} - \hbar\,\frac{\hbar}{i}\frac{\partial g}{\partial\phi}$$
so apparently:
$$L^2 = -\hbar^2\left(\frac{\partial^2}{\partial\theta^2} + i\frac{\partial}{\partial\phi} + \cot\theta\frac{\partial}{\partial\theta} + \cot^2\theta\frac{\partial^2}{\partial\phi^2} + \frac{\partial^2}{\partial\phi^2} - i\frac{\partial}{\partial\phi}\right)$$
$$L^2 = -\hbar^2\left(\frac{\partial^2}{\partial\theta^2} + \cot\theta\frac{\partial}{\partial\theta} + \left(\cot^2\theta+1\right)\frac{\partial^2}{\partial\phi^2}\right) = -\hbar^2\left(\frac{\partial^2}{\partial\theta^2} + \frac{\cos\theta}{\sin\theta}\frac{\partial}{\partial\theta} + \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2}\right)$$
$$L^2 = -\hbar^2\left[\frac{1}{\sin\theta}\left(\sin\theta\frac{\partial^2}{\partial\theta^2} + \cos\theta\frac{\partial}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2}\right].$$
Note now that you can rewrite the first parenthesis:
$$\sin\theta\frac{\partial^2}{\partial\theta^2} + \cos\theta\frac{\partial}{\partial\theta} = \sin\theta\frac{\partial}{\partial\theta}\frac{\partial}{\partial\theta} + \left(\frac{\partial}{\partial\theta}\sin\theta\right)\frac{\partial}{\partial\theta} = \frac{\partial}{\partial\theta}\!\left(\sin\theta\frac{\partial}{\partial\theta}\right),$$
where the last manipulation follows from the product rule for differentiation. Hence we have arrived at the desired result:
$$L^2 = -\hbar^2\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\!\left(\sin\theta\frac{\partial}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\phi^2}\right].$$
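As a cross-check of eq. (39) (an addition, not part of the original solution), one can apply this form of $L^2$ to a few spherical harmonics and confirm the eigenvalue $\hbar^2 l(l+1)$. The sketch below uses sympy's built-in Ynm and sets $\hbar = 1$; the final simplification is assumed to reduce each ratio to $l(l+1)$:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')

def L2(f):
    # L^2 in the form of eq. (39), with hbar set to 1
    return -(sp.diff(sp.sin(theta) * sp.diff(f, theta), theta) / sp.sin(theta)
             + sp.diff(f, phi, 2) / sp.sin(theta)**2)

for l in range(3):
    for m in range(-l, l + 1):
        Y = sp.Ynm(l, m, theta, phi).expand(func=True)
        print(l, m, sp.simplify(L2(Y) / Y))   # expect l*(l+1) in every case
```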
4.24
Q. Two particles of mass $m$ are attached to the ends of a massless rigid rod of length $a$. The system is free to rotate in three dimensions about the center (but the center point itself is fixed).

a)
Q. Show that the allowed energies of this rigid rotor are:
$$E_n = \frac{\hbar^2 n(n+1)}{ma^2},\qquad n=0,1,2,\dots \qquad (43)$$
Hint: first express the classical energy in terms of the total angular momentum.

Sol:
Recall that the Hamiltonian operator $\hat H$ always tells you the energy of the system:
$$\hat H|\alpha_n\rangle = E_n|\alpha_n\rangle. \qquad (44)$$
You are used to the appearance
$$\hat H = \frac{\hat p^2}{2m} + V(x) \qquad (45)$$
but this is just the sum of the operators for the kinetic and potential energies of the system. So what do these two operators look like for our current system?

If the particles do not move at relativistic speeds, each has kinetic energy $mv^2/2$. Since there are two of them, the total kinetic energy of the system is $mv^2$. There is no potential energy, so the Hamiltonian is:
$$\hat H = mv^2. \qquad (46)$$
Following the hint we express this in terms of the angular momentum of the system (each mass sits a distance $a/2$ from the center):
$$L = 2mrv = 2m\frac{a}{2}v = mav \iff \hat H = \frac{\hat L^2}{ma^2} \qquad (47)$$
and we know both the eigenvalues and the eigenfunctions of $\hat L^2$! Given an eigenfunction $|f_l^m\rangle$ we have
$$\hat L^2|f_l^m\rangle = \hbar^2 l(l+1)|f_l^m\rangle \qquad (48)$$
so the allowed energies for this system are (calling $n=l$):
$$E_n = \frac{\hbar^2 n(n+1)}{ma^2},\qquad n = 0,1,2,\dots \qquad (49)$$

b)
Q. What are the normalized eigenfunctions for this system, and what is the degeneracy of the $n$:th energy level?
Sol:
The normalized eigenfunctions of $\hat L^2$ are the spherical harmonics $Y_l^m(\theta,\phi)$. For each $l$ there are $2l+1$ allowed values of $m$, so the degeneracy of the $n$:th energy level (remember, we set $n=l$ in the previous part) is $2n+1$.

4.27
Q. An electron is in the spin state:
$$|\chi\rangle = A\begin{pmatrix}3i\\ 4\end{pmatrix}. \qquad (50)$$

a)
Q. Determine the normalization constant $A$.
Sol:
The normalization condition is $\langle\chi|\chi\rangle = 1$, so:
$$\langle\chi|\chi\rangle = A^*\begin{pmatrix}-3i & 4\end{pmatrix}A\begin{pmatrix}3i\\ 4\end{pmatrix} = |A|^2\cdot 25 \qquad (51)$$
which gives:
$$1 = |A|^2\cdot 25 \iff A = \frac{1}{5}e^{i\varphi} \qquad (52)$$
where the exponential is called a phase. As we discussed in the solution to problem 4.18, the phase carries no physical significance, so you might as well put $\varphi = 0$ (from now on we will always do this). Hence $A = 1/5$.

b)
Q. Find the expectation values of $S_x$, $S_y$ and $S_z$.
Sol:
$$\langle S_x\rangle = \langle\chi|S_x|\chi\rangle = \frac{1}{25}\begin{pmatrix}-3i & 4\end{pmatrix}\begin{pmatrix}0 & \hbar/2\\ \hbar/2 & 0\end{pmatrix}\begin{pmatrix}3i\\ 4\end{pmatrix} = 0 \qquad (53)$$
$$\langle S_y\rangle = \langle\chi|S_y|\chi\rangle = \frac{1}{25}\begin{pmatrix}-3i & 4\end{pmatrix}\begin{pmatrix}0 & -i\hbar/2\\ i\hbar/2 & 0\end{pmatrix}\begin{pmatrix}3i\\ 4\end{pmatrix} = -\frac{12}{25}\hbar \qquad (54)$$
$$\langle S_z\rangle = \langle\chi|S_z|\chi\rangle = \frac{1}{25}\begin{pmatrix}-3i & 4\end{pmatrix}\begin{pmatrix}\hbar/2 & 0\\ 0 & -\hbar/2\end{pmatrix}\begin{pmatrix}3i\\ 4\end{pmatrix} = -\frac{7}{50}\hbar \qquad (55)$$

c)
Q. Find the "uncertainties" $\sigma_{S_x}$, $\sigma_{S_y}$ and $\sigma_{S_z}$. (Note: these are not the Pauli spin matrices!)
Sol:
Recall that a variance can be written as
$$\sigma_a^2 = \langle a^2\rangle - \langle a\rangle^2 \qquad (56)$$
so we need to work out $\langle S_i^2\rangle$ for $i\in\{x,y,z\}$:
$$S_x^2 = \begin{pmatrix}0 & \hbar/2\\ \hbar/2 & 0\end{pmatrix}\begin{pmatrix}0 & \hbar/2\\ \hbar/2 & 0\end{pmatrix} = \frac{\hbar^2}{4}\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix} \qquad (57)$$
$$S_y^2 = \begin{pmatrix}0 & -i\hbar/2\\ i\hbar/2 & 0\end{pmatrix}\begin{pmatrix}0 & -i\hbar/2\\ i\hbar/2 & 0\end{pmatrix} = \frac{\hbar^2}{4}\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix} \qquad (58)$$
$$S_z^2 = \begin{pmatrix}\hbar/2 & 0\\ 0 & -\hbar/2\end{pmatrix}\begin{pmatrix}\hbar/2 & 0\\ 0 & -\hbar/2\end{pmatrix} = \frac{\hbar^2}{4}\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix} \qquad (59)$$
which means that:
$$\langle S_x^2\rangle = \langle S_y^2\rangle = \langle S_z^2\rangle = \frac{1}{25}\begin{pmatrix}-3i & 4\end{pmatrix}\frac{\hbar^2}{4}\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}\begin{pmatrix}3i\\ 4\end{pmatrix} = \frac{\hbar^2}{4} \qquad (60)$$
(this will always be the case when we operate on a normalized state with the identity matrix multiplied by $\hbar^2/4$).

Inserting this and our previous results into the expression for the variance we get:
$$\sigma_{S_x}^2 = \langle S_x^2\rangle - \langle S_x\rangle^2 = \frac{\hbar^2}{4} \qquad (61)$$
$$\sigma_{S_y}^2 = \langle S_y^2\rangle - \langle S_y\rangle^2 = \frac{\hbar^2}{4} - \left(\frac{12}{25}\right)^2\hbar^2 = \frac{49}{2500}\hbar^2 \qquad (62)$$
$$\sigma_{S_z}^2 = \langle S_z^2\rangle - \langle S_z\rangle^2 = \frac{\hbar^2}{4} - \left(\frac{7}{50}\right)^2\hbar^2 = \frac{576}{2500}\hbar^2 \qquad (63)$$
so:
$$\sigma_{S_x} = \frac{\hbar}{2},\qquad \sigma_{S_y} = \frac{7}{50}\hbar,\qquad \sigma_{S_z} = \frac{12}{25}\hbar. \qquad (64)$$
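Parts a)–c) of problem 4.27 are easy to reproduce numerically. A short numpy sketch (an addition to the solution, with $\hbar = 1$):

```python
import numpy as np

hbar = 1.0
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]])
Sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

chi = np.array([3j, 4]) / 5                      # A = 1/5
print(np.vdot(chi, chi).real)                    # 1.0, i.e. normalized

for name, S in [('Sx', Sx), ('Sy', Sy), ('Sz', Sz)]:
    mean = np.vdot(chi, S @ chi).real
    sigma = np.sqrt(np.vdot(chi, S @ S @ chi).real - mean**2)
    print(name, mean, sigma)
# expect <Sx>=0, <Sy>=-12/25=-0.48, <Sz>=-7/50=-0.14
# and sigma_x=0.5, sigma_y=7/50=0.14, sigma_z=12/25=0.48 (all in units of hbar)
```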
d)
Q. Confirm that your results are consistent with all three uncertainty principles:
$$\sigma_{S_x}\sigma_{S_y}\ge\frac{\hbar}{2}\left|\langle S_z\rangle\right|,\qquad \sigma_{S_z}\sigma_{S_x}\ge\frac{\hbar}{2}\left|\langle S_y\rangle\right|,\qquad \sigma_{S_y}\sigma_{S_z}\ge\frac{\hbar}{2}\left|\langle S_x\rangle\right| \qquad (65)$$
Sol:
Start by computing the left-hand sides:
$$\sigma_{S_x}\sigma_{S_y} = \frac{7}{100}\hbar^2,\qquad \sigma_{S_z}\sigma_{S_x} = \frac{6}{25}\hbar^2,\qquad \sigma_{S_y}\sigma_{S_z} = \frac{7\cdot 12}{50\cdot 25}\hbar^2$$
and then look at the respective right-hand sides:
$$\frac{\hbar}{2}\left|\langle S_z\rangle\right| = \frac{7}{100}\hbar^2,\qquad \frac{\hbar}{2}\left|\langle S_y\rangle\right| = \frac{6}{25}\hbar^2,\qquad \frac{\hbar}{2}\left|\langle S_x\rangle\right| = 0. \qquad (66)$$
In each case we see that the uncertainty principles are fulfilled. (Note that the two non-trivial ones — those whose right-hand side is not 0 — hit the minimum values.)

4.29
a)
Q. Find the eigenvalues and eigenspinors of $S_y$.
Sol:
We know the matrix representation of this operator:
$$S_y = \begin{pmatrix}0 & -i\hbar/2\\ i\hbar/2 & 0\end{pmatrix} \qquad (67)$$
so we can solve directly for the eigenvalues and eigenspinors from the eigenvalue equation:
$$S_y\begin{pmatrix}a\\ b\end{pmatrix} = \lambda\begin{pmatrix}a\\ b\end{pmatrix},\qquad \frac{\hbar}{2}\begin{pmatrix}0 & -i\\ i & 0\end{pmatrix}\begin{pmatrix}a\\ b\end{pmatrix} = \lambda\begin{pmatrix}a\\ b\end{pmatrix}$$
which gives the characteristic equation:
$$\det(S_y-\lambda I) = 0,\qquad (-\lambda)(-\lambda) - \left(-i\frac{\hbar}{2}\right)\left(i\frac{\hbar}{2}\right) = 0 \implies \lambda = \pm\frac{\hbar}{2}.$$
Inserting the eigenvalues into the eigenvalue equation we can then solve for the eigenspinors:
$$\frac{\hbar}{2}\begin{pmatrix}0 & -i\\ i & 0\end{pmatrix}\begin{pmatrix}a\\ b\end{pmatrix} = \pm\frac{\hbar}{2}\begin{pmatrix}a\\ b\end{pmatrix} \qquad (68)$$
which gives the linear system of equations:
$$-ib = \pm a,\qquad ia = \pm b \implies b = \pm ia.$$
Note that the two equations are linearly dependent (which follows from the characteristic equation). Also note that it is important to keep track of the order of the plus and minus signs, since they come from inserting two different eigenvalues. Our normalized eigenspinors are thus:
$$|\chi_\pm^{(y)}\rangle = \frac{1}{\sqrt 2}\begin{pmatrix}1\\ \pm i\end{pmatrix}. \qquad (69)$$

b)
Q. If you measure $S_y$ on a particle in the general (normalized) state
$$|\chi\rangle = a|\chi_+^{(z)}\rangle + b|\chi_-^{(z)}\rangle, \qquad (70)$$
what values might you get and with what probabilities? Check that the probabilities add up to 1. Remember that $a$ and $b$ need not be real!
Sol:
The possible values are the eigenvalues of $S_y$, which we know are $\pm\hbar/2$. To get the probabilities we must express $|\chi\rangle$ in the eigenbasis of $S_y$. This is done by using the inner products of $|\chi\rangle$ with the eigenvectors $|\chi_\pm^{(y)}\rangle$:
$$|\chi\rangle = \langle\chi_+^{(y)}|\chi\rangle\,|\chi_+^{(y)}\rangle + \langle\chi_-^{(y)}|\chi\rangle\,|\chi_-^{(y)}\rangle. \qquad (71)$$
Working in the usual $S_z$-basis we have
$$|\chi_+^{(z)}\rangle = \begin{pmatrix}1\\ 0\end{pmatrix},\qquad |\chi_-^{(z)}\rangle = \begin{pmatrix}0\\ 1\end{pmatrix} \qquad (72)$$
and from the previous part we know that the eigenvectors of $S_y$ are:
$$|\chi_+^{(y)}\rangle = \frac{1}{\sqrt 2}\begin{pmatrix}1\\ i\end{pmatrix},\qquad |\chi_-^{(y)}\rangle = \frac{1}{\sqrt 2}\begin{pmatrix}1\\ -i\end{pmatrix} \qquad (73)$$
which gives us:
$$\langle\chi_+^{(y)}|\chi\rangle = \frac{1}{\sqrt 2}\begin{pmatrix}1 & -i\end{pmatrix}\begin{pmatrix}a\\ b\end{pmatrix} = \frac{a-ib}{\sqrt 2} \qquad (74)$$
$$\langle\chi_-^{(y)}|\chi\rangle = \frac{1}{\sqrt 2}\begin{pmatrix}1 & +i\end{pmatrix}\begin{pmatrix}a\\ b\end{pmatrix} = \frac{a+ib}{\sqrt 2} \qquad (75)$$
and hence the state, expressed in the $y$-basis, is:
$$|\chi\rangle = \frac{a-ib}{\sqrt 2}|\chi_+^{(y)}\rangle + \frac{a+ib}{\sqrt 2}|\chi_-^{(y)}\rangle. \qquad (76)$$
Now that we have expressed the state in the right basis we get the probabilities by computing the magnitude squared of the coefficients:
$$P\!\left(+\frac{\hbar}{2}\right) = \left|\frac{a-ib}{\sqrt 2}\right|^2 = \frac{|a|^2 - iba^* + iab^* + |b|^2}{2},\qquad P\!\left(-\frac{\hbar}{2}\right) = \left|\frac{a+ib}{\sqrt 2}\right|^2 = \frac{|a|^2 + iba^* - iab^* + |b|^2}{2}$$
$$P\!\left(+\frac{\hbar}{2}\right) + P\!\left(-\frac{\hbar}{2}\right) = \frac{2\left(|a|^2+|b|^2\right)}{2} = |a|^2+|b|^2 = 1 \qquad (77)$$
where the last equality follows because the original state was normalized.

c)
Q. If you measured $S_y^2$, what values might you get and with what probabilities?
Sol:
Recall from eq. (58) what $S_y^2$ looks like in matrix form:
$$S_y^2 = \frac{\hbar^2}{4}\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}.$$
We do not need to find its eigenstates, because they are simply
$$\begin{pmatrix}1\\ 0\end{pmatrix}\quad\text{and}\quad\begin{pmatrix}0\\ 1\end{pmatrix} \qquad (78)$$
and they have the same eigenvalue (the matrix is diagonal with equal entries). Since there is only one possible value, the answer is $\hbar^2/4$ with probability 1.
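The eigenvalue problem and the probabilities in part b) can also be checked numerically. The sketch below (an addition, $\hbar = 1$) uses an arbitrary normalized example state $a = 0.6$, $b = 0.8i$:

```python
import numpy as np

hbar = 1.0
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]])

vals, vecs = np.linalg.eigh(Sy)
print(vals)                       # [-0.5  0.5], i.e. +- hbar/2
print(vecs)                       # columns proportional to (1, -i) and (1, i), up to phase

# probabilities for an arbitrary normalized state a|+_z> + b|-_z>
a, b = 0.6, 0.8j
chi = np.array([a, b])
chi_y_plus = np.array([1, 1j]) / np.sqrt(2)
chi_y_minus = np.array([1, -1j]) / np.sqrt(2)
P_plus = abs(np.vdot(chi_y_plus, chi))**2     # |a - i b|^2 / 2
P_minus = abs(np.vdot(chi_y_minus, chi))**2   # |a + i b|^2 / 2
print(P_plus, P_minus, P_plus + P_minus)      # the last number is 1
```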
4.31
Q. Construct the spin matrices ($S_x$, $S_y$ and $S_z$) for a particle of spin 1.
Hint: How many eigenstates of $S_z$ are there? Determine the action of $S_z$, $S_+$ and $S_-$ on each of these states. Follow the procedure used in the text for spin 1/2. Also, study pages 173–174 carefully before attempting this problem.

Sol:
First we need to determine the number of eigenstates of $\hat S_z$ (which is of course the same number as for $\hat S_x$ and $\hat S_y$). This is given by the number of different $m_s$ we can have: $m_s\in\{-s,-s+1,\dots,0,\dots,s\}$. For a spin-1 particle this is just $m_s\in\{-1,0,1\}$. Let us call these states $|s\ m\rangle$:
$$|1\ 1\rangle,\qquad |1\ 0\rangle,\qquad |1\ {-1}\rangle.$$
By knowing the action of the operators on these states we can determine their matrix representations — precisely as was done in the book in section 4.4.1. From eqs. (4.134)–(4.136) in the textbook we read:
$$\hat S^2|1\ m\rangle = 2\hbar^2|1\ m\rangle,\qquad \hat S_z|1\ m\rangle = \hbar m|1\ m\rangle,\qquad \hat S_\pm|1\ m\rangle = \hbar\sqrt{2-m(m\pm 1)}\,|1\ (m\pm 1)\rangle.$$
Since we are asked to find the operators in matrix form we make the natural choice
$$|1\ 1\rangle = \begin{pmatrix}1\\0\\0\end{pmatrix},\qquad |1\ 0\rangle = \begin{pmatrix}0\\1\\0\end{pmatrix},\qquad |1\ {-1}\rangle = \begin{pmatrix}0\\0\\1\end{pmatrix}.$$
With this choice we easily obtain $S_z$ (if you do not see this, pause for a second and look at what $\hat S_z$ does to these kets):
$$S_z = \hbar\begin{pmatrix}1 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & -1\end{pmatrix}$$
and now we proceed with the ladder operators. To find the matrix representation of an operator $\hat A$, recall that its elements are given by $A_{mn} = \langle e_m|\hat A|e_n\rangle$.

$S_\pm$: Note that
$$\langle 1\ m|\hat S_\pm|1\ n\rangle = \hbar\sqrt{2-n(n\pm 1)}\,\langle 1\ m|1\ (n\pm 1)\rangle \implies \langle 1\ m|\hat S_+|1\ n\rangle = \hbar\sqrt{2-n(n+1)}\,\delta_{m,n+1},\qquad \langle 1\ m|\hat S_-|1\ n\rangle = \hbar\sqrt{2-n(n-1)}\,\delta_{m,n-1}.$$
Now we check the different combinations of $m$ and $n$, starting with $\hat S_+$. For $m=1$ only $n=0$ gives a contribution:
$$\langle e_1|\hat S_+|e_1\rangle = \langle 1\ 1|\hat S_+|1\ 1\rangle = 0,\qquad \langle e_1|\hat S_+|e_2\rangle = \sqrt 2\hbar\,\underbrace{\langle 1\ 1|1\ 1\rangle}_{1} = \sqrt 2\hbar,\qquad \langle e_1|\hat S_+|e_3\rangle = \sqrt 2\hbar\,\underbrace{\langle 1\ 1|1\ 0\rangle}_{0} = 0,$$
so thus far we know that the matrix must have the form
$$S_+ = \hbar\begin{pmatrix}0 & \sqrt 2 & 0\\ \cdot & \cdot & \cdot\\ \cdot & \cdot & \cdot\end{pmatrix}.$$
Continuing with the next row we obtain:
$$\langle e_2|\hat S_+|e_1\rangle = 0,\qquad \langle e_2|\hat S_+|e_2\rangle = \sqrt 2\hbar\,\underbrace{\langle 1\ 0|1\ 1\rangle}_{0} = 0,\qquad \langle e_2|\hat S_+|e_3\rangle = \sqrt 2\hbar\,\underbrace{\langle 1\ 0|1\ 0\rangle}_{1} = \sqrt 2\hbar,$$
and for the last row all matrix elements are 0:
$$\langle e_3|\hat S_+|e_1\rangle = 0,\qquad \langle e_3|\hat S_+|e_2\rangle = \sqrt 2\hbar\,\underbrace{\langle 1\ {-1}|1\ 1\rangle}_{0} = 0,\qquad \langle e_3|\hat S_+|e_3\rangle = \sqrt 2\hbar\,\underbrace{\langle 1\ {-1}|1\ 0\rangle}_{0} = 0.$$
Hence:
$$S_+ = \hbar\begin{pmatrix}0 & \sqrt 2 & 0\\ 0 & 0 & \sqrt 2\\ 0 & 0 & 0\end{pmatrix}.$$
The exact same procedure (do it!) for the lowering operator gives:
$$S_- = \hbar\begin{pmatrix}0 & 0 & 0\\ \sqrt 2 & 0 & 0\\ 0 & \sqrt 2 & 0\end{pmatrix}.$$
Now we construct $S_x$ and $S_y$. On p.174 we see that
$$S_x = \frac{1}{2}\left(S_+ + S_-\right),\qquad S_y = -\frac{i}{2}\left(S_+ - S_-\right)$$
so:
$$S_x = \frac{\hbar}{\sqrt 2}\begin{pmatrix}0 & 1 & 0\\ 1 & 0 & 1\\ 0 & 1 & 0\end{pmatrix},\qquad S_y = \frac{\hbar}{\sqrt 2}\begin{pmatrix}0 & -i & 0\\ i & 0 & -i\\ 0 & i & 0\end{pmatrix}.$$
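The whole construction can be automated: build $S_\pm$ directly from the ladder-operator formula and then form $S_x$ and $S_y$. The numpy sketch below (an addition to the solution, $\hbar = 1$) also verifies the commutation relation $[S_x,S_y]=i\hbar S_z$ and $S^2 = 2\hbar^2$:

```python
import numpy as np

hbar, s = 1.0, 1
ms = [1, 0, -1]                                 # basis ordering |1 1>, |1 0>, |1 -1>

def ladder(sign):
    M = np.zeros((3, 3))
    for col, m in enumerate(ms):
        if abs(m + sign) <= s:                  # otherwise the ladder operator gives 0
            M[ms.index(m + sign), col] = hbar * np.sqrt(s*(s + 1) - m*(m + sign))
    return M

Sp, Sm = ladder(+1), ladder(-1)
Sz = hbar * np.diag(ms)
Sx = (Sp + Sm) / 2
Sy = (Sp - Sm) / 2j

print(Sp)                                                    # compare with S_+ above
print(np.allclose(Sx @ Sy - Sy @ Sx, 1j * hbar * Sz))        # True: [Sx, Sy] = i hbar Sz
print(np.allclose(Sx @ Sx + Sy @ Sy + Sz @ Sz,
                  2 * hbar**2 * np.eye(3)))                  # True: S^2 = s(s+1) hbar^2
```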
4.33
Q. An electron is at rest in an oscillating magnetic field:
$$\mathbf B = B_0\cos(\omega t)\,\hat z, \qquad (79)$$
where $B_0$ and $\omega$ are constants.

a)
Q. Construct the Hamiltonian matrix for this system.
Sol:
The Hamiltonian for an electron at rest in a magnetic field $\mathbf B$ is
$$H = -\boldsymbol\mu\cdot\mathbf B = -\gamma\,\mathbf S\cdot\mathbf B \qquad (80)$$
where $\boldsymbol\mu$ is the magnetic dipole moment and $\gamma$ is the gyromagnetic ratio. Inserting $\mathbf B$ we obtain, since the magnetic field points in the $\hat z$-direction:
$$H = -\gamma B_0\cos(\omega t)\,S_z = -\gamma B_0\cos(\omega t)\,\frac{\hbar}{2}\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}. \qquad (81)$$

b)
Q. The electron starts out (at $t=0$) in the spin-up state with respect to the $x$-axis (that is, $|\chi(0)\rangle = |\chi_+^{(x)}\rangle$). Determine $|\chi(t)\rangle$ at any subsequent time. Beware: this is a time-dependent Hamiltonian, so you cannot get $|\chi(t)\rangle$ in the usual way from stationary states. Fortunately, in this case you can solve the time-dependent Schrödinger equation
$$i\hbar\frac{\partial|\chi\rangle}{\partial t} = H|\chi\rangle \qquad (82)$$
directly.
Sol:
First we express the state as a column vector,
$$|\chi(t)\rangle = \begin{pmatrix}a(t)\\ b(t)\end{pmatrix}, \qquad (83)$$
and plug it, together with the Hamiltonian, into the Schrödinger equation:
$$i\hbar\frac{d}{dt}\begin{pmatrix}a(t)\\ b(t)\end{pmatrix} = -\gamma B_0\cos(\omega t)\,\frac{\hbar}{2}\begin{pmatrix}a(t)\\ -b(t)\end{pmatrix} \qquad (84)$$
which gives a system of equations:
$$i\frac{da}{dt} = -\gamma\frac{B_0\cos(\omega t)}{2}\,a,\qquad i\frac{db}{dt} = \gamma\frac{B_0\cos(\omega t)}{2}\,b.$$
The two equations are almost identical, so we focus on the first one and re-use the derivation afterwards. Separate the variables,
$$i\frac{da}{a} = -\gamma\frac{B_0\cos(\omega t)}{2}\,dt,$$
and integrate both sides:
$$i\int\frac{da}{a} = -\gamma\frac{B_0}{2}\int\cos(\omega t)\,dt$$
$$i\ln|a| + C = -\gamma\frac{B_0}{2\omega}\sin(\omega t) + D$$
$$\ln|a| = i\gamma\frac{B_0}{2\omega}\sin(\omega t) + i(C-D)$$
$$a(t) = A\,e^{\,i\gamma\frac{B_0}{2\omega}\sin(\omega t)}$$
where $A = e^{i(C-D)}$. Following the same steps for $b(t)$ we obtain:
$$b(t) = B\,e^{-i\gamma\frac{B_0}{2\omega}\sin(\omega t)}.$$
This gives our state
$$|\chi(t)\rangle = \begin{pmatrix}A\,e^{\,i\gamma B_0\sin(\omega t)/2\omega}\\ B\,e^{-i\gamma B_0\sin(\omega t)/2\omega}\end{pmatrix}$$
and from the initial condition we can determine $A$ and $B$:
$$|\chi(0)\rangle = |\chi_+^{(x)}\rangle = \frac{1}{\sqrt 2}\begin{pmatrix}1\\ 1\end{pmatrix} \implies A = B = \frac{1}{\sqrt 2} \qquad (85)$$
giving:
$$|\chi(t)\rangle = \frac{1}{\sqrt 2}\begin{pmatrix}e^{\,i\gamma B_0\sin(\omega t)/2\omega}\\ e^{-i\gamma B_0\sin(\omega t)/2\omega}\end{pmatrix}. \qquad (86)$$

c)
Q. Find the probability of getting $-\hbar/2$ if you measure $S_x$. (Note: this is of course at time $t$.)
Sol:
Recall that in order to find the probability of obtaining an eigenvalue you must first express the state in the corresponding eigenbasis. Currently our state is expressed in the $z$-basis (we obtained it while working with $S_z$, which is diagonal in the $z$-basis):
$$|\chi^{(z)}(t)\rangle = \frac{1}{\sqrt 2}\begin{pmatrix}e^{\,i\gamma B_0\sin(\omega t)/2\omega}\\ e^{-i\gamma B_0\sin(\omega t)/2\omega}\end{pmatrix}.$$
We change basis by taking inner products of the state with the eigenstates of the new basis:
$$|\chi^{(x)}(t)\rangle = \langle\chi_+^{(x)}|\chi^{(z)}(t)\rangle\,|\chi_+^{(x)}\rangle + \langle\chi_-^{(x)}|\chi^{(z)}(t)\rangle\,|\chi_-^{(x)}\rangle. \qquad (87)$$
To answer the question we only need the inner product $\langle\chi_-^{(x)}|\chi^{(z)}(t)\rangle$, since we are asked about $-\hbar/2$. In problem 4.29 you found the eigenstates of $S_y$; following the same procedure for $S_x$ you will find
$$|\chi_-^{(x)}\rangle = \frac{1}{\sqrt 2}\begin{pmatrix}1\\ -1\end{pmatrix} \qquad (88)$$
so:
$$\langle\chi_-^{(x)}|\chi^{(z)}(t)\rangle = \frac{1}{\sqrt 2}\begin{pmatrix}1 & -1\end{pmatrix}\frac{1}{\sqrt 2}\begin{pmatrix}e^{\,i\gamma B_0\sin(\omega t)/2\omega}\\ e^{-i\gamma B_0\sin(\omega t)/2\omega}\end{pmatrix} = i\sin\!\left(\gamma\frac{B_0}{2\omega}\sin(\omega t)\right) \qquad (89)$$
where in the last step we used one of Euler's formulae. Computing the square of the absolute value of eq. (89) gives the probability:
$$P\!\left(-\frac{\hbar}{2}\right) = \sin^2\!\left(\gamma\frac{B_0}{2\omega}\sin(\omega t)\right). \qquad (90)$$

d)
Q. What is the minimum field $B_0$ required to force a complete flip in $S_x$?
Sol:
The system starts out in $|\chi_+^{(x)}\rangle$ and we want the minimum value of $B_0$ such that the system at some time is entirely in $|\chi_-^{(x)}\rangle$. This means:
$$P\!\left(-\frac{\hbar}{2}\right) = 1 \iff \sin^2\!\left(\gamma\frac{B_0}{2\omega}\sin(\omega t)\right) = 1 \iff \gamma\frac{B_0}{2\omega}\sin(\omega t) = \pm\frac{n\pi}{2},\quad n\in\mathbb Z^+.$$
Since we want the minimum value of $B_0$ we set $n=1$ (for larger $n$, $B_0$ only gets larger). The factor $\sin(\omega t)$ is of course time dependent, but it reaches a maximum magnitude of 1, so the minimum value of $B_0$ (which requires $\sin(\omega t)=\pm 1$) satisfies $\gamma B_0(\text{min})/2\omega = \pi/2$, i.e.
$$B_0(\text{min}) = \frac{\pi\omega}{\gamma}.$$
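Since part b) was solved by a fairly informal separation of variables, it is reassuring to verify the answer symbolically. The sympy sketch below (an addition, not part of the original solution) checks that eq. (86) satisfies the time-dependent Schrödinger equation and reproduces the probability (90):

```python
import sympy as sp

t, hbar, gamma, B0, omega = sp.symbols('t hbar gamma B0 omega', positive=True)
theta = gamma * B0 / (2 * omega) * sp.sin(omega * t)

chi = sp.Matrix([sp.exp(sp.I * theta), sp.exp(-sp.I * theta)]) / sp.sqrt(2)   # eq. (86)
H = -gamma * B0 * sp.cos(omega * t) * hbar / 2 * sp.Matrix([[1, 0], [0, -1]])

# residual of i*hbar d|chi>/dt = H|chi>; should be the zero vector
print(sp.simplify(sp.I * hbar * sp.diff(chi, t) - H * chi))

# probability of measuring -hbar/2 along x, eq. (90)
chi_x_minus = sp.Matrix([1, -1]) / sp.sqrt(2)
amp = (chi_x_minus.T * chi)[0]
print(sp.simplify(amp * sp.conjugate(amp)))     # should reduce to sin(theta)**2
```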
4.34
a)
Q. Apply $S_-$ to $|1\ 0\rangle$ and confirm that you get $\sqrt 2\hbar\,|1\ {-1}\rangle$.
Sol:
Let us first clarify the notation we will use:

i) For the composite system we write $|S\ M\rangle$, where $S$ can go from $|s_1-s_2|$ up to $s_1+s_2$ in integer steps, and $M$ ranges in integer steps from $-S$ to $S$.

ii) The composite states can be written as linear combinations of products of the individual states:
$$|S\ M\rangle = \sum_i C_i\,|s_1\ m_1\rangle|s_2\ m_2\rangle \qquad (91)$$
The coefficients $C_i$ are called Clebsch–Gordan coefficients, but in this problem we will not discuss them further.

iii) When $s_1 = s_2$ it is customary to write only $m_1, m_2$ for brevity:
$$|s\ m\rangle = \sum_i C_i\,|m_1\ m_2\rangle. \qquad (92)$$

In this problem we consider $s_1 = s_2 = \tfrac12$. The first thing to note is that the total spin can be either 0 or 1. The values $m_i$ can only be $\pm\tfrac12$, and we will simply refer to them as $\pm$. Thus, in our notation,
$$|\tfrac12\ m_1\rangle|\tfrac12\ m_2\rangle \equiv |m_1\ m_2\rangle = |{+}\,{-}\rangle \qquad (93)$$
if $m_1 = +\tfrac12$ and $m_2 = -\tfrac12$.

From the book (eq. 4.177) we know the following:
$$|1\ 1\rangle = |{+}{+}\rangle,\qquad |1\ 0\rangle = \frac{1}{\sqrt 2}\big[|{+}{-}\rangle + |{-}{+}\rangle\big],\qquad |1\ {-1}\rangle = |{-}{-}\rangle. \qquad (94)$$
In the composite system of particle 1 and particle 2, the spin operator is the sum of the individual spin operators,
$$S_- = S_-^{(1)} + S_-^{(2)}, \qquad (95)$$
in the sense that each acts on "its own" particle. From the book (eq. 4.136) we know how the ladder operators act on a general spin state,
$$S_\pm^{(j)}|s_j\ m\rangle = \hbar\sqrt{s(s+1)-m(m\pm 1)}\,|s_j\ (m\pm 1)\rangle,$$
so in our case:
$$S_+|{-}\rangle = \hbar|{+}\rangle,\qquad S_+|{+}\rangle = 0 \qquad (96)$$
$$S_-|{-}\rangle = 0,\qquad S_-|{+}\rangle = \hbar|{-}\rangle. \qquad (97)$$
Now we use eq. (94) and these relations to get:
$$S_-|1\ 0\rangle = \left(S_-^{(1)}+S_-^{(2)}\right)\frac{1}{\sqrt 2}\big[|{+}{-}\rangle + |{-}{+}\rangle\big] = \frac{1}{\sqrt 2}\Big[\big(\hbar|{-}{-}\rangle + 0\big) + \big(0 + \hbar|{-}{-}\rangle\big)\Big] = \frac{2\hbar}{\sqrt 2}|{-}{-}\rangle = \sqrt 2\hbar\,|{-}{-}\rangle$$
which, from eq. (94), is:
$$S_-|1\ 0\rangle = \sqrt 2\hbar\,|1\ {-1}\rangle.$$

b)
Q. Apply $S_\pm$ to $|0\ 0\rangle$ and confirm that you get 0.
Sol:
From the book we have:
$$|0\ 0\rangle = \frac{1}{\sqrt 2}\big[|{+}{-}\rangle - |{-}{+}\rangle\big]. \qquad (98)$$
Applying the ladder operators (one at a time):
$$S_+|0\ 0\rangle = \left(S_+^{(1)}+S_+^{(2)}\right)\frac{1}{\sqrt 2}\big[|{+}{-}\rangle - |{-}{+}\rangle\big] = \frac{1}{\sqrt 2}\Big[\big(0 + \hbar|{+}{+}\rangle\big) - \big(\hbar|{+}{+}\rangle + 0\big)\Big] = 0. \qquad (99)$$
Similarly:
$$S_-|0\ 0\rangle = \frac{1}{\sqrt 2}\Big[\big(\hbar|{-}{-}\rangle + 0\big) - \big(0 + \hbar|{-}{-}\rangle\big)\Big] = 0. \qquad (100)$$

c)
Q. Show that $|1\ 1\rangle$ and $|1\ {-1}\rangle$ are eigenstates of $S^2$, with the appropriate eigenvalues.
Sol:
For a state $|S\ M\rangle$ to be an eigenstate with the proper eigenvalue it must satisfy
$$S^2|S\ M\rangle = \hbar^2 S(S+1)|S\ M\rangle \qquad (101)$$
so we need to show that the right-hand side is indeed what we get if we start from the left-hand side. Starting with $|1\ 1\rangle = |{+}{+}\rangle$ in the individual systems:
$$S^2|1\ 1\rangle = \left(\mathbf S^{(1)}+\mathbf S^{(2)}\right)^2|{+}{+}\rangle = \left(\big(S^{(1)}\big)^2 + \big(S^{(2)}\big)^2 + 2\,\mathbf S^{(1)}\!\cdot\mathbf S^{(2)}\right)|{+}{+}\rangle$$
$$S^2|1\ 1\rangle = \hbar^2 s_1(s_1+1)|{+}{+}\rangle + \hbar^2 s_2(s_2+1)|{+}{+}\rangle + 2\,\mathbf S^{(1)}\!\cdot\mathbf S^{(2)}|{+}{+}\rangle = \hbar^2\tfrac12\cdot\tfrac32|{+}{+}\rangle + \hbar^2\tfrac12\cdot\tfrac32|{+}{+}\rangle + 2\,\mathbf S^{(1)}\!\cdot\mathbf S^{(2)}|{+}{+}\rangle. \qquad (102)$$
The first two terms were easy, but what about the last one? Note that
$$\mathbf S^{(1)}\!\cdot\mathbf S^{(2)} = S_x^{(1)}S_x^{(2)} + S_y^{(1)}S_y^{(2)} + S_z^{(1)}S_z^{(2)} \qquad (103)$$
so:
$$\mathbf S^{(1)}\!\cdot\mathbf S^{(2)}|{+}{+}\rangle = S_x^{(1)}S_x^{(2)}|{+}{+}\rangle + S_y^{(1)}S_y^{(2)}|{+}{+}\rangle + S_z^{(1)}S_z^{(2)}|{+}{+}\rangle. \qquad (104)$$
Recall that we are working in the $z$-basis, so the last term is easy:
$$S_z^{(1)}S_z^{(2)}|{+}{+}\rangle = \frac{\hbar}{2}\cdot\frac{\hbar}{2}\,|{+}{+}\rangle. \qquad (105)$$
For $S_x^{(1)}S_x^{(2)}|{+}{+}\rangle + S_y^{(1)}S_y^{(2)}|{+}{+}\rangle$ we have to do some more work. The fastest way is to use the matrix representations of $S_x$ and $S_y$:
$$S_x = \frac{\hbar}{2}\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix},\qquad S_y = \frac{\hbar}{2}\begin{pmatrix}0 & -i\\ i & 0\end{pmatrix}$$
$$S_x|{+}\rangle = \frac{\hbar}{2}\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}\begin{pmatrix}1\\ 0\end{pmatrix} = \frac{\hbar}{2}|{-}\rangle,\qquad S_y|{+}\rangle = \frac{\hbar}{2}\begin{pmatrix}0 & -i\\ i & 0\end{pmatrix}\begin{pmatrix}1\\ 0\end{pmatrix} = i\frac{\hbar}{2}|{-}\rangle. \qquad (106)$$
We let the reader verify, using the information in eq. (106), that:
$$S_x^{(1)}S_x^{(2)}|{+}{+}\rangle + S_y^{(1)}S_y^{(2)}|{+}{+}\rangle = \left(\frac{\hbar}{2}\right)^2|{-}{-}\rangle + \left(i\frac{\hbar}{2}\right)^2|{-}{-}\rangle = 0. \qquad (107)$$
In total we therefore get:
$$S^2|1\ 1\rangle = \frac{3\hbar^2}{4}|{+}{+}\rangle + \frac{3\hbar^2}{4}|{+}{+}\rangle + 2\cdot\frac{\hbar^2}{4}|{+}{+}\rangle = 2\hbar^2|{+}{+}\rangle = 2\hbar^2|1\ 1\rangle.$$
As an exercise, follow the same steps to show that $S^2|1\ {-1}\rangle = 2\hbar^2|1\ {-1}\rangle$.
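The same algebra can be checked with explicit matrices: represent the two-particle operators as Kronecker products, form $S^2 = S_x^2+S_y^2+S_z^2$, and apply it to the triplet and singlet states. The numpy sketch below (an addition, $\hbar = 1$) also reproduces part a):

```python
import numpy as np

hbar = 1.0
sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = hbar / 2 * np.array([[0, -1j], [1j, 0]])
sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# total spin components S = S1 + S2 on the 4-dimensional product space
S = [np.kron(si, I2) + np.kron(I2, si) for si in (sx, sy, sz)]
S2 = sum(Si @ Si for Si in S)

up, dn = np.array([1, 0]), np.array([0, 1])
states = {'|1 1>':  np.kron(up, up),
          '|1 0>': (np.kron(up, dn) + np.kron(dn, up)) / np.sqrt(2),
          '|0 0>': (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)}

for name, v in states.items():
    print(name, np.round((v.conj() @ S2 @ v).real, 10))   # 2, 2, 0 (times hbar^2)

# part a): S_-|1 0> should be sqrt(2)*hbar |1 -1> = sqrt(2)*hbar |- ->
S_minus = np.kron(sx - 1j * sy, I2) + np.kron(I2, sx - 1j * sy)
print(np.round(S_minus @ states['|1 0>'], 10))             # (0, 0, 0, 1.41421356...)
```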
4.36
a)
Q. A particle of spin 1 and a particle of spin 2 are at rest in a configuration such that the total spin is 3, and its $z$ component is $\hbar$. If you measured the $z$ component of the angular momentum of the spin-2 particle, what values might you get and what is the probability of each one?
Sol:
When you combine angular momenta $J_1$ and $J_2$, the possible total values are
$$J_{\rm tot} = |J_1-J_2|,\ |J_1-J_2|+1,\ \dots,\ J_1+J_2$$
and the new $M$-value (the projection on an axis) ranges from $-J_{\rm tot}$ to $J_{\rm tot}$ in integer steps. In our case we are told that $J_{\rm tot}=3$ and $M=1$ (since the $z$-component is $\hbar M = \hbar\cdot 1$). The system is thus in the total state $|3\ 1\rangle$.

Recall that this state is built up from eigenstates of the two individual particles. To find the probability of obtaining a certain value when measuring one of the particles we use the Clebsch–Gordan table. (You need to practice this by yourself — there are no shortcuts!) In the table labelled $2\times 1$ we find the column for total spin 3, $M=+1$. There are three rows with fractions, and to the left of these one reads the individual $m$-values. The state is thus:
$$|J\ M\rangle = \sum_{m_1+m_2=M} C^{s_1,s_2,J}_{m_1,m_2,M}\,|s_1\ m_1\rangle|s_2\ m_2\rangle$$
$$|3\ 1\rangle = \sqrt{\frac{1}{15}}\,|2\ 2\rangle|1\ {-1}\rangle + \sqrt{\frac{8}{15}}\,|2\ 1\rangle|1\ 0\rangle + \sqrt{\frac{6}{15}}\,|2\ 0\rangle|1\ 1\rangle. \qquad (108)$$
From eq. (108) we see that the possible $m$-values of the spin-2 particle are 2, 1 and 0. The possible results and probabilities are thus:

1) $|2\ 2\rangle$: result $2\hbar$ with $P(2\hbar) = \dfrac{1}{15}$
2) $|2\ 1\rangle$: result $\hbar$ with $P(\hbar) = \dfrac{8}{15}$
3) $|2\ 0\rangle$: result $0$ with $P(0) = \dfrac{6}{15}$

b)
Q. An electron with spin down is in the state $\psi_{510}$ of the hydrogen atom. If you could measure the total angular momentum squared of the electron alone (not including the proton spin), what values might you get and what is the probability of each?
Sol:
Note that this problem takes the opposite perspective from the previous one. There you knew the total angular momentum and had to extract information about the individual systems. Here you have information about the individual systems,
$$|l\ m_l\rangle|s\ m_s\rangle = |1\ 0\rangle\,|\tfrac12\ {-\tfrac12}\rangle,$$
and want information about the total system $|J\ M\rangle$:
$$|l\ m_l\rangle|s\ m_s\rangle = \sum_J C^{l,s,J}_{m_l,m_s,M}\,|J\ M\rangle.$$
To find the coefficients we go to the Clebsch–Gordan ($1\times 1/2$) table. In the previous problem we read off a column — in this case we instead read off a row, specifically the row $(0\ \,{-\tfrac12})$. Once you find it you will see:
$$|1\ 0\rangle\,|\tfrac12\ {-\tfrac12}\rangle = \sqrt{\frac{2}{3}}\,|\tfrac32\ {-\tfrac12}\rangle + \sqrt{\frac{1}{3}}\,|\tfrac12\ {-\tfrac12}\rangle.$$
So now we can answer which values we might obtain, and with what probabilities, when we measure the total angular momentum squared:

1) $\hbar^2\,\frac{3}{2}\left(\frac{3}{2}+1\right) = \frac{15}{4}\hbar^2$ with $P\!\left(\frac{15}{4}\hbar^2\right) = \frac{2}{3}$
2) $\hbar^2\,\frac{1}{2}\left(\frac{1}{2}+1\right) = \frac{3}{4}\hbar^2$ with $P\!\left(\frac{3}{4}\hbar^2\right) = \frac{1}{3}$
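Both parts can also be read off programmatically from Clebsch–Gordan coefficients. The sketch below (an addition to the solution) assumes sympy's CG class, which evaluates $\langle j_1 m_1; j_2 m_2|J\,M\rangle$; only the squared coefficients matter here, so the sign convention plays no role:

```python
from sympy import S
from sympy.physics.quantum.cg import CG

# part a): amplitudes <2 m1; 1 m2 | 3 1> in the spin-2 x spin-1 decomposition
for m1 in (2, 1, 0, -1, -2):
    m2 = 1 - m1
    if abs(m2) <= 1:
        c = CG(2, m1, 1, m2, 3, 1).doit()
        print(m1, m2, c, c**2)            # probabilities 1/15, 8/15, 6/15

# part b): |1 0>|1/2 -1/2> projected onto total j = 3/2 and j = 1/2
for j in (S(3)/2, S(1)/2):
    c = CG(1, 0, S(1)/2, -S(1)/2, j, -S(1)/2).doit()
    print(j, c, c**2)                     # probabilities 2/3 and 1/3
```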
4.38
Q. Consider the three-dimensional harmonic oscillator, for which the potential is:
$$V(r) = \frac12 m\omega^2 r^2. \qquad (109)$$

a)
Q. Show that separation of variables in Cartesian coordinates turns this into three one-dimensional oscillators, and exploit your knowledge of the latter to determine the allowed energies.
Sol:
Consider the Schrödinger equation
$$i\hbar\frac{\partial\Psi(\mathbf x,t)}{\partial t} = \hat H\Psi(\mathbf x,t) \qquad (110)$$
where
$$\hat H = \frac{\hat p^2}{2m} + V(\mathbf x) = -\frac{\hbar^2}{2m}\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}\right) + \frac12 m\omega^2\left(x^2+y^2+z^2\right). \qquad (111)$$
Assume a separable solution
$$\Psi(\mathbf x,t) = \psi(x,y,z)\,T(t) \qquad (112)$$
which, inserted into the Schrödinger equation and divided by $\psi T$, gives:
$$i\hbar\frac{1}{T}\frac{\partial T}{\partial t} = \frac{1}{\psi}\left[-\frac{\hbar^2}{2m}\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}\right) + \frac12 m\omega^2\left(x^2+y^2+z^2\right)\right]\psi.$$
From the assumption of separable variables it follows that the variables must be independent of each other. Seeing that the left-hand side is a function of $t$ only while the right-hand side is a function of $x,y,z$ only, we must deduce that
$$i\hbar\frac{1}{T}\frac{\partial T}{\partial t} = E \qquad (113)$$
where $E$ is a constant. (If it were not a constant, $T$ would depend on $x,y,z$, contradicting the assumption. Working out the dimensions of the left-hand side also shows why the letter $E$ is appropriate.)
Inserting this back we obtain the time-independent Schrödinger equation,
$$E\,\psi(x,y,z) = \left[-\frac{\hbar^2}{2m}\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}\right) + \frac12 m\omega^2\left(x^2+y^2+z^2\right)\right]\psi(x,y,z),$$
which is of course familiar to us all. Assume further, in our separation of variables, that $\psi(x,y,z) = X(x)Y(y)Z(z)$:
$$E\,XYZ = -\frac{\hbar^2}{2m}\left(YZ\frac{d^2X}{dx^2} + XZ\frac{d^2Y}{dy^2} + XY\frac{d^2Z}{dz^2}\right) + \frac12 m\omega^2\left(x^2+y^2+z^2\right)XYZ$$
and divide by $XYZ$:
$$-\frac{\hbar^2}{2m}\left(\frac{1}{X}\frac{d^2X}{dx^2} + \frac{1}{Y}\frac{d^2Y}{dy^2} + \frac{1}{Z}\frac{d^2Z}{dz^2}\right) + \frac12 m\omega^2\left(x^2+y^2+z^2\right) = E.$$
Lastly, group together terms that depend on the same variable:
$$\underbrace{\left(-\frac{\hbar^2}{2m}\frac{1}{X}\frac{d^2X}{dx^2} + \frac12 m\omega^2x^2\right)}_{E_x} + \underbrace{\left(-\frac{\hbar^2}{2m}\frac{1}{Y}\frac{d^2Y}{dy^2} + \frac12 m\omega^2y^2\right)}_{E_y} + \underbrace{\left(-\frac{\hbar^2}{2m}\frac{1}{Z}\frac{d^2Z}{dz^2} + \frac12 m\omega^2z^2\right)}_{E_z} = E.$$
Each of the bracketed terms depends only on its own variable, and by assumption the variables are independent of each other. Since the right-hand side is constant, each bracket must itself be a constant; call them $E_x$, $E_y$, $E_z$ (with $E_x+E_y+E_z = E$). Effectively we now have one second-order differential equation per variable:
$$-\frac{\hbar^2}{2m}\frac{d^2X}{dx^2} + \frac12 m\omega^2x^2X = E_xX \qquad (114)$$
$$-\frac{\hbar^2}{2m}\frac{d^2Y}{dy^2} + \frac12 m\omega^2y^2Y = E_yY \qquad (115)$$
$$-\frac{\hbar^2}{2m}\frac{d^2Z}{dz^2} + \frac12 m\omega^2z^2Z = E_zZ. \qquad (116)$$
Each of these is equivalent to the one-dimensional harmonic oscillator of section 2.3 in the textbook. The allowed energies in one dimension are
$$E_n = \left(n+\frac12\right)\hbar\omega \qquad (117)$$
and in our case this applies in each direction $i\in\{x,y,z\}$:
$$E_i(n_i) = \left(n_i+\frac12\right)\hbar\omega \qquad (118)$$
with
$$E_n = E_x(n_x)+E_y(n_y)+E_z(n_z) = \left(n+\frac32\right)\hbar\omega \qquad (119)$$
where $n = n_x+n_y+n_z$ and each of $n_x, n_y, n_z$ runs over $0,1,2,\dots$

b)
Q. Determine the degeneracy $d(n)$ of $E_n$.
Sol:
Given the conditions on $n_x,n_y,n_z$, the degeneracy is the number of ways we can obtain the same value of $n$. The easiest way is to look for some structure. We start by choosing $n_x$ and then look at the possibilities left:

If $n_x = n$, then $n_y = 0$ and $n_z = 0$: one way.
If $n_x = n-1$, then $n_y = 0,1$ and correspondingly $n_z = 1,0$: two ways.
If $n_x = n-2$, then $n_y = 0,1,2$ and $n_z = 2,1,0$: three ways.
We can continue like this down to:
If $n_x = 0$, then $n_y = 0,1,2,\dots,n$ and $n_z = n,n-1,\dots,0$: $n+1$ ways.

Summing all these possibilities we obtain:
$$d(n) = \sum_{i=1}^{n+1} i = \frac{(n+1)(n+2)}{2}. \qquad (120)$$
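A brute-force count confirms eq. (120). A two-line Python sketch (an addition to the solution):

```python
# degeneracy of the n:th level: count (nx, ny, nz) with nx + ny + nz = n
for n in range(8):
    count = sum(1 for nx in range(n + 1) for ny in range(n + 1 - nx))   # nz is then fixed
    print(n, count, (n + 1) * (n + 2) // 2)   # the two numbers agree for every n
```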
4.40
a)
Q. Prove the three-dimensional virial theorem
$$2\langle T\rangle = \langle\mathbf r\cdot\nabla V\rangle \qquad (121)$$
for stationary states.
Sol:
In problem 3.31 we used eq. (3.71),
$$\frac{d}{dt}\langle\hat Q\rangle = \frac{i}{\hbar}\langle[\hat H,\hat Q]\rangle + \left\langle\frac{\partial\hat Q}{\partial t}\right\rangle, \qquad (122)$$
to show that
$$\frac{d}{dt}\langle xp\rangle = 2\langle T\rangle - \left\langle x\frac{dV}{dx}\right\rangle \qquad (123)$$
and, since all expectation values are constant in time for stationary states, we obtained the one-dimensional virial theorem:
$$2\langle T\rangle = \left\langle x\frac{dV}{dx}\right\rangle.$$
Now we exchange $Q = xp$ for the three-dimensional analogue $Q = \mathbf r\cdot\mathbf p$ in eq. (122):
$$\frac{d}{dt}\langle\mathbf r\cdot\mathbf p\rangle = \frac{i}{\hbar}\langle[\hat H,\mathbf r\cdot\mathbf p]\rangle + \left\langle\frac{\partial(\mathbf r\cdot\mathbf p)}{\partial t}\right\rangle. \qquad (124)$$
For a stationary state the left-hand side is 0. The last term on the right-hand side is also 0, because neither $\mathbf r$ nor $\mathbf p$ depends explicitly on time. Therefore:
$$0 = \frac{i}{\hbar}\langle[\hat H,\mathbf r\cdot\mathbf p]\rangle. \qquad (125)$$
Now we expand the commutator in eq. (125):
$$[\hat H,\mathbf r\cdot\mathbf p] = [\hat H,\, r_1p_1+r_2p_2+r_3p_3] = -[r_1p_1+r_2p_2+r_3p_3,\,\hat H]$$
$$= -\Big(r_1[p_1,\hat H]+[r_1,\hat H]p_1 + r_2[p_2,\hat H]+[r_2,\hat H]p_2 + r_3[p_3,\hat H]+[r_3,\hat H]p_3\Big).$$
Inserting $\hat H = \frac{p^2}{2m}+V(\mathbf r)$ gives twelve commutators if we split each of the six into two. A lot of them are actually 0, however, and now we hunt those down. Recall problem 4.19 and the canonical commutators:
$$[r_j,p_k] = i\hbar\delta_{jk},\qquad [r_j,r_k]=[p_j,p_k]=0.$$
Any component of $\mathbf p$ commutes with $p^2$, so
$$[p_j,\tfrac{p^2}{2m}+V(\mathbf r)] = \underbrace{[p_j,\tfrac{p^2}{2m}]}_{0} + [p_j,V(\mathbf r)],$$
and any function $f(\mathbf r)$ commutes with the coordinate variables $x,y,z$ (i.e. $r_1,r_2,r_3$):
$$[r_j,V(\mathbf r)] = r_jV(\mathbf r) - V(\mathbf r)r_j = 0, \qquad (126)$$
so
$$[r_j,\tfrac{p^2}{2m}+V(\mathbf r)] = [r_j,\tfrac{p^2}{2m}] + \underbrace{[r_j,V(\mathbf r)]}_{0}.$$
This simplifies our somewhat hideous expression to:
$$[\hat H,\mathbf r\cdot\mathbf p] = -\Big(r_1[p_1,V(\mathbf r)] + [r_1,\tfrac{p^2}{2m}]p_1 + r_2[p_2,V(\mathbf r)] + [r_2,\tfrac{p^2}{2m}]p_2 + r_3[p_3,V(\mathbf r)] + [r_3,\tfrac{p^2}{2m}]p_3\Big).$$
From problem 3.13c) we know that
$$[p_j,V(\mathbf r)] = -i\hbar\frac{\partial V(\mathbf r)}{\partial r_j} \qquad (127)$$
and the three remaining commutators we evaluate all at once:
$$[r_j,\tfrac{p^2}{2m}] = \frac{1}{2m}[r_j,\,p_1^2+p_2^2+p_3^2] = -\frac{1}{2m}\sum_{k=1}^3[p_k^2,r_j] = -\frac{1}{2m}\sum_{k=1}^3\big(p_k[p_k,r_j]+[p_k,r_j]p_k\big)\delta_{kj} = -\frac{1}{2m}\big(p_j(-i\hbar)+(-i\hbar)p_j\big) = \frac{i\hbar}{m}p_j.$$
Inserting these results into the commutator and simplifying:
$$[\hat H,\mathbf r\cdot\mathbf p] = i\hbar\left(r_1\frac{\partial V}{\partial r_1}+r_2\frac{\partial V}{\partial r_2}+r_3\frac{\partial V}{\partial r_3}\right) - \frac{i\hbar}{m}\left(p_1^2+p_2^2+p_3^2\right) = i\hbar\,\mathbf r\cdot\nabla V - \frac{i\hbar}{m}\,p^2.$$
Referring back to eq. (125) we get:
$$\frac{i}{\hbar}\left\langle i\hbar\,\mathbf r\cdot\nabla V - \frac{i\hbar}{m}p^2\right\rangle = 0 \implies \langle\mathbf r\cdot\nabla V\rangle = \frac{\langle p^2\rangle}{m}$$
and since $\langle T\rangle = \langle p^2/2m\rangle$ we have finally arrived at:
$$\langle\mathbf r\cdot\nabla V\rangle = 2\langle T\rangle.$$
hr · ∇V 2 e which gives: For Hydrogen we have the potential V (r) = − 4π 0r 1 ∂V (r) ˆ e2 ~ = ∂V (r) rˆ + 1 ∂V (r) θˆ + rˆ ∇V φ= ∂r r ∂θ r sin(θ) ∂φ 4π0 r2 2 e2 e ~ =⇒ hr · ∇V i = rˆ r· rˆ = = −hV i. 4π0 r2 4π0 r Exchanging the left hand side in the Virial theorem with this gives: −hV i = 2hT i. (129) Lastly, recall Ehrenfest’s theorem - expectation values obey classical laws. So: hT i + hV i = En . (130) Combining eq.(129) and eq.(130) we obtain: hT i = −En hV i = 2En . 37 (131) c) Q. Apply the Virial theorem to the three-dimensional harmonic oscillator (Problem 4.38) and show that in this case: hT i = hV i = En /2. (132) Sol: The same steps as the previous problem: The Virial theorem states: ~ i = 2hT i. hr · ∇V For 3D harmonic oscillator we have the potential V (r) = 21 mω 2 r2 giving: 1 ∂V (r) ˆ ~ = ∂V (r) rˆ + 1 ∂V (r) θˆ + ∇V φ = mω 2 rˆ r ∂r r ∂θ r sin(θ) ∂φ ~ i = hrˆ =⇒ hr · ∇V r · mω 2 rˆ ri = hmω 2 r2 i = 2hV i. Exchanging the left hand side in the Virial theorem with this gives: 2hV i = 2hT i ⇐⇒ hV i = hT i. (133) Lastly, recall Ehrenfest’s theorem - expectation values obey classical laws: hT i + hV i = En . (134) Combining eq.(133) and eq.(134) gives: 2hT i = 2hV i = En which is what we wanted to show. 38 (135)