Section 12.2 The Early Universe
The standard theory of cosmology is the Big Bang model, in which the universe began as a hot dense ball of stuff and has been expanding and cooling ever since. The first and primary evidence for this model was Edwin Hubble's discovery in the 1920s that the universe is expanding. From analysis of Doppler-shifted spectra and other observations, Hubble showed that most distant galaxies are receding from our galaxy. Furthermore, their velocity of recession is roughly proportional to their distance from us.
To see how this could happen, imagine baking a giant loaf of raisin bread. As the dough expands, each raisin gets farther from all the others and, if the expansion is uniform, the recessional rates at any instant of time are proportional to the separations. The recession rates of galaxies in the expanding universe are similarly proportional to the separation between galaxies. The proportionality can be expressed simply as
\begin{equation}
V = HR,\tag{12.1}
\end{equation}
where \(V\) is the velocity of recession, \(R\) is the distance to the galaxy, and \(H\) is Hubble's constant. By constant, we mean that it is the same number for many different galaxies, not that it doesn't change with time. In fact, we argue below that it must be decreasing with time. Its currently accepted value is \(20.8\Xunits{km/s}\) per million light-years. This means that if galaxy \(B\) is a million light-years more distant than galaxy \(A\text{,}\) then \(B\) is receding \(20.8\Xunits{km/s}\) faster than \(A\text{.}\)
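As a quick numerical illustration of Hubble's law (the galaxy distance here is a made-up example value, not real data):

```python
# Hubble's law: V = H * R.
# H is the value quoted in the text; R is a hypothetical example distance.
H = 20.8    # km/s per million light-years
R = 50.0    # distance to a hypothetical galaxy, in millions of light-years

V = H * R   # recession velocity in km/s
print(V)    # 1040.0
```

A galaxy twice as far away would be receding twice as fast, which is exactly the raisin-bread behavior described above.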
Okay, now here is the key point: if the universe is expanding, then it is also cooling down. In its early moments especially, the universe can be compared to a gas undergoing an adiabatic expansion (no heat flows into or out of the system, which in this case is the entire universe). Recall from PHYS 211 that a gas cools as it expands adiabatically. It can be shown that the temperature \(T\) of the universe during its early stages (the “radiation-dominated” epoch, i.e., roughly the first million years) is given approximately by
\begin{equation}
T \approx \frac{10^{10}}{\sqrt{t}},\tag{12.2}
\end{equation}
where time \(t\) is in seconds and temperature \(T\) is in kelvin. Even though the assumptions behind this derivation are very much oversimplified, (12.2) gives results correct to within a factor of 2 or 3 up to \(t = 10^6\) years. After that time, the universe is cool enough to start acting like a collection of material particles rather than a bunch of photons. Thus, for times near the present, (12.2) considerably overestimates the temperature of the radiation.
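To get a feel for the \(1/\sqrt{t}\) cooling in (12.2), here is a minimal numerical sketch (the sample times are chosen arbitrarily):

```python
import math

def temperature(t_seconds):
    """Rough temperature of the early universe in kelvin, eq. (12.2): T = 1e10 / sqrt(t)."""
    return 1e10 / math.sqrt(t_seconds)

# At t = 1 s the universe is about 1e10 K; by t = 100 s it has cooled
# by a factor of 10, since T falls off as 1 / sqrt(t).
print(temperature(1.0))      # 1e10 K
print(temperature(100.0))    # 1e9 K
```

Doubling the elapsed time does not halve the temperature; it takes four times as long to cool by a factor of two.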
A useful relation can be obtained from (12.2). Recall from kinetic theory (PHYS 211) that for particles in equilibrium, the average kinetic energy equals \(\frac{3}{2}kT\text{,}\) where Boltzmann's constant \(k = 8.62\times 10^{-5}\Xunits{eV/K}\text{.}\) If we round these figures to the same accuracy as (12.2), we find \(E \simeq kT\) with \(k \simeq 10^{-4}\Xunits{eV/K}\text{,}\) and thus the typical particle energy in the early universe is given by
\begin{equation}
E \approx \frac{10^{6}}{\sqrt{t}},\tag{12.3}
\end{equation}
where energy \(E\) is in eV and time \(t\) is again in seconds. Equations (12.2) and (12.3) are handy relations that will be used throughout the chapter.
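As a consistency check, a short sketch (using the rounded constants from the text) confirming that (12.3) is just \(E \simeq kT\) applied to (12.2):

```python
import math

k = 1e-4  # Boltzmann's constant in eV/K, rounded as in the text

def temperature(t):
    """Eq. (12.2): temperature in kelvin at t seconds after the Big Bang."""
    return 1e10 / math.sqrt(t)

def energy(t):
    """Eq. (12.3): typical particle energy in eV at t seconds after the Big Bang."""
    return 1e6 / math.sqrt(t)

# With the rounded constants, E and k*T agree at every time t:
for t in (1.0, 100.0, 3.15e13):   # 3.15e13 s is about a million years
    assert math.isclose(energy(t), k * temperature(t))
```

The agreement is exact only because both constants were rounded together; with the unrounded \(k\) the two expressions differ by a factor of order one, in keeping with the factor-of-2-or-3 accuracy claimed for (12.2).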