* Authors: Chris Clarkson et al.
* First author’s affiliation: University of Cape Town
Distant objects race away from us at ever-faster speeds, a cosmological expansion of space itself driven by dark energy (DE). How quickly this expansion accelerates depends on w, DE’s equation-of-state parameter (the ratio of its pressure to its energy density): the more negative w is, the faster the acceleration. Measuring how far away distant objects were and are tells us the expansion history — and that constrains w, which is, so far, the best grasp we have on DE’s elusive essence.
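To see how w shapes the distances we measure, here is a minimal sketch (assuming a flat universe with fiducial values H0 = 70 km/s/Mpc, Omega_m = 0.3, and a constant w — all illustrative choices, not numbers from the paper) that integrates the standard luminosity-distance formula for two values of w:

```python
import math

C_KM_S = 299792.458           # speed of light [km/s]
H0 = 70.0                     # assumed Hubble constant [km/s/Mpc]
OMEGA_M, OMEGA_DE = 0.3, 0.7  # assumed flat-universe densities

def E(z, w):
    """Dimensionless expansion rate H(z)/H0 for a constant equation of state w."""
    return math.sqrt(OMEGA_M * (1 + z)**3 + OMEGA_DE * (1 + z)**(3 * (1 + w)))

def luminosity_distance(z, w, steps=2000):
    """Luminosity distance [Mpc] in a flat universe, via trapezoidal integration
    of the comoving distance integral d_C = (c/H0) * Integral dz'/E(z')."""
    dz = z / steps
    integral = 0.5 * (1 / E(0, w) + 1 / E(z, w))
    for i in range(1, steps):
        integral += 1 / E(i * dz, w)
    comoving = (C_KM_S / H0) * integral * dz
    return (1 + z) * comoving

# More negative w means DE dilutes more slowly looking back in time, so H(z)
# is lower and a source at fixed z sits farther away:
print(luminosity_distance(1.0, -1.0))   # ~6600 Mpc for these parameters
print(luminosity_distance(1.0, -1.2))   # larger than the w = -1 value
```

This is exactly the lever SNIa surveys pull: a small shift in w moves every inferred distance, so percent-level distance errors translate directly into errors on w.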
So if you care about dark energy, structure formation (which DE strongly affects), or the future of the Universe (flat, de Sitter vacuum or violent death in a “big rip”?), you must care deeply about precision measurements of large-scale distances. “Precision cosmological distance measurements” may not sound like the most exciting topic in astrophysics, but those who enjoy the more violently explosive side of our discipline need not fear: observing supernovae is a primary technique for measuring cosmological distances, since they are roughly “standard candles.” We know how bright a supernova ought to be intrinsically (its luminosity), so how bright it appears here (its flux) tells us its distance.
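The standard-candle step is a one-line inversion of the distance modulus, mu = m - M = 5 log10(d_L / 10 pc). A minimal sketch (the SNIa peak absolute magnitude M ~ -19.3 and the example apparent magnitude are illustrative assumptions, not values from the paper):

```python
M_SNIA = -19.3  # assumed fiducial absolute peak magnitude of a Type Ia supernova

def luminosity_distance_mpc(m_apparent, m_absolute=M_SNIA):
    """Invert the distance modulus mu = m - M = 5*log10(d_L / 10 pc)
    to get the luminosity distance in Mpc."""
    mu = m_apparent - m_absolute
    d_parsecs = 10 ** ((mu + 5) / 5)
    return d_parsecs / 1e6

# A SNIa peaking at apparent magnitude ~24.7 (roughly what z ~ 1 events show):
print(luminosity_distance_mpc(24.7))   # ~6300 Mpc
```

Because the inversion is exponential in mu, even a few hundredths of a magnitude of unmodeled lensing bias feeds straight into the inferred distance — which is why the beam-modeling question below matters.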
That is where today’s paper, Clarkson et al.’s “Interpreting supernovae observations in a lumpy universe,” begins. Clarkson et al. point out that the bundle of light beams from supernovae (specifically, Type Ia supernovae, or SNIa) captured by a telescope is extremely narrow — less than 1 AU across for a source at z ~ 1. On the other hand, the cosmological simulations and perturbation theory we use to model and correct these measurements are coarse — smoothed over scales far larger than ~1 pc. In particular, since 1 AU is less than the mean distance between massive objects (stars, small DM halos, galaxies), bundles of the corresponding angular width almost never encounter an object. Interestingly for history buffs, the basics of this problem were first pointed out by Zel’dovich (of later fame for the SZ effect) and independently by Feynman (“Surely You’re Joking?”).
So are models of the effects of inhomogeneities on beam-bundles from SNIa adequate? Clarkson et al. seek to answer this question. They compare four approaches using a numerical simulation of lines of sight through a large void and a large overdensity — see Figure 2. Here, I’ll explain the four approaches they model.
As background to these approaches, we should know that because the bundles are very narrow, the lensing (relative magnification or demagnification) they experience comes mostly from traveling through gravitational potential gradients created by objects outside the bundle. This is called Weyl focusing, in contrast to Ricci focusing (focusing due to mass inside the bundle).
- Weinberg (of electroweak unification fame) proposed a photon-conservation argument: the magnification of the small fraction of beam-bundles that pass through overdensities averages out with the relative demagnification of the majority of beam-bundles, which propagate through vacuum. One can therefore model the Weyl focusing with a homogeneous fluid whose density equals the average of the under- and overdensities — in other words, replace the Weyl focusing by an “effective” Ricci focusing. In this approach, termed by Clarkson et al. “FL background,” ‘the average luminosity distance is the same as the luminosity distance in the FLRW-cosmology background.’ But was Weinberg right, especially for the very narrow bundles used to measure SNIa?
- If not, the other standard approach is called “Dyer-Roeder” (Dyer and Roeder, 1972-1974): following Zel’dovich, they assumed most of the bundles propagate through vacuum, and so introduced a factor alpha < 1 (the fraction of the mean matter density the bundle intercepts) to reduce the “effective” Ricci focusing. Like Weinberg, they assume the Weyl focusing is identically zero.
- But surely the Weyl focusing can be better captured by a model accounting for the local curvature fluctuations the bundle passes through (created, for instance, by DM halos), rather than just trying to encode it in an “effective” Ricci focusing term? With this motivation, Clarkson et al. also reduce the matter density by a factor of alpha in the Friedmann equation, so the model universe is no longer at critical density today and picks up a curvature term. This also accounts for the local change in expansion rate caused by the mass inhomogeneities. Clarkson et al. call this “modified Dyer-Roeder.”
- Finally, Clarkson et al. consider a fourth approach (so far, we have “background FL,” “Dyer-Roeder,” and “modified Dyer-Roeder”). In the so-called “shell” method, they set up a density profile today and use the Lemaitre-Tolman-Bondi solution of GR to evolve it back in time — allowing them to calculate the distance exactly as a function of the density profile today.
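To make the Dyer-Roeder idea concrete, here is a minimal sketch (not the paper’s code) that integrates the standard Dyer-Roeder equation for the angular diameter distance in an assumed flat LCDM background (Omega_m = 0.3, Omega_Lambda = 0.7 are illustrative fiducial values), where alpha < 1 scales down the Ricci focusing term:

```python
import math

OMEGA_M, OMEGA_L = 0.3, 0.7   # assumed flat-LCDM background densities

def E(z):
    """Dimensionless expansion rate H(z)/H0."""
    return math.sqrt(OMEGA_M * (1 + z)**3 + OMEGA_L)

def dE_dz(z):
    return 1.5 * OMEGA_M * (1 + z)**2 / E(z)

def dyer_roeder_distance(z_max, alpha, steps=4000):
    """Dimensionless angular diameter distance (units of c/H0) from the
    Dyer-Roeder equation; alpha < 1 reduces the Ricci focusing because the
    narrow beam intercepts only a fraction alpha of the mean matter."""
    def rhs(z, D, Dp):
        # D'' = -(E'/E + 2/(1+z)) D' - (3/2) alpha Omega_m (1+z) / E^2 D
        return (-(dE_dz(z) / E(z) + 2 / (1 + z)) * Dp
                - 1.5 * alpha * OMEGA_M * (1 + z) / E(z)**2 * D)

    h = z_max / steps
    z, D, Dp = 0.0, 0.0, 1.0      # initial conditions: D(0) = 0, D'(0) = 1
    for _ in range(steps):        # classical RK4 step for the coupled system
        k1d, k1p = Dp, rhs(z, D, Dp)
        k2d, k2p = Dp + 0.5*h*k1p, rhs(z + 0.5*h, D + 0.5*h*k1d, Dp + 0.5*h*k1p)
        k3d, k3p = Dp + 0.5*h*k2p, rhs(z + 0.5*h, D + 0.5*h*k2d, Dp + 0.5*h*k2p)
        k4d, k4p = Dp + h*k3p, rhs(z + h, D + h*k3d, Dp + h*k3p)
        D += h * (k1d + 2*k2d + 2*k3d + k4d) / 6
        Dp += h * (k1p + 2*k2p + 2*k3p + k4p) / 6
        z += h
    return D

# alpha = 1 recovers the homogeneous FLRW distance; an emptier beam
# (alpha < 1) is focused less, so the source appears farther away:
print(dyer_roeder_distance(1.0, alpha=1.0))
print(dyer_roeder_distance(1.0, alpha=0.6))
```

The gap between the two printed values is the kind of model-dependent shift the paper worries about: different treatments of the missing focusing move the inferred distance at the same redshift.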
So what is the upshot? Clarkson et al.’s results for the distance modulus depend strongly on which approach is used, as Figure 1 also shows! Indeed, the differences between the models are large enough to be significant for the type of cosmological questions SNIa are used to ask. So Clarkson et al. conclude that “the old problem of modeling narrow [bundles of light] beams remains unsolved... [but must soon be] to ensure precision cosmology delivers correct answers as well as precise ones.”
By Zachary Slepian