
Are supernovae really so super? Inhomogeneities and the accuracy of distance measurements

* Paper title: Interpreting supernovae observations in a lumpy universe
* Authors: Chris Clarkson et al.
* First author’s affiliation: University of Cape Town

Distant objects race away from us at ever-faster speeds: the expansion of space itself is accelerating, driven by dark energy (DE). How strongly the expansion accelerates depends on w, the ratio of pressure to energy density in the DE: the more negative w is, the stronger the acceleration. Measuring how far away distant objects were and are tells us the expansion history, and that constrains w, which is, so far, the best grasp we have on DE's elusive essence.
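
For readers who want the equations behind that statement, here is the standard FLRW bookkeeping (textbook relations in my own notation, nothing specific to this paper): the definition of w and the acceleration equation it feeds into.

    % Equation of state of dark energy and the acceleration it drives
    % (standard FLRW relations; notation is mine, not the paper's)
    w \equiv \frac{p_{\rm DE}}{\rho_{\rm DE} c^{2}},
    \qquad
    \frac{\ddot a}{a}
      = -\frac{4\pi G}{3} \sum_i \left( \rho_i + \frac{3 p_i}{c^{2}} \right)
      = -\frac{4\pi G}{3} \left[ \rho_m + \rho_{\rm DE} \, (1 + 3w) \right].

With pressureless matter plus dark energy, the expansion accelerates once the DE term dominates and w < -1/3; w = -1 corresponds to a cosmological constant (heading toward a de Sitter vacuum), and w < -1 is the "big rip" regime mentioned below.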

Figure 1 from the paper: the change to the distance modulus produced by four different approximations for handling inhomogeneities along the bundle's path. DR is Dyer-Roeder; mDR is modified Dyer-Roeder. Two things to notice: (1) the effects are large, and (2) they differ from one approximation to the next. All four approaches are discussed in the text.

So if you care about dark energy, structure formation (which DE strongly affects), or the future of the Universe (flat, de Sitter vacuum or violent death in a "big rip"?), you must care deeply about precision measurements of large-scale distances. Although "precision cosmological distance measurements" may not sound like the most exciting thing in astrophysics, those who enjoy the more violently explosive side of our discipline need not fear: observing supernovae is a primary technique for measuring cosmological distance, since they are roughly "standard candles." We know how bright they intrinsically ought to be (their luminosity), so how bright they appear here (their flux) tells us their distance.
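
Concretely, the standard-candle arithmetic is just the inverse-square law plus its logarithmic repackaging as the distance modulus (the usual textbook definitions rather than the paper's notation):

    % Inverse-square law and distance modulus for a standard candle
    F = \frac{L}{4\pi d_L^{2}}
      \;\Longrightarrow\;
    d_L = \sqrt{\frac{L}{4\pi F}},
    \qquad
    \mu \equiv m - M = 5 \log_{10}\!\left( \frac{d_L}{10~\mathrm{pc}} \right).

It is small shifts in this distance modulus that the different approximations in Figure 1 disagree about.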

That is where today’s paper, Clarkson et al.’s “Interpreting supernovae observations in a lumpy universe,” begins. Clarkson et al. point out that the bundle of light beams from SNe (specifically, SNIa) captured by a telescope is extremely narrow: less than 1 AU across for a source at z ~ 1. On the other hand, the cosmological simulations and perturbation theory we use to model and correct these measurements are coarse, smoothed on scales far above ~ 1 pc. In particular, since 1 AU is much less than the mean distance between massive objects (stars, small DM halos, galaxies), bundles this narrow almost never encounter an object at all. Interestingly for history buffs, the basics of this problem were first pointed out by Zel’dovich (of later fame for the SZ effect) and independently by Feynman (of “Surely You’re Joking, Mr. Feynman!” fame).
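
To get a feel for just how empty a ~1 AU-wide beam's path is, here is a rough back-of-envelope sketch. The inputs (cosmic stellar density, typical stellar mass, path length to z ~ 1) are my own order-of-magnitude assumptions, not numbers from the paper; the point is only how tiny the answer comes out.

    import math

    # Order-of-magnitude inputs (illustrative assumptions, not values from Clarkson et al.)
    AU_IN_PC = 1.0 / 206265.0   # 1 AU expressed in parsecs
    omega_star = 0.003          # stellar density as a fraction of the critical density
    rho_crit = 1.4e11           # critical density in M_sun per Mpc^3 (for h ~ 0.7)
    mean_star_mass = 0.5        # typical stellar mass in M_sun

    # Cosmic mean number density of stars, in stars per cubic parsec
    n_star = (omega_star * rho_crit / mean_star_mass) / 1e18   # 1 Mpc^3 = 1e18 pc^3

    # "Encounter" cross-section: the beam passes within 1 AU of a star
    sigma = math.pi * AU_IN_PC**2                               # pc^2

    # Comoving path length to z ~ 1, roughly 3 Gpc
    path = 3e9                                                  # pc

    expected_encounters = n_star * sigma * path
    print(f"Expected stars within 1 AU of the beam: ~{expected_encounters:.0e}")
    # Of order 1e-10: a typical narrow beam threads pure vacuum between the stars,
    # which is exactly the regime Clarkson et al. are worried about.

Clustering does send some lines of sight through galaxies, but even there the expected number of sub-AU stellar encounters stays far below one, so the qualitative conclusion survives.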

So are models of the effects of inhomogeneities on beam-bundles from SNIa adequate? Clarkson et al. seek to answer this question. They compare four approaches using a numerical simulation of looking through a large void and a large overdensity (see Figure 2). Here, I’ll explain the four approaches they model.

As background to these approaches, we should know that because the bundles are very narrow, the lensing (relative magnification or demagnification) they experience comes mostly from traveling through the tidal gravitational fields created by objects outside the bundle. This is called Weyl focusing, in contrast to Ricci focusing (focusing due to mass inside the bundle).
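
The textbook way to make that split precise is the Sachs focusing equation for the beam's cross-section (equivalently, its angular diameter distance D_A) along the affine parameter lambda; this is the standard schematic form, not the paper's specific notation:

    % Sachs focusing equation for a narrow beam (schematic textbook form)
    \frac{d^{2} D_A}{d\lambda^{2}}
      = -\left( \tfrac{1}{2}\, R_{\mu\nu} k^{\mu} k^{\nu} + |\sigma|^{2} \right) D_A
    % Ricci focusing: R_{\mu\nu} k^{\mu} k^{\nu} is fixed, via Einstein's equations,
    %   by the matter density actually inside the beam.
    % Weyl focusing: the shear \sigma is sourced by the Weyl tensor, i.e. by the
    %   tidal field of matter outside the beam.

For a beam narrower than the gaps between clumps, the Ricci term is essentially zero and all the action is in the shear term; the four approaches below differ in how they approximate that.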

  •  Weinberg (of electroweak unification fame) proposed a photon-conservation argument: the magnification of the small fraction of beam-bundles that pass through overdensities should average out against the relative demagnification of the majority of beam-bundles, which propagate through vacuum. One can therefore model the Weyl focusing with a homogeneous fluid whose density is the average of the under- and overdensities (in other words, replace the Weyl focusing by an “effective” Ricci focusing). In this approach, termed by Clarkson et al. “FL background,” “the average luminosity distance is the same as the luminosity distance in the FLRW-cosmology background.” But was Weinberg right, especially for the very narrow bundles used to measure SNIa?

Figure 2 from the paper: simulation of how a bundle’s passage through a void (left) and an overdensity (right) affects the distance modulus in each approximation. DR is Dyer-Roeder; mDR is modified Dyer-Roeder; “FL background” follows Weinberg’s approach, as discussed in the text.

  • If not, the other standard approach is called “Dyer-Roeder” (Dyer and Roeder, 1972-1974). Following Zel’dovich, they assumed that most bundles propagate through vacuum, and so introduced a factor alpha < 1 (the fraction of the mean matter density the bundle actually intercepts) to reduce the “effective” Ricci focusing. Like Weinberg, they take the Weyl focusing to be identically zero. (A minimal numerical sketch of this approach appears after this list.)
  • But surely the Weyl focusing can be better captured by a model that accounts for the local curvature fluctuations the bundle passes through (created, for instance, by DM halos), rather than by trying to encode it all in an “effective” Ricci focusing term? With this motivation, Clarkson et al. also reduce the matter density by a factor of alpha in the Friedmann equation, so the model is no longer a critical-density universe today and picks up a curvature term. This also accounts for the local change in expansion rate caused by the mass inhomogeneities. Clarkson et al. call this “modified Dyer-Roeder.”
  • Finally, Clarkson et al. consider a fourth approach (so far, we have “FL background,” “Dyer-Roeder,” and “modified Dyer-Roeder”). In the so-called “shell” method, they set up a density profile today and use the Lemaître-Tolman-Bondi solution of GR to evolve it back in time, allowing them to calculate the distance exactly as a function of today’s density profile.
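
Here is the minimal numerical sketch promised above. It integrates a common textbook form of the Dyer-Roeder equation for the angular diameter distance in a flat LambdaCDM background (Omega_m = 0.3 is my illustrative choice, not the paper's), with the smoothness parameter alpha scaling the Ricci-focusing term, and reports the resulting shift in distance modulus relative to the fully smooth alpha = 1 case.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative flat LambdaCDM background (assumed values, not the paper's)
    Om, OL = 0.3, 0.7

    def E(z):
        """Dimensionless Hubble rate H(z)/H0."""
        return np.sqrt(Om * (1 + z)**3 + OL)

    def dEdz(z):
        """dE/dz for flat LambdaCDM, from 2 E dE/dz = 3 Om (1 + z)^2."""
        return 1.5 * Om * (1 + z)**2 / E(z)

    def dyer_roeder_distance(z_src, alpha):
        """Angular diameter distance, in units of c/H0, from the Dyer-Roeder
        equation: a fraction alpha of the mean matter density contributes to
        Ricci focusing, and the Weyl focusing is set identically to zero."""
        def rhs(z, y):
            D, Dp = y
            Dpp = (-(2.0 / (1 + z) + dEdz(z) / E(z)) * Dp
                   - 1.5 * alpha * Om * (1 + z) / E(z)**2 * D)
            return [Dp, Dpp]
        sol = solve_ivp(rhs, (0.0, z_src), [0.0, 1.0], rtol=1e-8, atol=1e-10)
        return sol.y[0, -1]

    z = 1.0
    D_smooth = dyer_roeder_distance(z, alpha=1.0)   # alpha = 1 recovers the FLRW distance
    for alpha in (1.0, 0.5, 0.0):
        D = dyer_roeder_distance(z, alpha)
        # d_L = (1 + z)^2 D_A, and the (1 + z)^2 factor cancels in the ratio
        delta_mu = 5 * np.log10(D / D_smooth)
        print(f"alpha = {alpha:.1f}:  shift in distance modulus = {delta_mu:+.3f} mag")
    # Emptier beams (smaller alpha) are focused less, so the same source looks
    # fainter and hence farther away: the shift is positive and grows as alpha drops.

Note that this sketch leaves the Friedmann background itself untouched; the “modified Dyer-Roeder” and “shell” approaches described above change the background as well, which is part of why the curves in Figures 1 and 2 differ from one another.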

So what is the upshot? Clarkson et al.’s results for the distance modulus depend strongly on which approach is used, as Figure 1 also shows! Indeed, the differences between the models are large enough to matter for the kinds of cosmological questions SNIa are used to answer. So Clarkson et al. conclude that “the old problem of modeling narrow [bundles of light] beams remains unsolved. . . [but must soon be] to ensure precision cosmology delivers correct answers as well as precise ones.”


Zachary Slepian

I’m a 2nd year grad student in Astronomy at Harvard, working with Daniel Eisenstein on the effect of relative velocities between regular and dark matter on the baryon acoustic oscillations. I did my undergrad at Princeton, where I worked with Rich Gott on dark energy, Jeremy Goodman on dark matter, and Roman Rafikov on planetesimals. I also spent a year at Oxford getting a master’s in philosophy of physics, which remains an interest.

Discussion

3 Responses to “Are supernovae really so super? Inhomogeneities and the accuracy of distance measurements”

  1. Semantic note: an Einstein-de Sitter universe is one made of pressureless matter (a~t^2/3). If dark energy has w=-1 it will end up asymptotically approaching a de Sitter universe, which is of course quite different. De Sitter was a pretty busy guy.

    Posted by Adam Solomon | September 19, 2011, 10:06 pm
    • Quite right — the link was to the de Sitter universe wikipedia entry, but the text should have been “de Sitter” and not “Einstein-de Sitter”; corrected now.

      Posted by Zachary Slepian | September 19, 2011, 11:31 pm
