How Essential is the Vacuum Energy to Our Present Model of the Expanding Universe?

How essential to modern cosmology is the vacuum energy concept, and what would happen to our model of the expanding universe if it were proved to be non-existent?

I'm neither a quantum physicist nor a cosmologist, so I'm going only on my layman's understanding.

For the last 50 or so years, the most popular and useful framework for constructing theories to explain everything in the universe except gravity has been quantum field theory (QFT). In this theory, there are multiple fields (conceptually similar to the electric and magnetic fields) suffusing every point in space and time. From the values of these fields we can, in principle, compute the probabilities of the outcomes of any experiment, including the probability of detecting a particular kind of particle at a particular place and time, and from the rules of these computations we can derive all the normal "laws of physics", such as the conservation of energy, the types of particles which exist and so on, as "emergent" properties.

I described QFT as a "framework" because there are many possible sets of fields and rules for their interactions, each of which gives different physics. From the experiments we can do, we can develop some constraints on the overall structure, but we also know that the fields and interactions we know about cannot be the whole story. Over short distance and time scales and at large energy scales they would give rise to nonsensical predictions, so they must be wrong, but we cannot yet do experiments to find out what really happens.

Anyway, the vacuum is the configuration of those fields which has the lowest possible energy, but that does not mean all the fields are zero, and when we calculate that minimal energy (as far as we can) using only the known particles and fields, we get a ridiculously high value (that's one of the nonsensical predictions I mentioned). This is not taken as a sign that the vacuum energy really is that high, but that the unknown high-energy fields must somehow contribute to lowering it.

None of this is very relevant to cosmology, which, except for the very earliest moments of the universe, does not really depend on the details of QFT, but only on the "emergent" properties such as the behaviour of gases, stars, light waves, neutrinos, etc., and on the one thing that (so far) does not seem to fit into QFT, namely gravity (or general relativity, if you prefer). To be sure, if the vacuum energy actually were the silly value computed from the known forces using QFT, the gravitational effects of that energy would make anything like the universe we see impossible, but that is just taken as confirming that there is as yet unexplored physics on small scales.

So, the summary is "not essential at all". Cosmology and the expanding universe depend on GR and on the properties of matter and radiation on large scales and at relatively low energies. The vacuum energy relates to the properties of the fields that underlie that matter and energy on very small scales.

Essential Guide to the EU – Introduction

Now more than ever, the exploration of our starry Universe excites the imagination. Never before has space presented so many pathways for research and discovery.

New observational tools enable us to “see” formerly-invisible portions of the electromagnetic spectrum, and the view is spectacular. Telescope images in X-ray, radio, infrared and ultraviolet light reveal exotic structure and intensely energetic events that continually redefine the quest as a whole.

Spectrographic interpretation has grown hand-in-hand with faster, larger-memory computers and programs, gaining sophistication and broad capability in scientific data processing, imaging and modeling.

Standing out amidst an avalanche of new images is the greatest surprise of the space age: evidence for pervasive electric currents and magnetic fields across the universe, all connecting and animating what once appeared as isolated islands in space. The intricate details revealed are not random, but exhibit the unique behavior of charged particles in plasma under the influence of electric currents.

The telltale result is a complex of magnetic fields and associated electromagnetic radiation. We see the effects on and above the surface of the Sun, in the solar wind, in plasma structures around planets and moons, in the exquisite structure of nebulas, in the high-energy jets of galaxies, and across the unfathomable distances between galaxies.

Thanks to the technology of the 20th century, astronomers of the 21st century will confront an extraordinary possibility. The evidence suggests that intergalactic currents, originating far beyond the boundaries of galaxies themselves, directly affect galactic evolution. The observed fine filaments and electromagnetic radiation in intergalactic and interstellar plasma are the signature of electric currents. Even the power lighting the galaxies’ constituent stars may indeed be found in electric currents winding through galactic space.

In a Coronal Mass Ejection (CME), charged particles are explosively accelerated away from the Sun in streaming filaments, defying the Sun’s immense gravity. Electric fields accelerate charged particles, and nothing else known to science can achieve the same effect. If the Sun is the center of an electric field, how many other enigmatic features of this body will find direct explanation? Credit: SOHO (NASA/ESA)

It was long thought that only gravity could do “work” or act effectively across cosmic distances. But perspectives in astronomy are rapidly changing. Specialists trained in the physics of electricity and magnetism have developed new insights into the forces active in the cosmos. A plausible conclusion emerges. Not gravity alone, but electricity and gravity have shaped and continue to shape the universe we now observe.

A Little History

The early theoretical foundation for modern astronomy was laid by the work of Johannes Kepler and Isaac Newton in the 17th and 18th centuries. Since 1687, when Newton first explained the movement of the planets with his Law of Gravity, science has relied on gravity to explain all large-scale events, such as the formation of stars and galaxies or the births of planetary systems.

This foundation rested on the observed role of gravity in our solar system. Research into the nature and potential of electricity had not yet begun.

Franklin’s experiments with electricity occurred after the directions of gravity-only astronomy were already well-established. Credit: Photo courtesy of the Benjamin Franklin Tercentenary

Then, in the 19th Century, research pioneers — whose very names crackle with electricity — Alessandro Volta (1745-1827), André Ampère (1775-1836), Michael Faraday (1791-1867), Joseph Henry (1797-1878), James Clerk Maxwell (1831-1879), and John H. Poynting (1852-1914) began to empirically verify the “laws” governing magnetism and electrodynamic behavior, and developed useful equations describing them.

By the start of the 20th Century a Norwegian researcher, Kristian Birkeland (1867-1917), was exploring the relationship between the aurora borealis and the magnetic fields he was able to measure on the Earth below them. He deduced that flows of electrons from the Sun were the source of the “Northern Lights” — a conclusion confirmed in detail by modern research. It would be at least another seventy years before the phrase “Birkeland currents” began to enter the astronomers’ lexicon.

Subsequent work by other scientists — James Jeans (1877-1946), Nobel Laureate Irving Langmuir (1881-1957), Willard Bennett (1903-1987) and Nobel Laureate Hannes Alfvén (1908-1995), author of Cosmic Plasma — continued to extend our understanding of ionized matter (plasma, the fourth state of matter).

In the latter half of the 20th Century, Alfvén’s close colleague Anthony Peratt published a groundbreaking textbook on space plasma, Physics of the Plasma Universe, the culmination of his hands-on, high-energy plasma experiments and supercomputer particle-in-cell plasma simulations at the Department of Energy’s Los Alamos Laboratory in New Mexico, USA. The book has continued to serve as a guide to specialists in the field.

A new tone in astronomy emerged as engineers pointed radio telescopes to the sky and began to detect something astronomers had not expected — radio waves from energetic events in the “emptiness” of space. At the Second IEEE International Workshop on Plasma Astrophysics and Cosmology, 1993, Kevin Healy of the National Radio Astronomy Observatory (NRAO) presented a paper, A Window on the Plasma Universe: The Very Large Array, in which he concluded,

“With the continuing emergence of serious difficulties in the “standard models” of astrophysics [and] the rise of the importance of plasma physics in the description of many astrophysical systems, the VLA (Very Large Array) is a perfect instrument to provide the observational support for laboratory, simulation, and theoretical work in plasma physics. Its unprecedented flexibility and sensitivity provide a wealth of information on any radio emitting region of the universe.”

Active galaxy 3C31 (circled at center) is dwarfed by the plasma jets along its polar axis, moving at a large fraction of the speed of light. How might the electrical potential along the immense volume of this active region affect the evolution of this galaxy and its billions of stars? Credit: NRAO’s Very Large Array, and Patrick Leahy’s Atlas of DRAGNs

At the start of the 21st Century, Wallace Thornhill and David Talbott wrote their collaborative book, The Electric Universe, and electrical engineer and professor Donald E. Scott authored The Electric Sky. Together these works provide the first general introduction to a new understanding of electric currents and magnetic fields in space.

Leading the way in technical publication has been the Nuclear and Plasma Sciences Society, a division of the Institute of Electrical and Electronics Engineers (IEEE). This professional organization is one of the world’s largest publishers of scientific and technical literature.

Standing on the shoulders of the electrical pioneers, Carl Fälthammar, Gerrit Verschuur, Per Carlqvist, Göran Marklund and many others continue to extend groundbreaking plasma research to this day.

The Limits of Gravitational Theory

The Law of Gravity, which relies exclusively on the masses of celestial bodies and the distances between them, works very well for explaining planetary and satellite motions within our solar system. But when astronomers tried to apply it to galaxies and clusters of galaxies, it turned out that nearly 90% of the mass necessary to account for the observed motions was missing.

The trouble began in 1933 when astronomer Fritz Zwicky calculated the mass-to-light ratio for 8 galaxies in the Coma Cluster of the Coma Berenices (“Berenice’s Hair”) constellation. At the time, it was assumed that the amount of visible light coming from stars should be proportional to their masses (a concept called “visual equilibrium”). As Zwicky realized, the apparent rapid velocities of the galaxies around their common center of mass (“barycenter”) suggested that much more mass than could be seen was required to keep the galaxies from flying out of the cluster.

Zwicky concluded that the missing mass must therefore be invisible or “dark”. Other astronomers, such as Sinclair Smith (who performed calculations on the Virgo Cluster in 1936), began to find similar problems. To make matters worse, in the 1970s, rotation curves (plots of radius from the center versus stars’ speed of rotation) for stars in the Milky Way galaxy revealed that the speeds flatten out rather than trail down, implying that velocity stays roughly constant as radius increases, contrary to the decline that Newton’s Law of Gravity predicts for, and which is observed in, the Solar System.
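The mismatch can be illustrated with a short calculation. For a star orbiting well outside a galaxy's visible mass, Newtonian gravity predicts a circular speed v = sqrt(GM/r) that falls with distance. The numbers below (a point-mass galaxy of 10^11 solar masses, radii of 5 and 20 kpc) are illustrative assumptions, not measurements:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
KPC = 3.086e19       # one kiloparsec in metres

def keplerian_speed(r_m, enclosed_mass_kg):
    """Circular orbital speed predicted by Newtonian gravity when
    essentially all mass sits inside radius r: v = sqrt(G*M/r)."""
    return math.sqrt(G * enclosed_mass_kg / r_m)

# Illustrative galaxy: 1e11 solar masses of visible matter,
# treated as concentrated well inside the radii sampled below.
M_visible = 1e11 * M_SUN
v5 = keplerian_speed(5 * KPC, M_visible)    # predicted speed at 5 kpc
v20 = keplerian_speed(20 * KPC, M_visible)  # predicted speed at 20 kpc

# Keplerian prediction: quadrupling the radius halves the speed...
print(f"v(5 kpc)  = {v5 / 1000:.0f} km/s")
print(f"v(20 kpc) = {v20 / 1000:.0f} km/s")
# ...whereas measured rotation curves stay roughly flat out to large
# radii, which is the discrepancy described in the text.
```

Quadrupling the radius halves the predicted speed; the observed flat curves are what the extra unseen mass was invented to explain.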

In short, astronomers using the Gravity Model were forced to add a lot more mass to every galaxy than can be detected at any wavelength. They called this extra matter “dark”; its existence can only be inferred from the failure of predictions. To cover for the insufficiency they gave themselves a blank check, a license to place this imagined stuff wherever needed to make the gravitational model work.

Other mathematical conjectures followed. Assumptions about the redshift of objects in space led to the conclusion that the universe is expanding. Then other speculations led to the notion that the expansion is accelerating. Faced with an untenable situation, astronomers postulated a completely new kind of matter, an invisible “something” that repels rather than attracts. Since Einstein equated mass with energy (E = mc²), this new kind of matter was interpreted as a form of mass that acts like pure energy — regardless of the fact that, according to the equation, if the matter has no mass it can have no energy. Astronomers called it “dark energy”, assigning to it an ability to overcome the very gravity on which the entire theoretical edifice rested.

Dark energy is thought to be something like an electrical field, with one difference. Electric fields are detectable in two ways: through the electrons they accelerate, which emit observable photons as synchrotron and bremsstrahlung radiation, and through the charged particles they drive as electric currents, which are accompanied by magnetic fields detectable through Faraday rotation of polarized light. Dark energy seems to emit nothing, and nothing it purportedly does is revealed through a magnetic field. One suggestion is that some property of empty space is responsible. But empty space, by definition, contains no matter and therefore has no energy. The concept of dark energy is philosophically unsound and is a poignant reminder that the gravity-only model never came close to the original expectations for it.

This artistic view of the standard model of the Big Bang and the expanding Universe seems to present a precise picture of cosmic history. A much different story emerges as we learn about plasma phenomena and electric currents in space. Credit: NASA WMAP

Taking the postulated dark matter and dark energy together, something on the order of twenty-four times as much mass in the form of invisible stuff would have to be added to the visible, detectable mass of the Universe. That is to say, in the Gravity Model all the stars and all the galaxies and all the matter between the stars that we can detect amount to a minuscule 4% of the estimated mass:

Chandra X-ray Observatory estimates of the “total energy content of the Universe”. Only “normal” matter can be directly detected with telescopes. The remaining “dark” matter and energy are invisible. Image Credit: NASA WMAP

Critics often point out that a theory requiring speculative, undetectable stuff on such a scale also stretches credulity to the breaking point. Something very real, perhaps even obvious, is almost certainly missing in the standard Gravity Model.

Is it possible that the missing component could be something as familiar to the modern world as electricity?

The Energy Conservation in Our Universe and the Pressureless Dark Energy

Recent observations confirm that a certain amount of unknown dark energy exists in our universe so that the current expansion of our universe is accelerating. It is commonly believed that the pressure of the dark energy is negative and the density of the dark energy is almost a constant throughout the universe expansion. In this paper, we show that the law of energy conservation in our universe has to be modified because more vacuum energy is gained due to the universe expansion. As a result, the pressure of dark energy would be zero if the total energy of our universe is increasing. This pressureless dark energy model basically agrees with the current observational results.

1. Introduction

In the past decades, the data from supernovae confirmed the accelerating expansion of our universe [1, 2]. This acceleration can be explained by assuming the existence of a cosmological constant $\Lambda$ in the Einstein field equation. Usually, this constant is regarded as a kind of energy called “dark energy” that exists in our universe. The $\Lambda$CDM model, which is the most robust scenario nowadays to describe the evolution of our universe, suggests that the dark energy density $\epsilon_\Lambda$ is a constant throughout the evolution of our universe. This model provides good fits for the data on the large-scale structure of the universe and the Cosmic Microwave Background [3, 4]. Besides this major model, there are some other dark energy models which can also satisfy the current observational constraints [5–9].

In fact, quantum physics shows that the vacuum is not really nothing but contains energy. The discovery of the Casimir effect indicates that some nonzero energy exists in vacuum, which is called the vacuum energy [10]. Therefore, many cosmologists believe that dark energy is indeed the vacuum energy [11–13]. However, theoretical calculations show that the predicted value of the vacuum energy is nearly 120 orders of magnitude larger than the observed value in our universe [11]. Although there are some theoretical suggestions which can alleviate the problem, no satisfactory explanation has been obtained [14–17]. Moreover, the idea of vacuum energy suffers from the “coincidence problem” [13]: the ratio of dark energy density to matter energy density was extremely small at the very beginning of the universe expansion, while the current dark energy density is very close to the present matter density [18]. Therefore, some suggest invoking a time-dependent dark energy, now known as the quintessence model, to solve the “coincidence problem” [19–21].
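The “120 orders of magnitude” figure can be reproduced with a back-of-the-envelope estimate: compare a naive vacuum energy density cut off at the Planck scale with the observed dark energy density. The Hubble constant and $\Omega_\Lambda$ values below are representative assumptions, and the exact discrepancy depends on the chosen cutoff:

```python
import math

# Standard physical constants (SI units).
hbar = 1.055e-34     # reduced Planck constant, J s
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

# Naive QFT vacuum energy density with a Planck-scale cutoff:
# eps ~ c^7 / (hbar * G^2).
eps_planck = c**7 / (hbar * G**2)            # J/m^3

# Observed dark energy density: Omega_Lambda * (critical density) * c^2,
# using representative values H0 ~ 67.7 km/s/Mpc, Omega_Lambda ~ 0.69.
H0 = 67.7 * 1000 / 3.086e22                  # Hubble constant, 1/s
rho_crit = 3 * H0**2 / (8 * math.pi * G)     # critical density, kg/m^3
eps_obs = 0.69 * rho_crit * c**2             # J/m^3

print(f"Planck-cutoff estimate: {eps_planck:.2e} J/m^3")
print(f"Observed dark energy:   {eps_obs:.2e} J/m^3")
print(f"Discrepancy: roughly 10^{math.log10(eps_planck / eps_obs):.0f}")
```

The ratio lands at roughly 10^120 or a few orders beyond, which is the mismatch the cosmological constant problem refers to.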

Although the idea of vacuum energy has two major problems, recent observations show that the dark energy density is really very close to a constant value [22–24]. If $\epsilon_\Lambda$ is a constant, the parameter of the dark energy equation of state should be $w = -1$, where the pressure of dark energy is given by $P_\Lambda = w\epsilon_\Lambda$. The most recent observational constraint is consistent with $w = -1$ [25]. Therefore, it seems that it may not be necessary to invoke the idea of quintessence, which suggests that $w$ is not exactly equal to $-1$. Furthermore, most of the quintessence models involve some free parameters and arbitrary scalar potential functions which make them much more complicated than the vacuum energy model. Therefore, based on the observations and on the simplicity of the model, the vacuum energy model is still the better candidate to explain the required dark energy in our universe.

Since positive pressure and energy would produce attractive gravitational effect on our universe expansion, the negative pressure in dark energy is usually interpreted as the effect of “antigravity”. It is very strange because we do not know anything that is positive in energy but produces negative pressure. However, the result of the negative pressure of dark energy is solely based on the assumption of energy conservation. What would be the equation of state of the dark energy if the total energy of our universe is increasing? In this paper, we show that dark energy can be pressureless if we assume that the total energy of our universe is increasing when the universe is expanding. We first review the essential equations that govern the evolution of our universe in the standard picture. Then we discuss the effect of the equations after allowing that our universe’s total energy is increasing.

2. The Evolution of Our Universe in the Standard Picture

The original Friedmann equation with a dark energy term in a flat universe is given by

$$\left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3c^2}\,\epsilon, \quad (1)$$

where $a$ is the cosmic scale factor, $G$ is the universal constant for gravitation, $c$ is the speed of light, and $\epsilon = \epsilon_m + \epsilon_r + \epsilon_\Lambda$, with $\epsilon_m$ and $\epsilon_r$ being the energy density of matter and radiation, respectively. On the other hand, if the total energy of our universe remains constant, the law of energy conservation gives

$$\frac{d}{dt}\left(\epsilon a^3\right) = -P\,\frac{d}{dt}\left(a^3\right), \quad (2)$$

where $P = P_m + P_r + P_\Lambda$ is the total pressure, with $P_m$ and $P_r$ being the matter and radiation pressure, respectively. Expanding (2) gives the fluid equation

$$\dot{\epsilon} + 3\frac{\dot{a}}{a}\left(\epsilon + P\right) = 0. \quad (3)$$

By differentiating (1), we get

$$2\frac{\dot{a}}{a}\left(\frac{\ddot{a}}{a} - \frac{\dot{a}^2}{a^2}\right) = \frac{8\pi G}{3c^2}\,\dot{\epsilon}. \quad (4)$$

By substituting (3) into (4) and using (1), we get

$$\frac{\ddot{a}}{a} = -\frac{4\pi G}{3c^2}\left(\epsilon + 3P\right). \quad (5)$$

The above equation is the standard equation to describe our universe expansion. Since the effects of $\epsilon_r$ and $P_r$ are negligible in the matter and dark energy dominated universe, the accelerating universe expansion ($\ddot{a} > 0$) requires

$$P_\Lambda < -\frac{\epsilon_m + \epsilon_\Lambda}{3};$$

that is, dark energy should have negative pressure. For the standard $\Lambda$CDM model, $\epsilon_\Lambda$ is a constant and $P_\Lambda = -\epsilon_\Lambda$ (i.e., $w = -1$). Our universe is accelerating now since $2\epsilon_\Lambda > \epsilon_m$.
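The acceleration condition can be checked numerically. With pressureless matter and $w = -1$, the acceleration equation reduces to $\ddot{a}/a \propto -(\epsilon_m - 2\epsilon_\Lambda)$, so the expansion accelerates when $2\epsilon_\Lambda > \epsilon_m$. The present-day density parameters below ($\Omega_m \approx 0.3$, $\Omega_\Lambda \approx 0.7$) are representative assumptions, not values taken from this paper:

```python
# Acceleration check for the standard picture: with P_m = P_r = 0 and
# P_Lambda = -eps_Lambda (w = -1), the acceleration equation gives
#   a''/a  proportional to  -(eps_m - 2*eps_Lambda),
# so the expansion accelerates when 2*eps_Lambda > eps_m.
# Representative present-day density parameters (assumed):
omega_m, omega_lambda = 0.3, 0.7

# Deceleration measure in units of 4*pi*G/(3*c^2) times the critical
# energy density; negative means accelerating expansion.
decel = omega_m - 2 * omega_lambda
print("accelerating today:", decel < 0)   # prints: accelerating today: True

# At earlier times matter dominates (eps_m scales as a^-3 while
# eps_Lambda stays constant), so the same expression turns positive
# and the expansion decelerates:
a = 0.3                                    # scale factor well before today
decel_early = omega_m * a**-3 - 2 * omega_lambda
print("accelerating at a = 0.3:", decel_early < 0)
```

This also illustrates why the onset of acceleration is a relatively recent event in cosmic history: the dark energy term only wins once matter has been diluted enough.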

3. The Increasing Energy in Our Universe

However, the above calculations are based on the assumption of the constant total energy (2). If dark energy is really the vacuum energy, the total energy of our universe is increasing when the universe is expanding. The more space is included in our universe, the more energy is contained in our universe [26]. In other words, some extra energy is “flowing” into our universe. If this is the case, (2) should be rewritten as

$$\frac{d}{dt}\left(\epsilon a^3\right) = -P\,\frac{d}{dt}\left(a^3\right) + \frac{dE_v}{dt}, \quad (6)$$

where $dE_v/dt$ is the rate of energy “flowing” into our universe. Since our universe is expanding, the actual “volume” of our universe is also increasing. Therefore the amount of dark energy (vacuum energy) in our universe is increasing due to the increasing vacuum “volume.” Since the density of vacuum energy is a constant and the cube of the scale factor is directly proportional to the total “vacuum volume,” we should have

$$\frac{dE_v}{dt} = \epsilon_\Lambda\,\frac{d}{dt}\left(a^3\right). \quad (7)$$

Therefore, by using (6) and (7), we get

$$\dot{\epsilon} + 3\frac{\dot{a}}{a}\left(\epsilon + P\right) = 3\frac{\dot{a}}{a}\,\epsilon_\Lambda. \quad (8)$$

By putting (8) into (4), the additional term finally rewrites the acceleration equation as

$$\frac{\ddot{a}}{a} = -\frac{4\pi G}{3c^2}\left(\epsilon + 3P - 3\epsilon_\Lambda\right). \quad (9)$$

Obviously, the expansion of our universe can be accelerating even if $P_\Lambda = 0$. From the traditional model (our universe as a closed system), observations indicate that $w \approx -1$ [25]. Surprisingly, by comparing (5) with (9), this is equivalent to $w = 0$ in the new model (the total energy of our universe is increasing). This shows that dark energy is pressureless if we rewrite the energy conservation law.
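The claimed equivalence between (5) with $w = -1$ and (9) with pressureless dark energy can be verified numerically. The sketch below uses arbitrary test densities, treats matter and radiation as pressureless, and works in units where the prefactor $4\pi G/3c^2$ is set to 1:

```python
# Numerical check that the two descriptions coincide: the standard
# acceleration equation (5) with w = -1, versus the modified equation
# (9) with pressureless dark energy. Units: 4*pi*G/(3*c^2) = 1.
def accel_standard(eps_m, eps_lambda, w=-1.0):
    """a''/a = -(eps + 3P), with P = w * eps_lambda and pressureless matter."""
    eps = eps_m + eps_lambda
    return -(eps + 3 * w * eps_lambda)

def accel_modified(eps_m, eps_lambda):
    """a''/a = -(eps + 3P - 3*eps_lambda), with P = 0 (pressureless)."""
    eps = eps_m + eps_lambda
    return -(eps - 3 * eps_lambda)

# Arbitrary test densities (matter, dark energy), including a
# dark-energy-free case and a dark-energy-dominated case.
for eps_m, eps_l in [(1.0, 0.0), (0.3, 0.7), (5.0, 2.5)]:
    assert abs(accel_standard(eps_m, eps_l) - accel_modified(eps_m, eps_l)) < 1e-12

print("equation (5) with w = -1 matches equation (9) with P_Lambda = 0")
```

Both expressions reduce to $-(\epsilon_m - 2\epsilon_\Lambda)$ in these units, which is the mathematical equivalence the Discussion section relies on.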

4. Discussion

The traditional cosmological model considers the total energy of our universe to be constant, so that dark energy must provide negative pressure in our universe. However, if we assume that dark energy is indeed the vacuum energy, more energy would be contained in our universe as the universe is expanding. In other words, the total energy in our universe should be increasing. If we include the increasing energy term in the fluid equation, we need not require the dark energy to have negative pressure. Recent observations suggest that $w = -1$ in the traditional model, which is actually equivalent to $w = 0$ in our model. This indicates that dark energy is pressureless. Therefore, our model is consistent and compatible with the observational evidence for a negative $w$.

Basically, the idea of negative pressure in dark energy serves as the “antigravity” that balances the gravitational effect of matter. If dark energy is pressureless, how can it balance the attractive gravitational force? The above result tells us that the Einstein field equation can intrinsically give an accelerating expansion if the total energy of the universe is increasing. In the traditional model, the vacuum energy gained through the universe expansion is used to do negative work so that the total energy remains unchanged. However, we have no clear idea why the dark energy has to do negative work to dissipate the energy it gains through expansion. If dark energy is pressureless, no work has to be done by the dark energy. All the energy gained during the universe expansion is vacuum energy. Since the extra term in (6) would be cancelled by (7), the scale dependence of matter and radiation would not be affected by the amount of vacuum energy if $P_\Lambda = 0$.

In fact, assuming that the total energy of our universe is increasing and the dark energy is pressureless is mathematically equivalent to the view that the total energy of our universe remains constant with negative-pressure dark energy. However, the physical interpretations are different. The former view is a better interpretation because we need not assume some new form of energy which has negative pressure. In fact, the Casimir effect shows us only that vacuum energy exists, not that negative pressure exists. Also, no work has to be done by the dark energy when the universe is expanding. The only drawback is that we need to assume that the total energy of our universe is not always constant; energy conservation, after all, has never been proven to be a strict law in astrophysics. Nevertheless, the law of energy conservation can still be applied in general situations, since the effect of vacuum energy is negligible; it fails only when we consider our entire universe as an object.

In our model, we are not saying that the universe is expanding into a larger background space, increasing its size and volume within some pre-existing environment. Based on general relativity, the “new volume” in our universe is created by the expansion of the universe itself. According to quantum mechanics, this simultaneously creates “new vacuum energy,” because the vacuum energy (dark energy) is associated with the volume. Therefore, the increasing energy in our universe is a direct consequence of the theories of general relativity and quantum mechanics.

Although our model can give a new interpretation of the nature of dark energy, it cannot offer a satisfactory explanation to the cosmological constant problem. Some new mechanisms are required to cancel out the large amount of vacuum energy. Also, this model cannot account for the scalar field that is responsible for inflation.

5. Conclusion

If the total energy of our universe is increasing during the universe expansion, the dark energy could be regarded as pressureless. This interpretation does not violate any observational results in cosmology.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.


  1. A. G. Riess, A. V. Filippenko, P. Challis et al., “Observational evidence from supernovae for an accelerating universe and a cosmological constant,” The Astronomical Journal, vol. 116, no. 3, pp. 1009–1038, 1998.
  2. S. Perlmutter, G. Aldering, G. Goldhaber et al., “Measurements of Ω and Λ from 42 high-redshift supernovae,” The Astrophysical Journal, vol. 517, pp. 565–586, 1999.
  3. D. J. Eisenstein, I. Zehavi, D. W. Hogg et al., “Detection of the baryon acoustic peak in the large-scale correlation function of SDSS luminous red galaxies,” The Astrophysical Journal, vol. 633, no. 2, pp. 560–574, 2005.
  4. P. A. R. Ade, N. Aghanim, M. I. R. Alves et al., “Planck 2013 results. I. Overview of products and scientific results,” Astronomy & Astrophysics, vol. 571, article A1, 48 pages, 2014.
  5. R. A. El-Nabulsi, “Some late-time cosmological aspects of a Gauss-Bonnet gravity with nonminimal coupling à la Brans-Dicke: solutions and perspectives,” Canadian Journal of Physics, vol. 91, no. 4, pp. 300–321, 2013.
  6. Y. H. Li, J. F. Zhang, and X. Zhang, “New initial condition of the new agegraphic dark energy model,” Chinese Physics B, vol. 22, Article ID 039501, 2013.
  7. W. Zimdahl, J. C. Fabris, S. del Campo, and R. Herrera, “Cosmology with Ricci-type dark energy,” AIP Conference Proceedings, vol. 1647, no. 1, pp. 13–18, 2015.
  8. R. Kallosh, A. Linde, and M. Scalisi, “Inflation, de Sitter landscape and super-Higgs effect,” Journal of High Energy Physics, vol. 2015, no. 3, article 111, 2015.
  9. K. Bamba, S. Nojiri, and S. D. Odintsov, “Reconstruction of scalar field theories realizing inflation consistent with the Planck and BICEP2 results,” Physics Letters B, vol. 737, pp. 374–378, 2014.
  10. K. A. Milton, “Calculating Casimir energies in renormalizable quantum field theory,” Physical Review D, vol. 68, Article ID 065020, 2003.
  11. S. Weinberg, “The cosmological constant problem,” Reviews of Modern Physics, vol. 61, no. 1, pp. 1–23, 1989.
  12. M. Szydlowski and W. Godlowski, “Acceleration of the universe driven by the Casimir force,” International Journal of Modern Physics D, vol. 17, no. 2, pp. 343–366, 2008.
  13. R. Bousso, “The cosmological constant problem, dark energy, and the landscape of string theory,” in Proceedings of “Subnuclear Physics: Past, Present and Future”, Pontifical Academy of Sciences, Vatican.
  14. S. D. Bass, “The cosmological constant puzzle: vacuum energies from QCD to dark energy,” in Symposium “Quantum Chromodynamics: History and Prospects”, Oberwoelz, Austria.
  15. A. Dupays, B. Lamine, and A. Blanchard, “Can dark energy emerge from quantum effects in a compact extra dimension?” Astronomy and Astrophysics, vol. 554, article A60, 2013.
  16. H. Razmi and S. M. Shirazi, “Is the free vacuum energy infinite?” Advances in High Energy Physics, vol. 2015, Article ID 278502, 3 pages, 2015.
  17. Y. Fujii, “Is the zero-point energy a source of the cosmological constant?”
  18. E. J. Copeland, M. Sami, and S. Tsujikawa, “Dynamics of dark energy,” International Journal of Modern Physics D, vol. 15, no. 11, pp. 1753–1935, 2006.
  19. P. J. Steinhardt, L. Wang, and I. Zlatev, “Cosmological tracking solutions,” Physical Review D, vol. 59, no. 12, Article ID 123504, 1999.
  20. T. Chiba, “Slow-roll thawing quintessence,” Physical Review D, vol. 79, no. 8, Article ID 083517, 8 pages, 2009.
  21. A. Barreira and P. P. Avelino, “Anthropic versus cosmological solutions to the coincidence problem,” Physical Review D, vol. 83, Article ID 103001, 2011.
  22. M. Hicken, W. M. Wood-Vasey, S. Blondin et al., “Improved dark energy constraints from ~100 new CfA supernova type Ia light curves,” The Astrophysical Journal, vol. 700, pp. 1097–1140, 2009.
  23. A. Carnero, E. Sánchez, M. Crocce, A. Cabré, and E. Gaztañaga, “Clustering of photometric luminous red galaxies—II. Cosmological implications from the baryon acoustic scale,” Monthly Notices of the Royal Astronomical Society, vol. 419, no. 2, pp. 1689–1694, 2012.
  24. N. Suzuki, D. Rubin, C. Lidman et al., “The Hubble Space Telescope Cluster Supernova Survey. V. Improving the dark-energy constraints above z > 1 and building an early-type-hosted supernova sample,” The Astrophysical Journal, vol. 746, p. 85, 2012.
  25. S. Postnikov, M. G. Dainotti, X. Hernandez, and S. Capozziello, “Nonparametric study of the evolution of the cosmological equation of state with SNeIa, BAO, and high-redshift GRBs,” The Astrophysical Journal, vol. 783, no. 2, article 126, 2014.
  26. B. W. Carroll and D. A. Ostlie, An Introduction to Modern Astrophysics, Pearson, San Francisco, Calif, USA, 2007.


Copyright © 2015 Man Ho Chan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. Introduction

One of the most striking results of the discovery of the Higgs boson (Aad et al., 2012; Chatrchyan et al., 2012) has been that its mass lies in a regime that predicts the current vacuum state to be a false vacuum; that is, there is a lower-energy vacuum state available into which the electroweak vacuum can decay (Degrassi et al., 2012; Buttazzo et al., 2013). That this was a possibility in the Standard Model (SM) has been known for a long time (Hung, 1979; Sher, 1993; Casas et al., 1996; Isidori et al., 2001; Ellis et al., 2009; Elias-Miro et al., 2012). The precise behavior of the Higgs potential is sensitive to the experimental inputs, in particular the physical masses of the Higgs and the top quark, and also to physics beyond the SM. The current best estimates of the Higgs and top quark masses (Tanabashi et al., 2018),

$$M_h \approx 125\ \mathrm{GeV}, \qquad M_t \approx 173\ \mathrm{GeV},$$

place the Standard Model squarely in the metastable region.

As in any quantum system, there are three main ways in which the vacuum decay can happen. They are illustrated in Figure 1. If the system is initially in the false vacuum state, the transition can take place through quantum tunneling. On the other hand, if there is sufficient energy available, for example in a thermal equilibrium state, it may be possible for the system to move classically over the barrier. The third way consists of quantum tunneling from an excited initial state; this is often the dominant process if the temperature is too low for the fully classical process. All three mechanisms can be relevant for the decay of the electroweak vacuum state, with their rates depending on the conditions. In each of them, the transition initially happens locally in a small volume, nucleating a small bubble of the true vacuum. The bubble then starts to expand, quickly reaching the speed of light and destroying everything in its way.

Figure 1. Illustration of vacuum decay for a potential with a metastable vacuum at the origin.

If the Universe was infinitely old, even an arbitrarily low vacuum decay rate would be incompatible with our existence. The implications of vacuum metastability can therefore only be considered in the cosmological context, taking into account the finite age and the cosmological history of the Universe. Although the vacuum decay rate is extremely slow in the present day, that was not necessarily the case in the early Universe. High Hubble rates during inflation and high temperatures afterwards could have potentially increased the rate significantly. Therefore the fact that we still observe the Universe in its electroweak vacuum state allows us to place constraints on the cosmological history, for example the reheat temperature and the scale of inflation, and on Standard Model parameters, such as particle masses and the coupling between the Higgs field and spacetime curvature.

In this review we discuss the implications of Higgs vacuum metastability in early Universe cosmology and describe the current state of the literature. We also present, with detailed derivations, the theoretical frameworks that are needed for the final results. This article complements earlier comprehensive reviews of electroweak vacuum metastability (Sher, 1989; Schrempp and Wimmer, 1996), which focus on the particle physics aspects rather than the cosmological context, and the recent introductory review (Moss, 2015) that explores the role of the Higgs field in cosmology more generally.

In section 2 we present renormalization group improvement in flat space by using the Yukawa theory as an example before discussing the full SM. Section 3 contains an overview of quantum field theory on curved backgrounds relevant for our purposes, including the modifications to the SM. In section 4 we go through the various ways vacuum decay can occur. In section 5 we discuss the connection to cosmology and in section 6 we present our concluding remarks.

Our sign conventions for the metric and curvature tensors are (−, −, −) in the classification of Misner et al. (1973), and throughout we will use units in which the reduced Planck constant, the Boltzmann constant and the speed of light are set to unity, ℏ ≡ kB ≡ c ≡ 1. The reduced Planck mass is given in terms of Newton's constant by MP = (8πG)^(−1/2) ≈ 2.4 × 10^18 GeV.

We will use φ for the vacuum expectation value (VEV) of a spectator field (usually the Higgs), ϕ for the inflaton and Φ for the SM Higgs doublet. The inflaton potential is U(ϕ) and the Higgs potential V(φ). The physical Higgs and top quark masses are denoted by Mh and Mt.
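As a quick numerical illustration of these conventions, the reduced Planck mass can be computed from Newton's constant expressed in natural units; the value of G used below is a standard reference value and is not stated in the text above.

```python
# Compute the reduced Planck mass M_P = (8 pi G)^(-1/2) in natural units
# (hbar = c = 1), with Newton's constant G expressed in GeV^-2.
import math

G = 6.709e-39                    # Newton's constant in GeV^-2 (standard value)
M_P = 1.0 / math.sqrt(8 * math.pi * G)

print(f"M_P = {M_P:.3e} GeV")    # roughly 2.4e18 GeV
assert 2.3e18 < M_P < 2.5e18
```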

Dark energy hides behind phantom fields

Quintessence and phantom fields, two hypotheses formulated using data from satellites, such as Planck and WMAP, are among the many theories that try to explain the nature of dark energy. Now researchers from Barcelona and Athens suggest that both possibilities are only a mirage in the observations and it is the quantum vacuum which could be behind this energy that moves our universe.

Cosmologists believe that some three quarters of the universe is made up of a mysterious dark energy, which would explain its accelerated expansion. The truth is that they do not know what it could be, so they put forward possible solutions.

One is the existence of quintessence, an invisible gravitating agent that, instead of attracting, repels and accelerates the expansion of the cosmos. From the classical world until the Middle Ages, the term referred to the ether, the fifth element of nature alongside earth, fire, water and air.

Another possibility is the presence of an energy or phantom field whose density increases with time, causing an exponential cosmic acceleration. The expansion would reach such a speed that it could break apart the nuclear forces in atoms and end the universe in some 20 billion years, in what is called the Big Rip.

The experimental data that underlie these two hypotheses come from satellites such as Planck, of the European Space Agency (ESA), and the Wilkinson Microwave Anisotropy Probe (WMAP), of NASA. Observations from the two probes are essential for constraining the so-called equation of state of dark energy, the mathematical relation that characterises it, just as solids, liquids and gases have their own equations of state.

Now researchers from the University of Barcelona (Spain) and the Academy of Athens (Greece) have used the same satellite data to show that the behaviour of dark energy can be explained without resorting to either quintessence or phantom energy. The details have been published in the journal Monthly Notices of the Royal Astronomical Society.

"Our theoretical study demonstrates that the equation of state of dark energy can simulate a quintessence field, or even a phantom field, without being one in reality. Thus, when we see these effects in the observations from WMAP, Planck and other instruments, what we are seeing is a mirage," said Joan Solà, one of the authors, from the University of Barcelona.

Nothing fuller than the quantum vacuum

"What we think is happening is a dynamic effect of the quantum vacuum, a parameter that we can calculate," explained the researcher. The concept of the quantum vacuum has nothing to do with the classic notion of absolute nothingness. "Nothing is more 'full' than the quantum vacuum since it is full of fluctuations that contribute fundamentally to the values that we observe and measure," Solà pointed out.

These scientists propose that dark energy is a type of dynamical quantum vacuum energy that acts in the accelerated expansion of our universe. This is in contrast to the traditional static vacuum energy or cosmological constant.

The drawback of this strange vacuum is that it gives rise to the cosmological constant problem, a discrepancy between the predictions of quantum theory and the observed value that drives physicists mad.

"However, quintessence and phantom fields are still more problematic, therefore the explanation based on the dynamical quantum vacuum could be the simplest and most natural one," concluded Solà.

Big Rip theory


I think the paper that first really pushed this idea was

For a non-technical exposition, read

Joe, this is just a side comment: look at the DATES on the material that George just gave you links to.

I haven't heard much about "big rip" scenario since like 2005.

I remember hearing a lot about it back in 2003-2004 but the buzz died down some since then.

The present model that astronomers use, which seems a good fit to the data that has come in since 2005 (e.g., from spacecraft like WMAP) so far, does not produce a big rip. It has accelerated expansion, but the acceleration is rather gentle and doesn't disassemble our galaxy, or our solar system, or anything on that scale.

It is always possible that the data is wrong, and that future data will indicate different cosmic parameters and a different expansion history, and the "big rip" scenario could make a comeback and become a fashionable idea once more.

The general consensus amongst cosmologists and astrophysicists is that the "Big Rip" is nonsense. It made a bit of a splash a few years back, and the media ran with it. Typical of mainstream media, they made far more out of it than was warranted, and implied it was a serious theory, which it was not.

Basically, it violates all sorts of laws of physics. Specifically, it requires that the cosmological constant varies with time, which is not accepted by the majority of specialists in the field, nor supported by observational evidence. As far as we can be certain of things, the cosmological constant is indeed constant, and does not vary with time.

Given that the CC does not vary (or increase) with time, there is no way for "dark energy" to overcome the forces of bound structures, such as galaxies, solar systems, planets, people, molecules, atoms, etc.

4. Vacuum Decay

4.1. Quantum Tunneling and Bubble Nucleation

The main mechanism behind vacuum decay in the Standard Model is essentially a direct extension of ordinary quantum tunneling to quantum field theories. In ordinary quantum mechanics, the wave-function for particles trapped by a potential barrier can penetrate the classically forbidden region of the barrier, leading to a non-zero probability to be found on the other side. The transition rate for particles of energy E incident on a barrier described by potential W(x) can be estimated using the WKB method (Coleman, 1985),

where x1, x2 are the turning points of the potential. As is clear from this expression, the tunneling rate is suppressed by wide and tall barriers.
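The elided WKB expression takes the standard form (a hedged reconstruction, written for a particle of unit mass):

```latex
\Gamma \;\propto\; \exp\left[\, -2 \int_{x_1}^{x_2} \mathrm{d}x \, \sqrt{2\left(W(x) - E\right)} \,\right]
```

in which a wider or taller barrier increases the integral and hence suppresses the rate.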

Although Equation (4.1) can in principle be evaluated directly, we will follow a different approach that readily generalizes to quantum field theories (Coleman, 1977; Brown and Weinberg, 2007). The idea is to use the equation of motion, which for unit mass reads ẍ = −W′(x).

The region (x1, x2) is classically forbidden, since W(x) − E > 0 there. We can apply a trick, however, by analytically continuing time to an imaginary value, τ = it, which gives a Euclidean equation of motion with the sign of the potential term reversed, d²x/dτ² = +W′(x).

The most notable feature of these equations is that the potential has effectively been inverted. This means that we can find a classical solution that rolls through the barrier between the turning points x1 and x2. If we can find this solution, it allows us to re-express the integral in Equation (4.1) as

B = SE[xB] − SE[xfv],

where SE is the Euclidean action corresponding to Equation (4.3),

SE[x] = ∫ dτ [ (1/2)(dx/dτ)² + W(x) ],

while xB(τ) is a bounce solution of the Euclidean equations of motion satisfying x′(τ1) = x′(τ2) = 0, and xfv(τ) is a constant solution, sitting in the false vacuum with energy E. The "bounce" solution is so named because we see, by energy conservation, that it starts at x1, rolls down the inverted potential before "bouncing" off x2 and rolling back. By finding this solution and evaluating its action, we can compute the rate for tunneling through a barrier.

This argument generalizes straightforwardly to many-body quantum systems, where we use the action

SE[qi] = ∫ dτ [ (1/2) Σi (dqi/dτ)² + W(q) ].

With more than one degree of freedom, however, there are actually an infinite number of paths that qi(τ) could take when passing through the barrier, corresponding to an infinite number of solutions. However, since the decay rate depends exponentially on the action, Γ ∝ e^(−SE[qi]), it is clear that only the solution with the smallest Euclidean action will contribute significantly, as this dominates the decay rate (in other words, the tunneling takes the "path of least resistance").

The generalization from a many-body system, qi, to a quantum field theory with a scalar field φ(x) is then straightforward,

SE[φ] = ∫ d⁴x [ (1/2)(∂φ)² + V(φ) ].

The integral here is over flat four-dimensional Euclidean space, and note that the opposite sign of the potential, relative to the Lorentzian action, leads to an opposing sign in the equations of motion,

∂²φ = V′(φ).

Although it is tempting to interpret V(φ) as the potential to be tunneled through, this is only somewhat true. The analog of W(qi) in Equation (4.6) is a functional of the field configuration φ(x), given by an integral over three-dimensional space,

W[φ] = ∫ d³x [ (1/2)(∇φ)² + V(φ) ],

where ∇φ represents the spatial derivative of the field. In the analogy with quantum mechanics, the gradient term should be considered part of the potential, as its many-body equivalent is a nearest-neighbor interaction between adjacent degrees of freedom, qi, qi+1. This means, in particular, that while in quantum mechanics the particle emerges after tunneling at a point x2 with the same potential energy, W(x1) = W(x2), in quantum field theory the field emerges lower down the potential V.

In a field theory, the analog of x2 is a field configuration φ(x), given by slicing the bounce solution at its mid-way point. This is a nucleated "true-vacuum" bubble, and the corresponding decay rate is determined by the Euclidean action of the bounce solution, φB. As we will see in section 4.7, the dominant Euclidean solutions have O(4) symmetry, which means that the bubble nucleates with O(3, 1) symmetry. This causes it to expand at near the speed of light, converting the space around the nucleation point to the true vacuum and releasing energy into the bubble wall. Apart from the destruction that this would unleash, and the different masses of fundamental particles in the bubble interior, the result is also gravitational collapse of the bubble (Coleman and De Luccia, 1980), making its nucleation in our past light-cone completely incompatible with the trivial observation that the vacuum has not decayed (yet).

In cosmological applications, and in other areas, it is also important to consider the effect of thermally induced fluctuations over the barrier. Brown and Weinberg (2007) describe how thermal effects can be included in the above argument. At non-zero temperature, we must integrate over the possible excited states, each weighted by the Boltzmann factor and an energy-dependent decay exponent,

Γ ∝ ∫ dE e^(−βE−B(E)),

where B(E) is the (energy-dependent) difference in Euclidean action between the bounce solution and the excited state of energy E. This integral is dominated by the energy that minimizes the exponent βE + B(E), which is easily shown to satisfy

β = −dB/dE = 2(τ2 − τ1),

where τ1, τ2 are the initial and final values in imaginary time of the (energy-dependent) bounce solution. In other words, the bounce solution is periodic in imaginary time, with period controlled by the temperature.

In quantum field theory, the decay rate per unit volume and time of a metastable vacuum was first discussed by Coleman (Coleman, 1977; Callan and Coleman, 1977), and is given by

Γ/V = A e^(−B),  with  A = (B/2π)² |det′ S″E(φB) / det S″E(φfv)|^(−1/2),

where

B = SE[φB] − SE[φfv]

is the difference between the Euclidean action of a so-called bounce solution φB of the Euclidean (Wick-rotated) equations of motion, and the action of the constant solution φfv which sits in the false vacuum. S″ denotes the second functional derivative of the Euclidean action of a given solution, and det′ denotes the functional determinant after extracting the four zero-mode fluctuations which correspond to translations of the bounce (these are responsible for the formula giving a decay rate per unit volume). Precise calculations of the prefactor A in the Standard Model were performed in Isidori et al. (2001), and involve computing the fluctuations around the bounce solution of all fields that couple to the Higgs. This requires renormalizing the loop corrections and, to avoid double-counting, expanding around the tree-level bounce rather than the bounce in the loop-corrected potential.

In the gravitational case, the prefactor A is harder to compute. The main issue is that it includes both Higgs and gravitational fluctuations, and without a way of renormalizing the resulting graviton loops, the calculation becomes much harder. Various attempts have been made to do this using the fluctuations discussed in section 4.5 (see Dunne and Wang, 2006 Lee and Weinberg, 2014 Koehn et al., 2015 for example), but a full description, especially for the Standard Model case, is not yet available.

In most cases, it is reasonable to estimate the prefactor A using dimensional analysis. Because A has dimension four, one would expect

A ≈ μ⁴,

where μ is the characteristic energy scale of the instanton solution. Due to the exponential dependence on the decay exponent, B, this will not lead to large errors, and therefore we will use this result in the absence of more accurate estimates.
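To see why a rough, dimensional-analysis prefactor suffices, consider the following back-of-the-envelope comparison. The numbers B ≈ 1800 and μ ~ 10^17 GeV are representative of the SM figures quoted elsewhere in this section; the script itself is only illustrative.

```python
# Illustrate that Gamma ~ A exp(-B) with A ~ mu^4 is insensitive to the
# precise choice of mu: changing mu by a factor of 10 changes A by 10^4,
# which shifts ln(Gamma) by only ~9, negligible next to B ~ 1800.
import math

B = 1800.0                                    # representative decay exponent

def ln_gamma(mu_gev):
    # ln of the decay rate per unit volume, up to mu-independent constants
    return 4 * math.log(mu_gev) - B

shift = abs(ln_gamma(1e18) - ln_gamma(1e17))  # mu varied by a factor of 10
assert abs(shift - 4 * math.log(10)) < 1e-9   # shift is ~9.2
assert shift / B < 0.01                       # tiny compared to the exponent
```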

4.2. Asymptotically Flat Spacetime at Zero Temperature

In flat Minkowski space, the bounce solution corresponds to a saddle point of the Euclidean action with one negative eigenvalue (see section 4.5). Since Equation (4.12) depends exponentially on the bounce action, only the lowest-action bounce solutions will contribute. In flat space, it is always the case that the lowest-action solution has O(4) symmetry (Coleman et al., 1978). This means that the equations of motion for the bounce can be reduced to

φ″(r) + (3/r) φ′(r) = V′(φ),

subject to the boundary conditions φ′(0) = 0 and φ(r → ∞) → φfv. These ensure that the bounce action is finite and thus gives a non-zero contribution to the decay rate. There are always trivial solutions corresponding to the minima of the potential V(φ), but they do not contribute to vacuum decay because they have no negative eigenvalues.
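The reduced bounce equation can be integrated numerically. The sketch below assumes the quartic potential V(φ) = −(|λ|/4)φ⁴ and its standard analytic Fubini profile φ(r) = √(8/|λ|) rB/(r² + rB²) as a benchmark (neither appears explicitly in the extracted text); it checks that an ODE integration from regular initial data reproduces the analytic solution.

```python
# Integrate the O(4)-symmetric bounce equation for V(phi) = -(lam/4) phi^4,
#   phi''(r) + (3/r) phi'(r) = V'(phi) = -lam * phi^3,
# outward from r ~ 0, and compare with the analytic Fubini profile
#   phi(r) = sqrt(8/lam) * rB / (r^2 + rB^2).
import numpy as np
from scipy.integrate import solve_ivp

lam, rB = 1.0, 1.0

def phi_exact(r):
    return np.sqrt(8.0 / lam) * rB / (r**2 + rB**2)

def rhs(r, y):
    phi, dphi = y
    return [dphi, -3.0 * dphi / r - lam * phi**3]

r0 = 1e-6   # start just off the coordinate singularity at r = 0
sol = solve_ivp(rhs, [r0, 10.0], [phi_exact(r0), 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

r_test = 5.0
assert abs(sol.sol(r_test)[0] - phi_exact(r_test)) < 1e-3
```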

For example, in a theory with a constant negative quartic coupling, that is,

V(φ) = (λ/4) φ⁴  with  λ < 0,

there exists the Lee-Weinberg or Fubini bounce (Fubini, 1976; Lee and Weinberg, 1986). This is a solution of the form

φ(r) = √(8/|λ|) rB/(r² + rB²),

where the arbitrary parameter rB characterizes the size of the bounce (and thus the nucleated bubble). This arbitrary parameter appears in the theory because the potential Equation (4.17) is conformally invariant, and thus bounces of all scales contribute equally, with action

B = 8π²/(3|λ|).

In fact, similar bounces contribute approximately in the Standard Model, where the running of the couplings breaks this approximate conformal symmetry, so that bounces with size of order the scale at which λ is most negative (the minimum of the running curve λ(μ)) dominate the decay rate (Isidori et al., 2001).
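The Fubini bounce can be checked symbolically. This sketch assumes the standard normalization φ(r) = √(8/|λ|) rB/(r² + rB²) and the standard action value 8π²/(3|λ|), which are not displayed explicitly in the extracted text.

```python
# Symbolic check that the Fubini/Lee-Weinberg profile solves the
# O(4)-symmetric bounce equation for V(phi) = -(lam/4) phi^4,
#   phi'' + (3/r) phi' = V'(phi) = -lam phi^3,
# and that its Euclidean action equals 8 pi^2 / (3 lam).
import sympy as sp

r, rB = sp.symbols("r r_B", positive=True)
lam = sp.symbols("lambda", positive=True)   # lam = |lambda|

phi = sp.sqrt(8 / lam) * rB / (r**2 + rB**2)

# The equation-of-motion residual should vanish identically
eom = sp.diff(phi, r, 2) + 3 * sp.diff(phi, r) / r + lam * phi**3
assert sp.simplify(eom) == 0

# Euclidean action: S = 2 pi^2 * int_0^oo r^3 [ phi'^2/2 - lam phi^4/4 ] dr
integrand = r**3 * (sp.diff(phi, r) ** 2 / 2 - lam * phi**4 / 4)
S = 2 * sp.pi**2 * sp.integrate(integrand, (r, 0, sp.oo))
assert sp.simplify(S - 8 * sp.pi**2 / (3 * lam)) == 0
```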

The complete calculation would also include gravity, and would therefore involve finding the corresponding saddle point of the action with the Einstein-Hilbert term included, where R denotes the Ricci scalar. The leading gravitational correction to Equation (4.19) was computed in Isidori et al. (2008).

Another approach is to solve the bounce equations numerically, which makes it possible to use the exact field and Einstein equations and the full effective potential. The difference is a second-order correction (Isidori et al., 2008). Using the tree-level RGI effective potential (2.23), the full numerical result including gravitational effects for Mt = 173.34 GeV, Mh = 125.15 GeV, αS(MZ) = 0.1184 and minimal coupling ξ = 0 was obtained in Rajantie and Stopyra (2017).

A non-minimal value of the Higgs curvature coupling ξ changes the action and the shape of the bounce solution, and thus the scale that dominates tunneling (Isidori et al., 2008; Czerwinska et al., 2016; Rajantie and Stopyra, 2017; Salvio et al., 2016; Czerwinska et al., 2017). Figure 5 shows the bounce action B as a function of ξ, computed numerically in Rajantie and Stopyra (2017). As the plot shows, the action is smallest near the conformal value ξ = 1/6. For ξ ≈ 1/6, the result agrees well with the perturbative calculation of Salvio et al. (2016).

For comparison, for the same parameters, the numerically computed decay exponent in flat space (Rajantie and Stopyra, 2017) is very close to the full gravitational result with the conformal coupling ξ = 1/6, and the analytical approximation (4.19) with μmin = 2.79 × 10^17 GeV gives a similar value.

Calculations of the prefactor A show that the decay rate (4.12) is well approximated by the expression obtained in Isidori et al. (2001), with the numerical value corresponding to the action (4.22). This agrees with the estimate from dimensional analysis (4.14). Note, however, that the rate is very sensitive to the top quark and Higgs boson masses, and also to higher-dimensional operators (Branchina and Messina, 2013; Branchina et al., 2015).

Figure 5. Plot of the decay rate for a flat false vacuum for different values of the non-minimal coupling, ξ. The minimal action is obtained close to the conformal value ξ = 1/6, and agrees well with the flat space result (4.24). Originally published in Rajantie and Stopyra (2017).

The presence of a small black hole can catalyze vacuum decay and make it significantly faster (Gregory et al., 2014; Burda et al., 2015a,b, 2016; Tetradis, 2016). The action of the vacuum decay instanton in the presence of a seed black hole is determined by the masses Mseed and Mremnant of the seed black hole and the leftover remnant black hole. For black holes of mass Mseed ≲ 10^5 MP ≈ 1 g the vacuum decay rate becomes unsuppressed. This can be interpreted (Tetradis, 2016; Mukaida and Yamada, 2017) as a thermal effect due to the black hole temperature Tseed = MP²/Mseed. The catalysis of vacuum decay does not necessarily rule out cosmological scenarios with primordial black holes, because positive values of the non-minimal coupling ξ would suppress vacuum decay in the presence of a black hole (Canko et al., 2018).
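Plugging in numbers for the relation Tseed = MP²/Mseed quoted above confirms the orders of magnitude; the reduced Planck mass and the GeV-to-gram conversion used below are standard constants, not taken from the text.

```python
# Order-of-magnitude check of the black-hole-catalyzed decay numbers,
# using T_seed = M_P^2 / M_seed in natural units.
M_P = 2.435e18            # reduced Planck mass in GeV (standard value)
GeV_to_grams = 1.783e-24  # 1 GeV/c^2 expressed in grams (standard value)

M_seed = 1e5 * M_P        # seed mass at which decay becomes unsuppressed
T_seed = M_P**2 / M_seed  # Hawking-like temperature from the quoted relation

# The threshold mass is indeed of order a gram, and T_seed = M_P / 1e5
assert 0.1 < M_seed * GeV_to_grams < 1.0
assert abs(T_seed - M_P / 1e5) < 1e-6 * M_P
```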

4.3. Non-zero Temperature

The presence of a heat bath with non-zero temperature has a significant impact on the vacuum decay rate Γ (Anderson, 1990; Arnold and Vokos, 1991). On one hand, the thermal bath modifies the effective potential of the Higgs field, and on the other hand, as discussed in section 4.1, it modifies the process itself, because the decay can start from an excited state rather than the vacuum state.

At one-loop level, the finite-temperature effective potential can be written in the form derived in Arnold and Vokos (1991), where the coefficients ni and masses Mi² are given in Table 1 (taking H = 0). In the high-temperature limit, T ≫ Mh, the thermal fluctuations give rise to a positive contribution to the quadratic term of the potential, of order +T²φ². This raises the height of the potential barrier, and therefore would appear to suppress the decay rate.

At non-zero temperatures the decay process is described by a periodic instanton solution with period β in the Euclidean time direction. In the high-temperature limit, the solution becomes independent of the Euclidean time, and has the interpretation of a classical sphaleron configuration. The instanton action is therefore given by

B = Esph/T,

where Esph is the energy of the sphaleron, the three-dimensional saddle-point configuration analogous to the Coleman bounce (4.16), which satisfies the equation

φ″(r) + (2/r) φ′(r) = V′(φ).

Using the approximation of constant negative λ, the action can be computed analytically (Arnold and Vokos, 1991). Because the parameter γ appearing in the result satisfies γ ≪ 1, this action is smaller than the zero-temperature action (4.19). Therefore the net effect of the non-zero temperature is to increase the vacuum decay rate compared to the zero-temperature case.

More accurately, the sphaleron solutions have been calculated numerically in Delle Rose et al. (2016) and Salvio et al. (2016). At high temperatures, T ≳ 10^16 GeV, the action is at its smallest; as the temperature decreases, the action increases, so that B(10^14 GeV) ~ 400.

Salvio et al. (2016) obtained fully four-dimensional instanton solutions numerically, without assuming independence of the Euclidean time, and found that the three-dimensional sphaleron solutions always have the lowest action and are therefore the dominant solutions. They also showed that including the two-loop corrections to the quadratic term (4.30), or the one-loop correction to the Higgs kinetic term, gives only a small correction to the action.

Taking the prefactor into account as well, the vacuum decay rate at non-zero temperature has been computed in Espinosa et al. (2008) and Delle Rose et al. (2016).

4.4. Vacuum Decay in de Sitter Space

In extending from flat space to curved space, the theorem (Coleman et al., 1978) that guarantees O(4) symmetry of the bounce no longer applies. There is some evidence, however, that in background metrics which respect this symmetry, O(4)-symmetric solutions should still dominate (Masoumi and Weinberg, 2012). This includes the special case of particular interest in this review: an inflationary, or de Sitter, background. A Wick-rotated metric can be placed in a co-ordinate system that makes the O(4) symmetry of the bounce immediately manifest,

ds² = dχ² + a²(χ) dΩ₃²,

where χ is a radial variable, dΩ₃² is the 3-sphere metric, and a²(χ) is a scale factor that physically describes the radius of curvature of a surface at constant χ. The bounce equations of motion then take the form (Coleman and De Luccia, 1980)

φ″ + 3(a′/a) φ′ = V′(φ),

a′² = 1 + (a²/3MP²) ((1/2)φ′² − V(φ)).

We will consider the case in which the false vacuum has a positive energy density, V(φfv) > 0, and therefore a non-zero Hubble rate H, given by H² = V(φfv)/(3MP²).

The boundary conditions the bounce solution must satisfy require special attention: a(0) = 0 is required because of the definition of a(χ) as a radius of curvature of a surface of constant χ, while we require φ′(0) = φ′(χmax) = 0, where χmax > 0 is defined by a(χmax) = 0. These boundary conditions prevent the co-ordinate singularities at χ = 0, χmax from giving infinite results, but allow for the peculiar property that the bounces are compact, and do not approach the false vacuum anywhere.

One way of understanding this peculiar feature was discussed by Brown and Weinberg (2007). They considered vacuum decay in de Sitter space, specifically in static patch co-ordinates, where the metric takes the form

ds² = (1 − H²r²) dt² − (1 − H²r²)^(−1) dr² − r² dΩn−2²,

where dΩn−2² is the (n − 2)-sphere metric (in this case, n = 4). The important feature of these co-ordinates is that they are valid only up to the horizon at r = 1/H. The Euclidean action can then be re-written as an integral over this static region, where hij is the remaining spatial metric. Brown and Weinberg interpreted this to mean that tunneling takes place on a compact Euclidean space, with a curved three-dimensional geometry. This compactness is reflected in the boundary conditions φ′(0) = φ′(χmax) = 0, which inevitably produce a compact bounce solution. They observed that the same effect would be seen when considering a spatially curved universe with this same spatial geometry, but with a non-zero temperature T = H/(2π).

This corresponds to the Gibbons-Hawking temperature of de Sitter space (Gibbons and Hawking, 1977), and implies that bounces in de Sitter space may have a thermal interpretation.

The simplest solution of Equations (4.37) and (4.38) is the Hawking-Moss solution (Hawking and Moss, 1982). This is a constant solution, in which φ = φbar sits at the top of the barrier for the entire Euclidean period, and the scale factor is given by

a(χ) = HHM^(−1) sin(HHM χ),  with  HHM² = V(φbar)/(3MP²).

Hence χmax = π/HHM. The action difference of Equation (4.13) is then easily computed analytically to be

BHM = 24π²MP⁴ (1/V(φfv) − 1/V(φbar)).

A particularly important limit is that in which ΔV(φbar) = V(φbar) − V(φfv) ≪ V(φfv). In that case, Equation (4.44) is approximately

BHM ≈ 8π²ΔV(φbar)/(3H⁴),

where H² = V(φfv)/(3MP²) is the background Hubble rate. The prefactor (4.14) in the decay rate can be expected to be at the Hubble scale, and therefore the vacuum decay rate due to the Hawking-Moss instanton can be approximated by

Γ ≈ H⁴ e^(−BHM).

Equation (4.45) has a simple thermal interpretation: it is the energy required to excite an entire Hubble volume, 4π/(3H³), from the false vacuum to the top of the barrier, divided by the background Gibbons-Hawking temperature (4.42). Therefore it can be understood as Boltzmann suppression in classical statistical physics.
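The thermal interpretation can be verified in one line of algebra: dividing the energy of a Hubble volume raised to the barrier top by the Gibbons-Hawking temperature T = H/(2π) reproduces the small-ΔV Hawking-Moss exponent B ≈ 8π²ΔV/(3H⁴). This closed form is a standard result, assumed here rather than taken from the extracted text.

```python
# Verify: [ (4 pi / (3 H^3)) * DeltaV ] / [ H / (2 pi) ] == 8 pi^2 DeltaV / (3 H^4)
import sympy as sp

H, dV = sp.symbols("H DeltaV", positive=True)

T_GH = H / (2 * sp.pi)                            # Gibbons-Hawking temperature
E_hubble = sp.Rational(4, 3) * sp.pi / H**3 * dV  # Hubble volume times DeltaV

B_thermal = sp.simplify(E_hubble / T_GH)
B_HM_smallV = 8 * sp.pi**2 * dV / (3 * H**4)      # assumed small-DeltaV HM exponent

assert sp.simplify(B_thermal - B_HM_smallV) == 0
```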

The bounce equations (4.37) and (4.38) also often have Coleman-de Luccia (CdL) instanton solutions, in which the field increases monotonically from φ(0) < φbar to φ(χmax) > φbar. For low false-vacuum Hubble rates, H ≪ μmin, a CdL solution can be found as a perturbative correction to Equation (4.18), with the action computed in Shkerin and Sibiryakov (2015).

Numerical HM and CdL bounce solutions in the Standard Model were found in Rajantie and Stopyra (2018) and the corresponding actions are shown in Figure 6, for the parameters Mh = 125.15 GeV, Mt = 173.34 GeV, αS = 0.1184. We can see that at low Hubble rates, the CdL solution has a lower action than the HM solution. For example, for the case of background Hubble rate H = 1.1937 × 10^8 GeV, the numerical result is BCdL = 1805.8 in a fixed de Sitter background metric, and BCdL = 1808.26 including gravitational back-reaction. The CdL action is also almost independent of the Hubble rate.

Figure 6. CdL bounce decay exponent plotted against the Hawking-Moss solution in the Standard Model with Mt = 173.34 GeV, Mh = 125.15 GeV, αS(MZ) = 0.1184. The critical values Hcrit = 1.193 × 10^8 GeV and Hcross = 1.931 × 10^8 GeV are also plotted, along with B0, the bounce action obtained at H = 0.

On the other hand, the Hawking-Moss action (4.44) decreases rapidly as the Hubble rate increases. It crosses below BCdL at the Hubble rate Hcross (Rajantie and Stopyra, 2018).

At Hubble rates below this, H < Hcross, vacuum decay is dominated by the Coleman-de Luccia instanton, which describes quantum tunneling through the potential barrier, whereas above it, H > Hcross, the dominant process is the Hawking-Moss instanton. This is discussed further in section 4.6.

In addition to the HM and CdL solutions, one may also find oscillating solutions (Hackworth and Weinberg, 2005; Weinberg, 2006; Lee et al., 2015, 2017), which cross the top of the barrier φbar multiple times between χ = 0 and χ = χmax, and additional CdL-like solutions with higher action (Hackworth and Weinberg, 2005; Rajantie and Stopyra, 2018). The latter were found numerically in the Standard Model in Rajantie and Stopyra (2018). Because these solutions have a higher action than the HM and CdL solutions, they are highly subdominant as vacuum decay channels. Oscillating solutions also have more than one negative eigenvalue (Dunne and Wang, 2006; Lavrelashvili, 2006).

4.5. Negative Eigenvalues

In order for a stationary point of the action to describe vacuum decay, it has to have precisely one negative eigenvalue. The reason is that the decay rate of a metastable vacuum is determined by the imaginary part of the energy computed from the effective action (Callan and Coleman, 1977), and thus only solutions that contribute an imaginary part to the vacuum energy play a role in metastability.

This requirement enters via the functional determinant which encodes the quantum corrections to the bounce solution. This functional determinant is a product over the eigenvalues λn of the fluctuations around the relevant bounce solution. In flat space, these all satisfy (Callan and Coleman, 1977)

[−∂² + V″(φB)] δφn = λn δφn,

where φB is the solution expanded around. The O(4)-symmetric bounce solutions in flat space can be shown to have at least one negative eigenvalue, since they possess zero modes corresponding to translations of the bounce around the space-time. In fact, there must be only one such negative eigenvalue: solutions with more negative eigenvalues do not contribute to tunneling rates because, while they are stationary points of the Euclidean action, they are not minima of the barrier penetration integral (4.1) obtained from the WKB approximation (Coleman, 1985).

The situation is somewhat different in the gravitational case, however, because in addition to the scalar field, we can also consider metric fluctuations about a bounce solution. A quadratic action for fluctuations about a bounce in curved space was first derived by Lavrelashvili et al. (1985) and has been considered by several authors (Lavrelashvili, 2006; Lee and Weinberg, 2014; Koehn et al., 2015). This takes the gauge invariant form

and f is a complicated function of a and φ which can be found in Lee and Weinberg (2014), Lavrelashvili (2006), and Koehn et al. (2015). The analysis of this Lagrangian is complicated, but some conclusions can be drawn. To begin with, it is possible to argue that, expanded around a CdL bounce solution, this action always has an infinite number of negative eigenvalues. This is the so-called “negative mode problem” (Lavrelashvili, 2006; Lee and Weinberg, 2014; Koehn et al., 2015). The argument, as expressed in Lee and Weinberg (2014), is that we can re-write Q using Equation (4.38) as

Note that the bounce always has a point satisfying ȧ = 0, which is the largest value obtained by a(χ). Consequently, there is always a region where Q is negative, so for the l = 0 modes it is possible to construct a negative kinetic term in Equation (4.50). This means that sufficiently rapidly varying fluctuations will have their action unbounded below, so there is an infinite tower of high frequency, rapidly oscillating fluctuations that all have negative eigenvalues. Note that for l = 1 the quadratic Lagrangian is zero (these are the zero-modes associated with translations of the bounce), and for l > 1 both numerator and denominator in Equation (4.50) are negative, so the kinetic terms are always positive. Since Q = 1 in flat space (obtained by taking the MP → ∞ limit), it is clear that these “rapidly oscillating” modes are somehow associated with the gravitational sector.

At first this seems concerning; however, it was pointed out in Lee and Weinberg (2014) that these high-frequency oscillations are inherently associated with quantum gravity contributions, and thus may not affect tunneling. If we focus on the “slowly varying” modes, their structure is much more similar to that of the analogous flat space bounces. The conclusion we should draw, then, is that a solution is relevant only if it has a single slowly varying negative eigenvalue.

4.6. Hawking-Moss/Coleman-de Luccia Transition

As discussed in section 4.4, there are two types of solutions that contribute to vacuum decay in de Sitter space. The first is the Hawking-Moss solution (4.43), and the second is the Coleman-de Luccia solution, which crosses the barrier once. By considering the negative eigenvalues of the HM solution, one gains insight into which solutions exist and contribute to vacuum decay at a given Hubble rate.

The eigenvalues of the Hawking-Moss solution are (Lee and Weinberg, 2014)

Because V″(φbar) < 0, the N = 0 mode is self-evidently negative, and has degeneracy 1. Higher modes are all positive if and only if

This imposes a lower bound on HHM, below which the Hawking-Moss solution has multiple negative eigenvalues, and hence cannot contribute to vacuum decay (Coleman, 1985; Brown and Weinberg, 2007). An alternative way of expressing this is in terms of a critical Hubble rate. If we define H² = V(φfv)/(3MP²) to be the background Hubble rate in the false vacuum, then the condition for Hawking-Moss solutions to contribute to vacuum decay is H > Hcrit, where

Here, ΔV(φ) ≡ V(φ) − V(φfv). However, the second term generally only contributes significantly if the difference in height between the top of the barrier and the false vacuum is comparable to the Planck mass. For most potentials, only the second derivative at the top of the barrier matters.
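The mode-counting argument behind this threshold is easy to check numerically. The sketch below uses the Hawking-Moss fluctuation spectrum λ_N = V″(φbar) + N(N+3)H² as quoted from Lee and Weinberg (2014); the spectrum as written here and the toy value of V″(φbar) are assumptions, not taken from the text (dimensionless toy units throughout):

```python
import math

# Hawking-Moss fluctuation spectrum, lambda_N = V''(phi_bar) + N(N+3) H^2,
# as quoted from Lee and Weinberg (2014); toy dimensionless units.
def hm_eigenvalue(N, H, Vpp_bar):
    return Vpp_bar + N * (N + 3) * H**2

Vpp_bar = -1.0   # V''(phi_bar) < 0 at the top of the barrier (toy value)

# The N = 0 mode is negative for any H, since V''(phi_bar) < 0.
assert hm_eigenvalue(0, 0.3, Vpp_bar) < 0

# The N = 1 mode turns positive when 4 H^2 > -V''(phi_bar), giving the
# critical Hubble rate below which HM has more than one negative mode.
H_crit = math.sqrt(-Vpp_bar) / 2
assert hm_eigenvalue(1, 1.01 * H_crit, Vpp_bar) > 0   # single negative mode: contributes
assert hm_eigenvalue(1, 0.99 * H_crit, Vpp_bar) < 0   # extra negative mode: does not
```

This reproduces the statement above that, for most potentials, only the second derivative at the top of the barrier sets the threshold.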

At low Hubble rates, H < Hcrit, the Hawking-Moss solution does not contribute to vacuum decay, but on the other hand, a CdL solution is guaranteed to exist (Balek and Demetrian, 2004). In most potentials, the CdL solution smoothly merges into the Hawking-Moss solution as the Hubble rate approaches Hcrit from below, and the Hawking-Moss solution becomes relevant (Balek and Demetrian, 2005; Hackworth and Weinberg, 2005). Close to the critical Hubble rate, H ~ Hcrit, one can define the quantity (Tanaka and Sasaki, 1992; Balek and Demetrian, 2005; Joti et al., 2017)

which divides potentials into two classes (Balek and Demetrian, 2005; Rajantie and Stopyra, 2018). Those with Δ < 0 are “typical” potentials, for which the perturbative solution only exists for H < Hcrit (Balek and Demetrian, 2005), while those with Δ > 0 only have perturbative solutions for H > Hcrit. When a perturbative solution exists, its action is given by (Balek and Demetrian, 2005)

where φ0 is the value of the bounce on the true vacuum side (which approaches φHM in the H → Hcrit limit) and φbar is the top of the barrier.

Hence one can see that if Δ < 0, a CdL solution with lower action, BCdL < BHM, exists for H < Hcrit, and approaches the Hawking-Moss solution smoothly as H → Hcrit, until it vanishes at Hcrit. At the same point, the second eigenvalue of the HM solution turns positive, and therefore the HM solution starts to contribute to vacuum decay.

On the other hand, if Δ > 0, which is the case for the Standard Model Higgs potential (Rajantie and Stopyra, 2018), the perturbative CdL solution exists only for H > Hcrit. Below Hcrit, the HM solution has two negative eigenvalues, which means that it does not contribute to vacuum decay. Instead, the relevant solution is the CdL solution, which also has a lower action (see Figure 6). When the Hubble rate is increased, a second, perturbative CdL solution appears smoothly at H = Hcrit, at the same time as the second eigenvalue of the HM solution becomes positive. At H > Hcrit there are, therefore, at least two distinct CdL solutions, and in fact, numerical calculations indicate that there are at least four (Rajantie and Stopyra, 2018). For the parameters used in Figure 6, the critical Hubble rate is Hcrit = 1.193 × 10⁸ GeV.

4.7. Evolution of Bubbles After Nucleation

The bounce solution φB determines the field configuration to which the vacuum state tunnels (Callan and Coleman, 1977; Brown and Weinberg, 2007), and therefore sets the initial conditions for its later evolution. It is the equivalent of the second turning point on the true vacuum side, x2, appearing in Equation (4.1). In ordinary quantum mechanics, a particle with energy E emerges on the true vacuum side of the barrier at x2(E) after tunneling. This is related to the bounce solution, which starts at x1, rolls until reaching x2, and then bounces back to x1; thus x2 represents a slice of the bounce solution half way through.

In complete analogy, the field emerges at a configuration corresponding to a slice half way through the bounce solution (in Euclidean time). In flat space tunneling, the bounce is φB(χ) where χ² = τ² + r², and thus touches the false vacuum at τ → ±∞. Hence the mid-way point occurs at τ = 0 and the solution emerges with φ(x, 0) = φB(r). One can then use this as an initial condition at t = 0 for the Lorentzian field equations,

However, this is not really necessary, as the O(4) symmetry of the bounce solution carries over into an O(3, 1) symmetry of the Lorentzian solution (Callan and Coleman, 1977), and thus the solution can be read off as

From this one can see that the bubble wall moves outwards, asymptotically approaching the speed of light. The inside of the light cone corresponds to an anti-de Sitter spacetime collapsing into a singularity (Espinosa et al., 2008, 2015; Burda et al., 2016; East et al., 2017).

The situation in de Sitter space is considerably more complicated, but the conclusion is the same (Brown and Weinberg, 2007). First, de Sitter bounces can be thought of as bounces at finite temperature on a curved spatial background described by constant time slices of the static patch of de Sitter space,

The temperature in this case is the Gibbons-Hawking temperature (4.42) of de Sitter space. Bounces at finite temperature β = 1/kBT correspond to periodic bounces in Euclidean space (Brown and Weinberg, 2007), with period τperiod = β. In this case, the bounce starts at the false vacuum at τ = −π/H, hits its mid-point at τ = 0, and returns to the false vacuum side at τ = π/H. Thus, the τ = 0 hypersurface describes the final state of the field after tunneling.

Analytic continuation of the metric back to real space can be performed using the approach of Burda et al. (2016). The O(4) symmetric Euclidean metric is of the form

where in the de Sitter case,

Since it is straightforward to analytically continue the flat space metric back to real space via the transformation τ = it, the same can be done with any conformally flat metric, by changing variables to τ̃, r̃ such that

which is achieved by choosing f(χ) such that f′(χ) = f/a, f(0) = 0. In the de Sitter case, this means

where C is an arbitrary constant - we can choose it to be 1. This co-ordinate system is obtained from the O(4) symmetric co-ordinates via

One then transforms back to real space exactly as in flat space, via τ̃ = it. The co-ordinate χ is then related to t̃ and r̃ via

It should be noted that t̃, r̃ as defined only cover the r̃ > t̃ portion of de Sitter space. Because the metric is manifestly conformally flat in these co-ordinates, we can see that this corresponds to the portion of de Sitter space outside the light-cone, which lies at r̃ = ±t̃.
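The defining condition f′(χ) = f/a with f(0) = 0 can be sanity-checked by finite differences. The closed forms a(χ) = sin(Hχ)/H and f(χ) = tan(Hχ/2) (with C = 1) are not reproduced in the text above, so they are assumed here as the standard Euclidean de Sitter results:

```python
import math

# Check f'(chi) = f/a with f(0) = 0 for Euclidean de Sitter. The closed
# forms a(chi) = sin(H chi)/H and f(chi) = tan(H chi/2) (C = 1) are
# assumed standard results, not taken from the displayed equations.
H = 0.7

def a(chi):
    return math.sin(H * chi) / H

def f(chi):
    return math.tan(H * chi / 2)

assert f(0.0) == 0.0
eps = 1e-6
for chi in (0.3, 0.9, 1.5):
    fprime = (f(chi + eps) - f(chi - eps)) / (2 * eps)   # central difference
    assert abs(fprime - f(chi) / a(chi)) < 1e-6          # f' = f/a holds
```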

Doing this for de Sitter yields the real space metric

which at first glance, is not obviously de Sitter space. However, the transformation

can be readily shown to yield Equation (4.61), thus this is indeed a valid analytic continuation of the Euclidean 4-sphere back to de Sitter space.

To describe the subsequent evolution of the bubble, it is argued in Burda et al. (2016) that φ(r, t) = φB(χ(r, t)) matches the symmetry of the O(4) symmetric bounce, just as in flat space, with χ(r, t) defined by Equation (4.68). As mentioned before, this describes only the evolution of the scalar field outside the light-cone. For r̃ < t̃, it is necessary to solve the field equations directly. That calculation demonstrates explicitly that the formation of a singularity in the negative-potential region is inevitable (Burda et al., 2016), confirming previous calculations using the thin wall approximation in Coleman and De Luccia (1980).

As for the evolution outside the light-cone, it can be seen that, much as in flat space, a point of constant field value φ0, corresponding to χ0 where φ0 = φ(χ0), satisfies

which means that it rapidly approaches the speed of light as t̃ → ∞. Thus, just as in flat space, the bubble expands outwards at the speed of light.

Even if the bubble wall moves outward at the speed of light, it does not necessarily grow to fill the whole Universe, if it is trapped behind an event horizon. Scenarios in which bubbles of true vacuum form primordial black holes have been discussed (Hook et al., 2015; Kearney et al., 2015; Espinosa et al., 2018a,b). This can happen if inflation ends before the space inside the bubble hits the singularity. When the Universe reheats, thermal corrections (4.28) stabilize the Higgs potential, halting the interior's collapse towards the singularity. The bubble then collapses into a black hole, and the primordial black holes produced in this way could potentially constitute part or all of the dark matter in the Universe (Espinosa et al., 2018a). This scenario requires fine tuning to avoid the singularity, or new heavy degrees of freedom that modify the potential at high field values (Espinosa et al., 2018b). The same scenario can also produce potentially observable gravitational waves (Espinosa et al., 2018).

How Essential is the Vacuum Energy to Our Present Model of the Expanding Universe?

Hawaii Institute for Unified Physics, Kailua Kona, HI, USA

Copyright © 2019 by author(s) and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

Received: February 9, 2019 Accepted: March 10, 2019 Published: March 13, 2019

There is a 122 orders of magnitude discrepancy between the vacuum energy density at the cosmological scale and the vacuum density predicted by quantum field theory. This disagreement is known as the cosmological constant problem or the “vacuum catastrophe”. Utilizing a generalized holographic model, we consider the total mass-energy density in the geometry of a spherical shell universe (as a first order approximation) and find an exact solution for the currently observed critical density of the universe. We discuss the validity of such an approach and consider its implications to cosmogenesis and universal evolution.

Keywords: Cosmological Constant, Critical Density, Dark Matter, Holographic Mass Solution, Vacuum Energy

The vacuum energy density predicted by quantum field theory disagrees with cosmological observation by approximately 122 orders of magnitude. It is one of the biggest disagreements between theory, experiment and observation and is known as the vacuum catastrophe [1] . To resolve this discrepancy, we first review the fundamental nature of the vacuum energy density and its relationship to the cosmological constant.

The Einstein field equations of general relativity include a constant Λ known as the cosmological constant. Originally included to allow for static homogeneous solutions to Einstein’s equations, it was subsequently removed when the expansion of the universe was discovered [2]. However, the expansion of the universe was later found to be accelerating [3], and many cosmological models have been put forward with a nonzero Λ, e.g. the de Sitter, steady-state and Lemaître models, where Λ acts as an additional expanding (dark energy) force.

With the inclusion of the cosmological constant, Einstein’s field equations are:

R_μν − (1/2)Rg_μν + Λg_μν = (8πG/c⁴)T_μν (1)

where R_μν is the Ricci curvature tensor, g_μν is the metric tensor, R is the scalar curvature and T_μν is the stress-energy tensor, which is modeled as a perfect fluid such that:

T_μν = (ρ + P/c²)U_μU_ν + Pg_μν (2)

Assuming the Robertson-Walker metric, in which the rest frame of the fluid coincides with that of a co-moving observer, the Einstein equations reduce to the two Friedman equations:

(ȧ/a)² = 8πGρ/3 + Λ/3 − c²k/(a²R_o²) (3)

ä/a = −(4πG/3)(ρ + 3P/c²) (4)

where a is the scale factor, k is the curvature constant and R_o is the radius of the observable universe (i.e. R_t = a_t r, where r is the co-moving radius).

Based on astronomical observations the current cosmological model states that we live in a flat, Λ dominated, homogeneous and isotropic universe, composed of radiation, baryonic matter and non-baryonic dark matter [3] - [8] .

The Friedman equation for a flat universe (i.e. k = 0 ) is thus given in the form:

H² = (ȧ/a)² = 8πGρ/3 + Λ/3 (5)

If we then take the assumption that the universe is pervaded by a form of energy (i.e. dark energy), which is the current consensus in both cosmology and particle physics [9] [10] [11], then the cosmological constant can be interpreted as an energy density [12] [13] and given in terms of the dark energy density as Λ = 8πGρ_Λ. Note that this result can also be found by assuming a static universe (i.e. ȧ = 0).

In either case the Friedman equation thus takes the form:

H² = (ȧ/a)² = (8πG/3)(ρ + ρ_Λ) (6)

Friedman’s solutions imply that there is a critical density at which the universe is flat; the ratio of the total mass-energy density to the critical density is known as the density parameter Ω = ρ/ρ_crit, and it is currently measured as Ω ∼ 1 [6] [8] [14].

The contributions to this density parameter come from the vacuum density (dark energy), Ω_Λ = 0.683; the dark matter, Ω_d = 0.268; and the baryonic matter, Ω_b = 0.049; totaling Ω_T = 1 [14].

The Friedman equation thus takes the form of an Einstein-de Sitter model in which the cosmological constant is coupled to the density:

(ȧ/a)² = (8πG/3)(ρ_b + ρ_d + ρ_Λ) = (8πG/3)(0.049ρ_crit + 0.268ρ_crit + 0.683ρ_crit) = (8πG/3)ρ_crit (7)

where ρ_b is the density due to baryonic matter, ρ_d is the density due to dark matter, ρ_Λ is the density due to dark energy, and ρ_crit = 3H_o²/(8πG).

Using the current value of H_o = 67.4 ± 0.5 km·s⁻¹·Mpc⁻¹ for Hubble’s constant [14] gives the critical density at the present time as ρ_crit = 8.53 × 10⁻³⁰ g/cm³, and thus ρ_b = 0.049ρ_crit = 4.18 × 10⁻³¹ g/cm³, ρ_d = 0.268ρ_crit = 2.29 × 10⁻³⁰ g/cm³ and ρ_Λ = 0.683ρ_crit = 5.83 × 10⁻³⁰ g/cm³. The vacuum energy density at the cosmological scale is thus of the order 10⁻³⁰ g/cm³.

However, quantum field theory determines the vacuum energy density by summing the zero-point energies ℏω/2 over all oscillatory modes (see reference [1] for a more detailed overview). As quantum fluctuations predict infinite oscillatory modes [15] [16], this yields an infinite result unless renormalized at the Planck cutoff. Utilizing such a cutoff value, the vacuum energy density is found to be:

ρ_vac = c⁵/(ℏG²) = m_l/l³ = 5.16 × 10⁹³ g/cm³ (8)

where m_l = 2.18 × 10⁻⁵ g is the Planck mass and l = 1.616 × 10⁻³³ cm is the Planck length. This value is well supported by both theory and experimental results [17] - [23].

The cosmological vacuum energy density determined from observations, ρ_vac = 5.83 × 10⁻³⁰ g/cm³, is therefore in disagreement with the vacuum energy density at the Planck cutoff predicted by quantum field theory, ρ_vac = 5.16 × 10⁹³ g/cm³. This discrepancy is a significant 122 orders of magnitude and is thus known as the “vacuum catastrophe”.
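Both densities and their ratio are easy to reproduce. The sketch below assumes CGS, CODATA-rounded values for c, ℏ and G (the paper does not quote the constants it uses):

```python
import math

# CGS constants (CODATA-rounded; assumed, not quoted in the paper)
c = 2.998e10       # cm/s
hbar = 1.055e-27   # erg s
G = 6.674e-8       # cm^3 g^-1 s^-2
Mpc = 3.086e24     # cm

# Critical density rho_crit = 3 H_o^2 / (8 pi G) for H_o = 67.4 km/s/Mpc
H0 = 67.4e5 / Mpc
rho_crit = 3 * H0**2 / (8 * math.pi * G)   # ~8.5e-30 g/cm^3
rho_vac_cosmo = 0.683 * rho_crit           # dark-energy share, ~5.8e-30 g/cm^3

# Planck-cutoff vacuum density, Equation (8): rho_vac = c^5 / (hbar G^2)
rho_planck = c**5 / (hbar * G**2)          # ~5.2e93 g/cm^3

orders = math.log10(rho_planck / rho_vac_cosmo)
print(f"discrepancy: {orders:.1f} orders of magnitude")   # the ~122 orders quoted in the text
```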

Possible attempts to solve this discrepancy, as reviewed by Weinberg [24], include introducing a scalar field coupled to gravity in such a way that ρ_vac is automatically cancelled when the scalar field reaches equilibrium [25]. A second approach imagines a deep symmetry that is not apparent in the effective field theory but nevertheless constrains its parameters so that ρ_vac is zero or small [26]. Then there is the idea of quintessence, which states that the cosmological constant is small because the universe is old, and thus imagines a scalar field that rolls down a potential governed by a field equation [27] [28] [29]. When such a slowly varying scalar field is minimally coupled to gravity it can lead to the observed acceleration of the Universe [30]. This idea of quintessence has further been supported by the recent conjecture offered by Obied [31] to explain why string theory has not been able to construct a meta-stable de Sitter vacuum. They found that the resulting “allowed” universe points to an expanding universe in which the vacuum energy decreases at a rate above a specific lower limit, i.e. a quintessent universe [31] [32] [33].

Finally, anthropic considerations place a bound on positive ρ_vac by requiring that it not be so large as to prevent the formation of galaxies [34]. Using the simple spherical in-fall model of Peebles [35], the upper bound gives ρ_vac as no larger than the cosmic mass density at the time of earliest galaxy formation (z = 5), which is approximately 200 times the present mass density and thus a big improvement on the 122 orders of magnitude. Therefore, as yet, the “vacuum catastrophe” is unresolved.

2. The Generalized Holographic Model

In previous work [36] [37] , a quantized solution to gravity is given in terms of Planck Spherical Units (PSU) in a generalized holographic approach. A brief description of this solution is given below.

Following the holographic principle of ‘t Hooft [38] , based on the Bekenstein-Hawking formulae for the entropy of a black hole [39] [40] , the surface and volume entropy of a spherical system is explored. The holographic bit of information is defined as an oscillating Planck spherical unit (PSU), given as,

These PSUs, or Planck “voxels”, tile along the area of a spherical surface horizon, producing a holographic relationship with the interior information mass-energy density (see Figure 1).

In this generalized holographic approach, it is therefore suggested that the information/entropy of a spherical surface horizon should be calculated in spherical bits and thus defines the surface information/entropy in terms of PSUs, such that,

where the Planck area, taken as one unit of information/entropy, is the equatorial disk of a Planck spherical unit, πr_l², and A is the surface area of a spherical system. We note that in this definition the entropy is slightly greater (≈5 times) than that set by the Bekenstein bound, as the proportionality constant is taken to be unity (instead of 1/4 as in the Bekenstein-Hawking entropy). It has been previously suggested that the quantum entropy of a black hole may not exactly equal A/4 [41]. To differentiate between models, the information/entropy S encoded on the surface boundary in the generalized holographic model is termed η ≡ S.

As first proposed by ‘t Hooft, the holographic principle states that the description of a volume of space can be encoded on its surface boundary, with one discrete degree of freedom per Planck area, which can be described as Boolean variables evolving with time [42].

Following the definition for surface information η , the information/entropy within a volume of space is similarly defined in terms of PSU as,

R = V/((4/3)πr_l³) = r³/r_l³ (11)

where V is the volume of the spherical entity and r is its radius.

In previous work [36] [37] , it was demonstrated that the holographic relationship between the transfer energy potential of the surface information and the volume information, equates to the gravitational mass of the system. It was thus found that for any black hole of Schwarzschild radius r S the mass m S can be given as,

where η is the number of PSUs on the spherical surface horizon and R is the number of PSUs within the spherical volume. Hence, a holographic gravitational mass equivalence to the Schwarzschild solution is obtained in terms of a discrete granular structure of spacetime at the Planck scale, giving a quantized solution to gravity in terms of Planck spherical units (PSUs). It should be noted that this view of the interior structure of the black hole in terms of PSUs is supported by the concept of black hole molecules and their relevant number densities as proposed by Miao and Xu [43] and Wei and Liu [44]. As well, the relationship between the interior structure in terms of “voxels” and the connecting horizon pixels is discussed in the work of Nicolini [45].

Of course, these considerations lead to the exploration of the clustering of the structure of spacetime at the nucleonic scale, where it was found that a precise value for the mass m p and charge radius r p of a proton can be given as,

m_p = 2(η/R)m_l = 2φm_l (13)

r_p = 4l(m_l/m_p) = 0.841236(28) × 10⁻¹³ cm (14)

where φ ≡ η/R is defined as a fundamental holographic ratio. Significantly, this value is within 1σ agreement with the latest muonic measurements of the charge radius of the proton [36] [37], relative to a 7σ variance in the standard approach [46].
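Equations (13) and (14) can be cross-checked numerically. The PSU radius r_l = l/2 used below is an assumption inferred from the paper's Planck density ρ_l = m_l/PSU = 9.86 × 10⁹³ g/cm³; with it, φ = η/R reduces to 2l/r_p and Equation (13) reproduces the proton mass:

```python
import math

# Planck units in CGS (CODATA-rounded; assumed)
l = 1.616e-33        # Planck length, cm
m_l = 2.18e-5        # Planck mass, g
r_l = l / 2          # PSU radius: assumption inferred from rho_l = m_l/PSU

r_p = 0.8412e-13     # muonic proton charge radius, cm (value quoted in the text)

# Surface and volume entropy of a sphere of radius r_p
eta = 4 * math.pi * r_p**2 / (math.pi * r_l**2)   # surface entropy, A / (pi r_l^2)
R = r_p**3 / r_l**3                               # volume entropy, Equation (11)
phi = eta / R                                     # fundamental holographic ratio, = 2l/r_p

m_p = 2 * phi * m_l                               # Equation (13)
print(f"m_p = {m_p:.4e} g")                       # ~1.67e-24 g, the observed proton mass
```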

3. Resolving the Vacuum Catastrophe

To resolve the vacuum catastrophe, we must first understand where the value for the vacuum energy density at the Planck scale comes from. As previously defined [36] [37], and summarized above, the physical structure and thus energy density at this scale is more appropriately represented in terms of PSUs, such that the vacuum energy density at the Planck scale, ρ_l, can be given as,

ρ_l = m_l/PSU = 9.86 × 10⁹³ g/cm³.

The vacuum energy density at the quantum scale is thus ρ_l = 9.86 × 10⁹³ g/cm³, instead of the value ρ_vac = 5.16 × 10⁹³ g/cm³ given in Equation (8).

The generalized holographic model describes how any spherical body can be considered in terms of its PSU packing, or volume entropy, R. The mass-energy M_R, in terms of PSUs, can therefore be given as M_R = Rm_l, and the mass-energy density as ρ_R = M_R/V.

In the case of the proton, the mass-energy in terms of Planck mass was calculated as M_R = Rm_l = 2.45 × 10⁵⁵ g, which is equivalent to the mass of the observable universe (i.e. M_u = 136 × 2²⁵⁶ × m_p = N_Edd m_p = 2.63 × 10⁵⁵ g in terms of the Eddington number, and M_u ≈ 3.63 × 10⁵⁵ g from density measurements). Since these values for the mass of the observable universe are approximations, we take the mass of the observable universe to be the mass-energy of the proton, as calculated above. The mass-energy density of the universe can thus be defined in terms of the mass-energy density of the proton. Thus, at the cosmological scale the mass-energy density, or vacuum energy density, is calculated to be,

ρ_u = ρ_R = M_R/V_U = Rm_l/V_U = 2.26 × 10⁻³⁰ g/cm³ = 0.265ρ_crit (15)

where V_U = 1.08 × 10⁸⁵ cm³, found by taking r_U as the Hubble radius r_H = c/H_o = 1.37 × 10²⁸ cm. Thus, when the vacuum energy density of the Universe is considered in terms of the proton density and the proton’s PSU packing (i.e. its volume entropy, R), we find the density scales by a factor of 10¹²². As well, it should be noted that this value for the mass-energy density is found to be equivalent to the dark matter density, ρ_d = 0.268ρ_crit.

Similarly, the vacuum energy density can be considered in terms of the PSU surface tiling (i.e. its surface entropy, η ), as the radius expands from the Planck scale ρ l to the cosmological scale. The vacuum density at the cosmological scale is thus given as,

ρ_u = ρ_l/η = 8.53 × 10⁻³⁰ g/cm³ (= ρ_crit) (16)

where η is found by assuming a spherical shell Universe of radius r U = r H . The resulting change in density, from the vacuum density at the Planck scale to that at the cosmological scale yields an exact equivalent to the currently observed critical density of the universe, ρ c r i t . Thus, when we consider the generalized holographic approach, which describes how any spherical body can be considered in terms of its PSU packing, we show the scale relationship between the PSUs and a spherical shell universe and resolve the 122 orders of magnitude discrepancy between the vacuum energy density at the Planck scale and the vacuum energy density at the cosmological scale.
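Equations (15) and (16) can be sketched numerically as follows, assuming CGS, CODATA-rounded constants and the PSU radius r_l = l/2 (inferred from the paper's ρ_l = m_l/PSU):

```python
import math

# CGS constants (CODATA-rounded; assumed, not quoted in the paper)
c, G, Mpc = 2.998e10, 6.674e-8, 3.086e24
l, m_l = 1.616e-33, 2.18e-5       # Planck length (cm), Planck mass (g)
r_l = l / 2                       # PSU radius: assumption inferred from rho_l = m_l/PSU

H0 = 67.4e5 / Mpc                 # Hubble constant, s^-1
r_H = c / H0                      # Hubble radius, ~1.37e28 cm
V_U = 4 / 3 * math.pi * r_H**3    # ~1.08e85 cm^3

# Equation (15): proton PSU packing (volume entropy R) spread over the Hubble volume
r_p = 0.8412e-13                  # proton charge radius, cm
R = r_p**3 / r_l**3
rho_u_volume = R * m_l / V_U      # ~2.3e-30 g/cm^3, close to the dark-matter density

# Equation (16): Planck density diluted by the surface entropy of a Hubble-radius shell
eta = 4 * r_H**2 / r_l**2
rho_l = m_l / (4 / 3 * math.pi * r_l**3)   # ~9.86e93 g/cm^3
rho_u_surface = rho_l / eta                # ~8.5e-30 g/cm^3

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # for comparison
print(rho_u_volume, rho_u_surface, rho_crit)
```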

The solution presented here is in line with the ideas of quintessence, in which the mass-energy density is governed by the scale factor η_φ⁻¹, such that ρ_φ = ρ_l/η_φ for η_φ > η_l. Following this approach, the Friedman equation can then be written in the form:

H_φ² = (8πG/3)ρ_φ = (8πG/3)(ρ_l/η_φ) (17)

which can also be given in terms of the varying radius, such that ρ_φ = (ρ_l/4)(r_l/r_φ)² for r_φ > r_l, and the Friedman equation becomes:

H_φ² = (8πG/3)ρ_φ = (8πG/3)(ρ_l/4)(r_l/r_φ)² = (2πG/3)ρ_l(r_l/r_φ)² (18)

These findings are in agreement with those of Ali and Das [47] who, in an attempt to resolve the current problems of cosmology, interpret one of the quantum correction terms in the second order Friedman equation as dark energy. From the quantum corrected Raychaudhuri equations they find the first correction term Λ_Q = 1/L_0², where L_0 is identified as the current linear dimension of our observable universe, such that Λ_Q = 10⁻¹²³ in Planck units.

Essentially, they add the correction term Λ_Q = r_l²/L_0², whereas we include the scale factor r_l²/r_φ². However, their solution gives a purely quantum mechanical description of the universe, assuming quantum gravity effects are practically absent, whereas the results described here show how, as the density changes with radius, we have a scalar field that is coupled to gravity and thus rolls down a potential governed by a generalized holographic quantized solution to gravity [36].

Similar scale-invariant models have also been proposed by Maeder [48] [49] [50] who, much like Milgrom’s modified Newtonian dynamics (MOND) [51] [52] [53], defines a limit where scale invariance is applicable at large scales (i.e. low accelerations in MOND). In his model Maeder utilizes a new co-ordinate system, derived from scale-invariant tensor analysis, and much like Milgrom and Verlinde [54] he finds an additional factor, κ_v, that opposes gravity. Interestingly, and in line with our findings, Maeder notes that with this new co-ordinate system both the pressure and density are not scale invariant.

It should as well be noted that the equivalence found between the critical density and that found from the surface entropy (Equation (16)) yields a critical mass that obeys the Schwarzschild solution for a universe whose radius is the Hubble radius,

M_crit = (ρ_l/η)V_u = m_l/φ = 9.24 × 10⁵⁵ g (≡ r_s c²/(2G)) (19)
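Equation (19) can be checked against the Schwarzschild mass of a sphere of Hubble radius, again assuming CGS, CODATA-rounded constants and a PSU radius r_l = l/2 (inferred from ρ_l = m_l/PSU):

```python
import math

# CGS constants (CODATA-rounded; assumed) and PSU radius r_l = l/2
c, G, Mpc = 2.998e10, 6.674e-8, 3.086e24
l, m_l = 1.616e-33, 2.18e-5
r_l = l / 2

H0 = 67.4e5 / Mpc
r_H = c / H0                                 # Hubble radius, cm
V_U = 4 / 3 * math.pi * r_H**3
eta = 4 * r_H**2 / r_l**2                    # surface entropy of a Hubble-radius shell
rho_l = m_l / (4 / 3 * math.pi * r_l**3)     # Planck (PSU) density

M_crit = rho_l / eta * V_U                   # Equation (19), ~9.2e55 g
M_schw = r_H * c**2 / (2 * G)                # mass whose Schwarzschild radius is r_H
print(M_crit, M_schw)
```

The two masses agree to within the rounding of the assumed constants.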

The idea that the observable universe is the interior of a black hole was originally put forward by Pathria [55] and Good [56], and more recently by Popławski [57]. If such a solution holds true, then this would give us the perfect opportunity to study the interior of a black hole.

Previous attempts to resolve the vacuum catastrophe include large quantum corrections (e.g. [47] [58]). However, such theories offer no physical explanation, and although solutions such as that of Zlatev [59] [60] do not depend on any fine tuning of the initial conditions, fine-tuning is still required to set the energy density of the scalar field equal to the energy density of matter and radiation at the present time, i.e. at the cross-over from matter-dominated to scalar-field (or vacuum) dominated. This was the weak point of Hoyle’s [12] steady-state universe: although he was able to show expansion properties with the introduction of the space-time vector C, no physical explanation was proposed.

The solution described in this paper utilizes the generalized holographic approach [36], offering a physical explanation that is inherent within the equations of general relativity, such that no correction terms are necessary. Renormalization still occurs, where the cutoff is the Planck spherical unit (PSU), which is based on the fundamental constants of nature (within our universe at least).

Similarly, Huang [61], who presents a super-fluid model of the universe, attempts to solve the fine-tuning problem by assuming a self-interacting complex scalar field that emerges with the big bang. The potential (defined as the Halpern-Huang potential) then grows from zero as the length scale expands (i.e. it is asymptotically free), and the cosmological constant, in terms of a high-energy cut-off, decreases with the expanding universe.

The nature of the fundamental constants and the large dimensionless numbers resulting from their relationships has been a long-standing puzzle (e.g. [62] - [69]), and concepts such as a variable G [66] [67] [68] [70] and continuous matter creation have been introduced [66]. The relationship between the number of particles in the universe and Weyl’s ratio [62] [71] showed that the number of particles in the universe should increase proportionally to the square of the age of the universe, and therefore matter must be continually created. Steady-state cosmology, previously suggested by Hoyle [12] and Einstein [72], offered such a concept, but with a constant G, as opposed to Dirac’s variable G. In previous work [73] this was resolved by suggesting that it is the mass-energy density that is changing, and not G. In this paper we show that the mass-energy density decreases with the increasing size of the universe, so although the number of particles in the universe is increasing, with continuous matter creation the energy/information is conserved: particles passing out of the observable universe are compensated by the creation of new particles, and it is only through the creation of matter that an expanding universe can be consistent with the conservation of mass within the observable universe.

Figure 1. Schematic to illustrate the Planck Spherical Units (PSU) packed within a spherical volume.

The standard model of the universe (i.e. concordance ΛCDM) explains the accelerated expansion of the universe in terms of a negative pressure generated by the so-called dark energy. However, although in good agreement with CMB, large-scale structure and SNeIa data, it is not yet able to explain the coincidence (fine-tuning) problem or the cosmological constant problem. As noted by Corda (2009) [74], extended theories of gravity (e.g. theories in which the Lagrangian is modified by adding higher-order terms in the curvature invariants, or terms with scalar fields non-minimally coupled to geometry) generate inflationary frameworks which solve many of these problems, including the accelerated expansion. This is in agreement with the theory presented here, in which the acceleration of the universe can be explained in terms of a pressure gradient due to the information transfer potential at the horizon. The details are beyond the scope of this paper and will be addressed in a follow-up paper.

In summary, we have shown how the generalized holographic model resolves the 122-orders-of-magnitude discrepancy between the vacuum energy density at the Planck scale and the vacuum energy density at the cosmological scale, not only resolving this long-standing problem in physics but also validating the geometrical approach. The details in terms of matter creation and the expansion rate are beyond the scope of this paper and will be addressed in a forthcoming paper. The results presented here have profound implications for astrophysics, cosmogenesis, universal evolution and quantum cosmology, giving incentive to further exploration and development.
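The size of the discrepancy addressed here is easy to reproduce numerically. A minimal sketch of the standard textbook estimate (not the holographic calculation of this paper; the round-number H0 and dark-energy fraction are our assumptions) compares the Planck energy density with the observed dark-energy density:

```python
import math

# Fundamental constants (SI)
hbar = 1.055e-34   # reduced Planck constant, J*s
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8     # speed of light, m/s

# Planck-scale vacuum energy density: one Planck energy per Planck
# volume, equivalently c^7 / (hbar * G^2), roughly 1e113 J/m^3
rho_planck = c**7 / (hbar * G**2)

# Observed dark-energy density: ~69% of the critical density for an
# assumed H0 of 70 km/s/Mpc (illustrative round numbers)
H0 = 70 * 1000 / 3.086e22                        # Hubble constant, 1/s
rho_crit = 3 * H0**2 * c**2 / (8 * math.pi * G)  # critical density, J/m^3
rho_lambda = 0.69 * rho_crit

orders = math.log10(rho_planck / rho_lambda)
print(f"discrepancy: ~10^{orders:.0f}")  # roughly 122-123 orders of magnitude
```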

The authors would like to thank Dr. Elizabeth Rauscher, Dr. Michael Hyson, Professor Bernard Carr and Dr. Ines Urdaneta for their helpful notes and discussions, Marshall Lefferts and Andy Day for the use of their diagram (Figure 1), as well as the Royal Society for presenting the research in its preliminary stages at the 2015 satellite discussion meeting.

The authors declare no conflicts of interest regarding the publication of this paper.

Haramein, N. and Val Baker, A. (2019) Resolving the Vacuum Catastrophe: A Generalized Holographic Approach. Journal of High Energy Physics, Gravitation and Cosmology, 5, 412-424.

    Adler, R.J., Casey, B. and Jacob, O.C. (1995) Vacuum Catastrophe: An Elementary Exposition of the Cosmological Constant Problem. American Journal of Physics, 63, 620-626.
    Hubble, E.P. (1929) A Relation between Distance and Radial Velocity among Extra-Galactic Nebulae. Proceedings of the National Academy of Sciences of the United States of America, 15, 168-173.
    Riess, A.G., Filippenko, A.V., Challis, P., Clocchiatti, A. and Diercks, A. (1998) Observational Evidence from Supernovae for an Accelerating Universe and a Cosmological Constant. The Astronomical Journal, 116, 1009-1038.
    Schmidt, B.P., Suntzeff, N.B., Phillips, M.M., Schommer, R.A. and Clocchiatti, A. (1998) The High-Z Supernova Search: Measuring Cosmic Deceleration and Global Curvature of the Universe Using Type Ia Supernovae. APJ, 507, 46-63.
    Perlmutter, S., Aldering, G., Goldhaber, G., Knop, R.A. and Nugent, P. (1999) Measurements of Ω and Λ from 42 High-Redshift Supernovae. APJ, 517, 565-586.
    Spergel, D.N., Verde, L., Peiris, H.V., Komatsu, E., Nolta, M.R., Bennett, C.L., Halpern, M., Hinshaw, G., Jarosik, N., Kogut, A., Limon, M., Meyer, S.S., Page, L., Tucker, G.S., Weiland, J.L., Wollack, E. and Wright, E.L. (2003) First Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Determination of Cosmological Parameters. APJ, 148, 175-194.
    Eisenstein, D.J., Zehavi, I., Hogg, D.W., Scoccimarrow, R. and Blanton, M.R. (2005) Detection of the Baryon Acoustic Peak in the Large-Scale Correlation Function of SDSS Luminous Red Galaxies. APJ, 633, 560-574.
    Hinshaw, G., Larson, D., Komatsu, E., Spergel, D.N. and Bennett, C.L. (2013) Nine-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Parameter Results. APJ, 208, 19H.
    Zel’dovich, Y.B. (1968) The Cosmological Constant and the Theory of Elementary Particles. Soviet Physics Uspekhi, 11, 381-393.
    Bludman, S.A. and Ruderman, M.A. (1977) Induced Cosmological Constant Expected above the Phase Transition Restoring the Broken Symmetry. Physical Review Letters, 38, 255-257.
    Carroll, S.M. (2001) The Cosmological Constant. Living Reviews in Relativity, 4, 1.
    Hoyle, F. (1948) A New Model for the Expanding Universe. MNRAS, 108, 372-382.
    Guth, A.H. (1981) Inflationary Universe: A Possible Solution to the Horizon and Flatness Problems. Physical Review D, 23, 347-356.
    Aghanim, N., Akrami, Y., Ashdown, M., Aumont, J., et al. Planck 2018 Results. VI. Cosmological Parameters. Planck Collaboration, arXiv:1807.06209v1.
    Sparnaay, M.J. (1958) Measurements of Attractive Forces between Flat Plates. Physica, 24, 751-764.
    Wheeler, J.A. (1962) Geometrodynamics. Academic Press, New York and London.
    Sabisky, E.S. and Anderson, C.H. (1973) Verification of the Lifshitz Theory of the van der Waals Potential Using Liquid-Helium Films. Physical Review A, 7, 790-806.
    Eberlein, C. (1996) Sonoluminescence as Quantum Vacuum Radiation. Physical Review Letters, 76, 3842-3845.
    Lamoreaux, S.K. (1997) Demonstration of the Casimir Force in the 0.6 to 6 μm Range. Physical Review Letters, 78, 5-8.
    Bordag, M., Mohideen, U. and Mostepanenko, V.M. (2001) New Developments in the Casimir Effect. Physics Reports, 353, 1-205.
    Beck, C. and Mackey, M.C. (2007) Measurability of Vacuum Fluctuations and Dark Energy. Physica A: Statistical Mechanics and Its Applications, 379, 101-110.
    Capasso, F., Munday, J.N. and Parsegian, V.A. (2009) Measured Long-Range Repulsive Casimir-Lifshitz Forces. Nature, 457, 170-173.
    Wilson, C.M., Johansson, G., Pourkabirian, A., Simoen, M., Johansson, J.R., Duty, T., Nori, F. and Delsing, P. (2011) Observation of the Dynamical Casimir Effect in a Superconducting Circuit. Nature, 479, 376-379.
    Weinberg, S. (2000) Approaches to the Cosmological Constant Problem. 4th International Symposium on Sources and Detection of Dark Matter/Energy in the Universe, Los Angeles.
    Weinberg, S. (1989) The Cosmological Constant Problem. Reviews of Modern Physics, 61, 1.
    Witten, E. (1995) Is Supersymmetry Really Broken? International Journal of Modern Physics, 10, 1247-1248.
    Peebles, P.J.E. and Ratra, B. (1988) Cosmology with a Time-Variable Cosmological Constant. The Astrophysical Journal, 325, L17-L20.
    Ratra, B. and Peebles, P.J.E. (1988) Cosmological Consequences of a Rolling Homogeneous Scalar Field. Physical Review D, 37, 3406.
    Wetterich, C. (1988) Cosmology and the Fate of Dilatation Symmetry. Nuclear Physics B, 302, 668-696.
    Tsujikawa, S. (2013) Quintessence: A Review. Classical and Quantum Gravity, 30, 214003.
    Obied, G., Ooguri, H., Spodneiko, L. and Vafa, C. De Sitter Space and the Swampland. arXiv:1806.08362.
    Denef, F., Hebecker, A. and Wrase, T. (2018) de Sitter Swampland Conjecture and the Higgs Potential. Physical Review D, 98, Article ID: 086004.
    Agrawal, P., Obied, G., Steinhardt, P. and Vafa, C. (2018) On the Cosmological Implications of the String Swampland. Physics Letters B, 784, 271-276.
    Weinberg, S. (1987) Anthropic Bound on the Cosmological Constant. Physical Review Letters, 59, 2607.
    Peebles, P.J.E. (1967) The Gravitational Instability of the Universe. The Astrophysical Journal, 147, 859.
    Haramein, N. (2013) Quantum Gravity and the Holographic Mass. Physical Review & Research International, 3, 270-292.
    Haramein, N. (2013) Addendum to “Quantum Gravity and the Holographic Mass” in View of the 2013 Muonic Proton Charge Radius Measurement.
    ‘t Hooft, G. (2000) The Holographic Principle. in Basics and Highlights in Fundamental Physics. Proceedings of the International School of Subnuclear Physics, Erice, 1-15.
    Bekenstein, J. (1973) Black Holes and Entropy. Physical Review D, 7, 2333-2346.
    Hawking, S. (1975) Particle Creation by Black Holes. Communications in Mathematical Physics, 43, 199-220.
    Dabholkar, A. (2005) Exact Counting of Black Hole Microstates.
    ‘t Hooft, G. (1993) Dimensional Reduction in Quantum Gravity. arXiv:gr-qc/9310026.
    Miao, Y.-G. and Xu, Z.-M. (2019) Interaction Potential and Thermo-Correction to the Equation of State for Thermally Stable Schwarzschild Anti-de Sitter Black Holes. Science China Physics, Mechanics and Astronomy, 62, 10412.
    Wei, S.-W. and Liu, Y.-X. (2015) Insight into the Microscopic Structure of an AdS Black Hole from Thermodynamical Phase Transition. Physical Review Letters, 115, Article ID: 111302.
    Nicolini, P. and Singleton, D. (2014) Connecting Horizon Pixels and Interior Voxels of a Black Hole. Physics Letters B, 738, 213-217.
    Antognini, A., Nez, F., Schuhmann, K., Amaro, F.D., et al. (2013) Proton Structure from the Measurement of 2S-2P Transition Frequencies of Muonic Hydrogen. Science, 339, 417-420.
    Ali, F.A. and Das, S. (2015) Cosmology from Quantum Potential. Physics Letters B, 741, 276-279.
    Maeder, A. (2017) An Alternative to the Lambda CDM Model: The Case of Scale Invariance. APJ, 834.
    Maeder, A. (2017) Scale Invariant Cosmology and CMB Temperatures as a Function of Redshifts. APJ, 847.
    Maeder, A. (2017) Dynamical Effects of the Scale Invariance of the Empty Space: The Fall of Dark Matter? APJ, 849.
    Milgrom, M. (1983) A modification of the Newtonian Dynamics as a Possible Alternative to the Hidden Mass Hypothesis. APJ, 270.
    Milgrom, M. (1983) A Modification of the Newtonian Dynamics—Implications for Galaxies. APJ, 270.
    Milgrom, M. (1983) A Modification of the Newtonian Dynamics—Implications for Galaxy Systems. APJ, 270, 384.
    Verlinde, E.P. (2017) Emergent Gravity and the Dark Universe. SciPost: SciPost Physics, 2.
    Pathria, R.K. (1972) The Universe as a Black Hole. Nature, 240, 298-299.
    Good, I.J. (1972) Chinese Universes. Physics Today, 25, 15.
    Poplawski, N.J. (2010) Radial Motion into an Einstein-Rosen Bridge. Physics Letters B, 687, 110-113.
    Ashtekar, A., Pawlowski, T. and Singh, P. (2006) Quantum Nature of the Big Bang. Physical Review Letters, 96, Article ID: 141301.
    Zlatev, I., Wang, L. and Steinhardt, P.J. (1999) Quintessence, Cosmic Coincidence, and the Cosmological Constant. Physical Review Letters, 82, 896.
    Zlatev, I., Wang, L. and Steinhardt, P.J. (1999) Cosmological Tracking Solution. Physical Review D, 59.
    Huang, K. (2013) Dark Energy and Dark Matter in a Superfluid Universe. International Journal of Modern Physics A, 28, Article ID: 1330049.
    Weyl, H. (1917) On the Theory of Gravitation. Annalen der Physik, 54, 117.
    Eddington, A. (1931) Preliminary Note on the Masses of the Electron, the Proton, and the Universe. Mathematical Proceedings of the Cambridge Philosophical Society, 27, 15-19.
    Eddington, A. (1936) Relativity Theory of Protons and Electrons. Cambridge University Press, Cambridge.
    Eddington, A. (1946) Fundamental Theory. Cambridge University Press, Cambridge.
    Dirac, P.A.M. (1937) The Cosmological Constants. Nature, 139, 323.
    Dirac, P.A.M. (1938) A New Basis for Cosmology. Proceedings of the Royal Society A, 165, 199-208.
    Dirac, P.A.M. (1974) Cosmological Models and the Large Numbers Hypothesis. Proceedings of the Royal Society A, 338, 439-446.
    Funkhouser, S. (2006) The Large Number Coincidence, the Cosmic Coincidence and the Critical Acceleration. Proceedings of the Royal Society A, 462, 3657-3661.
    Milne, E.A. (1935) Relativity, Gravity and World Structure. Oxford University Press, Oxford.
    Weyl, H. (1919) New Extension of Relativity Theory. Annalen der Physik, 59.
    O’Raifeartaigh, C., McCann, B., Nahm, W. and Mitton, S. (2014) Einstein’s Steady-State Theory: An Abandoned Model of the Cosmos. The European Physical Journal H, 39, 353-367.
    Haramein, N., Rauscher, E.A. and Hyson, M. (2008) Scale Unification: A Universal Scaling Law. Proceedings of the Unified Theories Conference, Budapest, 2008.
    Corda, C. (2009) Interferometric Detection of Gravitational Waves: The Definitive Test for General Relativity. International Journal of Modern Physics D, 18, 2275-2282.

*Presented at the Royal Society satellite discussion meeting―Particle, condensed matter and quantum physics: links via Maxwell’s equations, 18th-19th November 2015, Chicheley Hall, Buckinghamshire, UK.

Question: Edgeless universe?

You will probably get a lot of variety in replies.
I don't think you will meet yourself coming back.

I will kick off with one idea I had long ago. If you got to the edge of the Universe (whatever that may mean) there is nothing beyond. If you set off away from the Universe you would become the edge of the Universe. In other words you need a frame of reference. The 'edge' of the Universe must be defined by something. A star ? ? ? whatever. When you set off beyond, you become the new defining point.

You will get more sophisticated replies (maybe including one from me) but I thought I would kick off with a simple one.

Enjoy your visits here. Look through the topics.



"Don't criticize what you can't understand. "

Er, actually, we do not know it yet. According to Hubble, everything is moving away from us at a rate set by the Hubble constant. And today's scientists have come to know what's moving it away: Dark Energy, the energy that makes up most of the Universe (e=mc^2). That is why the universe keeps expanding while the amount of Dark Energy seems to stay constant. And that's why the universe is not just as large as its age would suggest; it's larger than that.



Look at the heading: Edgeless universe?

You started this thread. You cannot just shut me up.
The words (the opposite of polite) (small dog) come to mind.



"The point is that the Universe is complete. By definition there is nothing beyond."

Now take that on board please


" Right now I’m looking for an answer to my question. If I travel towards a galaxy faster than it is moving away from me I will eventually get there."

The answer is, like a car, you will overtake it unless you are getting close to c.

You will have the problem that your premise may be invalid. If you and your chosen galaxy are moving apart in proportion to the distance between you, you had better pull your socks up (Hubble).

I bet you haven't even got your spacesuit ready?



If I start out in the opposite direction eventually I will also get to the galaxy.

To put it politely, this is "blowing in the wind".

The Universe is a large place. You don't have a compass. You don't have a spacesuit. You don't have FTL. You are not immortal.

Your premise is leaking badly.



To be serious, if we are going to posit totally impossible questions, involving such things as travelling FTL, non-defined positions, and lack of PPE, we are entering total imagination of the worst anthropomorphic kind.

If you treat exiting the Universe like going to the bathroom, you might find that you cannot flush away any galaxies in your way.




Good. I can be my nice friendly self

Sorry, but I still have the same problem. An edgeless Universe is directly analogous to the seamless surface of a sphere. As you say, if you accept this analogy, you start at a point, say Quito, which is virtually on the Equator. Set off in a straight line, which is actually the curved Equator, and you get back to Quito. A "straight" line from Quito could now be to the Moon, but this would be an extra dimension for the flatlander confined to the surface of the sphere. Do not be confused by my analogy of the Earth. I assume we can agree on all that?
Now if the Universe is the sphere in the analogy, as the Universe expands the surface of the sphere expands. We, as flatlanders, cannot get off the surface of the sphere (= Universe). The distance between us increases but we are still on the "surface".
The problem is we are trying to relate mere humans to the whole Universe, about which we know next to nothing. What the Universe is expanding into is a non-question. The big problem I have with the "Expanding Universe" is that we (objects) are not expanding with it. If ALL were expanding we would not know it, since our rulers would be expanding too. That, I believe, is a flaw in the expansion theory. By analogy, the surface of the sphere = Universe, and the "into" bit relates to the expansion of the surface. No, you will say, it is expanding "outwards". You are correct, but that is a different dimension, unknown to the flatlander. That dimension is unknown to us, and it is not the "into" in your question: "Into what is the Universe expanding?" The answer is: the Universe, as a surface, is expanding. Expansion perpendicular to that surface is in a dimension we cannot detect. It is not a space dimension familiar to us. Any attempt to make it so is anthropomorphic.
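For anyone who wants to see the balloon analogy in numbers, here is a quick sketch (made-up values, just to illustrate the point): fix two "flatlander" separations as angles on a sphere and double the radius; every surface distance doubles, so each point sees every other point recede at a rate proportional to its distance, with no centre and no edge anywhere on the surface.

```python
def surface_distance(theta, radius):
    """Great-circle distance between two points a fixed 'comoving'
    angle theta apart on a sphere of the given radius."""
    return radius * theta

# Two pairs of flatlanders at fixed angular separations (made-up values)
for theta in (0.1, 0.5):                      # radians
    d_before = surface_distance(theta, 1.0)   # sphere radius 1
    d_after  = surface_distance(theta, 2.0)   # sphere radius doubled
    # Every separation doubles when the radius doubles, so recession
    # rate is proportional to distance: a Hubble-like law on the surface.
    print(theta, d_before, d_after)
```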

Faith and the Expanding Universe of Georges Lemaître

On October 29th of 2018, the International Astronomical Union (IAU) voted to recommend renaming Hubble’s Law the “Hubble-Lemaître Law.” That such a vote would take place today—during a time when science and faith are portrayed in the media as implacable foes—speaks to the remarkable character of Lemaître himself, the Belgian monsignor and astronomer who made a number of fundamental contributions to the science of cosmic structure and origins. His dual career as priest and scientist puzzled many in science and in the public at large when he was alive, and his struggles to defend his “Big Bang” model of the origin of the universe against those who accused him of being religiously motivated epitomize the growing tension between science and organized religion in post-war Europe and the US.

The story we will tell about Lemaître will of necessity be selective in the details of his life, which was complex and rich enough to merit multiple biographies,[1] [2] as well as numerous articles. I want to emphasize those aspects of his career that merited the decision of the IAU, a body of over 10,000 professional astronomers, along with some other contributions that are less well-known but also deserving of recognition. Lemaître’s religious views are equally extensive and complex, and I will focus only on those that connect to his scientific work and the debates that emanated therefrom.

Georges Henri Joseph Édouard Lemaître was born July 17, 1894 in Charleroi, Belgium. At an early age he felt called to become a priest, but did not pursue ordination until after he completed his scientific education at the Catholic University of Louvain. Initially setting out to study civil engineering, he left the university to fight in World War I as an artillery officer for the Belgian army, for which he was awarded the Belgian War Cross. Returning to his education in 1918, he obtained a Docteur en Sciences (equivalent to a Ph.D.) from Louvain in 1920, with a thesis on pure mathematics. He then was ordained a priest in 1923, but having become aware of new developments in astronomy, he sought and obtained permission from his superiors to become a research associate at Cambridge University (U.K.) under the famous astronomer Sir Arthur Eddington. A year later he went on to the other Cambridge—Cambridge, MA—where he worked at Harvard College Observatory with Harlow Shapley and in 1927 was awarded a Ph.D. from MIT[3] with a thesis on the behavior of gravitational fields under general relativity.[4] In 1925 he returned to Louvain to take up a faculty position. The three short years Lemaître was abroad equipped him with the tools of general relativity and an understanding of the astronomical data of the time, by which he would quickly revolutionize contemporary understanding of the history of the universe.

To appreciate Lemaître’s contribution, we must recognize how different the astronomy of the early 20th century was from today’s. In 1917 Albert Einstein applied his theory of “general relativity,” in which gravity is the geometry of the space and time we exist in, to the universe as a whole. The size and structure of the universe was poorly known then. It was understood that the solar system—Sun, Earth, and other planets—is located in a large assemblage of billions of stars called the Milky Way Galaxy. However, the argument of the day was whether the Milky Way was in fact the entire universe. Up through the first decade of the 20th century, telescopes were not powerful enough to resolve the true nature of spiral-shaped “nebulae” (Latin for mists or clouds) as other galaxies like the Milky Way.[5] Thus when Einstein turned his theory to cosmology, the simplest assumption was that the universe is static, unchanging over countless eons of time.[6] But this posed a serious problem for Einstein, because his theory of gravity required that matter distort space in such a way that a static universe—all matter, and space itself—would simply collapse upon itself. He therefore introduced an arbitrary “cosmological constant” into his equations governing the geometry of space-time that provided a repulsive force to balance the mutual attraction of all matter, preserving a space which he envisioned as static in time.[7]

An alternative model of a static cosmos was developed in 1917 by the Dutch physicist Willem de Sitter. De Sitter solved the problem of a collapsing universe by postulating that space was empty—devoid of matter. As unrealistic as this may seem, the de Sitter universe was interesting in two ways. First, if two small bits of matter were introduced (say, two galaxies in an otherwise empty universe) they would tend to move away from each other. Second, the space de Sitter considered was flat—a departure from Einstein’s model, in which matter imposed an overall positive curvature on space such that the latter resembled the surface of a ball. The actual universe seems to be very nearly flat and after an enormous span of time will come to resemble a de Sitter universe.[8]

Lemaître wrestled with the problems of the de Sitter model while pursuing his Ph.D. By that time, 1924, astronomical observers using more powerful telescopes had succeeded in finding distance indicators that established the spiral nebulae as galaxies like the Milky Way. Hence the universe was not 100,000 light years across (the approximate diameter of the Milky Way), but rather billions of light years in size. More significantly, observers found that the light of the more distant galaxies seemed shifted toward the red end of the color spectrum relative to nearby galaxies. Various explanations for this red-shift were offered.[9]

Lemaître’s time at Harvard enabled him to be engaged with the astronomical data, and by 1927 he understood how to interpret the galactic red shifts: galaxies were moving away from each other, but not by their own motion through fixed space. His was not the static universe of Einstein, or the empty cosmos of de Sitter, but rather a universe in which space itself was expanding, in which massive galaxies embedded in that space were carried into a future in which the cosmos became ever more dilute. Galaxies appear reddened not because of the classical Doppler Effect but because the light waves themselves are stretched out by the expansion of the space through which they travel. And unlike the original de Sitter model, no observer is in a special position, no galaxy occupies a “center.”[10]
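The stretching of light waves described above can be written as a one-line formula: the observed wavelength grows by the same factor as the universe's scale factor, so the redshift z satisfies 1 + z = a_obs / a_emit. A minimal sketch (the numbers are our own illustrative choices, not from the essay):

```python
def cosmological_redshift(a_emit, a_obs):
    """Redshift z of light emitted when the scale factor was a_emit
    and observed when it has grown to a_obs: 1 + z = a_obs / a_emit."""
    return a_obs / a_emit - 1.0

# Light emitted when the universe was half its present size arrives
# with all its wavelengths doubled, i.e. a redshift of z = 1.
z = cosmological_redshift(a_emit=0.5, a_obs=1.0)
print(z)
```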

Lemaître was not the first to propose that an expanding universe would satisfy the equations of general relativity and eliminate the need for a cosmological constant. Alexander Friedmann, a Russian mathematician, published a similar solution in German journals in 1922 and 1924. However, his was a purely theoretical exercise, as he did not have access to the data. While Einstein was aware of Friedmann’s work, Lemaître, who was just finishing his thesis work when Friedmann died of typhoid fever in 1925, was not.[11] Dissemination of journals was far more difficult then, and Lemaître would soon find himself on the other side of the same problem.

In 1927 Lemaître published his seminal paper on the expanding universe.[12] His expanding cosmos filled with matter combined the best of both Einstein’s and de Sitter’s cosmologies, directly confronted the astronomical data at hand, and did not require a cosmological constant. In his universe, the velocity of recession of a galaxy would be proportional to the distance to that galaxy. He used the available astronomical data on galactic distances and redshifts to compute the constant of proportionality.[13]

Lemaître’s paper was virtually unknown and unread. The Annales of the Scientific Society of Brussels, published in French in Belgium, was simply not on the list of prominent scientific journals, nor was French a dominant language in astronomy. Two years later, in 1929, the American astronomer Edwin Hubble published a paper in the prominent Proceedings of the (US) National Academy of Sciences,[14] in which he used the much larger body of data on galactic distances and velocities then available to show empirically that there was a linear relationship between the recessional velocity and distance of a galaxy. The velocity-distance relationship he derived by plotting data on a chart became known as Hubble’s law, and the constant of proportionality the Hubble constant.
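The procedure described above amounts to fitting a straight line through the origin on a velocity-distance plot. A minimal sketch with synthetic data (the distances are made up, and the 500 km/s/Mpc slope is chosen near Hubble's badly distance-calibrated 1929 value, not his actual data):

```python
# Synthetic galaxies: distances in Mpc (made up), velocities in km/s
# generated from an assumed linear law v = H * d with H = 500 km/s/Mpc.
H_true = 500.0
distances = [0.25, 0.5, 0.9, 1.4, 2.0]
velocities = [H_true * d for d in distances]

# Least-squares slope of a line through the origin:
# H = sum(d * v) / sum(d^2)
H_fit = sum(d * v for d, v in zip(distances, velocities)) / \
        sum(d * d for d in distances)

print(f"fitted Hubble constant: {H_fit:.1f} km/s/Mpc")
```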

Hubble interpreted the recessional velocities of galaxies by appealing to de Sitter’s cosmology, in which particles would fly apart in a fixed space. He also invoked what became known as “light fatigue”—light waves would lose energy and increase in wavelength as they traveled from source to observer. Neither is correct: de Sitter’s model did not apply to the universe in which we live, and light does not lose energy as it travels through the vacuum of space. It was Lemaître’s expansion of space itself that provided a natural mechanism for the ever-greater reddening of galaxies with distance. But Hubble was unaware of Lemaître’s 1927 paper, and in any event never accepted the idea of a universe in which space itself was expanding. As late as the 1940s Hubble gave interviews in which he asserted the data to be consistent with a static cosmos[15]—an opinion now well established to be erroneous. Ironically, the man for whom the fundamental yardstick of cosmic expansion was named never accepted the idea that space was expanding.

The story would end here were it not for another consequence of Lemaître’s publishing in an obscure journal. In 1930 Arthur Eddington produced an expanding universe model virtually identical to Lemaître’s, and began to lecture on it. Upon learning from colleagues of his old Cambridge mentor’s reinvention, Lemaître reminded Eddington that he had sent him a copy of the 1927 paper. The gracious Eddington realized immediately that his former student’s choice of journal had doomed the work to obscurity and arranged for the editor of the Monthly Notices of the Royal Astronomical Society, a distinguished journal informally known to astronomers as MNRAS, to publish an English translation.[16]

The 1931 English translation of the 1927 seminal paper did nothing to establish Lemaître’s priority in deriving “Hubble’s Law,” because the key paragraph setting forth the relationship between the recession speed of galaxies and their distance, and the constant relating them, was missing. For decades intrigue swirled around this omission; theories ranged from anti-religious motives to Hubble himself intervening to save his own priority. In 2011 astronomer Mario Livio solved the mystery after combing the archives of the Royal Astronomical Society, where he discovered a cover letter enclosed with the translated manuscript to the editor of MNRAS.[17] The letter establishes that Lemaître had translated his own 1927 paper into English, and decided to omit the material on the galaxy velocity-distance relationship.

Why would Lemaître do such a thing? He knew well that by 1929, when Hubble wrote his paper, there were more data of higher accuracy that established the linear nature of the velocity-distance relationship than he had access to in 1927. When Lemaître wrote out the relationship in his original paper, he had derived it from his cosmological model, in effect predicting what better data would show two years later.[18]

By omitting the key paragraph, Lemaître lost the opportunity to have his name attached to the famous and now fundamentally important cosmological relation. Although it was easy to go back to the original 1927 paper to see what Lemaître had done, few apparently did. Further, Hubble had a big personality and was in charge of what was then the largest telescope in the world, the Mount Wilson 100-inch reflector; as a public figure he easily overshadowed the low-key Belgian priest-professor.

The story of Lemaître’s contributions to cosmology does not end there. By 1931 he had thought through the implications of his model of the cosmos, and realized that the expansion implied a beginning—a point in time at which space and all the matter within it was so compressed that the physical laws which govern the behavior of everything might not have applied. In four short paragraphs in the journal Nature, Lemaître set forth the case for a universe with a finite age, whose expansion when reversed implied a starting point so alien to the conditions found in the laboratory that normal physics would fail to describe it.[19] What came to be known as the “Big Bang” model[20] for the origin of the cosmos remains with us today, and Lemaître is universally acknowledged to be its inventor. Much has changed from the original idea: Lemaître favored a cold starting state and interpreted the then recently discovered cosmic rays to be a signature of the beginning. Today, we know that the starting state was very hot and the signature of the Big Bang is not cosmic rays, but rather a background field of mostly radio energy at a very low and almost uniform temperature—the “cosmic microwave background” or CMB. When the CMB was discovered in 1964, Lemaître was literally on his deathbed, where he learned of the validation of his model from friends.
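Lemaître's step from expansion to a beginning can be made quantitative with the crudest possible estimate: if the expansion rate had always been its present value H0, everything was on top of everything else a time 1/H0 ago. A sketch using a modern round-number H0 (our assumption; the essay quotes no value):

```python
# Hubble time: naive age of the universe assuming a constant expansion rate
H0_km_s_Mpc = 70.0          # assumed modern value, km/s/Mpc
Mpc_in_m    = 3.086e22      # metres per megaparsec
sec_per_yr  = 3.156e7       # seconds per year

H0 = H0_km_s_Mpc * 1000.0 / Mpc_in_m          # convert to 1/s
hubble_time_gyr = 1.0 / H0 / sec_per_yr / 1e9  # 1/H0 in billions of years

print(f"1/H0 ~ {hubble_time_gyr:.1f} Gyr")  # ~14 Gyr, close to the measured age
```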

While the public was fascinated with the idea of Lemaître’s model, and even further, that it was invented by a Catholic priest-scientist, many of Lemaître’s colleagues were less charmed. That the universe would have a beginning was scientifically unattractive, since it meant that some state of reality might not be accessible to scientific investigation. And it smacked of religion—a kind of scientific version of Genesis. Lemaître, who firmly asserted that his primeval atom model was a scientific hypothesis,[21] found himself at the center of a firestorm when in 1951 Pope Pius XII opened a meeting of the Pontifical Academy of Science by asserting that the primeval atom model demonstrated the existence of a Creator. The so-called “Fiat Lux” speech so mortified Lemaître that, learning of the Pope’s plans to read it again at the opening of the much larger IAU assembly in Rome, he traveled to the Vatican to plead (successfully) with the Holy Father to omit the offending portion.[22]

However, the damage had been done, which was to confirm the assumption by some of its opponents that what was by now called the “Big Bang” model had been religiously motivated. Several years prior to the 1951 Fiat Lux speech, three physicist-astronomers—Thomas Gold, Hermann Bondi, and Fred Hoyle—proposed an alternative “Steady State” model of an eternally expanding universe in which matter was being continuously created to compensate for the dilution associated with the expansion of space.[23] While some argued that the Steady State model returned a more respectable age for the cosmos,[24] the required continuous creation of matter had no compelling mechanism. Though the Steady State model was discredited by the discovery of the CMB, astronomers still seek ways to avoid what remains for many a philosophically unpleasant idea that the cosmos might have had a beginning.[25]

Over the years Lemaître produced a wide range of quotable statements on the relationship of science and faith in which he carefully circumscribed both the practice of science and the applicability of the Bible to topics beyond salvation history.[26] Nonetheless, a closer examination of Lemaître’s 1931 Nature paper reveals a somewhat more permeable boundary between these two sides of the scientist-priest. An archived early draft of the 1931 manuscript includes a final, additional paragraph, crossed out in pen. The paragraph reads “I think that every one who believes in a supreme being supporting every being and every acting [sic.], believes also that God is essentially hidden and may be glad to see how present physics provides a veil hiding the creation.”[27] Lemaître’s motive for including this statement in an early draft of a paper to be sent to a scientific journal is unclear, but it is fully consistent with his views expressed elsewhere, that God is hidden in, and operates through, the physical laws of the cosmos.

More intriguing is that much of the second paragraph of the 1931 Nature paper echoes very closely the musings of St. Augustine on the nature of time. Here Lemaître writes, “If the world has begun with a single quantum, the notions of space and time would altogether fail to have any meaning at the beginning; they would only begin to have a sensible meaning when the original quantum had been divided into a sufficient number of quanta.”[28]

The statement is as much philosophical as it is physical—how can one define space or the progression of time, if there is but a single thing that does not interact with anything else? In The City of God, written 15 centuries before Lemaître’s paper, Augustine of Hippo wrote: “For if eternity and time are rightly distinguished by this, that time does not exist without some movement and transition, while in eternity there is no change, who does not see that there could have been no time had not some creature been made, which by some motion could give birth to change—the various parts of which motion and change, as they cannot be simultaneous, succeed one another—and thus, in these shorter or longer intervals of duration, time would begin?”

Taking the definition of creature as some thing that interacts with other things in the cosmos, the two ideas are essentially identical and phrased quite similarly. Lemaître then goes on to say “If this suggestion is correct, the beginning of the world happened a little before the beginning of space and time,”[29] while St. Augustine wrote “Then assuredly the world was made, not in time, but simultaneously with time.” The two ideas are identical: “a little before” is only trivially different from “simultaneous” in this context.[30]

One must imagine that Lemaître’s classical education, perhaps his formation as a priest, provided him with a knowledge of St. Augustine’s writings. However, there is no citation of St. Augustine’s work in the Nature paper, and if the close correspondence with the text in The City of God was unintentional, it surely indicates that at some point in Lemaître’s life Augustine’s musings on time had made a big impression on him. It also bears noting that the ideas expressed in those two sentences are not essential to the main idea of the paper: that the expansion of a matter-filled cosmos implies an ultra-dense beginning a finite amount of time ago. But whatever the reason for the inclusion of these sentences, they provide a striking connection between modern cosmology and fifth-century Catholic theology.

Lemaître’s contributions to cosmology did not stop with the 1931 Nature paper. Up until World War II, he published a number of important papers that demonstrated again and again his ability to engage observational data with his rigorous solutions to the equations of general relativity. For example, grappling with the cosmological constant that Einstein disavowed, he proposed in a rigorous mathematical treatment that it might be a kind of vacuum energy, exerting a negative pressure that would accelerate the expansion of the cosmos. This presaged quite closely the idea of dark energy.[31]

Why then is Lemaître’s name not as well known as Hubble’s, or even Einstein’s? By the end of World War II, the center of action in cosmology and the elaboration of the Big Bang model had moved from general relativity to nuclear physics, a field which simply did not interest Lemaître.[32] The contentious atmosphere surrounding the Fiat Lux speech and the Steady State model soured Lemaître further. He remained a dedicated professor, pioneering high performance computing in Belgium, but in the end produced few students in cosmology as his legacy. By the 1970s most of Lemaître’s peers had died, and his contributions in large part became undervalued in publications from then until about a decade ago, when interest in his life was rekindled.[33]

The case for renaming Hubble’s law the Hubble-Lemaître law rests upon both the timing of the 1927 paper and Lemaître’s unique ability to provide mathematically sound cosmologies while engaging directly with the astronomical data. Friedmann first published an expanding universe model, but did not explore the implications for the relationship of galactic recession velocity to distance which bears Hubble’s name. Hubble fitted astronomical data to obtain that relationship, but did not know how to derive it from general relativity. Others applied general relativity to the shape and evolution of the cosmos but either used the wrong model or did not grapple with the data. Had Lemaître published his 1927 paper in a major English language journal, one that would have been widely read by the astronomers of the day, the combination of his expanding mass-filled universe with his explicit derivation of the velocity-distance relation might have been much more widely recognized.

While few discoveries in science are correctly attributed to their discoverers,[34] I would argue that this case is special, and that Lemaître really was undervalued despite awards earned in his lifetime. Lemaître’s religious identity is relevant here—at every talk I give on this subject audience members express surprise, even amazement, that a Catholic priest could be a scientist, let alone such a prominent one. Appropriately recognizing Lemaître’s name in the history of astronomy, by accepting the recommendation of the IAU to use the term “Hubble-Lemaître law”, will benefit scientist-believers and scientist-atheists alike. For the former, it strengthens our case that science and faith are compatible. And for the latter, it might just help remove the suspicion that Lemaître has been treated differently from his peers, both in his lifetime and thereafter, because of the collar he wore.[35]

[1] Dominique Lambert, The Atom of the Universe: The Life and Works of Georges Lemaître (Krakow, Copernicus Center Press, 2016).

[2] John Farrell, The Day without Yesterday: Lemaître, Einstein, and the Birth of Modern Cosmology (New York, Basic Books, 2005).

[3] Graduate study in astronomy at Harvard did not officially begin until 1928. Thus, Lemaître, who arrived at Harvard in 1924, had to matriculate at nearby MIT in order to obtain his Ph.D.

[4] Georges H.J.E. Lemaître, (1) The gravitational field in a fluid sphere of uniform invariant density according to the theory of relativity; (2) Note on de Sitter’s Universe; (3) Note on the theory of pulsating stars. Ph.D. Dissertation, MIT, 1925, available from DSpace@MIT. The notes “on de Sitter’s universe” and “on pulsating stars” were not included in the thesis copy deposited in the library; the all-important first of these two was, however, published by Lemaître in the Journal of Mathematics and Physics, vol. IV, no. 3, May 1925.

[5] Ideas ranged from smaller systems of stars to individual solar systems in the process of formation; see Robert W. Smith, “Cosmology 1900-1931”, in Cosmology: Historical, Literary, Philosophical, Religious, and Scientific Perspectives, ed. Norriss S. Hetherington (New York, Garland Publishing, 1993), 329-345.

[6] By 1913 telescopic observations showed that the Andromeda nebula, soon to be revealed definitively as a spiral galaxy, was rushing toward us at a very high speed, while within a few years other galaxies would be shown to be receding. But the sparsity of data prevented inference of a general expansion of the cosmos until Lemaître and Edwin Hubble came on the scene a decade later. See Robert W. Smith, op. cit.

[7] Albert Einstein, “Kosmologische Betrachtungen zur allgemeinen Relativitätstheorie”, Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften, 1917, 142-152.

[8] See, for example, Lawrence M. Krauss and Robert J. Scherrer, “The return of a static universe and the end of cosmology”, General Relativity and Gravitation, Vol. 39, No. 10, 2007, 1545-1550.

[9] The light of galaxies was spread out according to wavelength at the telescope through the use of “spectrometers”. Because galaxies are made up in large part of stars, whose atmospheres contain atoms that absorb light at definite wavelengths, astronomers could see the same pattern of dark lines in the spectrum from one galaxy to another, but in many cases shifted to the red relative to the pattern one would see in the laboratory. It is possible in this way to measure very precisely the amount of the so-called “red shift” for a given galaxy.

[10] It is difficult to imagine space without a center; after all, if the galaxies are receding, what are they receding from? The easiest way to visualize such a reality is to consider the surface of a balloon as a two-dimensional analog to three-dimensional space. Inflate the balloon, and draw dots all over the resulting surface. Note that no dot is at the center, every dot is at rest in its local spot on the balloon’s surface, and yet as you inflate the balloon, the perspective from every dot is that all other dots are moving away from it. (The dots themselves, drawn by pen, get bigger, but the real galaxies do not.) By using a ruler, you can also show that the further one dot is from another, the faster it seems to recede—by just the proportionality law Lemaître inferred for his model. The difficulty with this analogy is that inevitably one fixates on the space outside and inside of the balloon—an extra spatial dimension that has no correspondence with anything in most models of the actual expanding universe.
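The proportionality in the balloon analogy can be demonstrated with a few lines of code. This is a minimal numerical sketch of the idea (the positions, scale factor, and growth rate below are arbitrary values chosen for illustration, not anything from the text): fix the “dots” at unchanging comoving coordinates, let an overall scale factor grow, and observe that each dot sees every other dot recede at a speed exactly proportional to its distance.

```python
import numpy as np

# Place "dots" (galaxies) at fixed comoving positions and let a scale
# factor a(t) grow; physical position = a * comoving coordinate.
rng = np.random.default_rng(0)
comoving = rng.uniform(-1.0, 1.0, size=(5, 3))  # fixed spots "on the balloon"

a, da_dt = 2.0, 0.1            # scale factor and its growth rate (arbitrary units)
positions = a * comoving       # physical positions now
velocities = da_dt * comoving  # physical recession velocities

# From the viewpoint of dot 0, speed/distance is the same constant for every
# other dot -- the analog of the Hubble constant, equal to da_dt / a.
rel_pos = positions - positions[0]
rel_vel = velocities - velocities[0]
dist = np.linalg.norm(rel_pos, axis=1)[1:]
speed = np.linalg.norm(rel_vel, axis=1)[1:]
print(speed / dist)  # every ratio equals da_dt / a = 0.05
```

The key point the sketch makes concrete: no dot is privileged, yet every dot measures the same linear velocity-distance law, because both the separations and their growth rates scale with the same comoving displacement.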

[11] Helge Kragh and Robert W. Smith, “Who discovered the expanding universe?”, History of Science, vol. 41, no. 2, 2003, 141-162. In 1958 Lemaître stated that he was made aware of Friedmann’s papers in a meeting with Einstein in late October 1927, months after his own paper (note 12) appeared. In view of the subsequent events regarding Hubble’s work, described in this article, there is little reason to disbelieve Lemaître.

[12] G. Lemaître, “Un univers homogène de masse constante et de rayon croissant, rendant compte de la vitesse radiale des nébuleuses extra-galactiques”, Annales de la Société Scientifique de Bruxelles A, vol. 47, 1927, 49-59.

[13] A megaparsec is the conventional unit of distance used by extragalactic astronomers. One parsec is 3.26 light-years, and a megaparsec is a million parsecs, or roughly thirty million trillion kilometers.
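The arithmetic behind this footnote is easy to verify. A quick check, using standard conversion factors that I am supplying (they do not appear in the text):

```python
# 1 light-year ≈ 9.461e12 km; 1 parsec ≈ 3.26 light-years (footnote's value).
LY_KM = 9.461e12
PC_LY = 3.26

mpc_km = 1e6 * PC_LY * LY_KM  # one megaparsec in kilometers
print(f"1 Mpc ≈ {mpc_km:.2e} km")  # ≈ 3.08e19 km
```

A value of about 3 × 10^19 km is indeed “roughly thirty million trillion kilometers” (30 × 10^6 × 10^12 = 3 × 10^19), confirming the footnote’s figure.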

[14] Edwin Hubble, “A relation between distance and radial velocity among extra-galactic nebulae”, Proc. National Academy of Sciences, Vol. 15, 1929, 168-173.

[15] See, for example, “Savant refutes theory of exploding universe”, The Los Angeles Times, December 31, 1941, 10.

[16] Georges Lemaître, “A homogeneous universe of constant mass and increasing radius accounting for the radial velocity of extra-galactic nebulae”, Monthly Notices of the Royal Astronomical Society, vol. 91, 1931, 483-490.

[17] Mario Livio, “Mystery of the missing text solved”, Nature, Vol. 479, 2011, 171-173.

[18] In his 1927 paper, Lemaître averaged the data on galactic distances and velocities to obtain his constant, rather than fitting the data to a straight line. Given the limitations in the amount and precision of the data at the time, this was the right thing to do, since Lemaître knew that his model of the universe—the primary point of the paper—determined the form of the velocity-distance relationship.
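The difference between Lemaître’s averaging and a straight-line fit can be made concrete with a short numerical sketch. The data below are synthetic (the “true” constant of 600 km/s/Mpc, the distances, and the scatter are my assumptions for illustration, chosen to resemble the 1920s-era situation, not values from the text):

```python
import numpy as np

# Synthetic (distance, velocity) pairs obeying v = H*d plus large scatter,
# mimicking the noisy galaxy data of the late 1920s.
rng = np.random.default_rng(1)
H_true = 600.0                            # km/s per Mpc, roughly the 1927-era value
d = rng.uniform(0.5, 2.0, size=40)        # distances in Mpc
v = H_true * d + rng.normal(0, 150, 40)   # velocities with heavy noise

# Averaging (in the spirit of Lemaître's 1927 estimate): ratio of means.
H_avg = v.mean() / d.mean()

# Fitting (in the spirit of Hubble 1929): least-squares line through the origin.
H_fit = np.sum(d * v) / np.sum(d * d)

print(H_avg, H_fit)  # both land near H_true when v = H*d is the correct model
```

As the footnote notes, when the model itself guarantees the linear form, the simple average is a perfectly sound estimator; fitting a slope buys little extra with so few, so noisy points.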

[19] Georges Lemaître, “The beginning of the world from the point of view of quantum theory”, Nature, Vol. 127, 1931, 706.

[20] Lemaître never used this term; it was a derogatory nickname for the model coined by one of its most prominent opponents, the astronomer Sir Fred Hoyle. See [2].

[21] “As far as I can see, such a theory [the Big Bang] remains entirely outside any metaphysical or religious question. It leaves the materialist free to deny any transcendental Being,” quoted in M. Godart and M. Heller, Cosmology of Lemaître (Tucson, AZ, Pachart Publishing House, 1985).

[22] The details of Lemaître’s intervention differ in various accounts, in particular, whether he spoke directly to the Pope, and if not, then who actually intervened with the Holy Father. The English language version of the original Fiat Lux speech can be found online.

[23] Hermann Bondi and Thomas Gold, “The Steady-State Theory of the Expanding Universe”, Monthly Notices of the Royal Astronomical Society, vol. 108, 1948, 252-270; Fred Hoyle, “A new model for the expanding universe”, Monthly Notices of the Royal Astronomical Society, vol. 108, 1948, 372-382.

[24] This problem was solved in the 1950’s and 1960’s by improved measurement of distances to galaxies, which lowered Hubble’s constant and increased the time since the Big Bang.

[25] See, for example, Roger Penrose, Cycles of Time: An Extraordinary New View of the Universe (New York, Vintage Press, 2012); Alan H. Guth, “Eternal inflation and its implications”, Journal of Physics A, vol. 40, 2007, 6811-6826.

[26] The Christian researcher’s faith “has directly nothing in common with his scientific activity. After all, a Christian does not act differently from any non-believer as far as walking, or running, or swimming is concerned.” Quoted in Godart and Heller, op cit.

[27] Jean-Pierre Luminet, “Editorial note to: Georges Lemaître, The beginning of the world from the point of view of quantum theory”, General Relativity and Gravitation, 43, 2011, 2911-2928.

[28] Georges Lemaître, “The beginning of the world from the point of view of quantum theory”, Nature, Vol. 127, 1931, 706.

[30] All Civ. Dei quotes from: Augustine of Hippo, The City of God, Marcus Dods, translator, in Augustine (Great Books of the Western World, Vol. 18), Chicago, Encyclopedia Britannica, 1952, 325.

[31] Georges Lemaître, “Evolution of the expanding universe”, Proceedings of the National Academy of Sciences, vol. 20, 1934, 12-17. The Harvard astronomer Robert Kirshner wrote: “In 1934, Lemaître associated a negative pressure with the energy density of the vacuum and said, ‘This is essentially the meaning of the cosmological constant.’ That is exactly the way we talk about dark energy today.” Robert Kirshner, “Review of The Day We Found the Universe and Discovering the Expanding Universe”, Physics Today, vol. 62, no. 12, 2009, 51.

[32] Ralph Alpher, Hans Bethe, and George Gamow, “The origin of the chemical elements”, Physical Review, vol. 73, 1948, 803-804.

[33] I believe that John Farrell’s book [2] was influential in this regard, as was the collection of papers in Rodney Holder and Simon Mitton, eds. Georges Lemaître: Life, Science and Legacy (Heidelberg, Springer, 2012), along with other articles and books published in the last 15 years.

[34] Stigler’s law of eponymy states that “no scientific discovery is named after its discoverer.” Stephen M. Stigler, “Stigler’s law of eponymy”, Transactions of the New York Academy of Sciences, vol. 39, no. 1, series II, 1980, 147-157.

[35] Lemaître was recognized in his lifetime with the Francqui Prize (he was nominated by Einstein).
