What about its value roughly 9 billion years after the Big Bang, when dark energy started to 'take over' and accelerate the expansion of the universe?
Is there a timeline or chart somewhere that shows approximate, theoretical values of Hubble's not-so-constant constant throughout the lifetime of the Universe?
This answer to the question “Is the Hubble constant dependent on redshift?” gives the formula (a form of the Friedmann equation) for the Hubble parameter $H(z)$ as a function of redshift $z$:
$$ H(z)^2 = H_0^2 \left[ (1+z)^4 \Omega_r + (1+z)^3 \Omega_M + (1+z)^2 \Omega_k + \Omega_\Lambda \right] $$
where the $Omega$ terms are the fractional densities in radiation, matter, curvature, and dark energy, respectively.
Using that, plus the knowledge that the redshift of the CMB is $z \simeq 1100$, you can plug in values for the densities (I used WMAP values quoted here) and get that $H$ at the time of the CMB was about 22,000 times larger than the current value.
That answer also gives a graph of the value of the Hubble parameter as a function of time.
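As a quick check, here is a minimal Python sketch of that calculation. The density parameters below are assumed, WMAP-era-like round numbers for illustration, not the exact values from the linked answer:

```python
import math

# Assumed WMAP-era-like parameters (illustrative round numbers)
H0 = 70.0            # km/s/Mpc
Omega_r = 8.4e-5     # radiation
Omega_M = 0.27       # matter
Omega_k = 0.0        # curvature (flat universe)
Omega_L = 1.0 - Omega_M - Omega_r   # dark energy

def hubble(z):
    """Hubble parameter H(z) in km/s/Mpc from the Friedmann equation above."""
    return H0 * math.sqrt(Omega_r * (1 + z)**4 +
                          Omega_M * (1 + z)**3 +
                          Omega_k * (1 + z)**2 +
                          Omega_L)

print(hubble(1100) / H0)   # ~22,000: H at the CMB's release vs. today
```

At $z \simeq 1100$ the matter and radiation terms dominate completely, which is why the ratio comes out so large.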
Questions about the cosmic microwave background radiation and its discovery
I have a few questions about the cosmic microwave background (CMB) radiation and am trying to find simple answers at a basic level. I really appreciate your help and time!
The universe is almost 13.8 billion years old and the observable universe currently has a radius of around 46.5 billion light-years.
It is said that the CMB originated when the universe was 379,000 years old, when the (now observable) universe had a radius of almost 41 million light-years and the temperature was around 3000 K. Every point in space became a source of radiation in all directions. The radiation spectrum was that of a blackbody at 3000 K, shown below.
Because the universe has expanded and is still expanding, the original spectrum has red-shifted and now corresponds to that of a blackbody at 2.74 K as shown below.
Please note that the first spectrum at 3000 K has 'wavelength' scale in nm and the one at 2.74 K has it in mm.
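Wien's displacement law makes that change of scale easy to verify; a small sketch, with the Wien constant and the two temperatures as the only inputs:

```python
WIEN_B = 2.898e-3   # Wien displacement constant, m*K

def peak_wavelength_m(T):
    """Wavelength (in metres) at which a blackbody at temperature T peaks."""
    return WIEN_B / T

print(peak_wavelength_m(3000.0) * 1e9)   # ~966 nm (near-infrared) at 3000 K
print(peak_wavelength_m(2.74) * 1e3)     # ~1.06 mm (microwave) at 2.74 K
```

The nm-to-mm jump in the axis scales is just this factor-of-~1100 shift of the peak.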
I think that before the discovery of the CMB in 1964, cosmologists already understood at what temperature recombination took place and what the spectrum of the CMB radiation was at that time. But I'm not able to understand how cosmologists knew that they must look for microwaves, as the excerpt below (Source #1, from Wikipedia) tells, and not, say, for EM waves in the infrared region. How did they know that the original CMB had been shifted to the microwave region of the spectrum? It would have been possible for them to estimate the region of the red-shifted CMB spectrum if they had an approximate idea of the age of the universe, but the age of the universe was not well established at that time. The closest estimate could have been obtained using the value given by Sandage (Source #2), but he himself wasn't really sure of his estimate (Source #3). Anyway, using Sandage's Hubble constant value of 75 instead of the currently known value of 67.8 gives an age of the universe of almost 13 billion years. Perhaps they just had a rough idea that the shifted spectrum should be in the radio region (microwaves are a subset of radio waves).
What do you say about this?
"In the early 1960s, work on Brans–Dicke theory led Dicke to think about the early Universe, and with Jim Peebles he re-derived the prediction of a cosmic microwave background (having allegedly forgotten the earlier prediction of George Gamow and co-workers). Dicke, with David Todd Wilkinson and Peter G. Roll, immediately set about building a Dicke radiometer to search for the radiation, but they were scooped by the accidental detection made by Arno Penzias and Robert Woodrow Wilson (also using a Dicke radiometer), who were working at Bell Labs just a few miles from Princeton." - https://en.wikipedia.org/wiki/Robert_H._Dicke
"Sandage began working at the Palomar Observatory. In 1958 he published the first good estimate for the Hubble constant, revising Hubble's value of 250 down to 75 km/s/Mpc, which is close to today's accepted value." - https://en.wikipedia.org/wiki/Allan_Sandage
"However Sandage, like Einstein, did not believe his own results at the time of discovery. His value for the age of the universe[further explanation needed] was too short to reconcile with the 25-billion-year age estimated at that time for the oldest known stars. Sandage and other astronomers repeated these measurements numerous times, attempting to reduce the Hubble constant and thus increase the resulting age for the universe." - https://en.wikipedia.org/wiki/Hubble's_law#Hubble_time
It is also said that a CMB photon reaching us today has traveled almost 13.8 billion light-years in an infinite universe.
I have been trying to understand the statement above. Please have a look at the attachment below. The CMB has always been with us and theoretically always will be, but as time passes its spectrum becomes more red-shifted and its intensity decreases. The figure on the left shows the visible universe when recombination took place. The CMB photons from locations A, B, and C have already been received by the earth. Since locations D and E have expanded out to a distance of 46.5 billion light-years over almost 13.7 billion years, photons from locations D and E are being received now.
The distance grew from 41 million light-years to 46.5 billion light-years over a period of almost 13.7 billion years (I think we would need to subtract the age of the universe at the time of recombination). So the light had to travel almost 1,134 times more distance to reach us now.
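The factor quoted here is just the ratio of the two radii; a one-line check:

```python
r_now = 46.5e9   # light-years, radius of the observable universe today
r_rec = 41.0e6   # light-years, radius at recombination (figures quoted above)

print(r_now / r_rec)   # ~1134, close to the redshift factor 1 + z ~ 1100
```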
New approach refines the Hubble constant and the age of the universe
Using known distances of 50 galaxies from Earth to refine calculations in Hubble's constant, a research team led by a University of Oregon astronomer estimates the age of the universe at 12.6 billion years.
Approaches to date the Big Bang, which gave birth to the universe, rely on mathematics and computational modeling, using distance estimates of the oldest stars, the behavior of galaxies and the rate of the universe's expansion. The idea is to compute how long it would take all objects to return to the beginning.
A key calculation for dating is the Hubble constant, named after Edwin Hubble, who first calculated the universe's expansion rate in 1929. Another recent technique uses observations of leftover radiation from the Big Bang. It maps bumps and wiggles in spacetime—the cosmic microwave background, or CMB—and reflects conditions in the early universe as set by the Hubble constant.
However, the methods reach different conclusions, said James Schombert, a professor of physics at the UO. In a paper published July 17 in the Astronomical Journal, he and colleagues unveil a new approach that recalibrates a distance-measuring tool known as the baryonic Tully-Fisher relation independently of Hubble's constant.
"The distance scale problem, as it is known, is incredibly difficult because the distances to galaxies are vast and the signposts for their distances are faint and hard to calibrate," Schombert said.
Schombert's team recalculated the Tully-Fisher approach, using accurately defined distances in a linear computation of the 50 galaxies as guides for measuring the distances of 95 other galaxies. The universe, he noted, is ruled by a series of mathematical patterns expressed in equations. The new approach more accurately accounts for the mass and rotational curves of galaxies to turn those equations into numbers like age and expansion rate.
His team's approach determines the Hubble constant—the universe's expansion rate—at 75.1 kilometers per second per megaparsec, give or take 2.3. A megaparsec, a common unit in space-related measurements, is equal to one million parsecs. A parsec is about 3.3 light-years.
All Hubble constant values lower than 70, his team wrote, can be ruled out with 95 percent confidence.
Traditionally used measuring techniques over the past 50 years, Schombert said, have set the value at 75, but CMB computes a rate of 67. The CMB technique, while using different assumptions and computer simulations, should still arrive at the same estimate, he said.
"The tension in the field occurs from the fact that it does not," Schombert said. "This difference is well outside the observational errors and produced a great deal of friction in the cosmological community."
Calculations drawn from observations of NASA's Wilkinson Microwave Anisotropy Probe in 2013 put the age of the universe at 13.77 billion years, which, for the moment, represents the standard model of Big Bang cosmology. The differing Hubble constant values from the various techniques generally estimate the universe's age at between 12 billion and 14.5 billion years.
The new study, based in part on observations made with the Spitzer Space Telescope, adds a new element to how calculations to reach Hubble's constant can be set, by introducing a purely empirical method, using direct observations, to determine the distance to galaxies, Schombert said.
"Our resulting value is on the high side of the different schools of cosmology, signaling that our understanding of the physics of the universe is incomplete with the hope of new physics in the future," he said.
What was the value of the Hubble constant at the time of the CMB's 'release' (i.e., 379,000 years after the Big Bang)? - Astronomy
Discovered accidentally in 1964 by Penzias and Wilson (Nobel Prize, 1978), the CMB is a remnant of the hot, dense phase of the universe that followed the Big Bang. For several hundred thousand years after the Big Bang, the universe was hot enough for its matter (predominantly hydrogen) to remain ionized, and therefore opaque (like the bulk of the sun) to radiation. During this period, matter and light were in thermal equilibrium and the radiation is therefore expected to obey the classic blackbody laws (Planck, Wien, Stefan).
The existence of the CMB is regarded as one of three experimental pillars that point to a Big Bang start to the universe. (The other two pieces of evidence that indicate that our universe began with a Bang are the linearity of the Hubble expansion law and the universal cosmic abundances of the light element isotopes, such as helium, deuterium, and lithium.)
At some point about 400,000 years after the Bang, the universe had cooled to the point where the matter became neutral, at which point the universe's matter also became transparent to the radiation. (Completely ionized matter can absorb radiation of any wavelength; neutral matter can only absorb the relatively few wavelengths that carry exactly the energy matching differences between electron energy levels.) The temperature at which this transition from ionized to neutral (called the "moment of decoupling") occurred was roughly 3000 K.
The spectrum as measured by the COBE satellite is shown below.
It indeed had the blackbody spectral shape predicted, but the peak in the microwave spectrum indicated a temperature of 2.726 K. Although this temperature is clearly insufficient to ionize hydrogen, the entire spectrum has been redshifted from that at the moment of decoupling (when the temperature was 3000 K) by the expansion of the universe. As space expands, the wavelengths of the CMB expand by the same factor. Wien's blackbody law says that the wavelength peak of the CMB spectrum is inversely proportional to the temperature of the CMB. Therefore, the drop in the CMB temperature by a factor of 1100 (= 3000 K/2.73 K) indicates an expansion of the universe by a factor of 1100 from the moment of decoupling until now.
What it can tell us
In addition to measuring the temperature of the overall CMB, anisotropies in the CMB are capable of telling us the Earth's motion with respect to the CMB, the geometry (or curvature) of the universe, the baryon content of the universe, the dark matter and dark energy content of the universe, the value of the Hubble constant, whether inflation occurred in the early universe, and more.
What various groups are measuring is usually presented in a format such as
BOOMERanG (April 2001) and WMAP (February 2003)
What it means
The above diagrams plot the CMB power as a function of harmonic number. These diagrams are very much like that for a complex musical instrument note, which is also the sum of the amplitudes (or "power") of various frequencies or harmonics. For example, in the diagram below, 6 harmonics (top picture: each is a sinusoidal wave with an integral multiple of the fundamental frequency) are added together to produce the complex-shape wave shown in the middle picture. The bottom picture shows the relative amplitude contribution of each of the harmonics.
The CMB power spectra similarly plot the relative contribution of each spatial frequency (instead of temporal frequency).
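To make the musical-instrument analogy concrete, here is a toy Python sketch (the amplitudes are made up for illustration) that builds a waveform from a few harmonics and then recovers each harmonic's amplitude by projection, which is essentially what decomposing the CMB map into multipoles does in two dimensions:

```python
import math

# Toy analogue of harmonic decomposition: build a waveform from six
# harmonics with made-up amplitudes, then recover each amplitude by
# projecting the waveform back onto the corresponding sine.
amps = [1.0, 0.0, 0.6, 0.0, 0.3, 0.1]   # assumed amplitudes, harmonics 1..6
N = 1000                                 # samples over one fundamental period

wave = [sum(a * math.sin(2 * math.pi * (k + 1) * t / N)
            for k, a in enumerate(amps))
        for t in range(N)]

for k in range(len(amps)):
    recovered = 2.0 / N * sum(wave[t] * math.sin(2 * math.pi * (k + 1) * t / N)
                              for t in range(N))
    print(k + 1, round(recovered, 3))    # matches amps[k]
```

Because the sines are orthogonal, each projection isolates one harmonic's contribution, just as the angular power spectrum isolates the power at each l.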
The math and physics of anisotropies
If the CMB had precisely the same temperature in every direction in the sky, the sky would have the same brightness in every direction. Astronomers often use a false-color scheme to represent brightness (different brightnesses are represented by different colors), especially when the radiation is emitted in a part of the spectrum that is not visible to the human eye. A uniformly bright CMB would therefore be represented by a single color. This uniform power is the monopole, or "l = 0", contribution to the power spectrum. If we could see the CMB with our eyes, the sky would look uniformly the same, as in the figure at the left.
(In this and subsequent diagrams, the entire sky is represented by a Mercator projection, the same technique often employed to portray the entire earth. The equator (latitude 0 for earth) is a horizontal line in the middle of the oval, with northern latitudes above and southern latitudes below. The Greenwich meridian (longitude 0 on Earth) is a vertical line through the middle, with western longitudes to the left and eastern longitudes to the right. In a similar manner, the galactic equator or plane (latitude 0) is a line running through the middle of the sky pictures. The galactic center (galactic longitude 0) is at the center of the diagram.)
In reality, however, not all directions in the sky appear to have the same CMB brightness. The earth is moving with respect to the matter that last emitted the CMB, and therefore the CMB spectrum looks bluest (and, by Wien's law, hottest) in that direction and reddest (and coolest) opposite to that direction. This effect contributes to the CMB power spectrum at a spatial frequency of l = 1. The "l = 1" contribution is called the dipole contribution, because the brightness distribution over the sky has 2 poles (one hot, one cool). If we were somehow able to see ONLY this dipole contribution [whose brightness amplitude is far less than that of the dominant monopole] by removing the average brightness (or temperature) from the preceding diagram and amplifying the contrast by approximately a thousand, the sky would look like the figure at the right.
By measuring the amount of the dipole anisotropy (the bluest part of the sky is 0.0033 K hotter than average), we can determine the magnitude of the earth's motion with respect to the CMB: the earth is moving at a speed of 370 km/s in the direction of the constellation Virgo.
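That speed follows directly from the dipole amplitude, since to first order v/c ≈ ΔT/T; a quick sketch using the numbers above:

```python
c = 299792.458   # speed of light, km/s
T = 2.725        # mean CMB temperature, K
dT = 0.0033      # dipole amplitude quoted above, K

v = c * dT / T   # first-order Doppler relation: v/c ~ dT/T
print(v)         # ~363 km/s, in line with the ~370 km/s figure
```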
If the dipole contribution due to Earth's motion is now subtracted out, the sky looks like the figure at the left.
The temperature differences that remain are a composite of two things: a contribution from our galaxy and the true anisotropies in the CMB that were present at the moment of decoupling, hundreds of thousands of years after the Big Bang.
The galaxy is bright at microwave wavelengths due to emission by molecules (particularly CO) and dust.
The anisotropies present at the moment of decoupling represent random noise in the very early universe that was amplified by inflation to cosmic scales. These anisotropies are of the appropriate magnitude to account for how the large-scale structures that we see today (from galaxies to superclusters of galaxies) formed under the influence of gravity.
It is possible to remove the contribution of the galaxy's emission by measuring the sky at several frequencies, since the galactic emission has a different spectrum from the blackbody CMB.
Once the galactic contribution is removed, COBE saw this:
This diagram is the sum of the amplitude (or power) contributions of all spatial-frequency harmonics (but with the monopole and dipole removed). It is the equivalent of the complex musical-instrument wave shape above, which was formed by the sum of the amplitude (or audio power) contributions of several temporal harmonics. The difference is that the CMB diagram shows the power as a function of position in the sky (i.e., as a function of galactic latitude and longitude), whereas the musical-instrument wave shape shows the power as a function of the single dimension of time.
The goal for CMB researchers is to decompose the CMB diagram into its harmonic components. Fortunately, the relative amounts of the harmonic components are determined by intrinsic properties of the universe (such as the Hubble constant, the age of the universe, the amount of dark matter, and the amount of dark energy, i.e., the value of the cosmological constant).
Who is measuring this
COBE (Cosmic Background Explorer, launched in 1989) was the first satellite launched to measure the CMB properties outside Earth's atmosphere. COBE established the precise blackbody character of the radiation and measured the temperature as 2.726 K, measured the earth's velocity relative to the matter that last emitted the radiation, and eventually detected anisotropies in the background at the level of 1 part in 10^5.
BOOMERanG measures CMB properties by launching balloon-borne instruments at the South Pole. Here is their latest version of the anisotropy of a piece of the sky
NASA/WMAP Science Team
MAP (launched 6/30/01) was designed to measure the individual properties of the universe (e.g., the Hubble constant, the baryon density, the value of the cosmological constant) to within 5%. The first MAP pictures (February 2003) are on the left, with the earlier COBE result for comparison. Note that the MAP resolution is significantly better than the COBE resolution.
MAP found the following values (2003) for cosmological parameters:
geometry of the universe: consistent with flat: omega total = 1.02 ± 0.02
omega (dark energy) = 0.73
omega (dark matter) = 0.23
omega (baryons) = 0.044 ± 0.004
omega (neutrinos) < 0.0005
omega (radiation) = 0.0001
epoch of first star formation (end of the dark ages): 200 Myr after the Bang
History of the speed of the expansion of the universe?
In the absence of dark energy, the Hubble Constant would be inversely proportional to cosmic age. With dark energy, it initially falls that way but settles down to a constant value in the far future so the maths is more complex.
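For a flat universe containing only matter and dark energy (radiation ignored), this behavior has a closed-form expression, H(t) = H0·sqrt(ΩΛ)·coth(1.5·sqrt(ΩΛ)·H0·t): at early times it tracks the matter-era 2/(3t), and at late times it levels off. A sketch with assumed present-day parameters (H0 = 70 km/s/Mpc, ΩΛ = 0.7):

```python
import math

# Assumed present-day parameters (illustrative, not a fit)
H0_kms_Mpc = 70.0        # Hubble constant today, km/s/Mpc
OL = 0.7                 # dark-energy fraction today
H0 = H0_kms_Mpc / 978.0  # same rate in 1/Gyr (1 km/s/Mpc ~ 1/978 Gyr)

def hubble_at(t_gyr):
    """H (km/s/Mpc) at cosmic time t, for a flat matter + dark-energy universe."""
    x = 1.5 * math.sqrt(OL) * H0 * t_gyr
    return H0_kms_Mpc * math.sqrt(OL) / math.tanh(x)

print(hubble_at(1.0))     # early on: ~654, tracking the matter-era 2/(3t)
print(hubble_at(13.5))    # near today: ~70
print(hubble_at(100.0))   # far future: levels off at H0*sqrt(OL) ~ 58.6
```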
There is an applet called "CosmoCalc" in a number of varieties that tells you lots of interesting parameters. This one from Ned Wright tells you redshift from lookback time:
This one takes redshift and tells you lots of stuff including the Hubble Constant
They need to use the same assumptions for current values so set H0 to 70.4 and Omega_M to 0.272 in Wright's. Put in say 5 for the light travel time in Gyr and press the "Flat" button. You get 0.492 for the redshift.
Now put that value in for redshift in the second applet and it will tell you everything you want to know (and more).
The "speed" of the expansion of distances is proportional to the size of the distance: a distance twice as big increases at twice the "speed". The expansion rate is really not any one particular "mph" or "km/s"; it is a percentage growth rate of distances.
Distances are currently growing at a rate of 1/140 of a percent per million years.
In the past the percentage growth rate was considerably larger. Here is a sample history:
Past epochs are labeled by how much distances and wavelengths have been elongated since that time (the "stretch" factor). The table goes from S=10 to S=1 (the present moment).
You can read off the percentage growth rates from the "Hubble time" column (the fourth column).
In that column 14.0 Gy corresponds to the present rate of 1/140 percent per million years.
And 0.8 Gy corresponds to the rate of 1/8 percent per million years.
That was the rate back in year 600 million (i.e. in year 0.6 Gy) as you can see from the table.
The same calculator will easily tell you the distance growth rate at S=1090, the moment of clearing or transparency that you asked about, around year 380,000, when the ancient CMB light originated. That light has been "stretched" by a factor of 1090, so you just have to put that number in the upper-limit box instead of the number 10, which I put in to make this table.
If instead of a percent growth rate, what you want is a km/s speed of some benchmark distance, I would suggest a million lightyears. Most people have some mental association with that distance---having heard the distance to a neighbor galaxy like Andromeda expressed in those terms. It is on the order of a million lightyears from us and its light takes on the order of a million years to get here.
When distances are growing at rate of 1/140% per million years, then a distance of 1 million lightyears is growing at a "speed" of 3000/140 km/s.
All you have to do, to convert, is multiply the percent rate 1/140 by 3000. That gives the km/s.
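Spelling that conversion out (the factor 3000 is approximately c/100 in km/s, which is what turns a percent-per-million-years rate into a km/s speed for a million-light-year distance):

```python
c = 299792.458             # speed of light, km/s
rate = (1.0 / 140) / 100   # 1/140 of a percent, as a fraction per million years
d_ly = 1.0e6               # benchmark distance: one million light-years

growth = rate * d_ly        # light-years gained per million years (~71.4)
v_kms = growth * c / 1.0e6  # 1 light-year per million years = c / 1e6
print(v_kms)                # ~21.4 km/s, i.e. roughly 3000/140
```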
The Institute for Creation Research
Cosmology is the study of the origin and structure of the universe, and the Big Bang is the dominant secular cosmological model. Some Christians say God used the Big Bang to create the universe, but that model contradicts Scripture at multiple points. 1 There have been some recent developments involving the Big Bang model, nearly all of which are bad news for Big Bang proponents.
According to the Big Bang model, the universe was once very dense and hot. Supposedly, the universe began expanding rapidly about 14 billion years ago and is still expanding today. This expansion, inferred from clues within light from distant galaxies, is one of three main arguments for the model. 2 A second argument is that the Big Bang does a good job of accounting for the light chemical elements hydrogen and helium. A third is the existence of faint cosmic microwave background (CMB) radiation coming to us from all directions in space (Figure 1). Big Bang proponents interpret the CMB as an "afterglow" from a time about 400,000 years after the Big Bang occurred.
Despite these apparent successes, the Big Bang model has serious scientific problems. One enormous difficulty is that Big Bang proponents have concluded that about 95% of the "stuff" in the universe is composed of mysterious entities called dark matter and dark energy, but they don't know what these things are. How can Big Bang theorists claim to understand the process that supposedly brought the universe into existence when, by their own admission, 95% of the universe's contents are unknown? 3
As a creation ministry, ICR wants people to be up-to-date on the current version of the Big Bang model, not one that was popular decades ago. For instance, Big Bang cosmologists used to say the universe went through an enormous "growth spurt" called inflation shortly after the Big Bang. However, most theorists today claim that inflation happened first and caused the Big Bang. 4
Hubble Constant Contradiction Persists
Most astronomers think the universe is expanding, causing galaxies to move away from each other. Scientists use a number called the Hubble constant, denoted by the symbol H0, to characterize this expansion. They use two different methods to calculate H0. One way is to calculate the value directly, using estimated distances and speeds of distant galaxies. A second way is to infer this number by looking at details of the CMB radiation. The values calculated from these two methods conflict with each other, and a recent study hasn't resolved the issue. 5-7
When Big Bang proponents use the CMB to infer a value for H0, they are assuming the Big Bang model is correct. Naturally, if the model is wrong, there's no reason to expect this method to yield an accurate result. Creationists aren't surprised these two different methods yield contradictory results. And even though the CMB is arguably the strongest argument for the Big Bang, there are details about this radiation that do not align with the Big Bang model. 8 For instance, Cambridge astrophysicist George Efstathiou commented on how the CMB doesn't match the expectations of inflation theory:
The theory of inflation predicts that today's universe should appear uniform at the largest scales in all directions…. That uniformity should also characterize the distribution of [temperature] fluctuations at the largest scales within the CMB. But these anomalies, which [the] Planck [satellite] confirmed, such as the cold spot, suggest that this isn't the case…. This is very strange…. And I think that if there really is anything to this, you have to question how that fits in with inflation…. It's really puzzling. 9
Missing Baryonic Matter Found?
Heavy subatomic particles like protons and neutrons are called baryons. Because protons and neutrons comprise nearly all the mass of an atom, the normal atomic matter we interact with in our everyday experiences is called baryonic matter.
As mentioned earlier, one of the three main arguments for the Big Bang is that it can account for the observed abundances of hydrogen and helium in the universe. However, this is because the model has an adjustable parameter, like a tuning dial on a radio. 10 Big Bang scientists choose a value for this parameter to ensure that the model matches the observed abundances of hydrogen and helium. 11
So, contrary to popular perception, the Big Bang does not successfully predict the abundances of hydrogen and helium. Rather, the model's proponents choose a value for this parameter to make sure the model gives the right answer. 12-14 Nevertheless, secular scientists consider the model's ability to match the observed abundances of hydrogen and helium to be a major success.
Once Big Bang scientists choose their value for this parameter, the model indicates how much baryonic matter should exist in the universe. 15 When one adds up the different forms of matter thought to exist, the amount of baryonic matter predicted by the Big Bang is only 20% of the total (Figure 2). Big Bang astronomers think the other 80% is an exotic form of invisible dark matter, discussed in the next section. Previous observations indicated that visible stars and gas could only account for half this predicted baryonic matter, and scientists couldn't account for the other half.
Last year, astronomers claimed to have solved this problem. 16 (Interestingly, another scientist claimed to have solved it one year before that. 17 ) Theorists think the missing baryonic matter should reside in thin, hot strings of ionized hydrogen located between galaxies. Astronomers didn't detect the hydrogen per se but rather ionized oxygen that they think is associated with the hydrogen. Naturally, Big Bang proponents will see this as good news for their model. However, it's important to realize that the missing matter hasn't actually been found directly. Rather, oxygen was found that secular scientists think, based on their models, should be associated with the missing hydrogen.
It's worth noting that the Wikipedia entry for "Missing baryon problem" has been flagged for possibly making too strong a claim about the problem being solved, despite the obvious anti-creation bias found in Wikipedia articles touching on the creation-evolution controversy. 18
Dark Matter Still Undetected
As mentioned earlier, many astronomers think 80% of all the matter in the universe is invisible dark matter. Although astronomers deduced the existence of dark matter apart from the Big Bang model, this substance has become very important to secular cosmologists. They recognize the enormous problems in their theories of star and galaxy formation. Many claim dark matter is the "missing ingredient" that can somehow enable their theories to work. 19 This is very convenient for theorists. Since no one knows what dark matter is—or even if it really exists—no one can demonstrate that their theories are wrong! 20
Because the Big Bang model only allows for 20% of all matter to be baryonic (made of atoms), its proponents must assume that dark matter is something else. Other forms of matter (i.e., free electrons, neutrinos, etc.) do exist but have generally been ruled out as dark matter candidates. The scientists have no choice but to insist that dark matter is some exotic, never-before-observed substance.
So, how is the hunt for this exotic matter going? Not well. Repeated searches have come up empty, 21 and theorists are becoming increasingly nervous, if not desperate.
Dark Matter Before the Big Bang?
How desperate? One theorist recently suggested that perhaps dark matter somehow existed before the Big Bang. 22,23 How is that possible? Haven't we been led to believe that the Big Bang was the origin of everything?
This theorist said dark matter came from something called a scalar field that supposedly was present before the Big Bang. A problem with this idea is that only one scalar field is known to exist, and that's the field associated with the famous Higgs boson. All other scalar fields are hypothetical.
By the way, this should give pause to Christians who say God used the Big Bang to create the universe. If the supposed "bang" was God's initial creative act, then according to this reasoning dark matter existed before Genesis 1:1. If 80% of all existing matter had an existence before then, did God actually create it prior to Genesis 1:1? If so, why doesn't the Bible tell us? If not, is dark matter simply eternal? And if it's eternal, what does that do to Christian theology?
Time Before the Big Bang?
This raises another point. Big Bang scientists had long insisted that speaking of time before the Big Bang was as nonsensical as asking the question "What is north of the North Pole?" Well, apparently the question wasn't as nonsensical as we were led to believe, because they now routinely talk about time "before" the Big Bang. In fact, inflation theorists now claim the inflation process that supposedly triggered the Big Bang could have been going on for eons by the time the Big Bang supposedly created our universe. This has led to the idea that our universe is only one of an infinite number of universes in a supposed "multiverse." 24
This should demonstrate just how "squishy" Big Bang theories are. Secular scientists simply won't allow data to falsify them, even if it means tacking on additional hypotheses or accepting concepts that they themselves dismissed as nonsense decades ago, such as time before the Big Bang.
Instead of attempting to harmonize the inerrant Word of God with a flimsy scientific model, Christians would do far better to simply take God&rsquos Word at face value. The universe came into existence not through a Big Bang but by the omnipotent Word of God.
Over a century since Hubble's first estimate for the rate of cosmic expansion, that number has been revised downwards time and time again. Today's estimates put it at somewhere between 67 and 74km/s/Mpc (42-46 miles/s/Mpc).
Part of the problem is that the Hubble Constant can be different depending on how you measure it.
Most descriptions of the Hubble Constant discrepancy say there are two ways of measuring its value – one looks at how fast nearby galaxies are moving away from us while the second uses the cosmic microwave background (CMB), the first light that escaped after the Big Bang.
We can still see this light today, but because the distant parts of the universe are zooming away from us, the light has been stretched into radio waves. These radio signals, first discovered by accident in the 1960s, give us the earliest possible insight into what the Universe looked like.
Two competing forces – the pull of gravity and the outwards push of radiation – played a cosmic tug of war with the universe in its infancy, which created disturbances that can still be seen within the cosmic microwave background as tiny differences in temperature.
Using these disturbances, it is then possible to measure how fast the Universe was expanding shortly after the Big Bang and this can then be applied to the Standard Model of Cosmology to infer the expansion rate today. This Standard Model is one of the best explanations we have for how the Universe began, what it is made of and what we see around us today.
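The inference from the early Universe to today's expansion rate runs through the Friedmann equation quoted earlier in this thread. A minimal sketch of that relation follows; the density parameters and H0 value are illustrative, roughly WMAP-era assumptions, not the exact inputs any particular survey used:

```python
import math

# Illustrative density parameters (roughly WMAP-era; assumptions for this sketch)
OMEGA_R = 8.4e-5                    # radiation
OMEGA_M = 0.27                      # matter
OMEGA_K = 0.0                       # curvature (flat universe)
OMEGA_L = 1.0 - OMEGA_M - OMEGA_R   # dark energy
H0 = 70.0                           # present-day Hubble constant, km/s/Mpc

def hubble_parameter(z):
    """H(z) in km/s/Mpc from the Friedmann equation."""
    return H0 * math.sqrt(OMEGA_R * (1 + z) ** 4
                          + OMEGA_M * (1 + z) ** 3
                          + OMEGA_K * (1 + z) ** 2
                          + OMEGA_L)

# At recombination (z ~ 1100) the radiation and matter terms dominate:
print(hubble_parameter(1100) / H0)   # on the order of 2e4 times today's value
```

With these inputs, H at the epoch of the CMB comes out roughly 22,000 times larger than the present value, consistent with the figure quoted above.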
Tiny disturbances in the early universe can be seen as fluctuations in the oldest light in the Universe – the cosmic microwave background (Credit: NASA/JPL/ESA-Planck)
But there is a problem. When astronomers try to measure the Hubble Constant by looking at how nearby galaxies are moving away from us, they get a different figure.
"If the [standard] model is correct, then you would imagine that the two values – what you measure today locally and the value that you infer from the early observations would agree," says Freedman. "And they don't."
When the European Space Agency (ESA)'s Planck satellite measured fluctuations in the CMB, first in 2014 and then again in 2018, the value that came out for the Hubble Constant was 67.4 km/s/Mpc (41.9 miles/s/Mpc). But this is around 9% less than the value astronomers like Freedman have measured when looking at nearby galaxies.
Further measurements of the CMB in 2020 using the Atacama Cosmology Telescope agreed with the data from Planck. "This helps to rule out that there was a systematic problem with Planck from a couple of sources," says Beaton. If the CMB measurements were correct, it left one of two possibilities: either the techniques using light from nearby galaxies were off, or the Standard Model of Cosmology needs to be changed.
The technique used by Freedman and her colleagues takes advantage of a specific type of star called a Cepheid variable. Discovered around 100 years ago by an astronomer called Henrietta Leavitt, these stars change their brightness, pulsing fainter and brighter over days or weeks. Leavitt discovered the brighter the star is, the longer it takes to brighten, then dim and then brighten again. Now, astronomers can tell exactly how bright a star really is by studying these pulses in brightness. By measuring how bright it appears to us on Earth, and knowing light dims as a function of distance, it provides a precise way of measuring the distance to stars. (Read more about how Henrietta Leavitt changed our view of the Universe.)
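The Cepheid method described above boils down to two steps: the Leavitt period-luminosity law gives the star's intrinsic brightness, and the distance modulus converts apparent versus intrinsic brightness into a distance. A sketch of both steps; the Leavitt-law coefficients and the example star are illustrative assumptions, not the calibration any particular team used:

```python
import math

def cepheid_absolute_magnitude(period_days):
    """Leavitt law: intrinsic V-band brightness from pulsation period.
    The coefficients are an illustrative calibration, not a definitive one."""
    return -2.43 * (math.log10(period_days) - 1.0) - 4.05

def distance_parsecs(apparent_mag, absolute_mag):
    """Invert the distance modulus m - M = 5 * log10(d / 10 pc)."""
    return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# Hypothetical Cepheid: 10-day period, observed at apparent magnitude 20.0
M = cepheid_absolute_magnitude(10.0)
d = distance_parsecs(20.0, M)
print(M, d)   # M near -4; d of a few hundred kiloparsecs
```

Note the key property Leavitt found: the period alone fixes the intrinsic brightness, so the only unknown left in the distance modulus is the distance itself.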
Hubble Time: Hubble Time is comparable to the current age of the universe.
Inflationary Cosmology: It states that the universe (the space) went through exponential expansion very early after the big bang.
Hubble Time - An estimate of the age of the universe obtained by taking the inverse of Hubble's constant. The estimate is only valid if there has been no acceleration or deceleration of the expansion of the universe.
Hubble Time is an estimate of the age of the universe; it is the inverse of the Hubble constant.
If we had a movie of the expanding universe and ran the film backward, what would we see? The galaxies, instead of moving apart, would move together in our movie, getting closer and closer all the time.
Hubble Time: The inverse of the Hubble constant and a crude measure of the universe's age.
The Hubble time, also called the Hubble age or the Hubble period, provides an estimate for the age of the universe by presuming that the universe has always expanded at the same rate as it is expanding today.
Numerically the inverse of the Hubble constant, it represents, in order of magnitude, the age of the universe. Hydrogen Alpha: Also called H-alpha. Light emitted at a wavelength of 6563 Å from an atomic transition in hydrogen.
In a decelerating universe, the Hubble time overestimates the age of the universe.
Because the Universe was once so hot and dense that even neutrinos interacted many times during an expansion time 1/H, there once was a thermal background of neutrinos in equilibrium with the thermal background of photons that is the CMBR.
On the horizontal axis is time, but the units aren't seconds or years or gigayears; instead, they are in "Hubble times". The Hubble time is simply the reciprocal of the current value of the Hubble parameter. From the relationship t0 = 1/H0, the age of the Universe (or the Hubble time, t0) can be estimated to be 14 billion years, consistent with the most accurate current value of 13.7 +/- 0.2 Gyr determined from the combined measurements of the CMB anisotropy and the accelerating expansion of the Universe.
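The t0 = 1/H0 estimate in these glossary entries is a one-line calculation once the units of H0 (km/s/Mpc) are converted to inverse time. A quick sketch, with a round illustrative H0:

```python
KM_PER_MPC = 3.0857e19      # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

def hubble_time_gyr(h0_km_s_mpc):
    """Age estimate t0 = 1/H0, converted to gigayears."""
    seconds = KM_PER_MPC / h0_km_s_mpc      # 1/H0 in seconds
    return seconds / SECONDS_PER_YEAR / 1e9

print(hubble_time_gyr(70.0))    # close to 14 Gyr
```

For H0 between 67 and 74 km/s/Mpc, this gives Hubble times between roughly 13 and 14.6 Gyr, which is why the estimate lands so close to the measured 13.7 Gyr age.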
Dwarf galaxies with gas fractions and star formation rates on the order of giant spiral galaxies (implying the gas will be consumed in less than a Hubble time), but low metallicity. It may be that galactic winds carry away heavy elements formed in the galaxy out of its shallow potential well. See e.g. arXiv:1103.1116.
When these two quantities, velocity and distance, were plotted against each other, the result was an almost perfectly linear fit - the slope of the line is the HUBBLE CONSTANT (H0), and its reciprocal, the Hubble time, is the age of the Universe since the Big Bang.
As mass has a relatively weak effect on the expansion rate, the age of such a universe is greater than two-thirds of the Hubble time.
Precise New Measurements From Hubble Confirm the Accelerating Expansion of the Universe. Still no Idea Why it’s Happening
In the 1920s, Edwin Hubble made the groundbreaking revelation that the Universe was in a state of expansion. Originally predicted as a consequence of Einstein's Theory of General Relativity, this confirmation led to what came to be known as Hubble's Constant. In the ensuing decades, and thanks to the deployment of next-generation telescopes – like the aptly-named Hubble Space Telescope (HST) – scientists have been forced to revise this law.
In short, in the past few decades, the ability to see farther into space (and deeper into time) has allowed astronomers to make more accurate measurements about how rapidly the early Universe expanded. And thanks to a new survey performed using Hubble, an international team of astronomers has been able to conduct the most precise measurements of the expansion rate of the Universe to date.
This survey was conducted by the Supernova H0 for the Equation of State (SH0ES) team, an international group of astronomers that has been on a quest to refine the accuracy of the Hubble Constant since 2005. The group is led by Adam Riess of the Space Telescope Science Institute (STScI) and Johns Hopkins University, and includes members from the American Museum of Natural History, the Niels Bohr Institute, the National Optical Astronomy Observatory, and many prestigious universities and research institutions.
The study which describes their findings recently appeared in The Astrophysical Journal under the title “Type Ia Supernova Distances at Redshift >1.5 from the Hubble Space Telescope Multi-cycle Treasury Programs: The Early Expansion Rate“. For the sake of their study, and consistent with their long term goals, the team sought to construct a new and more accurate “distance ladder”.
This tool is how astronomers have traditionally measured distances in the Universe, which consists of relying on distance markers like Cepheid variables – pulsating stars whose distances can be inferred by comparing their intrinsic brightness with their apparent brightness. These measurements are then compared to the way light from distant galaxies is redshifted to determine how fast the space between galaxies is expanding.
From this, the Hubble Constant is derived. To build their distance ladder, Riess and his team conducted parallax measurements using Hubble's Wide Field Camera 3 (WFC3) of eight newly-analyzed Cepheid variable stars in the Milky Way. These stars are about 10 times farther away than any studied previously – between 6,000 and 12,000 light-years from Earth – and pulsate at longer intervals.
To ensure accuracy that would account for the wobbles of these stars, the team also developed a new method where Hubble would measure a star’s position a thousand times a minute every six months for four years. The team then compared the brightness of these eight stars with more distant Cepheids to ensure that they could calculate the distances to other galaxies with more precision.
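The parallax step at the bottom of the ladder is simple geometry: a star's distance in parsecs is the reciprocal of its parallax angle in arcseconds. A sketch with a hypothetical Cepheid (the numbers are illustrative, not taken from the SH0ES data):

```python
def parallax_distance_pc(parallax_arcsec):
    """Trigonometric parallax: distance in parsecs is 1 / p(arcsec)."""
    return 1.0 / parallax_arcsec

# A hypothetical Milky Way Cepheid with a parallax of 0.0004 arcseconds --
# the kind of tiny displacement that has to be measured over and over.
d_pc = parallax_distance_pc(0.0004)
d_ly = d_pc * 3.2616            # light-years per parsec
print(d_pc, d_ly)               # ~2500 pc, ~8000 light-years
```

A shift of 0.0004 arcseconds lands the star around 8,000 light-years away, squarely in the 6,000-12,000 light-year range the team worked at, which illustrates why thousands of repeated position measurements were needed.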
Illustration showing three steps astronomers used to measure the universe’s expansion rate (Hubble constant) to an unprecedented accuracy, reducing the total uncertainty to 2.3 percent. Credits: NASA/ESA/A. Feild (STScI)/and A. Riess (STScI/JHU)
Using the new technique, Hubble was able to capture the change in position of these stars relative to others, which simplified things immensely. As Riess explained in a NASA press release:
“This method allows for repeated opportunities to measure the extremely tiny displacements due to parallax. You’re measuring the separation between two stars, not just in one place on the camera, but over and over thousands of times, reducing the errors in measurement.”
Compared to previous surveys, the team was able to extend the number of stars analyzed to distances up to 10 times farther. However, their results also contradicted those obtained by the European Space Agency’s (ESA) Planck satellite, which has been measuring the Cosmic Microwave Background (CMB) – the leftover radiation created by the Big Bang – since it was deployed in 2009.
By mapping the CMB, Planck has been able to trace the expansion of the cosmos during the early Universe – circa 378,000 years after the Big Bang. Planck's result predicted that the Hubble constant value should now be 67 kilometers per second per megaparsec (a megaparsec being 3.3 million light-years), and could be no higher than 69 kilometers per second per megaparsec.
The Big Bang timeline of the Universe. Cosmic neutrinos affect the CMB at the time it was emitted, and physics takes care of the rest of their evolution until today. Credit: NASA/JPL-Caltech/A. Kashlinsky (GSFC).
Based on their survey, Riess's team obtained a value of 73 kilometers per second per megaparsec, a discrepancy of 9%. Essentially, their results indicate that galaxies are moving at a faster rate than that implied by observations of the early Universe. Because the Hubble data was so precise, astronomers cannot dismiss the gap between the two results as errors in any single measurement or method. As Riess explained:
“The community is really grappling with understanding the meaning of this discrepancy… Both results have been tested multiple ways, so barring a series of unrelated mistakes, it is increasingly likely that this is not a bug but a feature of the universe.”
These latest results therefore suggest that some previously unknown force or some new physics might be at work in the Universe. In terms of explanations, Riess and his team have offered three possibilities, all of which have to do with the 95% of the Universe that we cannot see (i.e. dark matter and dark energy). In 2011, Riess and two other scientists were awarded the Nobel Prize in Physics for their 1998 discovery that the Universe was expanding at an accelerating rate.
Consistent with that, they suggest that Dark Energy could be pushing galaxies apart with increasing strength. Another possibility is that there is an undiscovered subatomic particle out there that is similar to a neutrino, but interacts with normal matter by gravity instead of subatomic forces. These “sterile neutrinos” would travel at close to the speed of light and could collectively be known as “dark radiation”.
This illustration shows the evolution of the Universe, from the Big Bang on the left, to modern times on the right. Credit: NASA
Any of these possibilities would mean that the contents of the early Universe were different, thus forcing a rethink of our cosmological models. At present, Riess and his colleagues don't have any answers, but plan to continue fine-tuning their measurements. So far, the SH0ES team has decreased the uncertainty of the Hubble Constant to 2.3%.
This is in keeping with one of the central goals of the Hubble Space Telescope, which was to help reduce the uncertainty value in Hubble’s Constant, for which estimates once varied by a factor of 2.
So while this discrepancy opens the door to new and challenging questions, it also reduces our uncertainty substantially when it comes to measuring the Universe. Ultimately, this will improve our understanding of how the Universe evolved after it was created in a fiery cataclysm 13.8 billion years ago.
Deconstruction of Big Bang model (III)
Radio telescopes can see through the dust and observe the rare, bright starburst galaxies, but until now have not been sensitive enough to detect the signals from distant Milky Way-like galaxies that are responsible for most of the star formation in the universe. These are distant galaxies like our own that have never been observed in radio light before.
The problem (for Big Bang theory) is that galaxies need at least three billion years to develop their flat shape - so how populous should such mature galaxies be in an allegedly one-billion-year-old Universe?
The "Dark Ages" of the Universe relate to a potential timing anomaly in recent cosmology. Between the decoupling of the CMB radiation from matter and the formation of the first stars there should have been a "Dark Age" during which there was only neutral hydrogen. Star formation then generated radiation at energies high enough to ionize hydrogen, and the ionized interstellar gas started to produce radiation.
The 21 cm line serves as a signature of neutral hydrogen. This line is redshifted, and from the lower bound on the redshift one can deduce the time when the "Dark Ages" ended. A recent study using the Murchison Widefield Array (MWA) radio telescope by Jonathan Pober and collaborators gave an unexpected result: only a new, lower upper bound for this redshift emerged, corresponding to a wavelength of about 2 meters. The conclusion of the experimenters is still optimistic: soon the upper bound for the redshift should be brought to light.
In the dense aether model the Universe is in a steady state; all indications of dark ages should thus remain unobservable.
Plastic microbeads dropped into a container of salt water topped with less dense fresh water are pulled down by the force of gravity and thrust upward by buoyancy. As they hang suspended, the interplay between buoyancy and diffusion -- acting to balance out the concentration gradient of salt -- creates flows around the microbeads, causing them to slowly move. Rather than moving randomly, however, they clump together, solving their own jigsaw-like puzzles. As the clusters grow, the fluid force increases.
Like so many discoveries, this one began accidentally. A graduate student intended to show a favorite parlor trick -- how spheres dumped into a tank of salt water will "bounce" on their way to the bottom, as long as the fluid is uniformly stratified by density. But the student in charge of the experiment made an error in setting up the density of the lower fluid. The spheres bounced and then hung there, submerged but not sinking to the bottom.
Original study. An interesting mechanism and study, but I seriously doubt that this mechanism could apply more widely across the thermohaline gradient of the sea, where the gradients of salts remain rather low and the turbulence large. BTW, because dark matter also behaves like a fluid to a certain extent, its gradients could promote planetogenesis from interstellar gas. See also:
Planet Formation? It’s a Drag The way worlds form from dust may also explain other phenomena throughout the universe—and right here on Earth
New study sheds new light on planet formation: planets might form much faster than previously thought or, alternatively, stars harboring planets could be far more numerous.
Both existing planet-formation and galaxy-formation models are currently based on the accretion paradigm, i.e. a top-to-bottom model (the planetesimal-accretion model in particular) - but there is rising evidence for a time-reversed, bottom-to-top scenario of gradual condensation of sparse dust clouds into gradually bigger particles, similar to the flocculation of sediments. BTW, a similar paradigm shift, based on horizontal gene transfer rather than top-to-bottom phylogenesis, is also lurking in the evolutionary sciences - and the more so, the more primitive the organisms get 1, 2, 3, 4.
Rather than moving randomly, however, they clump together, solving their own jigsaw-like puzzles. As the clusters grow, the fluid force increases.
My ideas here are condensing from a widespread information basis under the gradient of a gradually growing body of evidence, and they are gradually getting coherent, like pieces of a jigsaw puzzle, as well. I'm just retyping them here again and again, polishing their logical structure each time. So maybe the fluid-based mechanism above isn't so different from the way theories of protoscience gradually condense from widespread ideas and seemingly unrelated facts.
In these times of information explosion, where pieces of information are subtle but abundant, this bottom-up approach can become more effective than waiting for reliable evidence, as mainstream science is practising right now because of its occupation-driven attitude (which is in no hurry as long as the money keeps flowing). Note also that this approach favours elderly persons, who already have a wider life-experience basis, over youngsters, who are still forced to rely on established paradigms and thus paradoxically become more conservative than the elderly chaps.
Massive Gas Disk Raises Questions about Planet Formation Theory. The star, called 49 Ceti, is 40 million years old, and conventional theories of planet formation predict that the gas should have disappeared by that age. The enigmatically large amount of gas calls for a reconsideration of our current understanding of planet formation.
Mainstream astronomy adheres to a determinist "top-to-bottom" model of the formation of massive bodies, starting with the Universe (Lemaître's "primordial atom"), the formation of galaxies by accretion, and finally the formation of planets by gradual accretion of material from the protoplanetary disk into planetesimals. But there is a growing body of evidence for a time-reversed, bottom-to-top scenario, which rather resembles the gradual coalescence of sparse clouds into denser ones. This mechanism can more easily explain the Titius-Bode law, the tilt of planets, and other geometric aspects of both galaxies and planetary systems. See also:
Deconstruction of Big Bang model 1, 2, 3
No Dark Energy? No Chance, Cosmologists Contend. A recent study claimed to find no evidence of dark energy. Then a rebuttal appeared. Then a rebuttal of the rebuttal, but that was met by general dismissal. Résumé: cosmologists still think dark energy exists. It's worth noting that the "confirmation" of dark energy received a Nobel Prize relatively recently, in 2011. Its refutation would thus also imply one of the fastest-emerging Nobel Prize controversies. See also:
In the dense aether model the Universe is random and steady-state; the Hubble red shift is the result of the scattering of light on quantum fluctuations of the vacuum. This scattering is non-linear, though, because the scattered light is long-wavelength and thus even more prone to further scattering. This leads to an avalanche-like absorption of light at a sufficient distance from any observer of the Universe, which is currently known as the particle horizon of the Universe, the dual analogy of the event horizon of black holes. For this reason dark energy should be observable even in the dense aether model, because dark energy is currently interpreted as this accelerated scattering ("accelerated expansion of space-time").
For measuring the speed of the Universe's expansion, two methods are currently employed: measurements of the frequency of the microwave background of the Universe (CMBR) and the red shift observed with supernovae. These two values differ from each other because dark matter around all massive objects, including these supernovae, also participates in the scattering of light. In their light the Universe looks as if it is expanding faster than in the light of the microwave background. At distance the long-wavelength portion of light applies more, which renders dark matter more transparent and its effect less pronounced (dark matter is relatively "missing" in the distant "early" Universe), which makes the acceleration of the red shift measured by supernovae less prominent.
In the light of the CMBR the Universe appears to be expanding more slowly than in the light of supernovae, but its expansion accelerates faster, and vice versa. Both types of observation thus have their truth, and because they are both based on quantum fluctuations, they can also serve as an example of the multiple-histories interpretation of quantum mechanics, albeit a very subtle one.
New evidence shows that the key assumption made in the discovery of dark energy is in error. Last month a new analysis of the supernova data showed they can be explained without dark energy. However, that new analysis was swiftly criticized by another group. This criticism did not make much sense, because it picked on the use of the coordinate system, which was basically the whole point of the original analysis. There was another paper just a few days ago claiming that supernovae are actually not very good standard candles, and that their luminosity might depend on the average age of the star that goes supernova. In any case, the authors of the original paper then debunked the criticism. And that is still the status today:
The most direct and strongest evidence for the accelerating universe with dark energy is provided by the distance measurements using type Ia supernovae (SN Ia) for the galaxies at high redshift. This result is based on the assumption that the corrected luminosity of SN Ia through the empirical standardization would not evolve with redshift. New observations and analysis made by a team of astronomers at Yonsei University (Seoul, South Korea), together with their collaborators at Lyon University and KASI, show, however, that this key assumption is most likely in error. The team has performed very high-quality (signal-to-noise ratio ~175) spectroscopic observations to cover most of the reported nearby early-type host galaxies of SN Ia, from which they obtained the most direct and reliable measurements of population ages for these host galaxies.
They find a significant correlation between SN luminosity and stellar population age at a 99.5 percent confidence level. As such, this is the most direct and stringent test ever made for the luminosity evolution of SN Ia. Since SN progenitors in host galaxies are getting younger with redshift (look-back time), this result inevitably indicates a serious systematic bias with redshift in SN cosmology. Taken at face value, the luminosity evolution of SN is significant enough to question the very existence of dark energy. When the luminosity evolution of SN is properly taken into account, the team found that the evidence for the existence of dark energy simply goes away (see Figure 1).
Note that the dark energy observation received the Nobel Prize in 2011. Commenting on the result, Prof. Young-Wook Lee (Yonsei Univ., Seoul), who led the project, said,
"Quoting Carl Sagan, extraordinary claims require extraordinary evidence, but I am not sure we have such extraordinary evidence for dark energy. Our result illustrates that dark energy from SN cosmology, which led to the 2011 Nobel Prize in Physics, might be an artifact of a fragile and false assumption." See also:
Deconstruction of Big Bang model 1, 2, 3
In the dense aether model, dark energy observations are real and consistent with Friedmann's models based on general relativity (which is why they're uncritically pushed and awarded by the mainstream science agenda). But because the Universe is static, dark energy doesn't manifest through an accelerated red shift of massive bodies, only through an increased rate of CMBR scattering: light gets scattered to longer wavelengths, which are susceptible to even further scattering, until an avalanche-like breakdown occurs at the particle horizon of the Universe.
Being only an optical effect of the vacuum environment, massive bodies and their perceived expansion and location (as measured by their relative luminosity) shouldn't be affected by dark energy. Actually, in the static aether Universe model, more distant objects should get gradually brighter with distance, because their distant images also get blurred by light scattering. In this way (Tolman's surface-brightness test) the expanding-Universe model and the steady-state one can easily be distinguished and falsified against each other. After all these years, Edwin Hubble's doubt about the reality of the expansion of the Universe remains as valid as Sandage's certainty, expressed in a series of papers over the last decade.
As I have explained many times here 1, 2, 3, 4, 5, 6, 7, the Hubble constant discrepancy could be solved easily by considering light dispersion on dark matter widespread in cosmic space - but it would imply a return to the tired light hypothesis, which contemporary cosmology avoids like the devil avoids the cross. See also:
Deconstruction of Big Bang model 1, 2, 3
An equation describing a one-dimensional model for the freezing of lakes is shown to be formally analogous to the Friedmann equation of cosmology. The analogy is developed and used to speculate on the change between two hypothetical “spacetime phases” in the early universe.
The concepts of the false vacuum and its cosmological phase transition are remarkably "dense aetherish": inflation can really be interpreted, from the extrinsic perspective of hyperfast space-time expansion, as the fast freezing of a false vacuum, which would prominently slow down the spreading of energy/light across it, thus making it appear "expanded" to intrinsic observers of this transition.
Except that this perspective is actually stationary: if we observe fast expansion in distant areas of the universe, it just means we are living inside a stationary black hole, and the particle horizon forms its outer surface. The false vacuum would then simply form the static exterior of our local Universe. Scientists still have to understand the geometric perspective of their formal models, in which they alternate intrinsic and extrinsic perspectives arbitrarily.
From implicate topology it actually follows that such a logical confusion of observational perspectives is necessary for a theory to have quantitative predictive power: the formal and non-formal logics are thus in a 1-1/N entropic duality (the formalism of math is based on the congruent validity of a multitude of logical postulates). One cannot remain exact and logically consistent at the same moment once formal derivations depend on a finite number of axioms (Peano arithmetic, etc.).
New Wrinkle Added to Cosmology’s Hubble Crisis When cosmologists extrapolate data from the early universe to predict what the cosmos should be like now, they predict a relatively slow cosmic expansion rate. When they directly measure the speed at which astronomical objects are hurtling away from us, they find that space is expanding about 9% faster than the prediction. Two independent measurements of the universe’s expansion give incompatible answers.
Now a third method, advanced by an astronomy pioneer, appears to bridge the divide. A new line of evidence, first announced last summer, suggests that the cosmic expansion rate may fall much closer to the rate predicted by early-universe measurements and the standard theory of cosmology. Using these “tip of the red giant branch” (TRGB) stars, Wendy Freedman and her team arrived at a significantly lower Hubble rate than other observers.
Although Freedman is known for her careful and innovative work, some researchers pushed back on her methods after she introduced the result last summer. They argued that her team used outdated data for part of their analysis and an unfamiliar calibration technique. The critics thought that if Freedman’s team used newer data, their Hubble value would increase and come in line with other astronomical probes.
It did not. In a paper posted online on February 5 and accepted for publication in The Astrophysical Journal, Freedman’s team described their analysis of TRGB stars in detail, summarized their consistency checks, and responded to critiques. The new paper reports an even slower cosmic expansion rate than last summer’s result, a tad closer to the early-universe rate. The more up-to-date data that critics thought would increase Freedman’s Hubble value had the opposite effect. “It made it go down,” she said.
Tip of the red-giant branch (TRGB) is a primary distance indicator used in astronomy. It uses the luminosity of the brightest red-giant-branch stars in a galaxy as a standard candle to gauge the distance to that galaxy. TRGB stars on the Hertzsprung–Russell diagram are stars that have just run out of hydrogen and started to burn helium. For a star with less than 1.8 times the mass of the Sun, this occurs in a process called the helium flash, which establishes a new thermal equilibrium. The result is a sharp discontinuity in the evolutionary track of the star on the HR diagram, called the tip of the red-giant branch. All stars that reach this point have a nearly identical helium core mass of almost 0.5 M☉, and very similar stellar luminosity and temperature, especially in the infrared spectrum, which is insensitive to heavier elements.
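Because the tip sits at a nearly fixed absolute magnitude, using it as a standard candle is again just the distance modulus. A sketch, with M_I ≈ -4.0 as a rough illustrative calibration (not the value Freedman's team actually used) and a hypothetical galaxy:

```python
# TRGB as a standard candle: the tip of the red-giant branch sits at a
# nearly fixed I-band absolute magnitude (M_I ~ -4.0 here; a rough value,
# used only for illustration).
M_I_TRGB = -4.0

def trgb_distance_mpc(apparent_tip_mag):
    """Distance via the distance modulus m - M = 5 * log10(d / 10 pc)."""
    d_pc = 10.0 ** ((apparent_tip_mag - M_I_TRGB + 5.0) / 5.0)
    return d_pc / 1.0e6

print(trgb_distance_mpc(26.0))   # a galaxy whose tip appears at m_I = 26 sits near 10 Mpc
```

The practical step not shown here is locating the tip itself: one builds the luminosity function of a galaxy's red giants and finds the sharp discontinuity in star counts, whose apparent magnitude then feeds this formula.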
TRGB stars are found most frequently in large, diffuse, and unusually luminous globular clusters, which also exhibit very low dark matter content. And this, IMO, is just the explanation of their low-Hubble-constant mystery. Dark matter actually suppresses the population of TRGB stars (within galactic bulges, for example) quite effectively, as it slows down the burning of hydrogen to the level that the helium flash never occurs, because the stars radiate away most of their matter well before it.
Mainstream cosmology ignored the tired light theory for ideological reasons for a long time, and now it faces the uncomfortable fact that at least a substantial portion of the red shift is caused by interstellar dark matter. It will be interesting to watch how its propaganda copes with this situation from now on. See also: