# Relationship between Dark Energy and Dark Matter

I have encountered two seemingly contradictory theories on this (I don't know which one of them is correct, or where I am wrong):

1. From Wikipedia:

"Measurements of cosmic microwave background (CMB) anisotropies indicate that the universe is close to flat. For the shape of the universe to be flat, the mass/energy density of the universe must be equal to the critical density. The total amount of matter in the universe (including baryons and dark matter), as measured from the CMB spectrum, accounts for only about 30% of the critical density. This implies the existence of an additional form of energy to account for the remaining 70%."

The above statement, if correct, I suppose indicates that there is a critical density which mass/energy has to account for. This means that if there were no Dark Matter, our calculations/assumptions regarding the amount of Dark Energy would have been much higher (because then Dark Energy would have to account for a greater share of the critical density).

2. The universe is accelerating in its expansion. Dark Energy is supposed to cause this acceleration, while ordinary matter plus Dark Matter is supposed to reduce the acceleration through gravity. So, according to this theory, if there were no Dark Matter, then less Dark Energy would be required to account for the observed acceleration, so our calculations/assumptions regarding the amount of Dark Energy would have been lower.

First of all, if there were no dark matter (DM), you wouldn't ask this question, since structures - including galaxies, stars, planets, and you - wouldn't have had the time to form in the early Universe before it had expanded too much for gravitational collapse to occur. But let's use magic and make the galaxies anyway:

1. The CMB (specifically the power spectrum of the CMB) shows that the total density $\rho_\mathrm{tot}$ of mass/energy in the Universe is extremely close to the critical density $\rho_\mathrm{c}$. That is, $$\frac{\rho_\mathrm{tot}}{\rho_\mathrm{c}} \equiv \Omega_\mathrm{tot} \simeq 1.$$ The "$\Omega$" is a common way to express densities, as a ratio to the critical density. The CMB also gives some constraints on the total amount of mass (DM + "normal" matter, i.e. baryons), but it is better at constraining the ratio of baryons to DM. Together with the matter density $\Omega_\mathrm{M} = \Omega_\mathrm{b} + \Omega_\mathrm{DM}$ obtained from observations of supernovae and, in particular, baryonic acoustic oscillations, we then obtain the amount of DM. This fraction is roughly $\Omega_\mathrm{DM} = 0.26$ (Planck Collaboration et al. 2016). If there were no DM, then $\Omega_\mathrm{tot}$ just wouldn't be $1$, but rather $\Omega_\mathrm{tot} - \Omega_\mathrm{DM} \simeq 0.74$.

2. Similarly, if there were no DM, there would be less matter to counteract the expansion of the Universe. That means that we wouldn't observe the same relation between the brightness and the distance of supernovae, from which we infer the presence of dark energy (DE). Instead, the brightnesses would be somewhat lower, because the supernovae would be farther away due to the faster expansion.

In other words, you are right that if there were no DM, and if we observed the same thing as we do, then there would be a contradiction. But if there were no DM, we wouldn't see what we see. Therein lies the resolution.
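The density bookkeeping in point 1 can be sketched numerically. Only $\Omega_\mathrm{DM} \simeq 0.26$ is quoted above; the baryon and dark-energy fractions below are rounded, illustrative values of roughly Planck-era size:

```python
# Approximate density fractions as ratios to the critical density.
# Only omega_dm is quoted in the text; the others are illustrative.
omega_b = 0.05    # baryons ("normal" matter)
omega_dm = 0.26   # dark matter
omega_de = 0.69   # dark energy

omega_tot = omega_b + omega_dm + omega_de
print(omega_tot)              # ~1.0, consistent with a flat universe
print(omega_tot - omega_dm)   # ~0.74, the total if DM simply vanished
```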

One explanation I found helpful went as follows: consider a volume of space which expands with the universe by a factor of $a$. The radiation density in it varies by a factor of $a^{-4}$: $a^{-3}$ because the photons get more spread out, and a further $a^{-1}$ due to the cosmological redshift (each photon becomes less energetic). The matter density (dark and baryonic both) changes by $a^{-3}$ (particles become more spread out). When we plug this into models for the expansion of the universe, we find that a third term is needed whose energy density is constant. With enough data we can fit the curve

$$\rho(a) = p\,a^{-4} + q\,a^{-3} + r$$

to the observed density and find $p$, $q$ and $r$. We can also try adding a few terms for other powers of $a$, but they don't seem to help. There's no simple tradeoff between them, although there may be more uncertainty about some combinations than others.

The division of the matter term $q$ between regular and dark matter can be determined from other measurements.
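Because the density curve is linear in the coefficients $p$, $q$ and $r$, fitting it reduces to ordinary least squares. A minimal sketch using made-up (not measured) coefficient values:

```python
import numpy as np

# Hypothetical coefficients for radiation (p), matter (q) and a constant
# dark-energy term (r); the values are made up for illustration.
p_true, q_true, r_true = 1e-4, 0.3, 0.7

# Sample the total density at several scale factors a.
a = np.linspace(0.1, 1.0, 50)
rho = p_true * a**-4 + q_true * a**-3 + r_true

# rho(a) = p*a^-4 + q*a^-3 + r is linear in (p, q, r), so a
# least-squares fit against the three basis curves recovers them.
design = np.column_stack([a**-4, a**-3, np.ones_like(a)])
(p_fit, q_fit, r_fit), *_ = np.linalg.lstsq(design, rho, rcond=None)

print(p_fit, q_fit, r_fit)  # recovers ~(1e-4, 0.3, 0.7)
```

With real data the measurements are noisy and the basis curves are partly degenerate over the observable range, which is why some combinations of coefficients are better constrained than others.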

## A Connection Between Dark Energy and Dark Matter?

In the last few decades, scientists have discovered that there is a lot more to the universe than meets the eye: the cosmos appears to be filled with not just one, but two invisible constituents, dark matter and dark energy, whose existence has been proposed based solely on their gravitational effects on ordinary matter and energy.

Now, theoretical physicist Robert J. Scherrer has come up with a model that could cut the mystery in half by explaining dark matter and dark energy as two aspects of a single unknown force. His model is described in a paper titled “Purely Kinetic k Essence as Unified Dark Matter” published online by Physical Review Letters on June 30 and available online at http://arxiv.org/abs/astro-ph/0402316.

“One way to think of this is that the universe is filled with an invisible fluid that exerts pressure on ordinary matter and changes the way that the universe expands,” says Scherrer, a professor of physics at Vanderbilt University.

According to Scherrer, his model is extremely simple and avoids the major problems that have characterized previous efforts to unify dark matter and dark energy.

In the 1970s, astrophysicists postulated the existence of invisible particles called dark matter in order to explain the motion of galaxies. Based on these observations, they estimate that there must be about 10 times as much dark matter in the universe as ordinary matter. One possible explanation for dark matter is that it is made up of a new type of particle (dubbed Weakly Interacting Massive Particles, or WIMPs) that doesn’t emit light and barely interacts with ordinary matter. A number of experiments are searching for evidence of these particles.

As if that weren’t enough, in the 1990s along came dark energy, which produces a repulsive force that appears to be ripping the universe apart. Scientists invoked dark energy to explain the surprise discovery that the rate at which the universe is expanding is not slowing, as most cosmologists had thought, but is accelerating instead. According to the latest estimates, dark energy makes up 75 percent of the universe and dark matter accounts for another 23 percent, leaving ordinary matter and energy with a distinctly minority role of only 2 percent.

Scherrer’s unifying idea is an exotic form of energy with well-defined but complicated properties called a scalar field. In this context, a field is a physical quantity possessing energy and pressure that is spread throughout space. Cosmologists first invoked scalar fields to explain cosmic inflation, a period shortly after the Big Bang when the universe appears to have undergone an episode of hyper-expansion, inflating billions upon billions of times in less than a second.

Specifically, Scherrer uses a second-generation scalar field, known as a k-essence, in his model. K-essence fields have been advanced by Paul Steinhardt at Princeton University and others as an explanation for dark energy, but Scherrer is the first to point out that one simple type of k-essence field can also produce the effects attributed to dark matter.

Scientists differentiate between dark matter and dark energy because they seem to behave differently. Dark matter appears to have mass and to form giant clumps. In fact, cosmologists calculate that the gravitational attraction of these clumps played a key role in causing ordinary matter to form galaxies. Dark energy, by contrast, appears to be without mass and spreads uniformly throughout space where it acts as a kind of anti-gravity, a repulsive force that is pushing the universe apart.

K-essence fields can change their behavior over time. When investigating a very simple type of k-essence field, one in which the potential energy is constant, Scherrer discovered that as the field evolves, it passes through a phase where it can clump and mimic the effect of invisible particles, followed by a phase in which it spreads uniformly throughout space and takes on the characteristics of dark energy.

“The model naturally evolves into a state where it looks like dark matter for a while and then it looks like dark energy,” Scherrer says. “When I realized this, I thought, ‘This is compelling, let’s see what we can do with it.'”

When he examined the model in more detail, Scherrer found that it avoids many of the problems that have plagued previous theories that attempt to unify dark matter and dark energy.

The earliest model for dark energy was made by modifying the general theory of relativity to include a term called the cosmological constant. This was a term that Einstein originally included to balance the force of gravity in order to form a static universe. But he cheerfully dropped the constant when astronomical observations of the day found it was not needed. Recent models reintroducing the cosmological constant do a good job of reproducing the effects of dark energy but do not explain dark matter.

One attempt to unify dark matter and dark energy, called the Chaplygin gas model, is based on work by a Russian physicist in the 1930s. It produces an initial dark matter-like stage followed by a dark energy-like evolution, but it has trouble explaining the process of galaxy formation.

Scherrer’s formulation has some similarities to a unified theory proposed earlier this year by Nima Arkani-Hamed at Harvard University and his colleagues, who attempt to explain dark matter and dark energy as arising from the behavior of an invisible and omnipresent fluid that they call a “ghost condensate.”

Although Scherrer’s model has a number of positive features, it also has some drawbacks. For one thing, it requires some extreme “fine-tuning” to work. The physicist also cautions that more study will be required to determine whether the model’s behavior is consistent with other observations. In addition, it cannot answer the coincidence problem: why we live at the only time in the history of the universe when the calculated densities of dark matter and dark energy are comparable. Scientists are suspicious of this because it suggests that there is something special about the present era.

## The Big Crunch?

If the average density of matter in the Universe is greater than a certain value, known as the critical density, then its expansion will eventually stop altogether and it will start contracting, eventually ending in a big crunch in the far distant future.

At the Big Crunch the Universe ends in a singularity where space and time come to an end – the reverse of the Big Bang.

Before the late 1990s, many cosmologists believed the Universe would end in a Big Crunch.

### Measuring the rate of expansion of the Universe

When we look at the spectrum of any star, we see a number of dark or bright lines. The dark lines are known as absorption lines and the bright lines emission lines. Each line is due to an energy transition in a particular type of atom or molecule and always occurs at the same wavelength. For example, the hydrogen alpha spectral line occurs at a wavelength of 656.3 nanometres.

The hydrogen alpha line occurs when an electron moves between energy levels 2 and 3 in a hydrogen atom: absorption when it jumps from level 2 to level 3, emission when it falls from level 3 to level 2.

However, if a light source is moving away from the observer then all its spectral lines are shifted towards longer wavelengths. This is known as a red shift. If the light source is moving towards the observer, its spectral lines are shifted towards shorter wavelengths. This is known as a blue shift.

The Doppler shift is the fractional change in wavelength due to the motion of the light source and is often given the symbol z, so that z = Δλ/λ, where Δλ is the change in wavelength and λ is the original (rest) wavelength.

• If a light source is moving away from us, so a spectral line at a wavelength of 500 nanometres is red shifted to a wavelength of 505 nanometres, then the change in wavelength is +5 nanometres and the Doppler shift is z = 5/500 = +0.01.
• If an object is moving towards us, so a spectral line at a wavelength of 600 nanometres is blue shifted to a wavelength of 597 nanometres, then the change in wavelength is -3 nanometres and the Doppler shift is z = -3/600 = -0.005.

If the light source is moving at a small fraction of the speed of light, then the velocity (v) with which it is moving away from or towards us is given by:

v = zc

where c is the speed of light.
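Both worked examples can be reproduced in a few lines; note that v = zc holds only for speeds much smaller than the speed of light:

```python
# Doppler shift z = (observed - rest) / rest wavelength; v = z * c
# is valid only for speeds much smaller than the speed of light.
C = 299_792.458  # speed of light in km/s

def doppler_shift(rest_nm, observed_nm):
    return (observed_nm - rest_nm) / rest_nm

z_away = doppler_shift(500.0, 505.0)     # red shift:  +0.01
z_toward = doppler_shift(600.0, 597.0)   # blue shift: -0.005

print(z_away, z_away * C)      # receding at roughly 3000 km/s
print(z_toward, z_toward * C)  # approaching at roughly 1500 km/s
```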

If we measure the velocity that galaxies are moving with respect to us, calculated from the Doppler shift of their spectral lines, and plot it against their distance, then we get the relationship shown below.

Note: Nearby galaxies in the Local Group which are gravitationally bound to the Milky Way have been excluded from the diagram.

As you can see, all the galaxies are moving away from us, showing the Universe is expanding, and there is a clear relationship between the recessional velocity and the distance of a galaxy. This relationship, called Hubble’s Law after the American astronomer Edwin Hubble (1889-1953) who discovered it in 1929, is v = H0D, where:

• v is the velocity at which an object is moving away from us,
• D is the object’s distance, and
• H0 is a constant known as the Hubble constant. If v is measured in km/s and D is in megaparsecs (1 Mpc = 3.26 million light years), then H0 is approximately 70 km/s per Mpc. The Hubble constant measures how fast the Universe is expanding. (It is more accurately called the Hubble parameter, because it can vary over time.)
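Hubble’s Law is straightforward to apply. A small sketch using the approximate value of 70 km/s per Mpc quoted above:

```python
# Hubble's Law: recessional velocity v = H0 * D.
H0 = 70.0  # Hubble constant in km/s per Mpc (approximate)

def recession_velocity(distance_mpc):
    """Velocity (km/s) of a galaxy at the given distance (Mpc)."""
    return H0 * distance_mpc

print(recession_velocity(100.0))  # a galaxy 100 Mpc away: 7000 km/s
```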

## Dark Matter and Dark Energy: The Mystery Explained (Infographic)

Most of the universe is made up of dark energy, a mysterious force that drives the accelerating expansion of the universe. The next largest ingredient is dark matter, which only interacts with the rest of the universe through its gravity. Normal matter, including all the visible stars, planets and galaxies, makes up less than 5 percent of the total mass of the universe.

Astronomers cannot see dark matter directly, but can study its effects. They can see light bent from the gravity of invisible objects (called gravitational lensing). They can also measure that stars are orbiting around in their galaxies faster than they should be.

This can all be accounted for if there were a large amount of invisible matter tied up in each galaxy, contributing to its overall mass and rotation rate.

Astronomers know more about what dark matter is not than what it is.

Dark matter is dark: It emits no light and cannot be seen directly, so it cannot be stars or planets.

Dark matter is not clouds of normal matter: Normal matter particles are called baryons. If dark matter were composed of baryons it would be detectable through reflected light.

Dark matter is not antimatter: Antimatter annihilates matter on contact, producing gamma rays. Astronomers do not detect them.

Dark matter is not black holes: Black holes are gravity lenses that bend light. Astronomers do not see enough lensing events to account for the amount of dark matter that must exist.

Structure in the universe formed on the smallest scales first. It is believed that dark matter condensed first to form a “scaffolding,” with normal matter in the form of galaxies and clusters following the dark matter concentrations.

Scientists are using a variety of techniques across the disciplines of astronomy and physics to hunt for dark matter.

## The Fly in the Ointment

Dark energy quickly became widely accepted and adapted to the big bang model. But now a new study has called into question the reality of dark energy. These researchers found evidence that other factors could cause type Ia supernovae to appear fainter than expected. They argue that the peak brightness of type Ia supernovae depended upon properties of the galaxies that the supernovae are in. Factors included the morphology (shape), mass, and inferred star formation rate of the host galaxies. The paper suggests that this implies a link between type Ia supernovae brightness and stellar population.

Astronomers recognize two stellar populations, population I and population II, with gradations within and between the two bins. Population II stars are believed to be older, and population I stars are thought to be younger. If astronomical distances represent look-back time, then galaxies that are billions of light-years away must be younger than nearby galaxies. Hence, when we sample distant and nearby galaxies, we are seeing two different stellar populations. The recent study suggests that very distant type Ia supernovae appear fainter than expected because they are intrinsically fainter, not because the expansion of the universe is accelerating.

There is precedent for this sort of thing. For a given temperature, Population II main sequence stars are fainter than population I main sequence stars. This is explained by the higher metal content (which to astronomers means any elements heavier than hydrogen or helium) of population I stars providing more opacity in stellar envelopes.

From the beginning of Hubble’s work nearly a century ago, Cepheid variables have played a key role in establishing the Hubble constant, the measure of how rapidly the universe is expanding. In the 1950s, astronomers came to realize that there were two types of Cepheid variables related to stellar population that had different brightnesses. This recognition played a major role in greatly reducing the Hubble constant in the 1950s. Therefore, it shouldn’t be a surprise that the peak brightness of type Ia supernovae is related to stellar population.

## Dark energy: new experiment may solve one of the universe's greatest mysteries

Star trails take shape around the Mayall Telescope dome in Arizona. Credit: P. Marenfeld and NOAO/AURA/NSF

As an astronomer, there is no better feeling than achieving "first light" with a new instrument or telescope. It is the culmination of years of preparations and construction of new hardware, which for the first time collects light particles from an astronomical object. This is usually followed by a sigh of relief and then the excitement of all the new science that is now possible.

On October 22, the Dark Energy Spectroscopic Instrument (DESI) on the Mayall Telescope in Arizona, US, achieved first light. This is a huge leap in our ability to measure galaxy distances, enabling a new era of mapping the structures in the universe. As its name indicates, it may also be key to solving one of the biggest questions in physics: what is the mysterious force dubbed "dark energy" that makes up about 70 percent of the universe?

The cosmos is clumpy. Galaxies live together in groups of a few to tens of galaxies. There are also clusters of a few hundreds to thousands of galaxies and superclusters that contain many such clusters.

This hierarchy of the universe has been known from the first maps of the universe, which looked like a "stickman" in graphs by the pioneering Centre for Astrophysics (CfA) Redshift Survey. These striking images were the first glimpse of large-scale structures in the universe, some spanning hundreds of millions of light years.

The CfA survey was laboriously constructed one galaxy at a time. This involved measuring the spectrum of the galaxy light—a splitting of the light by wavelength, or colour—and identifying the fingerprints of certain chemical elements (mostly hydrogen, nitrogen and oxygen).

These chemical signatures are systematically shifted to longer redder wavelengths due to the expansion of the universe. This "red shift" was first detected by the astronomer Vesto Slipher and gave rise to the now famous Hubble's Law—the observation that more distant galaxies appear to be moving away at a faster rate. This means that galaxies that are close by appear to be moving away relatively slowly by comparison—they are less redshifted than galaxies far away. Therefore, measuring the redshift of a galaxy is a way to measure its distance.

SDSS map. Each dot is a galaxy. Credit: M. Blanton and SDSS, CC BY-SA

Crucially, the exact relationship between redshift and distance depends on the expansion history of the Universe which can be calculated theoretically using our theory of gravity and our assumptions of the matter and energy density of the universe.

All these assumptions were ultimately tested at the turn of the century with the combination of new observations of the universe, including new 3-D maps from larger redshift surveys. In particular, the Sloan Digital Sky Survey (SDSS) was the first dedicated redshift survey telescope to measure over a million galaxy redshifts, mapping the large scale structure in the universe to unprecedented detail.

The SDSS maps included hundreds of superclusters and filaments and helped make an unexpected discovery: dark energy. They showed that the matter density of the universe was much less than expected from the Cosmic Microwave Background, which is the light left over from the Big Bang. That meant there must be an unknown substance, dubbed dark energy, driving an accelerated expansion of a Universe that becomes increasingly devoid of matter.

The combination of all these observations heralded a new era of cosmological understanding with a universe consisting of 30 percent matter and 70 percent dark energy. But despite the fact that most physicists have now accepted that there is such a thing as dark energy, we still do not know its exact form.

There are several possibilities though. Many researchers believe that the energy of the vacuum simply has some particular value, dubbed a "cosmological constant". Other options include the possibility that Einstein's hugely successful theory of gravity is incomplete when applied on the huge scale of the entire universe.

A team at a vendor in Santa Rosa, Calif., poses behind a DESI lens. Credit: VIAVI Solutions

New instruments like DESI will help take the next step in resolving the mystery. It will measure tens of millions of galaxy redshifts, spanning a huge volume of the universe up to ten billion light years from Earth. Such an amazing, detailed map should be able to answer a few key questions about dark energy and the creation of the large scale structures in the universe.

For example, it should be able to tell us if dark energy is just a cosmological constant. To do this it will measure the ratio of pressure that dark energy puts on the universe to the energy per unit volume. If dark energy is a cosmological constant, this ratio should be constant in both cosmic time and location. For other explanations, however, this ratio would vary. Any indication that it is not a constant would be revolutionary and spark intense theoretical work.
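The pressure-to-density ratio described here is the dark-energy equation-of-state parameter, usually written w. In the standard parametrisation the dark-energy density scales with the scale factor a as a^(-3(1+w)), so w = -1 (a cosmological constant) means a constant density, while any other w makes the density evolve. A sketch:

```python
# Dark-energy density as a function of scale factor a, for a constant
# equation-of-state parameter w: rho(a) = rho0 * a**(-3 * (1 + w)).
def de_density(a, w, rho0=1.0):
    return rho0 * a ** (-3.0 * (1.0 + w))

# w = -1 (cosmological constant): density does not change with expansion.
print(de_density(0.5, -1.0))  # 1.0

# w = -0.9: density was higher in the past (a < 1), so it evolves.
print(de_density(0.5, -0.9))  # > 1.0
```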

DESI should also be able to constrain, and even kill, many theories of modified gravity, possibly providing an emphatic confirmation of Einstein's Theory of General Relativity on the largest scales. Or the opposite—and again that would spark a revolution in theoretical physics.

Another important theory that will be tested with DESI is Inflation, which predicts that tiny random quantum fluctuations of energy density in the primordial universe were exponentially expanded during a short period of intense growth to become the seeds of the large scale structures we see today.

DESI is only one of several next generation dark energy missions and experiments coming in the next decade, so there's certainly reason to be optimistic that we could soon solve the mystery of dark energy. New satellite missions like Euclid, and massive ground based observatories like the Large Synoptic Survey Telescope, will also offer insights.

There will also be other redshift instruments like DESI including 4MOST at the European Southern Observatory. Together, these will provide hundreds of millions of redshifts across the whole sky leading to an unimaginable map of our cosmos.

It seems a long time ago now when I wrote my Ph.D. thesis based on just 700 galaxy redshifts. It really goes to show it's an exciting time to be an astronomer.

## Further mathematical detail

### The deceleration parameter

As discussed previously, the distance d(t) at a time t of an object moving away from us due to the expansion of the Universe is given by:

d(t) = d0 a(t)

where d0 is the distance of the object at the current age of the Universe (t0) and a(t) is the cosmic scale factor.

The deceleration parameter is a dimensionless number and is defined as:

q(t) = -a''(t) a(t) / a'(t)^2     (equation 1)

where a'(t) is the first derivative of a(t) and a''(t) is the second derivative. The minus sign means that if the second derivative is negative, i.e. the rate of increase of a(t) is slowing down, then q(t) will be positive.

Over the last sixty years or so, most models of the Universe have had a deceleration parameter between -1 (which corresponds to a rapidly increasing exponential acceleration) and +3 (a rapid deceleration). A deceleration parameter lower than -1 causes some interesting challenges, because it would mean a faster-than-exponential expansion.

The Hubble parameter H(t) is equal to the recessional velocity divided by the distance. If we have an object a distance d(t) = d0 a(t) away, then the recessional velocity v(t) is given simply by differentiating, which gives

v(t) = d'(t) = d0 a'(t)

Therefore, the Hubble parameter is given by

H(t) = v(t) / d(t) = a'(t) / a(t)     (equation 2)

If we differentiate the above expression, using the product rule with u = a'(t) and v = 1 / a(t), then we get

H'(t) = a''(t)/a(t) - (a'(t)/a(t))^2     (equation 3)

Multiplying both sides of equation 3 by 1/H(t)^2 and using the definitions of q(t) and H(t) given in equations 1 and 2 gives

H'(t)/H(t)^2 = -(1 + q(t))

### A simple example

If we have a Universe with an accelerating expansion in which the scale factor increases as the time squared:

a(t) = (t/t0)^2

(in reality the scale factor will be a more complex function of time than this), then from equation 1 the deceleration parameter q(t) is:

q(t) = -a''(t) a(t) / a'(t)^2 = -(2/t0^2) (t/t0)^2 / (2t/t0^2)^2

which simplifies to q(t) = -1/2. The value is negative, indicating an accelerating expansion.

The Hubble parameter H(t) from equation 2 is:

H(t) = a'(t) / a(t) = (2t/t0^2) / (t/t0)^2

which simplifies to H(t) = 2/t. As the value of the Hubble parameter is inversely dependent on t, its value decreases with time.
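The two results for this example can be checked numerically from the analytic derivatives of a(t) = (t/t0)^2; a sketch with t0 set to 1 for convenience:

```python
# Verify q(t) = -1/2 and H(t) = 2/t for the scale factor a(t) = (t/t0)**2.
T0 = 1.0  # current age of the Universe, set to 1 for convenience

def a(t):      return (t / T0) ** 2
def a_dot(t):  return 2.0 * t / T0 ** 2   # first derivative a'(t)
def a_ddot(t): return 2.0 / T0 ** 2       # second derivative a''(t)

def q(t):  # deceleration parameter (equation 1)
    return -a_ddot(t) * a(t) / a_dot(t) ** 2

def H(t):  # Hubble parameter (equation 2)
    return a_dot(t) / a(t)

print(q(3.0))  # -0.5, independent of t
print(H(4.0))  # 0.5, equal to 2/t at t = 4
```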

## I don't like Dark Matter or Dark Energy

In the article "MOND: time for a change of mind?" by Mordehai Milgrom, look at Fig. 3, which gives three rotation curves for NGC 1560: one for Newtonian dynamics, one for MOND, and one for Newtonian dynamics plus cold dark matter. These curves are compared to actual observations.

### #52 astrotrf

So my question: is it possible that the cumulative gravitational effects of this complicated situation may account for the flattening of the rotational speed in the outer sections of a galaxy in a way that either has not yet been addressed or understood?

The short answer to your question is "no" if you use strictly Newtonian gravity. You need either more matter or a modified theory of gravity to explain the rotational speed curve of galaxies.

It's worth noting, again, that the problem is larger than this. You also need more matter at intergalactic distance scales to account for clusters of galaxies, and you need more matter at cosmological distance scales to account for the rapid coalescence of galaxies after the Big Bang. As the Milgrom article Joel referred to points out, MOND addresses some of this, but not all -- even MOND requires some additional matter at intergalactic scale, and cannot (yet) account for the cosmological scale.

As rosy a picture as Milgrom paints for MOND, you're left to decide between MOND plus some dark matter anyway plus something else at cosmological scale, or a pure dark matter solution at all scales. To be sure, MOND reduces the amount of dark matter necessary, and speculates that it may simply be otherwise-invisible normal matter; this is arguably more satisfying than an exotic form of matter nobody has yet seen (but stay tuned). (One could wish that Milgrom had chosen to address the Bullet Cluster in this article.)

### #53 Joel F.

I am on my way out so I cannot check the exact reference for you.

You may be interested in the article "Can MOND take a bullet" by Gary Angus.

### #54 astrotrf

You may be interested in the article "Can MOND take a bullet" by Gary Angus.

Thanks, Joel; that paper would appear to be here.

And here Milgrom states that, with MOND, you only need about as much dark matter as normal matter, instead of the five times as much required by pure dark matter (and up to 10x as much in cluster concentrations).

So the choice boils down to "Newton and a lot of dark matter" or "MOND and a little dark matter".

### #55 deSitter

So my question: is it possible that the cumulative gravitational effects of this complicated situation may account for the flattening of the rotational speed in the outer sections of a galaxy in a way that either has not yet been addressed or understood?

And I'll again point out the reality, reported by yours truly who can actually solve systems of equations and understand their meaning, after many years of study extending over decades - Cooperstock solved this problem and no new gravity theory is needed, only to use the existing one correctly. You CANNOT approximate away the non-linearity of general relativity. To do so throws out the baby with the bathwater.

You are entitled to your opinion and you are entitled to report on the work of others, but you are not entitled to speak authoritatively unless you have done the actual work of learning and using general relativity.

And I'll again point out the reality, reported by yours truly who can actually solve systems of equations and understand their meaning, after many years of study extending over decades - Cooperstock solved this problem and no new gravity theory is needed, only to use the existing one correctly. You CANNOT approximate away the non-linearity of general relativity. To do so throws out the baby with the bathwater.

How would Cooperstock's work apply to lensing? Would his method of calculation change the mass measurements by lensing effects? How about the Bullet Cluster, where the lensing is indicating mass moving ahead of the visible galaxies?

You are entitled to your opinion and you are entitled to report on the work of others, but you are not entitled to speak authoritatively unless you have done the actual work of learning and using general relativity.

I think both Terry and I have been going out of our way to acknowledge that you think Cooperstock has solved the problem. And we have repeatedly said that you may turn out to be right. We have acknowledged that we are not skilled enough at the GR math to determine whether or not Cooperstock is correct, so we will wait for the other experts in the field to come to a consensus on it.

In the meantime, we are also acknowledging the other indicators of missing mass (i.e. lensing). Those may turn out to be wrong, or maybe Cooperstock's calculations can be applied to lensing as well, but for the moment they appear to be contradictory to his explanation.

For the moment, I think we are presenting both hypotheses as possible, pending further confirmation. We have been trying to present the evidence for both as we understand it. I think that's a reasonable position. I understand that you find Cooperstock's explanation to be both compelling and sufficient, but I think you should at least acknowledge that it doesn't seem to explain the Bullet Cluster observations.

### #57 astrotrf

. Cooperstock solved this problem and no new gravity theory is needed, only to use the existing one correctly.

I think the difference here is just one of semantics. When "using the existing one correctly" means "using it in a nonstandard way", especially one that almost everybody thinks is wrong, I'd say that qualifies as modifying the existing accepted theory of gravity. I can see how your viewpoint on that may be different, though.

It's worth pointing out, too, in this context that it's called "modified *Newtonian* dynamics" (MOND) because the standard application is that general relativity reduces to Newton's gravity in the regime under question. I do take your point that Cooperstock's application of GR differs from that.

There are novice and uninitiated readers here. I think, therefore, that it's important not to leave them with the impression that Cooperstock's work is accepted scientific fact (as much as anything in science is "accepted fact"), but rather to put it in its proper place as a proposal that has largely been rejected, for cause, by the relevant scientific community. That is not to say that it could not someday ultimately prove to be correct, but rather that it is currently judged, after examination, to be highly unlikely.

### #58 deSitter

. Cooperstock solved this problem and no new gravity theory is needed, only to use the existing one correctly.

No, there is nothing in real physics that is a matter of "semantics", that dusty drawer of failed attempts to understand; it is a matter of making a correct approximation when solving the complete problem is out of the question. The approximation consists in assuming a smeared-out distribution of matter and then keeping only enough terms to have a weak non-linear field. If you throw out the non-linearity, then you have crippled the equations and they will not tell you about reality. The exact same situation arises in fluid flow, in which the wrong linear approximation would give you airplanes that cannot fly, plumbing that doesn't work, blood that does not nourish the brain.

From what I have been able to find, it is quite evident that very few people within the world of physics think that Cooperstock has solved the problem. This may be evidence of a conspiracy (a remarkably successful one), and it also may be evidence that Cooperstock's alternative may be good math but not persuasive in light of other problems that his and Tieu's calculations do not address.

As I say, if it is the result of a conspiracy, it is quite a conspiracy. Fred Cooperstock (an emeritus professor) has much less of an Internet footprint than I do (much less), and I am not exactly a famous man.

### #60 deSitter

Joad, what you say is correct, and it never fails to astonish me that something so unequivocally true could be so widely ignored. Even as jaded as I am, I can hardly believe that there is so little interest in reality. As it has developed, the Internet and digital media amount to a bowl of concentrated acid hurled into the face of good science.

Of course there is no conspiracy - it's worse than that - it's plain old failure to understand, failure to communicate, and more than that even, a failure to respect the moral outlines of progress.

### #61 deSitter

From what I have been able to find, it is quite evident that very few people within the world of physics think that Cooperstock has solved the problem. This may be evidence of a conspiracy (a remarkably successful one), and it also may be evidence that Cooperstock's alternative may be good math but not persuasive in light of other problems that his and Tieu's calculations do not address.

As I say, if it is the result of a conspiracy, it is quite a conspiracy. Fred Cooperstock (an emeritus professor) has much less of an Internet footprint than I do (much less), and I am not exactly a famous man.

We can trace at least some of the confusion to the repeated attempts to evict geometry from its rightful place as the prime mover of gravitation. The spin-2 graviton fiction has become so ingrained in the collective consciousness that people have come to take it for gospel. This presents a nearly impenetrable barrier to correct analysis of gravitational problems. The cosmological and field-theoretic accoutrements that have been willy-nilly tacked onto general relativity have rendered it unrecognizable. That is probably why people will not see a classical problem in its actual form.

Just as people fail to understand Cooperstock, they also have complete faith in, say, Hawking radiation, something that has never been seen and for which there is utterly no evidence. So the Cooperstockian inversion is not unique.

### #64 7331Peg

Thanks to everyone who answered my earlier question on galaxy rotation and the complexity of accounting for the mass of individual stars vs. solar system rotation. I gather that it still boils down to averaging out the effects of all those stars.

I don't know where all of this will come out in the end. My instinct is that there is no dark matter - at least not in the sense that there is enough of it to account for gravitational anomalies. I still lean toward the idea that we have missed something critical in our understanding of gravity, and the answer, when it comes, will most likely be a whole lot less obscure than a dark something or other that we can't see.

### #65 deSitter

You should never feel bad because you lack a detailed understanding. It is morally incumbent on those of us who do understand, to tell the truth in an unvarnished way. We are the ones who should feel embarrassment before truth.

What you say is correct - one cannot dice up a galaxy and treat it as so many test particles in an external field. That is the essence of non-linear behavior - the whole matters and cannot be chopped into independent pieces. This is a critically important thing to understand. With linear systems one can adopt a piecemeal solution, because the contributions from the individual pieces can be added later to get the complete solution. You cannot do that with non-linear systems. Even in a perturbative solution, meaning a step-by-step iterative approach that gets closer on each iteration, the entire system as a whole must be considered at all stages beyond the first, linear terms. A naive analysis says that these later terms can be ignored, but it is PRECISELY the existence of the non-Keplerian rotation curve that shows their essential nature and that one CANNOT ignore them! So what we have is an extremely curious inversion of observation and theory. The actual curve is direct evidence of the essential non-linearity of GR, and by implication, its intimate relation with geometry, but is treated as a foreign influence on a theory wrongly assumed to be linear in this regime. Such an elementary blunder is unprecedented in physics at the highest levels.
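The superposition point above can be made concrete with a toy numerical check (my own illustration, with made-up operators, not anything from GR): a linear operator satisfies superposition, so the response to a sum of sources is the sum of the individual responses, while even a small non-linear term breaks that and couples the pieces together.

```python
# Toy illustration of why piecemeal solutions work for linear
# systems but fail for non-linear ones.  Both operators are
# invented for the demonstration.

def linear_op(x):
    # a purely linear response
    return 3.0 * x

def nonlinear_op(x):
    # the same linear part plus a small quadratic (non-linear) term
    return 3.0 * x + 0.1 * x**2

a, b = 2.0, 5.0

# Linear case: the response to the sum equals the sum of the responses,
# so each piece can be solved independently and added later.
assert linear_op(a + b) == linear_op(a) + linear_op(b)

# Non-linear case: a cross term (2 * 0.1 * a * b) appears, so the
# pieces are no longer independent and cannot simply be added.
difference = nonlinear_op(a + b) - (nonlinear_op(a) + nonlinear_op(b))
print(difference)
```

Even with the quadratic coefficient at only 0.1, the discrepancy grows with the product of the pieces, which is the sense in which "the whole matters" in a non-linear system.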

Oh, I feel so ignorant right now, talk about a layman. I looked up this Cooperstock guy and I think I would like to go fishing with him. OK, help me with this.
The rotational curve doesn't decay like it "should" given the observable amount of matter in a galaxy. Special relativity deals with time differences based on speed; at the speed of light, you ain't movin' through time relative to something else. Our GPS system has to use GR and SR to account for differences in gravity and speed. From internet sources, the stars at the outer edge of our galaxy are moving around the center at something like 220 km/sec. If our slow satellites can be thrown off by such a small gravity/velocity difference, doesn't that imply that the stars at the edge of the galaxy are affected in a much larger way relative to the center of the galaxy? Or is this already part of the equations that make dark matter attractive? Please understand I am not trying to look foolish; you can't type a question like this into Google and get any clarification.
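To put rough numbers on the question above (my own back-of-envelope arithmetic, not from the thread): special-relativistic time dilation for a slow-moving clock scales as v²/2c², so one can compute directly how big the effect is at the quoted 220 km/s.

```python
import math

c = 299_792_458.0   # speed of light, m/s
v = 220_000.0       # ~220 km/s, quoted orbital speed near the Galaxy's edge

beta2 = (v / c) ** 2
gamma = 1.0 / math.sqrt(1.0 - beta2)

# Fractional slowing of a moving clock; for v << c this is ~ v^2 / (2 c^2).
frac = gamma - 1.0
print(f"v/c       = {v/c:.2e}")
print(f"gamma - 1 = {frac:.2e}")

# Accumulated offset over one year, in seconds:
seconds_per_year = 365.25 * 86400
print(f"offset/yr = {frac * seconds_per_year:.2f} s")
```

The fractional effect comes out around 3 parts in 10 million. GPS must correct for effects of this size only because navigation demands nanosecond timing; a correction that small cannot, by itself, account for the factor-of-several mass discrepancy in rotation curves, which is presumably part of why the standard treatment uses Newtonian dynamics there.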

### #66 Charlie B

There are novice and uninitiated readers here. I think, therefore, that it's important not to leave them with the impression that Cooperstock's work is accepted scientific fact (as much as anything in science is "accepted fact"), but rather to put it in its proper place as a proposal that has largely been rejected, for cause, by the relevant scientific community. That is not to say that it could not someday ultimately prove to be correct, but rather that it is currently judged, after examination, to be highly unlikely.

I've looked at Cooperstock & Tieu's four papers in this area and found their arguments to be compelling. The original 2005 paper was less clear, but their rebuttal to some of the critiques was better. The 2006 "New Developments" paper was the best, giving plots of their models of rotation curves and mass density profiles compared with observation. In addition, their 2008 letter showing how a spherical mass could collapse under GR to match the Galactic density profiles was especially interesting. I'm looking forward to seeing whether they can extrapolate their results from galaxies to galaxy clusters. I would not say that their results have largely been rejected, but rather that the jury is still out.

### #67 deSitter

Charlie B, I agree completely here, and one should point out that the "rebuttal" of C&T was given by a green graduate student without (literally) a tenth of C's academic experience or depth of understanding of GR. "Any counter-argument will do" seemed to be the general response, even though C&T explicitly answered the rebuttal.

Of perhaps greatest interest in this context is that globular clusters show exactly the same non-Keplerian behavior, only now the affected parameter is the radial velocity profile. That is a case in which DM is explicitly ruled out, because GCs are not gravitationally bound (they are thinned by tidal forces in the host galaxy).


### #68 groz

You are entitled to your opinion and you are entitled to report on the work of others, but you are not entitled to speak authoritatively unless you have done the actual work of learning and using general relativity.

The exact same situation arises in fluid flow, in which the wrong linear approximation would have airplanes that cannot fly realistically, plumbing that doesn't work, blood that does not nourish the brain.

Fluids is a field in which I have more than a smattering of exposure. I remember very clearly sitting in an early-semester class on the subject, learning about coefficients of lift, power-to-weight ratios, yadda yadda. Then later in the term, we did an exercise which the professor felt was relevant. We calculated wing planforms, airflow dynamics, power-to-weight ratios, and a lot of other numbers, and finally came to the conclusion, mathematically, with the theory in vogue at the time, that a bumblebee cannot fly. And the professor made a very strong point out of that exercise because, as we all know, the bumblebee flies quite well. The point he made was very straightforward: it's absolutely WRONG to take a whole lot of measurements in one regime (large-scale fixed-wing planform sections), then extrapolate those into a completely different regime (small-scale, high-speed flapping wings). The math may work, but the theory isn't valid in that regime.

By the time we graduated, we had learned many theories and many formulas. It is interesting to remember that the formula used to solve a supersonic problem bears no resemblance to the formula used to solve a low-speed problem, even if you are dealing with the exact same surface configuration. Bottom line: those formulas are all littered with constants that have been derived empirically, and we were able to show that the constants don't necessarily remain constant over different regimes.

And this is how I've always viewed the 'dark stuff' issue. All of the models that need 'dark stuff' inserted to make the mass values actually fit the observations ultimately degrade to being dependent on one single constant, and that's the value we assign to G, the gravitational constant.

Within our ability to measure G directly, it appears to be constant. But we don't know that it's constant; we only know that the measured value comes up the same, no matter what method we devise to measure it directly. The problem is, all of our methods of direct measurement operate within a tiny realm, right here on planet Earth and/or very close by (stuff we've put into orbit). Every other measurement we take uses G as a constant, then allows M to vary according to observation. G is constant over the distance and time scales we can measure directly. Does it remain constant at universal distances and timeframes? That is a basic supposition on which pretty much all of this kind of research and analysis is based.

Personally, I don't think it's necessarily a valid assumption, yet it's the premise on which pretty much all of cosmology and much of astronomy is based. But, as you alluded to in the first quote, I don't (yet) have the background to comment on the subject definitively, though I do have ways and means of investigating.

This is why I like finding papers like the MOND paper listed earlier in the thread. Over time, I'm slowly collecting as much raw data as I can in this area, but using it slightly differently. Instead of assuming G = constant and trying to fit a new M into the equations by inventing stuff, I'll list out the properties of the target, then assume the mass estimates obtained via other means (luminosity or what have you) are correct. From that, we solve for G. Then we look at the relationships, if any, that come out of the data.
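For the simplest circular-orbit case, the procedure described above can be sketched directly (all numbers below are illustrative placeholders, not fitted data): with v and r measured and M estimated independently, e.g. from luminosity, the Newtonian relation v² = GM/r can be inverted for G instead of M.

```python
# Sketch of the "solve for G" approach for a circular orbit:
#   v^2 = G M / r   =>   G = v^2 r / M
# All input values are illustrative placeholders, not real data.

G_LAB = 6.674e-11   # m^3 kg^-1 s^-2, laboratory value for comparison

def implied_G(v_m_s, r_m, mass_kg):
    """G required to hold a circular orbit of speed v at radius r
    around an independently estimated enclosed mass."""
    return v_m_s**2 * r_m / mass_kg

# Illustrative outer-disk numbers: v ~ 220 km/s at r ~ 15 kpc,
# with a hypothetical luminosity-based estimate of the enclosed mass.
kpc = 3.086e19      # metres per kiloparsec
v = 220e3           # m/s
r = 15 * kpc
M_lum = 9e40        # kg, hypothetical luminous-mass estimate

G_fit = implied_G(v, r, M_lum)
print(f"implied G = {G_fit:.3e}  (lab value {G_LAB:.3e})")
print(f"ratio     = {G_fit / G_LAB:.1f}")
```

With placeholder numbers of this order, the implied G comes out a few times the laboratory value: the same discrepancy that is usually expressed as "missing mass" simply reappears as an inflated G when the inversion is done this way.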

Like my professor once told me in school: when the calculated results don't agree with what you see, look first to the constants in your equations and question them. I've never seen anybody seriously questioning the derivation of the gravitational constant, yet so much hinges on extrapolating that value WAY out past the realms in which it was measured.

Find a handle on what (something) is, and the whole field ends up turning on its head, because pretty much every calculation ever done for estimation, which was based on G, becomes invalid. Find a relationship between G and (something) which fits mass estimates gained from means independent of G, and we get to re-write everything. It may be completely way off in left field as a concept, it may not, but that's the beauty of the modern information age: it's within reach to actually explore the idea, and to do so outside of the mainstream.

When the math starts failing, the first thing you have to do is look carefully at your constants and ask where they came from. If they came from empirical measurements over time, ask whether you are extrapolating the use of those constants to a regime well beyond the one in which they were measured. When you start looking at the details, and exclude all things like mass estimates that came about from dynamics equations based on G, that's when you realize just how much of a foundation that constant truly is in this field. Even the simplest of things, like mass estimates for the planets in our solar system, are based on calculations that involve G. Finding relatively reliable mass measures in astronomy that come from things other than dynamical equations that ultimately depend on G is not actually trivial.
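The dependence of solar-system mass estimates on G, mentioned above, is easy to make concrete (a standard textbook calculation, worked here as my own example): Kepler's third law gives M = 4π²a³/(GT²), so the inferred mass scales inversely with whatever value of G is fed in.

```python
import math

# Kepler's third law for a small satellite:  M = 4 pi^2 a^3 / (G T^2).
# Estimating Earth's mass from the Moon's orbit shows how directly
# the answer depends on the adopted value of G.

G = 6.674e-11         # m^3 kg^-1 s^-2, laboratory value
a = 3.844e8           # m, mean Earth-Moon distance
T = 27.32 * 86400.0   # s, sidereal month

M = 4.0 * math.pi**2 * a**3 / (G * T**2)
print(f"mass from lunar orbit: {M:.2e} kg")   # ~6.0e24 kg (Earth+Moon total)

# Halve G and the inferred mass doubles: the mass estimate is only
# as good as the constant fed into it.
M_halfG = 4.0 * math.pi**2 * a**3 / ((G / 2.0) * T**2)
print(f"same orbit with G/2:   {M_halfG:.2e} kg")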

I dunno where this path will lead me, but I really do come from the camp of 'I think we got gravity wrong', and since it's not a full-time endeavor, I pluck away at the math and concepts as I see fit. Maybe some day it'll amount to a revelation, maybe not. But in all honesty, I think there's more potential for understanding gravity through modifying our understanding of G than there is in inventing unobservable stuff to satisfy equations that are based on it. G is an empirical constant, derived from empirical measurements within a small realm. Extrapolating it out to other realms way beyond the one in which it was measured may be valid, or it may be like trying to solve the problem of lift on the wing of a bumblebee using constants derived by measuring supersonic flow in a high-speed wind tunnel, i.e., totally invalid.

If indeed G is not constant outside the small regime where we can measure it directly, and we can find a bit of a hint as to a relationship, then surely some smart cookie will develop a new theory based on it, one which will send the 'dark stuff' out to join the aether, and probably relegate the big bang to the same category as the 'flat earth': something that seemed very obvious when measuring over a very small regime, but just didn't work anymore when you stepped out to global scales.

### #70 astrotrf

We plowed this ground a couple of months ago in another thread here.

. one should point out the "rebuttal" of C&T was given by a green graduate student without (literally) a tenth of C's academic experience or depth understanding of GR. "Any counter-argument will do" seemed to be the general response, even though C&T explicitly answered the rebuttal.

First, let me point out that the comments about the graduate student are irrelevant; all that matters is the argument made. Most folks were convinced by his criticism and not by C&T. In addition to that argument, the original Cooperstock argument is at odds with perturbation theory, which is another reason it's not accepted.

It is wrong to think that Cooperstock is the victim of a conspiracy and that his arguments were simply ignored; they were examined, and the objections to them carry greater weight with most of the workers in the field. That does not, of course, prove that he's wrong, just that he's not convincing anyone.

Of perhaps greatest interest in this context is that globular clusters show exactly the same non-Keplerian behavior, only now the affected parameter is the radial velocity profile. That is a case in which DM is explicitly ruled out, because GCs are not gravitationally bound (they are thinned by tidal forces in the host galaxy).

The contraindications here are that galaxy rotation curves *are* Keplerian at the center, at scales larger than globular cluster sizes, and also that the globular cluster profiles in question have not been confirmed by other researchers, so their reality is open to question.

The bottom line is that there are a number of unresolved issues surrounding dark matter, and more work is necessary to get answers.

### #71 deSitter

Wow, an old time aerodynamicist - did you read Prandtl and all that stuff? Lamb?

I actually have a theory where G can change - it's basically the strength ratio of light to gravity, which is now variable (it's fixed in Riemannian geometry but not in Weyl geometry). Because there is, so to speak, an equation of state between light and gravity, one immediately has another source of wavelength stretching and so a ready-made explanation for physical intrinsic redshifts. Alas I need a mathematician to analyze the characteristics of the resulting ultra-hyperbolic equations. You see! We are still on the track of all those crazy parameters!


### #73 groz

Wow, an old time aerodynamicist - did you read Prandtl and all that stuff? Lamb?

Most of Prandtl's stuff is hard to read (language barriers), but there is no shortage of translated versions. If you like that kind of stuff, you'll like my library. But in the 30+ years since I sat in a formal classroom environment, my focus has always been practical application, and not so much the theoretical aspects.

I actually have a theory where G can change - it's basically the strength ratio of light to gravity, which is now variable (it's fixed in Riemannian geometry but not in Weyl geometry). Because there is, so to speak, an equation of state between light and gravity, one immediately has another source of wavelength stretching and so a ready-made explanation for physical intrinsic redshifts. Alas I need a mathematician to analyze the characteristics of the resulting ultra-hyperbolic equations. You see! We are still on the track of all those crazy parameters!

I'm not a theoretical mathematician; rather, I look at things more from the perspective of 'what measurements are taken', then ask 'what sources of error are in the measurements'. It's also not a full-time endeavor, but as we head on in life, priorities change, time availability changes, and most importantly, the resources available tend to change. I have reached the point where the resources at my disposal are 'good enough' to start taking some of the measurements myself, and to look very carefully at error sources.

The premise on which I started, inspired by the experience in fluids (a well-quantified but very poorly understood field), is to look at the 'dark stuff' problem as a fluids-type problem: equations with constants, derived empirically. In the fluids area, as a simple example, it's easy to devise equations and constants that work in low-speed, steady-state conditions. Apply those same equations to low-speed scenarios where the state is not so steady, and they fall apart very quickly; the aerodynamic stall is just such an example. Now take the exact same set of conditions, but instead of varying velocity to alter the flow state, alter pressure. Instead of a wing-section problem, you are now dealing with a 'flow in a pipe' problem. Again, things fall apart rather quickly. Now, to really make it complicated, factor in compressibility.

For a classic aerodynamics example, consider the problem of a bullet shot from a gun, and solve for where it will hit. The high-school variation says 'ignore air resistance'. The first-year fluids course says 'ignore compressibility'. The fourth-year final exam says 'ignore nothing'. Every step along this progression introduces more complex equations into the solution, and every one of them carries constants that come from empirical observation, with no solid basis in theory. This was precisely the problem for which ENIAC was constructed, to produce ballistic trajectory tables for heavy guns aboard ships, so one could say this was the problem which triggered the move to computers for modelling. The ballistic tables produced from ENIAC were good but, even so, in many cases not quite 'good enough'. Even today, long-range artillery shots get adjusted after observing where the first round actually hit. We still don't understand all of the processes and variables well enough to predict with precision exactly where a long-range artillery shell will land, unless it has some form of on-board guidance to correct the trajectory en route.

When you look at the big picture, we have lots of measurements, most of which come from the last 100 years; call it 300 years if you want to go back to the Newtonian period, when folks were starting to get really good at Newtonian dynamics. The premise on large-scale astronomy problems has always been F = GMm/R^2, so once we can approximate the distance values, we can solve for the mass values. This is a basic premise on which most everything is built. A big part of the problem, though, comes about when looking at observations of distant galaxies: independent estimates of the mass values don't jibe with the values that come from solutions of the equations of motion. The current trend is to leave G constant and fiddle with the M relationships to come up with solutions. That leaves the conundrum of M values that don't agree with observation via other methods, and the need to invent 'unobserved' mass to get a final solution.

My thought is to take a different approach. Leave the M relationships alone and fiddle with the G values to find solutions, with the ultimate goal G = f(something). This is the methodology the fluids folks have worked with over the last 30 years in trying to better understand complex flow problems: leave the observed data as 'sacred', then devise mathematical methods that match the data. In many cases today, that has meant 'better approximations' rather than theories that finally make it all fit together, but those better approximations are ultimately responsible for things like airline tickets costing far less today than they did 30 years ago. The refined approximations have allowed for dramatic improvements in aircraft efficiency, particularly when it comes to the flow through the ducted-fan turbines, i.e., engine efficiency.

In the hunt for a functional form for G that produces an approximation fitting the observed data better than the currently accepted G = constant, I'll be more than a little happy with any kind of result, but I have also clearly defined the meaning of a null result. A null result in this quest points very strongly at the currently accepted G = constant paradigm, but a positive result would have a dramatic impact on lots of stuff.

This much I can say with some level of certainty. Direct measurements of G come from the last few hundred years. They also happen at velocities that are some small fraction of c. We also know that mass values derived from G, when the distance is large, i.e. for distant galaxies, represent the configuration of that galaxy at a time much farther back than the 400-year span over which the direct measurements have been taken. In those observations, either mass is dramatically underestimated in the direct observation, or G is dramatically underestimated in the indirect observation. So there is a hint there: if G is f(something), then (something) has either a distance or a time term in the relationship(s), and/or there could be a factor of c in there, similar to the way compressibility starts to show up in the fluids equations.

So, if G = f(t), then there is potential to explain redshift and resolve the conundrum of distant galaxies, where observational data suggest a dramatic discrepancy between mass estimates via motion equations and mass estimates via other means. Half-life decay functions are found throughout nature, and it's also possible that a half-life decay function on G would end up with values such that, over a timeframe of the last few hundred years, the change in G is so small it's not within our ability to measure or calculate. It's an equation form that could well jibe completely with observations, without the need for lots of 'extra mass', just an adjustment in G to account for the time discrepancy between when the light left that galaxy and when it arrived here.
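The "too small to measure" claim above is easy to check arithmetically. As a sketch, assume (hypothetically) that G decays with a half-life comparable to the age of the Universe, ~10^10 years, and ask how much it would change over the ~400 years of direct measurement:

```python
# Illustrative check of the half-life idea: if G decayed as
# G(t) = G0 * 2**(-t / T_half) with a hypothetical half-life of 1e10 years,
# how much would it drift over ~400 years of direct measurement?

T_HALF_YR = 1.0e10   # assumed half-life, years (hypothetical)
SPAN_YR = 400.0      # rough span of direct G measurements, years

fractional_change = 1.0 - 2.0 ** (-SPAN_YR / T_HALF_YR)
print(f"fractional change in G over {SPAN_YR:.0f} yr: {fractional_change:.2e}")

# Present-day lab determinations of G disagree with each other at roughly
# the 1e-4 relative level, so a drift of this size (~3e-8) would be far
# below our ability to detect directly.
```

So a cosmologically slow decay of G would indeed be invisible to tabletop experiments, which is consistent with the argument in the paragraph above (though it says nothing about whether such a decay actually occurs).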

Another possibility: G = f(something local), and when one refines the estimate of that localized parameter, it turns out to be pretty much constant on the scale of our solar system, but the relationship forms a curve, with an inflection point at the event horizon of a black hole. Empirical measurements of the gravitational constant have all been confined to within our solar system, so if G varies with something that doesn't vary much within our solar system, direct measurements taken on or near Earth won't show anything more than G = constant, even if it's not.

We have a few more ideas on directions for looking, but that's not the current focus. So far, over the last year and a half, my focus has been on the physical setup to take measurements myself. I've now got the photometry side of the measurement equation down to the point that I can do millimag photometry with my own gear and get results in the single-digit-millimag range. The project this winter is to extend that capability to spectroscopy. Once we have our spectroscopic technique and equipment to the point that we can get redshift measurements good enough to use for this type of relationship study, the final step in the process will be to upgrade the telescope to something that lets us go somewhat deeper than we can now; we are currently looking at moving to 16-inch f/5 instruments. To date, we have verified most of our capability by doing transit measurements on known exoplanets, and have had our results analyzed by professionals for confirmation.

And why are we going to such lengths to end up in a situation where we can take our own measurements, independent of what's available 'out there'? For the answer to that, just read the history of the dispute between Langley and the Wright brothers. The Aerodrome never flew until Curtiss modified it, after the Wrights had flown, with many of the design innovations from the Wright Flyer. Yet Langley was entrenched in 'the system', and the Smithsonian, of which he had been Secretary, displayed his Aerodrome for decades, crediting it as the first powered aircraft capable of manned flight. It took 40 years for that fraud to be corrected. The Aerodrome was not capable of powered flight until they 'fixed it' with design concepts taken from the Wright Flyer. If you look into the specific details, what you will find is that the major difference between Langley's and the Wrights' designs was pretty simple. The Wrights trusted the raw data they took in their own tests, while Langley was fitting a curve to the data and working with equations from that fit. But that set of equations ignored the aerodynamic stall as a 'measurement anomaly'. So, in the end, the Wright Flyer actually flew, and there was photographic proof of the fact. The Aerodrome made a soaking-wet pile of crashed parts in the river, yet went on to take the credit for flight for many years in the Smithsonian. And if you are really interested in the specifics, it wasn't the wing or the control systems that made the Wright Flyer better than the Aerodrome; it was the propeller cross sections, and the fact that the Wrights trusted wind-tunnel test data in designing those components.

But I do have an ultimate goal, which has been alluded to many times on these forums. If we can find an approximation that describes G = f(something), then the next step is for a smart cookie of an engineer to figure out a way to control the variables in (something). Reach that point, and we can rewrite the books on propulsion, which moves a round-trip ticket away from this rock from fantasy to possible. Today, that round-trip ticket is fantasy. I really have no interest in cosmology, other than some of the underlying suppositions on which it's based. I want to rewrite the books on propulsion, because that has a very immediate and real practical application. To me, the answer to the question 'Where did we come from?' is 'Who cares?'. I'm interested in the answer to 'Where are we going?'. The need for lots of 'dark stuff' to make the force equation F = GMm/R^2 work tells me there is something wrong with the equations, and I know from fluids that the first place to look, after you have isolated all sources of measurement error, is to revisit the underlying basic equations and question the constants. Any constant that comes from empirical data is subject to very, very large error if it's used in a regime far from the one in which it was measured.

When I read about unobservable matter (a far better term than 'dark matter', because in modern language 'dark' has sinister connotations that were not implied by the original use of 'dark' to describe 'matter not observed'), it just reminds me of the bumblebee that supposedly cannot fly, when direct observation said otherwise. The solution to that conundrum was to stick to the premise that a bumblebee can fly, and rewrite the fluid equations until they agree. I've got models on the computer here today that do indeed show a bumblebee can fly. They are still littered with empirical constants, but they are far better approximations than those used 40 years ago.

And now it's become a personal line of research to show that we don't need extra stuff to describe motion as observed; we need to fix the motion equations. It would be a fabulous side effect if the 'fix' for those equations ends up involving something we can control, because that is the ticket to getting off this rock. Newtonian physics was the foundation; relativity built on it and explained the problem of the orbit of Mercury without breaking the history of observations that fit the Newtonian model. G = f(something) has the potential to build on both models and explain much of the observational discrepancy while remaining consistent with both prior models, which combine to provide constraints on the search for (something). My interest is in finding a first approximation for (something) that agrees with observed data far better than the current approximation of G = constant. If we have a first approximation, then I'll leave it to the mathematicians and physicists to try to correlate that approximation with some solid basis in physics.

And this will likely be my last posting on this subject, or any other in this forum; it's surely going to end up an edited and/or locked thread, even though the 'salting' accusations are well founded and backed up by history, with references. When it comes to 'unobserved matter', I'm of the belief that far too much has been hung on the coat hook called 'dark matter', and not enough effort has been spent looking for alternative explanations for the discrepancies in the motion equations, with a resulting acceptance of things that possibly shouldn't be accepted, very much like the acceptance of the Aerodrome into the Smithsonian for many, many years. So pardon me if I don't fully trust the conclusions, and have it in my mind to revisit many of the fundamental measurements myself. I have a general distrust of much of what is 'accepted', and am stubborn enough to do the measurements and calculations with my own data. Then again, if nobody had my attitude, the Wright brothers would never have built an airplane either.

## FURTHER EVIDENCE FOR DARK ENERGY

The concordance model is often called ΛCDM, because multiple measurements point to a Universe that contains not just regular matter, but also cold dark matter and dark energy (symbolized by the cosmological constant Λ, though dark energy might not actually be constant).

Evidence of dark energy from studies of the CMB and of large-scale structure began to accumulate very shortly after the supernova measurements of the accelerating expansion. Because dark energy creates an effect opposite to that of gravity, its presence should be detectable in the way that structures form in the Universe. There should be slight differences between a Universe composed of only matter, which condenses via gravity to form clumps, and a Universe with matter and dark energy. Dark energy opposes gravitational attraction and impedes the formation of clumps. If the dark energy is extremely strong, then no clumps will form at all. If it is quite weak, we will not be able to observe its effects. Clearly, in our Universe, the dark energy has not prevented the formation of structure, and so astronomers have been able to search for its imprint on the structures they observe.

In Figure 17.18, we show the error ellipses from three independent data sets. The diagonal orange region is derived from CMB measurements, and the blue region comes from supernova measurements. The nearly vertical green region shows results from large-scale structure measurements, specifically baryon acoustic oscillations (see Going Further 17.4: Baryon Acoustic Oscillations). Together the three data sets place strong constraints on the cosmological model, in this case the values of Ω_m (total matter content) and Ω_DE (dark energy content). The diagonal line running from upper left to lower right indicates a flat Universe, in which Ω = Ω_m + Ω_DE = 1.
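The flatness relation above is simple enough to compute directly. As a minimal sketch (using the approximate matter fraction of about 0.3 quoted in these discussions, not a fitted value): in a flat Universe, measuring the matter content alone fixes the dark energy content.

```python
# Minimal sketch of the flatness constraint: for a flat Universe,
# Omega_m + Omega_DE = 1, so the dark energy fraction is just 1 - Omega_m.
# The 0.3 below is the approximate matter fraction (baryons + dark matter).

def dark_energy_fraction(omega_m: float) -> float:
    """Dark energy fraction implied by flatness (Omega_tot = 1)."""
    return 1.0 - omega_m

omega_m = 0.3
print(f"{dark_energy_fraction(omega_m):.2f}")  # -> 0.70
```

This one-line relation is why the flat-Universe line in Figure 17.18 runs diagonally from upper left to lower right: any increase in Ω_m along that line must be matched by an equal decrease in Ω_DE.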

Figure 17.18 This plot shows constraints on the matter fraction (Ω_m) and dark energy fraction (Ω_DE) as derived from measurements of large-scale-structure baryon acoustic oscillations (BAO, green), the cosmic microwave background (CMB, orange), and supernovae (SNe, blue). None of the measurements alone does a very good job of constraining either parameter, but together they place very strong limits on both, as indicated by the gray region at their intersection. Credit: Supernova Cosmology Project, Suzuki et al. 2012, Astrophysical Journal, 746, 85.

There is something truly remarkable about this process. Different sorts of measurements, each using different sorts of instruments to look at completely different kinds of objects, all involving different kinds of physical processes, give completely consistent results. Their error ellipses converge to a small region in the parameter space. One could imagine them giving radically different answers, totally inconsistent with one another. In that case we would know that our model cosmology, the concordance model, was completely wrong. The fact that all the disparate measurements instead converge in a small region of parameter space lends confidence that the model is actually a fair description of the Universe and its evolution.

There are many complementary sets of parameters of this sort. Figure 17.19 shows an example of constraints on the dark energy from measurements of the CMB. CMB measurements are sensitive not only to the dark energy, but also to the amount of normal matter (baryons, Ω_baryon), the amount of cold dark matter (Ω_cdm), and other parameters. These variables are shown along with the dark energy fraction, here called Ω_Λ. Error ellipses are determined from the Planck and WMAP missions. Notice how the baryon graph complements the cold dark matter graph, as their uncertainty ellipses are nearly perpendicular. Together they provide much tighter constraints on the cosmological model. Plots with other variables can also be constructed, as can plots using datasets other than those from the CMB.

Figure 17.19 The baryon fraction (Ω_b h²) and cold dark matter fraction (Ω_c h²) are plotted vs the dark energy fraction (here called Ω_Λ) from CMB data. The smaller red contours are from Planck and WMAP polarization data. The larger gray contours are from WMAP alone. The colored dots depict different values of the Hubble constant, running from 64 km/s/Mpc (blue) to 72 km/s/Mpc (red). The horizontal band running through the most likely values for the various parameters shows that they are all consistent with dark energy making up about 70% of the total energy density. The two plots complement one another because the uncertainties in one are perpendicular to the uncertainties in the other. Credit: NASA/SSU/Aurore Simonnet. Adapted from Planck 2013 Results XVI.

## High Energy Astrophysics Faculty

### Anushka Abeysekara

Anushka studies the most extreme environments in the Universe, such as gamma-ray-emitting supernova remnants, pulsar wind nebulae, and active galactic nuclei that accelerate particles to speeds near the speed of light. These particles emit very-high-energy gamma rays as they interact with the surrounding medium. Using the HAWC and VERITAS gamma-ray observatories, he measures these gamma rays and investigates questions such as how the particles move away from the central engine, what fraction of them reach Earth, and what the nature of the medium between Earth and these objects is.

### Paolo Gondolo

Cosmology, astrophysics, high energy physics.

### Dave Kieda

Dave's research focuses on the exploration of the fundamental forces of physics through measurements of astrophysical objects in extreme settings. He develops techniques and performs observations in high-energy gamma-ray astronomy to explore particle acceleration in Galactic objects, such as supernova remnants, microquasars, and high-mass X-ray binaries, as well as in active galactic nuclei. He is also leading an effort to develop a new capability for ultra-high-angular-resolution (< 40 microarcsecond) optical imaging of nearby stars using the quantum properties of light. His experimental observations use the ground-based VERITAS, HAWC, CTA, and StarBase-Utah observatories as well as the Fermi, Swift, and NuSTAR satellite observatories.

### Stephan LeBohec

Stephan studies the possible implications of extending relativity to resolution scale transformations. He also participates in the redevelopment of intensity interferometry for astrophysical observations. Recently he started working on the mechanical and thermal noise characterization of mirrors used in gravitational wave detectors.

### Pearl Sandick

Pearl studies high-energy phenomenology, particle astrophysics, and cosmology. Her specialties include dark matter, supersymmetry, and other physics beyond the Standard Model.

### Wayne Springer

Wayne specializes in developing and using instruments to observe the cosmos. He has investigated the Z boson and searched for the Higgs boson at CERN, and has studied ultra-high-energy cosmic rays with the HiRes and Telescope Array cosmic-ray observatories in the West Desert of Utah. He is currently using the HAWC observatory, located on a Mexican volcano, to study TeV gamma rays and cosmic rays from astrophysical sources and to search for dark matter.

### Yue Zhao

Yue's research interests are in new physics beyond the Standard Model. He has been working on several different methods to explore new physics. He is very interested in developing novel experimental ideas, including utilizing recent advances in precision measurement techniques. He is also interested in extending the purposes of existing experiments, such as LIGO and DUNE, to look for new physics. The new field of gravitational wave astronomy is particularly exciting to him, and studying new physics through gravitational waves will be one of his main focuses in the future.