
What uncertainty does an error bar signify in astronomy?

When an astronomer talks about her/his topic and shows an X/Y plot with error bars, what should one assume those error bars represent? One standard deviation? Two? Or some specific significance level such as 95% or 99%? Is there a generally understood convention for this, or does it vary depending on topic? I've noticed that the meaning of the range covered by the error bars is very rarely stated explicitly.

And how are cases treated where precise observations reveal variation in the actual population? Not "errors", but true natural variation. Are there other bars to illustrate that range of true values, as opposed to the spread due to observational uncertainty?


The most common way to represent uncertainty is with symmetric error bars around a central point. These are in turn commonly interpreted as a 95% confidence interval: the data point is the centre of a Gaussian, and the error bars span the interval containing 95% of the probability, which for a Gaussian is roughly ±2 (more precisely ±1.96) standard deviations.
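As a rough illustration (a minimal Python sketch; the data values and the use of numpy/matplotlib are my own choices, not from the answer above), a 1-sigma uncertainty would be scaled to a 95% interval like this before plotting:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical measurements: central values with their 1-sigma uncertainties
x = np.arange(5)
y = np.array([10.2, 11.1, 9.8, 10.5, 10.9])
sigma = np.array([0.3, 0.4, 0.2, 0.5, 0.3])

# For a Gaussian, a 95% confidence interval is roughly +/- 1.96 sigma
ci95 = 1.96 * sigma

plt.errorbar(x, y, yerr=ci95, fmt="o", capsize=3, label="95% interval")
plt.errorbar(x, y, yerr=sigma, fmt="none", ecolor="red", label="1 sigma")
plt.xlabel("x")
plt.ylabel("measured value")
plt.legend()
plt.show()
```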

This is only the statistical uncertainty, and it is often not stated explicitly. One also distinguishes between a measurement and a discovery by the confidence level involved: a discovery is commonly only claimed at the 5-sigma level, i.e. if the measurement lies more than five standard deviations away from the theory or prediction, you've made a discovery.
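For concreteness, the tail probabilities behind these thresholds can be computed directly (a small sketch using scipy, which is my addition rather than part of the answer):

```python
from scipy.stats import norm

# One-sided tail probability of a 5-sigma Gaussian fluctuation
p_discovery = norm.sf(5)                 # about 2.9e-7
print(f"5-sigma one-sided p-value: {p_discovery:.1e}")

# Number of sigmas corresponding to a two-sided 95% interval
n_sigma_95 = norm.ppf(1 - 0.05 / 2)      # about 1.96
print(f"95% two-sided interval: +/- {n_sigma_95:.2f} sigma")
```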

Note that we are leaving out systematic uncertainty and instrument bias, which can only increase the total uncertainty. Usually the contributions are assumed to be uncorrelated, so they are combined in quadrature, i.e. as the square root of the sum of their squares.
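A minimal sketch of that combination (quadrature of uncorrelated statistical and systematic terms; the example numbers are made up):

```python
import math

def combine_in_quadrature(stat, syst):
    """Total uncertainty for uncorrelated statistical and systematic terms."""
    return math.sqrt(stat**2 + syst**2)

# Hypothetical example: 0.4 statistical, 0.3 systematic
print(combine_in_quadrature(0.4, 0.3))   # 0.5
```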

Long story short - always ask what the error bars represent, especially if they look too "clean".


In a published scientific paper, the significance of any error bars should be explained. If this has not been clearly stated, then peer review is not doing its job. However, in some fields there may be unstated conventions which slip through without explanation.

In my own field (star formation), plots which show averages over a large data set would show means with error bars representing one standard deviation, while plots which show single measured values might have error bars representing calculated uncertainties on the measurement, for example due to detection limits or instrumental uncertainties.
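As an illustration of that distinction (a Python/numpy sketch with made-up numbers), the spread of a sample is not the same thing as the uncertainty on its mean, which speaks to the "natural variation" part of the question:

```python
import numpy as np

# Hypothetical set of individual measurements falling in one bin
values = np.array([12.1, 11.8, 12.5, 12.0, 11.6, 12.3, 12.2])

mean = values.mean()
scatter = values.std(ddof=1)            # 1-sigma spread of the sample (natural variation)
sem = scatter / np.sqrt(values.size)    # statistical uncertainty on the mean itself

print(f"mean = {mean:.2f}, scatter = {scatter:.2f}, error on mean = {sem:.2f}")
```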

