Astronomy

How to correct observed flux densities for redshift


Say I have a spectrum of a galaxy at a redshift $z$, in flux density units of erg/s/cm^2/Angstrom. I'd like to recover the spectrum (in the same flux density units) at z=0, i.e. at its rest wavelengths. How do I do this?

I saw this paper (Distance measures in cosmology, Hogg (2000)) but am still confused, because of their use of luminosity and luminosity densities, and in my class I only have fluxes.


You are familiar with the relationship between flux and luminosity, right? This is usually taken as the definition of luminosity distance in extragalactic astronomy, $$F = \frac{L}{4\pi D_L^2}.$$ The quickest way to convert this into a relationship with spectral variables is to use differential 1-forms. So, $L \rightarrow L_\lambda\,\mathrm{d}\lambda_e$ and $F \rightarrow F_\lambda\,\mathrm{d}\lambda_o$, with $\lambda_e$ the photon wavelength at emission (also called the rest frame wavelength) and $\lambda_o$ the observed wavelength at $z=0$ (also called the comoving wavelength). Thus $$\begin{align} F_\lambda\,\mathrm{d}\lambda_o &= \frac{L_\lambda\,\mathrm{d}\lambda_e}{4\pi D_L^2} \\ \Rightarrow F_\lambda &= \frac{L_\lambda}{4\pi D_L^2} \times \frac{\mathrm{d}\lambda_e}{\mathrm{d}\lambda_o}. \end{align}$$ Now you just need to take your derivative using the definition of redshift, $\lambda_o = (1+z)\lambda_e$, and use some way of calculating the luminosity distance in your cosmology.
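In practice this boils down to two operations: compress the wavelength axis by $(1+z)$ and rescale the flux density so the integrated flux is conserved. Here is a minimal sketch of that bookkeeping, assuming a flat Planck 2018 cosmology via astropy; the array values are placeholders, not anything from the question:

    # De-redshift an observed spectrum (erg/s/cm^2/A) to its rest wavelengths.
    import numpy as np
    import astropy.units as u
    from astropy.cosmology import Planck18

    z = 0.5
    wl_obs = np.linspace(4000, 9000, 500) * u.AA                    # observed wavelengths
    flux_obs = 1e-17 * np.ones(500) * u.erg / u.s / u.cm**2 / u.AA  # observed F_lambda

    # Rest-frame wavelengths: lambda_e = lambda_o / (1 + z)
    wl_rest = wl_obs / (1 + z)

    # d(lambda_e)/d(lambda_o) = 1/(1+z), so L_lambda = 4 pi D_L^2 (1+z) F_lambda
    d_L = Planck18.luminosity_distance(z).to(u.cm)
    L_lambda = 4 * np.pi * d_L**2 * (1 + z) * flux_obs              # erg/s/A

    # If you only want the spectrum in the same flux density units at z = 0,
    # keep the distance factor out and just conserve flux under the (1+z) squeeze:
    flux_rest = flux_obs * (1 + z)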

Formally, a lot of people like to use $\nu F_\nu = \lambda F_\lambda$ as though it were a total flux, but it's not. The derivation will work, but only because redshifting scales all frequencies/wavelengths by a single constant. $\nu F_\nu$ isn't a measure of the flux under any part of the spectrum; it's a spectral flux using a different coordinate system. Consider the following $$\begin{align} F &= F_\nu\,\mathrm{d}\nu \\ &= \nu F_\nu\,\frac{\mathrm{d}\nu}{\nu} \\ &= F_{\ln\nu}\,\mathrm{d}\ln\nu. \end{align}$$ In other words, $\nu F_\nu$ is still a spectral flux. If $F_\nu$ is in $\mathrm{ergs}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}\,\mathrm{Hz}^{-1}$ then $\nu F_\nu$ will be in $\mathrm{ergs}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}\,(e\text{-fold})^{-1}$. The $e\text{-fold}$ is a pseudo-unit, like radians, dex, octaves, decibels, or magnitudes. In principle it can be omitted, just like how angular frequencies can be in radians per second, or just $\mathrm{s}^{-1}$.

Fun math exercise: show that $\ln(10)\,\nu F_\nu$ has units of $\mathrm{ergs}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}\,\mathrm{dex}^{-1}$.
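A sketch of that exercise, using the same 1-form trick (my derivation, not part of the original answer): one dex is $\ln 10$ $e$-folds, so $\mathrm{d}\ln\nu = \ln(10)\,\mathrm{d}\log_{10}\nu$ and $$F = F_{\ln\nu}\,\mathrm{d}\ln\nu = \ln(10)\,\nu F_\nu\,\mathrm{d}\log_{10}\nu,$$ which identifies $\ln(10)\,\nu F_\nu$ as the spectral flux per dex of frequency.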


Calculating the redshift at which radiation energy density equaled mass density

Gee, you have to get it done by tonight? There are some other guys who are really good, but they are not around right now. I will try to help a little in case they don't show up before you have to finish.

I think the answer is z = 3200, or so.

You know that for the CMB the redshift is z = 1100, so this is longer ago than recombination.

I guess you take the energy density of the CMB and multiply by 3200^4,
and you take the energy-equivalent of matter density and multiply by 3200^3,

and the two energy densities are supposed to be equal at the transition

I think there is a lot of cancellation, and that what it comes down to is this equation

(1+z) x CMB energy density = matter energy density

And what that boils down to is an equation for z (which I suspect is around 3200). Here is the equation

1 + z = (equivalent energy density of matter)/(CMB energy density)
============

if 3200 is the right z, then the temperature at the time would have been about 3200 x 2.725 K, roughly 8700 K (the CMB temperature scales as 1+z)

I'm answering quick without really remembering the material, but if you need quick help you can try to get some good out of this. Don't rely on it or trust it too much though. Good luck
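For anyone checking this later: the equality condition is Omega_m (1+z)^3 = Omega_r (1+z)^4, i.e., 1 + z_eq = Omega_m / Omega_r. Here is a quick sketch in Python; the density parameters are standard Planck-era values that I am assuming, not numbers from the thread:

    # Rough matter-radiation equality estimate: 1 + z_eq = Omega_m / Omega_r.
    Omega_m = 0.315            # matter density parameter today (assumed)
    Omega_gamma = 5.4e-5       # photon density today, from T_CMB = 2.725 K
    N_eff = 3.046              # effective number of relativistic neutrino species
    Omega_r = Omega_gamma * (1.0 + 0.2271 * N_eff)   # photons + neutrinos

    z_eq = Omega_m / Omega_r - 1.0
    T_eq = 2.725 * (1.0 + z_eq)    # CMB temperature scales as (1 + z)
    print(f"z_eq ~ {z_eq:.0f}, T ~ {T_eq:.0f} K")    # roughly 3400 and 9400 K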





MOJAVE (Monitoring Of Jets in Active galactic nuclei with VLBA Experiments) is a long-term program to monitor radio brightness and polarization variations in jets associated with active galaxies visible in the northern sky. Approximately 1/3 of these were observed from 1994-2002 as part of the VLBA 2 cm Survey. These jets are powered by the accretion of material onto billion-solar-mass black holes located in the nuclei of active galaxies. Their rapid brightness variations and apparent superluminal motions indicate that they contain highly energetic plasma moving nearly directly at us at speeds approaching that of light. Our observations are made with the world's highest resolution telescope: the Very Long Baseline Array (VLBA), at wavelengths of 7 mm, 1.3 cm, and 2 cm, which enables us to make full polarization images with an angular resolution better than 1 milliarcsecond (the apparent separation of your car's headlights, as seen by an astronaut on the Moon). We are using these data to better understand the complex evolution and magnetic field structures of these jets on light-year scales, close to where they originate in the active nucleus, and how this activity is correlated with gamma-ray emission detected by NASA's Fermi observatory.

For astronomers: All calibrated (u,v) visibility and FITS data for the MOJAVE and Boston U programs are available via html links on the source pages. If you are interested in Stokes Q,U,V (linear and circular polarization) FITS images, please contact us.

If you use these data in a publication, we ask that you please contact us so we can add a link to the list of external MOJAVE publications, and ask that you cite (Lister et al., 2018, ApJS, 234, 12) and include the following acknowledgment: "This research has made use of data from the MOJAVE database that is maintained by the MOJAVE team (Lister et al. 2018)"


Blazar Monitoring Program List:
Most of the blazars in MOJAVE are monitored at other wavelengths by a variety of instruments. This blazar monitoring list page contains a sortable table of all blazars known to be monitored at optical wavelengths, as well as known TeV-emitting AGNs and MOJAVE-monitored sources.



Lyman alpha systems are becoming a very useful source of information in physical cosmology.

The Lyman series is the series of energies required to excite an electron in hydrogen from its lowest energy state to a higher energy state. The case of particular interest for cosmology is where a hydrogen atom with its electron in the lowest energy configuration gets hit by a photon (light wave) and is boosted to the next lowest energy level. The energy levels are given by E_n = -13.6 eV/n^2, and the energy difference between the lowest (n=1) and second lowest (n=2) levels corresponds to a photon with wavelength 1216 angstroms. The reverse process can and does occur as well, where an electron goes from the higher n=2 energy state to the ground state, releasing a photon of the same energy.

The absorption or emission of photons with the correct wavelength can tell us something about the presence of hydrogen and free electrons in space. That is, if you shine light with a wavelength of 1216 angstroms at a bunch of neutral hydrogen atoms in their ground state, the atoms will absorb the light, using it to boost the electron to a higher energy state. If there are a lot of neutral hydrogen atoms in their ground state, they will absorb more and more of the light. So if you look at the light you receive, intensity as a function of wavelength, you will see a dip in the intensity at 1216 angstroms, with a depth depending on the amount of neutral hydrogen present in its ground state. The amount of light absorbed ('optical depth') is proportional to the probability that the hydrogen will absorb the photon (cross section) times the number of hydrogen atoms along its path.

Because the universe has many high energy photons and hydrogen atoms, both the absorption and emission of photons occurs frequently. In Lyman alpha systems, the hydrogen is found in regions in space, and the source for the photons are quasars (also called qsos), very high energy light sources, shining at us from behind these regions.

Because the universe is expanding, one can learn more than just the number of neutral hydrogen atoms between us and the quasar. As these photons travel to us, the universe expands, stretching out all the light waves. This increases the wavelengths lambda and lowers the energies of the photons (`redshifting').

Neutral hydrogen atoms in their lowest state will interact with whatever light has been redshifted to a wavelength of 1216 angstroms when it reaches them. The rest of the light will keep travelling to us.
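The bookkeeping is simple enough to put in a few lines of Python. This sketch just applies the stretch factors described above; the redshifts are made-up examples:

    # Where does Lyman-alpha absorption by gas at z_abs land in our spectra?
    LYA = 1216.0                 # rest wavelength, Angstroms

    z_qso = 3.0                  # hypothetical quasar redshift
    z_abs = [2.2, 2.5, 2.8]      # hypothetical intervening hydrogen regions

    # The quasar's own Lyman-alpha emission is observed at:
    print(f"Ly-a emission at {LYA * (1 + z_qso):.0f} A")

    # Each absorber removes light that had wavelength 1216 A *at the absorber*;
    # that wavelength is then stretched by a further factor (1 + z_abs) by today:
    for z in z_abs:
        print(f"absorption dip at {LYA * (1 + z):.0f} A   (absorber z = {z})")

Each dip lands blueward of the quasar's own Lyman alpha emission, which is why the forest appears on the blue side of the emission line.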

The quasar shines with a certain spectrum or distribution of energies, with a certain amount of power in each wavelength. At right, the top picture shows a cartoon of how a quasar spectrum (the flux of light as a function of wavelength) might look if there were no intervening neutral hydrogen between the qso and us. In reality, gas around the quasar both emits and absorbs photons. With the presence of neutral hydrogen, including that near the quasar, the emitted flux is depleted at certain wavelengths, indicating absorption by this intervening neutral hydrogen. As the 1216 angstrom wavelength is preferentially absorbed, we know that at the location where a photon is absorbed, its wavelength is probably 1216 angstroms. Its wavelength was stretched by the expansion of the universe from what it was initially at the quasar, and, if it had continued to travel to us, it would have been stretched some more from the 1216 angstroms it had at the absorber. Thus we see the dip in flux at the wavelength which the photon (1216 angstroms when it was absorbed) would have had if it had reached us. As we can calculate how the universe is expanding, we can tell where the photons were absorbed in relation to us. Thus one can use the absorption map to plot the positions of regions of intervening hydrogen between us and the quasar. The middle picture at right shows the flux for one nearby region, while the bottom picture shows the case for several intervening regions.

It is common to see a series of absorption lines, called the Lyman alpha forest. Systems which are slightly more dense, Lyman limit systems, are thick enough that radiation doesn't get into their interior. Inside these regions there is some neutral hydrogen remaining, screened by the outer region layers. If the regions are very thick, there is instead a wide trough in the absorption, and one has a damped Lyman alpha system. Absorption lines generally aren't just at one fixed wavelength, but over a range of wavelengths, with a width and intensity (line shape) determined in part by the lifetime of the excited n=2 hydrogen atom state. These damped Lyman alpha systems have enough absorption to show details of the line shape such as that determined by the lifetime of the excited state. These dense clumps are thought to have something to do with galaxies that are forming.

One can use the ionization of neutral hydrogen to find galaxies as well, for example the Lyman break systems described in these technical conference proceedings.

There is a very cool visualization of how different systems absorb Lyman-alpha here by Andrew Pontzen.

    Neutral hydrogen: because we see any light at all, we can limit how much neutral hydrogen is out there between us and the quasar and what its distribution is. It used to be thought that there was a smooth intergalactic medium (IGM) with regions embedded in it, and the smooth background would provide an absorption at all positions between us and the quasar (Gunn-Peterson effect). But observers only see evidence of lumpy regions. There isn't evidence for a spatially smooth component of neutral hydrogen between us and the quasar sources. It is a question of active research what is making the amount of neutral hydrogen so small (that is, what is ionizing the rest of the hydrogen).


Guidelines for the Naming of Energy Band-Specific Table Parameters

If a table has multiple columns of the same type for different energy bands, the HEASARC will use the following X-ray band prefixes to specify the energy band:

  • hb_: indicates the hard energy band, typically 2 to 12 keV
  • sb_: indicates the soft energy band, typically 0.2 to 2 keV
  • fb_: indicates the full (total) energy band, typically 0.2 to 12 keV

For example, a table might have hb_flux, sb_flux, and fb_flux parameters.



3. Results

3.1. Cosmological Distance Measurement Uncertainties

In order to obtain an indicator of the performance of each classification method, we parameterize the fractional uncertainty in angular diameter distance (dA) measurements as a function of contamination and incompleteness of the statistical sample of LAEs via the eight-parameter fitting formula of Equation (26), where the two variables are the fractional contamination (as defined in Equation (6)) and the sample incompleteness. We use the observable power spectrum (Equation (5)) in a Fisher (1935) matrix code (Shoji et al. 2009) that marginalizes over the contamination power spectrum (the second term in Equation (5)) to obtain the dA uncertainty for a grid of contamination and incompleteness values. A linear bias factor of 2.0 is used for LAEs in both redshift bins. We then use the grid of results to derive a fitting formula for the parameters. The uncertainty parameter for Lyα emitters in the simulated data is given by the parameter H in Table 2.

Table 2. Number of Observable LAEs in the Main HETDEX Spring Field at Nominal Flux Limits Based on Gr16 Schechter Functions, with Simulated Spectroscopic Measurement Noise, and the Corresponding Parameter Values for Equation (26) Determined by Fisher Matrix Analysis

Redshift Bin    1.9 < z < 2.5    2.5 < z < 3.5
LAE Counts      446,200          396,400

Parameter       Fisher Matrix-derived Values
A               3.462            3.758
B               1.323            0.917
C               1.359            3.539
D               1.263            1.803
E               11.65            8.934
F               0.775            1.099
G               3.586            1.085
H               1.409            1.901

Results for Bayesian classification of emission-line galaxies in a simulated HETDEX catalog are presented for an optimized requirement of the posterior odds ratio (Equation (25)) for selection as LAEs. This requirement minimizes the distance uncertainty given perfect information on the simulated object labels by optimizing the trade-off between contamination and incompleteness. Each redshift bin may be optimized independently to maximize the total amount of information obtained from the full 1.9 < z < 3.5 LAE sample. To accomplish this goal, we need to determine a set of values for the eight parameters in Equation (26) for each redshift bin. In our analysis we divided the full spectral range of HETDEX into two redshift bins and tuned the required posterior odds ratio for LAE classification in each bin separately. Figure 4 and Tables 3 and 4 present results corresponding to a Bayesian method optimized separately for two redshift bins. The total uncertainty is estimated by inverse variance summation.
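As a schematic of the classification step (not the paper's actual code; the threshold 1.38 is the optimized low-redshift cut quoted in the notes to Table 3, and the likelihoods and priors are stand-ins):

    # Toy posterior-odds classifier in the spirit of Equation (25).
    import numpy as np

    def posterior_odds(like_lae, like_oii, prior_lae, prior_oii):
        """Posterior odds = Bayes factor x prior odds."""
        return (like_lae / like_oii) * (prior_lae / prior_oii)

    def classify(odds, threshold=1.38):
        # e.g., the optimized cut for the 1.9 < z < 2.5 bin
        return np.where(odds > threshold, "LAE", "[O II]")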

Figure 4. Four selected redshift bins shown in rows for a simulated HETDEX survey with g' (5σ = 25.1) band imaging and realistic measurement noise. Approximately 5% of the galaxies in each redshift bin are plotted. Left: all 5σ spectroscopic detections of emission-line galaxies whose primary lines are observed in the given wavelength interval. Middle (right): simulated emission-line galaxies classified by the Bayesian method into high-redshift LAE (foreground [O II]) samples at the corresponding redshifts. Correctly classified "true" LAEs ([O II] emitters) are shown in red (blue); misidentified [O II] emitters (erroneously discarded LAEs) are indicated in blue (red).

Table 3. Classification Results for a Simulated HETDEX Catalog Based on Gr16 Luminosity Functions and Equivalent Width Distributions, with Simulated Aperture Photometry on g' (5σ = 25.1) Band Imaging

Classification Method          EW > 20 Å    Bayesian (Default^a)    Bayesian (Optimized^b)
Galaxies classified as LAEs    637,400      847,500                 796,200
Missed observable LAEs         218,700      20,100                  50,800
Sample incompleteness          26.0%        2.39%                   6.02%
Misidentified [O II] emitters  13,500       25,100                  4,400
Fractional contamination       2.12%        2.96%                   0.55%
Measurement uncertainty        1.32%        1.19%                   1.16%

a The "default" Bayesian requirement for LAE classification is . b The "optimized" Bayesian method requires > 1.38 for the classification of 1.9 < z < 2.5 LAEs and > 10.3 for 2.5 < z < 3.5 LAEs.

Table 4. Bayesian Method Classification Results for Three Simulated HETDEX Catalogs with g' Band Imaging (5σ = 25.1) "Baseline" Distributions are used as Bayesian Priors for Each Simulation Scenario

Simulation Scenario              Baseline     Pessimistic      Optimistic
Distribution of LAEs             Gr16         Ci12, z = 2.1    Gr07, z = 3.1
                                 Evolving     No Evolution     No Evolution
Distribution of [O II] emitters  Ci13
"True" observable LAEs           842,600      375,000          1,155,000
Galaxies classified as LAEs      796,200      345,000          1,075,800
Missed observable LAEs           50,800       32,700           83,500
Sample incompleteness            6.02%        8.73%            7.23%
Misidentified [O II] emitters    4420         7700             4350
Fractional contamination         0.55%        2.20%            0.40%

3.2. Improvement over Traditional EW Method

Compared to the traditional EW > 20 Å narrowband limit used to classify emission-line galaxies as LAEs (which discards all data below the dashed line in Figure 3), the Bayesian method presented in Section 2.2 recovers a more complete statistical sample of high-redshift LAEs without an overall increase in misidentified low-redshift [O II] emitters. Our Bayesian method adapts to prior probabilities that reflect the evolution of the galaxy populations and the effect of cosmological volume on the relative density of galaxies as a function of wavelength.

LAEs at z < 2.065 are not contaminated by foreground [O II] emitters, since [O II] λ3727 will not be detected at observed wavelengths below 3727 Å. Moreover, at z < 0.05, a galaxy's angular scale is large enough that all but the most compact sources will be resolved in the imaging survey. Consequently, out to about z ≈ 2.4, our Bayesian analysis recovers all LAEs with negligible contamination (see Figure 4, top row).

The rate of contamination in the LAE sample identified by our Bayesian method is sub-percent up to z ≈ 3.0. Over 1.88 < z < 2.54, the Bayesian method recovers more than 99% of available LAEs, compared to a sample identified by the traditional EW > 20 Å cutoff that is markedly less complete.

Table 3 provides a comparative summary of the two classification methods. With respect to the traditional EW > 20 Å cut, the Bayesian method significantly increases the completeness of the sample of objects classified as LAEs by trading the near-zero contamination of the EW method for sub-percent contamination in the low-redshift bin (1.9 < z < 2.5).

Over the entire HETDEX spectral range (3500–5500 Å), our Bayesian method recovers approximately 25% more LAEs than the traditional EW method. Over the redshift range 2 < z < 3, 86% of "true" LAEs missed by the EW > 20 Å cut are correctly classified by the Bayesian method, representing the recovery of cosmological information that would be discarded by the EW method. In addition to improving the completeness of the LAE sample, the Bayesian method also reduces contamination by [O II] sources by a factor of roughly 3 (compare the misidentified counts in Table 3).

3.3. Aperture Spectroscopy for Additional Emission Lines

The presence of other emission lines redward of the primary detected line (in the case of [O II] emitters; see Figure 1), when they fall within the spectral range of HETDEX, provides additional observed information for the two likelihood functions in the Bayes factor. Accounting for this spectral information leads to better classification performance by the Bayesian method in the form of additional reductions in fractional contamination and further increases in the completeness of the LAE sample.

At z < 0.1, the vast majority of [O II]-emitting galaxies will be detected via multiple emission lines; we typically observe stronger [O III] λ5007 emission than [O II] λ3727. For the bulk of the HETDEX redshift range of [O II] emitters, 0.13 < z < 0.42, [Ne III] λ3869 is the only other line that falls into the spectrograph's bandpass.

With our previous assumption of statistically independent quantities, the likelihood functions under the LAE and [O II] hypotheses are each the product of the likelihood functions associated with the individual properties we wish to consider (schematically, a product of per-property likelihoods), where the lines we wish to consider are [Ne III] λ3869, Hβ λ4861, [O III] λ4959, and [O III] λ5007, subject to their falling within the HETDEX spectral range. An example of other "properties" one might include in this analysis is the observed color of objects, obtained by having imaging data in more than one band (see discussion in Section 4.4).

Assuming a Gaussian noise distribution, we can calculate the probability of the measured flux at each expected line location. When a line is out of range, it contributes no information for or against the hypothesis that the primary detected line is the line in question: its likelihood factor is set equal to unity, thereby having no effect on the value of the Bayes factor on the left-hand side of Equation (28).
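A hedged sketch of that per-line bookkeeping (the function names and noise-model details are my own, not the paper's code):

    # Gaussian likelihood of the measured flux at each expected line location;
    # lines outside the bandpass contribute a factor of unity.
    import numpy as np

    REST_LINES = {"[NeIII]3869": 3869.0, "Hbeta": 4861.0,
                  "[OIII]4959": 4959.0, "[OIII]5007": 5007.0}
    BANDPASS = (3500.0, 5500.0)          # HETDEX spectral range, Angstroms

    def line_likelihood(f_meas, f_model, sigma, lam_obs):
        if not (BANDPASS[0] <= lam_obs <= BANDPASS[1]):
            return 1.0                   # out of range: no information either way
        return np.exp(-0.5 * ((f_meas - f_model) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    def additional_lines_likelihood(z, f_meas, f_model, sigma):
        """Product over the additional lines for a trial redshift z."""
        like = 1.0
        for name, lam_rest in REST_LINES.items():
            like *= line_likelihood(f_meas[name], f_model[name], sigma[name],
                                    lam_rest * (1.0 + z))
        return like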

The improvement due to accounting for additional emission lines is evident when we consider the boundary at which spectroscopic information from all additional lines is lost, when [Ne III] λ3869 is redshifted out of the HETDEX spectral range (3500–5500 Å) for z > 0.42 [O II] emitters (observed [O II] wavelength > 5299 Å). The bottom row in Figure 4 shows 5σ emission-line detections at 5300–5500 Å and their classification into samples of LAEs and [O II] emitters. Without spectroscopic information from the additional lines, the Bayesian cutoff between LAEs and [O II] emitters reduces to a nearly straight line on a log–log plot of line flux versus continuum flux density in this redshift bin, which is the reddest 200 Å in the spectral range of HETDEX (compare to the third row in Figure 4).

3.4. Optimizing Area versus Depth in Fixed Observing Time

Using our Bayesian method and HETDEX as a baseline scenario, we investigate the survey design trade-off between total survey area and depth of coverage per unit survey area. Holding the amount of available observing time fixed at the HETDEX survey design (denoted by the gray dashed line in Figure 5), we apply 5σ depths in both simulated spectroscopic and imaging surveys that scale with observing time as t^(-1/2), where t is the factor by which observing time per unit survey area changes as a result of a corresponding change in total survey area. Simulated measurement noise varies accordingly, as described in Section 2.1.

Figure 5. Trade-off between survey area and depth in simulated surveys for 1.9 < z < 3.5 LAEs at fixed broadband imaging and spectroscopic time. A survey that reaches 25.1 magnitude will cover 300 deg². Top: number of LAEs available to be recovered in each simulated survey and numbers correctly identified by each method for LAE classification. Bottom: measurement uncertainty in angular diameter distance (dA) corresponding to each combination of classification method and redshift range in simulations. The most accurate measurement of dA occurs with a survey area of 300–600 deg².

The number of LAEs available to be recovered in the 5σ line flux-limited sample changes with survey design, as shown in the upper panel of Figure 5. For each survey design, we re-run the Fisher matrix code described in Section 3.1 with the number of available LAEs to determine a new set of parameters for Equation (26) (i.e., Table 2) and re-optimize the Bayesian method for each case.

Our analysis indicates that the current HETDEX survey design is effectively optimal in the trade-off between area and depth when our Bayesian method is used as the redshift classifier: trading away from the nominal 300 deg² survey area moves the optimal distance uncertainty for the two redshift bins in opposite directions (lower panel in Figure 5).


5. RESULTS

5.1. RM and

In Bernet et al. (2008), it was claimed that there was an association between higher RM values and the presence of strong (rest equivalent width above 0.3 Å) Mg ii absorption in the optical spectrum. This was based on a KS test on the distribution of RM for sources with and without such absorption. Here we reproduce this analysis but now using our new peak Faraday Depth instead of RM: the modulus of the Faraday Depth at which the FD distribution peaks, shifted by the Oppermann et al. (2015) estimates of the Galactic contribution. We use the one-tailed Kolmogorov–Smirnov (KS) test to test the hypothesis that objects with Mg ii absorption along the lines of sight have enhanced values (i.e., our null hypothesis is that there is no such association or that clean lines of sight have enhanced values). Recall that the objects with the largest Faraday Depths, i.e., PKS0506–61, PKS2134+004, and 4C+6.69, are excluded from the analysis since we believe they are strongly affected by Galactic effects (see Section 4).

Unlike Bernet et al. (2008), we adopt a weaker absorption cut at W0 = 0.1 Å and obtain a significant result, represented in Figure 8. We will use this cut throughout the remainder of the paper because it turns out to be more significant later on when we analyze the structure of the Faraday Depth distribution, indicating that weak absorbers also affect the FD distribution. However, when using the same equivalent width cut at 0.3 Å in Mg ii absorption, we do not see a signal and obtain a p-value of 23%. Using fractional q and u for RM Synthesis, as discussed in Section 2.4, makes effectively no difference to the significance level of this test.

Figure 8. Empirical distribution function (EDF) of for objects without (red) and with (blue) interveners. The one-tailed KS test yields .

With our new data, the correlation of Mg ii with the peak Faraday Depth is less significant than that found with RM in Bernet et al. (2008). They found a significance level of 92.2% in a two-tailed KS test, without even correcting for Galactic RM.

This difference may be partly due to the fact that we are only analyzing a subset of their sample. On the other hand, it could also reflect differences between the use of values derived from RM Synthesis (as here) and the use of single RM values that are derived from individual sparsely sampled polarization data (as in Bernet et al. 2008). Indeed, when we do the KS test with RMKron for our sample (excluding PKS0506–61, PKS2134+004, and 4C+6.69 as in our analysis), we obtain a somewhat stronger result. This stronger significance is driven mostly by the objects with large Faraday Depths, which is where we see discrepancies between our values and RMKron, as mentioned in Section 3.1. This difference will be explored further in a future paper.

Prompted by the statistical editor, we have also performed the Anderson–Darling (AD) 2-sample test for all our KS test results (here and later). It turns out that the AD test results are always slightly more significant but altogether consistent with the KS test results. Therefore, we stay with the KS test throughout this paper.
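For reference, this kind of one-tailed two-sample test is a one-liner in scipy; a sketch with invented placeholder samples (the real values live in the paper's tables, not here):

    import numpy as np
    from scipy.stats import ks_2samp    # scipy.stats.anderson_ksamp is the AD analog

    rng = np.random.default_rng(0)
    fd_mgii  = rng.rayleigh(30.0, 33)   # hypothetical values, sight lines with Mg II
    fd_clean = rng.rayleigh(15.0, 16)   # hypothetical values, clean sight lines

    # alternative='less': the alternative hypothesis is that the CDF of the first
    # sample lies below the second's, i.e., Mg II sight lines have enhanced values.
    res = ks_2samp(fd_mgii, fd_clean, alternative='less')
    print(f"D = {res.statistic:.3f}, one-tailed p = {res.pvalue:.4f}")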

It should be noted that the sources with intervening absorption systems are generally at slightly higher redshifts (a difference of around 0.2 in their respective medians), and so the simple (1+z)^-2 scaling of observed Faraday Rotation with redshift would produce, all other things being equal, a lower observed dispersion for the more distant systems, i.e., opposite to what is seen in Figure 8.

5.2. Depolarization

We now look for correlations between the depolarization parameters and the presence of Mg ii absorption lines along the line of sight. As before, we apply the one-tailed KS test with the null hypothesis that sources with Mg ii absorption along the lines of sight are drawn from the same distribution as objects with clean lines of sight or that objects with clean lines of sight are more strongly depolarized.

The strongest signal is achieved with one of the DP parameters, for which the KS test yields a very small p-value. Figure 9 shows the corresponding distribution functions. As already mentioned in Section 4, if objects at high Galactic latitude are excluded, the signal increases further.

Figure 9. EDF of the depolarization parameter DP for objects without (red) and with (blue) interveners. The one-tailed KS test yields .

Another DP parameter also yields a strong result. The Burn-model fit is not as robust, due to the complex behavior of the polarization spectra in many sources; for 10 objects the fit either fails to converge or yields a negative width, which is unphysical. Once these objects are excluded from the KS test, however, it too yields a significant result.

To evaluate the real significance of the obtained p-values (here and elsewhere in the paper), we have randomly re-scrambled objects with and without Mg ii absorption. The re-scrambling has been realized by randomly drawing, without replacement, 16 objects and declaring them artificially to be objects with clean lines of sight, while the remaining 33 objects are declared to be objects with interveners. Subsequently, the KS test has been carried out to obtain p-values. The distribution of p-values (after 10,000 such draws) showed that the p-value estimates are conservative, in the sense that p-values smaller than the observed ones happen in only 0.09%, 0.6%, and 0.2% of the realizations for the respective parameters. This check shows that the p-values of the KS tests are conservative.
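A sketch of that re-scrambling check (sample sizes from the text; the measured values themselves are placeholders here):

    # Relabel the 49 objects at random (16 "clean", 33 "intervener") 10,000 times
    # and redo the one-tailed KS test each time.
    import numpy as np
    from scipy.stats import ks_2samp

    def rescramble_pvalues(values, n_clean=16, n_draws=10_000, seed=1):
        rng = np.random.default_rng(seed)
        pvals = np.empty(n_draws)
        for i in range(n_draws):
            perm = rng.permutation(len(values))
            clean, interv = values[perm[:n_clean]], values[perm[n_clean:]]
            pvals[i] = ks_2samp(interv, clean, alternative='less').pvalue
        return pvals

    dp_values = np.random.default_rng(2).rayleigh(1.0, 49)   # placeholder data
    pvals = rescramble_pvalues(dp_values)
    print("fraction of draws with p < 0.01:", np.mean(pvals < 0.01))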

Somewhat weaker but still significant results are also achieved for DPHalf and one further parameter. As before, the Burn-model fit fails for nine objects and those are excluded from the KS test. For DPQuart, we see little or no effect.

It is clear that we obtain rather different p-values with the six depolarization parameters and we will discuss this further in Section 5.4. Nonetheless, altogether, our results demonstrate that there is a clear and highly significant correlation between the presence of intervening Mg ii absorption systems and depolarization.

Farnes et al. (2015) compared fractional polarization spectral indices to the presence of Mg ii absorption for 41 objects and could not see any correlation between polarization structure and the presence of interveners. However, recall that our sources are initially selected to be compact, while the sample in Farnes et al. (2015) reduces to 15 when only flat spectrum sources are selected. Furthermore, in Farnes et al. (2015), only a handful of data points along the spectrum have been available, which could lead to an imperfect description of the polarization structure given the complexity we can find for some sources.

5.3. Structure in the FD Distribution

The previous section demonstrated that depolarization in the radio spectra is clearly statistically associated with the presence of intervening Mg ii absorption along the line of sight. Since, generically, depolarization reflects the presence of different Faraday Depths coming from different parts of the source, we would then expect to see correlations between the presence of interveners and various parameters that capture the range of Faraday Depth in a given source.

Armed with the quantitative parameters defined in Section 3.3, we then applied the KS tests. As before, we adopt the null hypothesis that the parameters are equally distributed between objects with and without intervening absorption systems or are enhanced for objects without interveners.

We find no significant cause to reject the null hypothesis for any of the parameters: neither the second moment, nor the Gini coefficient G, nor the coverage parameter C, nor the subjective ranking yields a significant p-value. Figure 10 shows the distributions of the four parameters, and it is fairly clear that there is no significant correlation between them and the presence of intervening absorbers. The results remain insignificant when the fractional q and u are used for RM Synthesis, with, e.g., a p-value of 55% for C.

Figure 10. EDF of the second moment parameter (left), Gini coefficient G (middle left), coverage parameter C (middle right) and subjective ranking (right) for objects without (red) and with (blue) interveners. The one-tailed KS test yields , , , and , respectively.

These null results were surprising. We clearly see the connection between interveners and depolarization in Section 5.2, but cannot then associate the complexity in the FD distribution, which we would have expected was the cause of the depolarization, with the presence of interveners. Furthermore, we actually see very little correlation between our parameterizations of FD structure and depolarization.

This result then led us to re-examine the links between the FD distribution and depolarization, to isolate that feature of the FD distribution that is most strongly causing depolarization, and then to construct a further FD parameter that is, finally, clearly associated with the presence of intervening systems in our sample. This is the subject of the next two Sections 5.4 and 5.5. Furthermore, we are able to argue that the overall structure of the FD distribution examined in this section is actually likely reflecting other properties of the sources. This is briefly examined in Section 5.6.

5.4. The Connection between FD Distribution and Depolarization

As commented at the end of the previous section, we found a surprising disconnect between the evident richness of structure in the FD distribution and the presence of depolarization signatures in the overall polarization spectrum. The richness of the polarization structure was also surprising. As discussed already, we might expect to see simple monotonically decreasing polarization with increasing wavelength (for a simple Gaussian FD distribution), but instead we see a very wide range of behavior (as in Column (3) of Figures 1–3).

The polarized flux density of a given source at a given wavelength represents the vector sum of the complex representations of the different components, each of which rotates (in the Stokes Q–U plane) at a rate proportional to its Faraday Depth times λ². It is worth distinguishing between the effects on the polarization spectrum of a few components that are widely separated in Faraday Depth, referred to as "gross structure" in the following, and the effects of very closely spaced, or continuous, structure in Faraday Depth, as, for example, in the Gaussian width of a particular component, referred to as "fine structure." The former causes large variations in the polarized flux density with wavelength as the small number of polars rotate around each other, producing an oscillatory behavior in the total amplitude, i.e., in the polarized flux density. In contrast, the fine structure within a feature produces a steadier decrease in polarization as the continuous distribution of Faraday Rotation causes a progressive cancellation of polarized flux.

The FD distributions of many sources in Figures 1–3 show multiple discrete components, i.e., gross structure, and the parameters which we constructed in the previous section, including our own subjective ranking, were based primarily on the presence of these multiple discrete components rather than the width of individual components.

Further evidence for the distinction between multiple components and their widths (i.e., between gross and fine structure) comes from comparing the second moment of the FD distribution to the parameters obtained from the depolarization curves. Although we would have expected those parameters to trace the same physical quantity, i.e., the dispersion of the FD distribution, the second moment comes out an order of magnitude larger than the depolarization parameters. This could be explained by arguing that the second moment describes gross structure while the depolarization parameters describe fine structure. The width of an individual component is sensitive to these fine structure effects because it measures only the spread within the primary components; we will show later that such widths are of similar size as the depolarization parameters.

We explore the effects of both discrete components and the Gaussian width on the polarization using a simple toy model. The toy model consists of N = 1000 FD cells, each assigned a Faraday Depth φ_j according to an underlying FD distribution, assuming that all cells have the same initial phase ψ and also assuming unit total flux density, I, so that each cell carries flux 1/N. The polarization is then determined by the coherent sum $P(\lambda^2) = \frac{1}{N}\sum_{j=1}^{N} e^{2i(\psi + \phi_j\lambda^2)}$.

Figure 11 shows the resulting depolarization structure of a few simple models. The left panel corresponds to a model with one component, the middle panel to a model with two components where the secondary component carries 10% of the total flux density, and the right panel to a model with three components where the first secondary component carries 20%, and a second 10%, of the total flux density. All components have been convolved with Gaussians of four different widths in Faraday Depth, shown in blue, green, red, and cyan, respectively.
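A minimal sketch of this toy model (the component positions and flux fractions are the example values from the figure; the implementation is my own, not the paper's code):

    import numpy as np

    def toy_polarization(lam2, phi_components, weights, sigma_phi,
                         n_cells=1000, psi=0.0, seed=0):
        """|P(lambda^2)| for N Faraday-depth cells drawn from discrete components,
        each broadened by a Gaussian of width sigma_phi (a Monte Carlo convolution)."""
        rng = np.random.default_rng(seed)
        comp = rng.choice(len(phi_components), size=n_cells, p=weights)
        phi = np.asarray(phi_components)[comp] + rng.normal(0.0, sigma_phi, n_cells)
        # coherent sum over cells, unit total flux split equally among them
        P = np.exp(2j * (psi + np.outer(lam2, phi))).mean(axis=1)
        return np.abs(P)

    lam2 = np.linspace(0.0, 0.05, 200)    # lambda^2 grid in m^2
    # three components: primary, 20% at +40 rad/m^2, 10% at +60 rad/m^2
    p = toy_polarization(lam2, [0.0, 40.0, 60.0], [0.7, 0.2, 0.1], sigma_phi=10.0)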

Figure 11. Polarization structure for a one-component model (left), a two-component model with the secondary component carrying 10% of the total flux density at 40 rad m−2 separation from the primary component (middle), and a three-component model with the first secondary component carrying 20% of the total flux density at 40 rad m−2 and the second carrying 10% at 60 rad m−2 from the primary component (right). Each model is convolved with Gaussians of four different widths (blue, green, red, and cyan).

We see that highly non-monotonic and complex polarization structure is produced by adding just a few additional components. However, the overall depolarization is mainly driven by the convolution with the Gaussian rather than by these multiple components. Multiple discrete components in contrast tend to add oscillatory features to the polarization structure. The small amplitude of the oscillations reflects the small fractions of the flux density in the secondary components. In contrast, the convolution leads to phase dispersions affecting all of the polarized flux density.

We can now return to the different parameterizations of the depolarization introduced in Section 3.2 and examine which of them best reflects the fine structure, recalling that these were correlated to different degrees with the presence of Mg ii intervening absorption. We use the same type of simple models, but now construct 6000 models by varying the positions of the secondary component(s) relative to the primary component (between zero and 200 rad m−2), by varying the combined relative flux density contribution of the secondary component(s) (up to 40% of the total), and also by varying the initial phases ψ of the secondary component(s) relative to the primary component, and to each other (between 0 and π). These variations reflect what we see in the FD distributions of our sample quasars (see the coverage parameter C in Figure 10). Furthermore, each of these 6000 models is convolved with a range of Gaussian widths between 0 and 40 rad m−2. The obtained DP parameters of those models are shown in the different panels of Figure 12. As with the real data, it was not always possible to fit the Burn-model parameters (bottom panels) with physically meaningful values. The red lines in each panel show the behavior of the single component model. As a comparison, the distribution of the corresponding DP parameters in our observed quasar sample are represented by the histograms on the right-hand axis.

Figure 12. 6000 one, two, or three component Faraday screen models have been implemented by varying the relative Faraday Depth, size, and initial angle of the components; each model is convolved with Gaussian widths between 0 and 40 rad m−2. The obtained depolarization parameters DPQuart (top left), DPHalf (top right), the two further DP parameters (middle left and middle right), and the Burn-fit parameters (bottom left and bottom right) for each model are represented by the black lines. The red line represents the one component model. The histograms on the right show the respective depolarization parameter distribution of objects without (red) and with (blue) interveners in the sample. The histograms are stacked.

As discussed above, the feature in the FD distribution that most drives depolarization is the Gaussian width of the components. Hence, for our purposes, a "good" DP parameter should be one that traces this width well. Figure 12 shows the relation of the previously defined DP parameters to the width for the simple one component model (red line) and, as a comparison, for more complex models with two or three components (black lines). It shows how the appearance of additional components can bias DP and blur the relation between the width and DP. We see that DPQuart (top left) and DPHalf (top right) can be both increased and decreased by the presence of secondary components, while the two middle-panel DP parameters mostly tend to be increased.

Both Burn-fit parameters (bottom left and bottom right) work surprisingly well in recovering an estimate of the width, despite the difficulties of fitting the Burn model to the non-monotonic polarization structure. For DPQuart and DPHalf, most of the objects in our sample are in the region where secondary components can strongly affect them. As opposed to DPQuart or DPHalf, we see that for the two middle-panel DP parameters a fair portion of the sample has values close to one, where the value is more robust against additional components.

This analysis, based on a simple toy model, therefore offers a possible explanation as to why we got different KS significances for the different DP parameters in Section 5.2. Recall that we saw the strongest correlations with the presence of intervening absorption for the two middle-panel DP parameters and also, when measurable, for the Burn-fit widths. We would have seen this behavior if the presence of Mg ii absorption was associated with the fine structure rather than with the multiple components that dominate the visual impression of the FD distributions in Figures 1–3.

We therefore suggest that intervening material is primarily responsible for broadening the FD distribution rather than the presence of the multiple discrete components. We will test this in the next Section 5.5 and provide a further argument in favor of this idea in the subsequent Section 5.6.

5.5. Intervener Effects in Faraday Depth

In the previous section, we postulated that intervening material, traced by the Mg ii absorption, affects the observed FD distribution by means of a convolution effect. The width of the primary FD component, introduced in Section 3.3, is a parameter constructed to be insensitive to the appearance of multiple components.

In Figure 13, we compare this width to the Burn-fit depolarization parameter. The resolution in Faraday Depth is around 17 rad m−2 in the VLA data and around 24 rad m−2 for ATCA. Most objects have widths smaller than that, i.e., those sources are barely resolved in Faraday Depth space, and the values should be interpreted with some caution. Nevertheless, and despite the difficulties in obtaining Burn-model fits, we see a reasonable overall correlation between the two quantities.

Figure 13. Comparison between the depolarization parameter and the standard deviation of the primary component for objects without (red) and with (blue) interveners. 10 objects for which the fit failed are missing.

If we now construct our usual KS test between absorption and the primary-component width, i.e., a one-tailed test with the null hypothesis that objects with Mg ii absorption along the lines of sight are drawn from the same distribution as objects with clean lines of sight, or that objects with clean lines of sight have enhanced widths, we can reject the null hypothesis (Figure 14). When re-scrambling objects with and without Mg ii absorption along the lines of sight as in Section 5.1, a p-value as small or smaller occurs in 1.9% of the realizations.

Figure 14. EDF of the depolarization parameter for objects without (red) and with (blue) interveners. The one-tailed KS test yields .

To assess the significance of this result, it should be borne in mind that most of the objects were not resolved in Faraday Depth space. It is noticeable that all of the quasars with widths significantly larger than the resolution have intervening Mg ii absorption.

When we use the fractional q and u for RM Synthesis, we obtain here an even more significant p-value. Sources with spectral indices different from zero, i.e., sources whose total flux density varies across the observed frequency range, will have associated variations also in the polarized flux density, i.e., in Q and U, and this will affect the FD distribution. Since these variations are normally gradual with frequency, they will tend to affect width-like parameters rather than add new secondary components. This would then blur the KS test. Using fractional q and u is an attempt to take out this effect, and indeed it yields a stronger result.

The strong association of both depolarization and the primary-component width with intervening material indicates that intrinsic effects within the sources do not contribute much to the fine structure of the Faraday Depth within each component. This could be due to different properties of the magnetized plasma within sources compared to that in the intervening systems traced by Mg ii absorption. However, redshift effects should also be considered here. A given dispersion in the rest frame at some redshift z will produce an observed dispersion that scales as (1+z)^-2. Since the quasars are generally at significantly higher redshifts than the intervening absorbing systems, this could explain why the observed dispersion is dominated by the intervening systems.

5.6. Intrinsic Effects in Faraday Depth

We have argued above that the presence of multiple discrete components in the FD distributions of our sources (to which the parameters we introduced in Section 5.3 were particularly sensitive) is not associated with the presence of intervening Mg ii absorption. In contrast, intervening material is clearly associated with the broadening of the FD distribution. This raises the possibility that the multiple components actually arise intrinsically to the radio sources.

That this is likely the case is indicated when we consider the initial phases ψ of the components, shown in Figures 1–3 (Column (6)). We always see that if a secondary component is present, then the initial phase of that secondary component is different from the initial phase of the primary component. In other words, the cusps in the radial plots (in Column (6)), which correspond to the peaks in the plots in Column (5), lie at different azimuthal angles representing the initial phase ψ.

An important point is that source components with different initial phase ψ must come from spatially distinct emission regions, either in the plane of the sky or along the line of sight. This is because ψ represents the phase of emission before any Faraday Rotation. It is then quite reasonable to imagine that these different emission regions have different intrinsic Faraday Depths associated with them. This would naturally produce the multiple components, each with distinct Faraday Depth and ψ, in the overall FD distribution, as observed.

However, the argument above is not watertight: it could also be imagined that different source regions (with different ψ) lying behind different parts of an intervening system would also suffer vastly different amounts of Faraday Rotation. We could for instance imagine a chequer-board intervening screen with just two values of intervening Faraday Depth, φ1 and φ2, which lies in front of a source in which ψ varies spatially. We would then observe a Faraday Depth distribution with two separated peaks at φ1 and φ2, each with a ψ that reflected the distribution of ψ of the source regions behind the φ1 and φ2 cells of the intervening screen. This would also produce two distinct components with different ψ, with all of the Faraday Rotation coming from the intervening system. However, in this case, we would expect to see a correlation between the presence of Mg ii and the parameters introduced earlier that are sensitive to the presence of these multiple components in the FD distribution. This is not seen, suggesting that, indeed, the different Faraday Depths of the multiple components originate intrinsically to the source.

Also, the fact that we do not see a single example in which two or more cusps have the same ψ, which would happen, e.g., if a chequer-board intervening screen as described before lies in front of a source with constant ψ, should be taken as an indication that distinct components in the FD distribution arise from intrinsic effects.

We conclude therefore that the gross structure complexity and multiple distinct components that are seen in many sources are produced by effects intrinsic to the sources, while intervening material introduces a small but pervasive broadening of the FD distribution. We explore further the implications of the latter effect in Section 6. Further exploration of the properties of the background sources from QU-fitting (O'Sullivan et al. 2012; Sun et al. 2015) and analysis of the shape of the depolarization curves is beyond the scope of the current work, but we plan to develop this further in the future. It will also be very interesting to compare the results of the QU-fitting with our Faraday Depth distributions.


The Faintest Dwarf Galaxies

Joshua D. Simon
Vol. 57, 2019

Abstract

The lowest luminosity (L ≲ 10^5 L_⊙) Milky Way satellite galaxies represent the extreme lower limit of the galaxy luminosity function. These ultra-faint dwarfs are the oldest, most dark matter–dominated, most metal-poor, and least chemically evolved stellar systems known.

Supplemental Materials

Figure 1: Census of Milky Way satellite galaxies as a function of time. The objects shown here include all spectroscopically confirmed dwarf galaxies as well as those suspected to be dwarfs based on l.

Figure 2: Distribution of Milky Way satellites in absolute magnitude () and half-light radius. Confirmed dwarf galaxies are displayed as dark blue filled circles, and objects suspected to be dwarf gal.

Figure 3: Line-of-sight velocity dispersions of ultra-faint Milky Way satellites as a function of absolute magnitude. Measurements and uncertainties are shown as blue points with error bars, and 90% c.

Figure 4: (a) Dynamical masses of ultra-faint Milky Way satellites as a function of luminosity. (b) Mass-to-light ratios within the half-light radius for ultra-faint Milky Way satellites as a function.

Figure 5: Mean stellar metallicities of Milky Way satellites as a function of absolute magnitude. Confirmed dwarf galaxies are displayed as dark blue filled circles, and objects suspected to be dwarf .

Figure 6: Metallicity distribution function of stars in ultra-faint dwarfs. References for the metallicities shown here are listed in Supplemental Table 1. We note that these data are quite heterogene.

Figure 7: Chemical abundance patterns of stars in UFDs. Shown here are (a) [C/Fe], (b) [Mg/Fe], and (c) [Ba/Fe] ratios as functions of metallicity, respectively. UFD stars are plotted as colored diamo.

Figure 8: Detectability of faint stellar systems as functions of distance, absolute magnitude, and survey depth. The red curve shows the brightness of the 20th brightest star in an object as a functi.

Figure 9: (a) Color–magnitude diagram of Segue 1 (photometry from Muñoz et al. 2018). The shaded blue and pink magnitude regions indicate the approximate depth that can be reached with existing medium.



Dark absorption lines in spectra always appear at particular wavelengths
depending on the elements or compounds that caused them. So it's a simple
matter of comparing laboratory results with what's seen in distant galaxies.

The Doppler effect is not much of a discussion point. But Halton Arp was
one of the staunchest defenders of the idea that the redshift wasn't caused
by expansion of the universe, but by galaxies being connected to each other
and ejected from each other.

There's always some new theory strong in words and short on math. Expansion
is the best answer. The absorption lines show where the color should be, and
we can compare that to where it actually is. Nothing else clearly explains
the difference.

Chuck Taylor
Do you observe the moon?
Try the Lunar Observing Group
http://groups.yahoo.com/group/lunar-observing/

Intergalactic space, solar system space, or galaxy space are all pretty much
a vacuum. Even the impressive fluorescent clouds are a better vacuum than
anything you're going to produce. So the speed change is down in the
immeasurable range. I should add that light is different from sound in that
it does not require a medium to travel through.

Secondly, even if you put light through a Bose–Einstein condensate and slow
it way down and then bring it out the other side to measure for spectral
shift, there isn't any. The shift is related to the frequency and can be
affected by the speed of the object emitting, reflecting, or receiving the
light. But the frequency does not change even under the extreme speed
changes that can be created in a laboratory.

Yes, you would. The speed of light in a vacuum is a constant, no matter
where it has been before. And the speed of light in any medium will be the
same, regardless of where the light has been before. And again, it still
will not change the frequency shift that is measured.

Chuck Taylor
Do you observe the moon?
Try the Lunar Observing Group
http://groups.yahoo.com/group/lunar-observing/

It is light slowing down upon striking a lens and speeding back up again as
it leaves the lens that makes a lens refract. If that didn't happen, there'd
be no refracting telescopes.

Different elements absorb light at particular frequencies and at known intervals
or spacing. If light from a distant star has been redshifted, the absorption
lines will appear offset from what would normally be expected for that element.
By comparing the absorption lines from where they should be to where they are,
one can determine very precisely how fast the object in question is speeding
away from an observer.

Absorption spectra of element whatever:

     || | |

Absorption spectra redshifted:

  || | |

Notice the spacing is still the same, just the lines are shifted to the left.
(Left in this example being a lower frequency.)
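In code, the comparison is one line of arithmetic; a toy sketch with invented wavelengths:

    # Measure redshift by comparing a known laboratory line with where it
    # actually shows up in an observed spectrum (illustrative values only).
    lam_rest = 6563.0       # H-alpha laboratory wavelength, Angstroms
    lam_obs = 7200.0        # hypothetical measured position in a galaxy spectrum

    z = (lam_obs - lam_rest) / lam_rest
    beta = ((1 + z)**2 - 1) / ((1 + z)**2 + 1)   # relativistic radial Doppler
    print(f"z = {z:.4f}, v = {beta:.4f} c")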

Now my question to the rest of the group: gravity has more effect on lower
frequencies, less on higher frequencies. A good example is radio frequencies,
the lower ones follow the curvature of the earth while the higher ones shoot
straight off into space. Wouldn't this also hold true for light, certainly not
as pronounced but still measurable? If so, wouldn't it be possible to determine
if a spectral shift was due to velocity or gravity?

Yes, this was predicted by Einstein and was first tested at a solar eclipse.
Hipparcos then measured it for stars even farther from the line of sight to
the limb of the sun. But it is not frequency dependent. And it does not
shift the frequencies.

And yes, if it did affect the frequencies like the red shifts we observe, it
would have been easy to detect in gravitational lensing. If there were the
range hypothesized by the other poster in radio wavelengths (which are the
same thing as visible light, just at a different frequency), then we would
have a range from the sun's mass barely affecting visible wavelengths to
radio waves being bent around by the earth's gravity alone. And slightly
longer radio waves would not propagate outward at all. They would leave the
antenna and bend straight down to the earth and never make it to the horizon
at all. Radio astronomy would be impossible because the radio frequencies
would never make it here. And what little did arrive would be so distorted
that we could not image anything. Additionally, the objects which are
gravitationally lensed would appear to be elongated into a little spectrum,
like we were viewing them through a prism.

Obviously, none of this takes place.

Chuck Taylor
Do you observe the moon?
Try the Lunar Observing Group
http://groups.yahoo.com/group/lunar-observing/

Yup, but this is not the same as radio waves following the curvature of the
earth.

Light certainly is affected by gravity, but the effects are very small. In
1919, Eddington measured a very slight bending of light of a star as it
passed the sun during a total eclipse this confirmed a prediction made by
Einstein's General Relativity theory. A google search for <eddington
eclipse> will bring a number of references.

More recently, gravitational lensing has been detected where light from a
distant galaxy is bent around a galaxy closer to us, giving multiple and
distorted images (similar to what is seen through old 'bottle' glass panes).
Do a google search for <einstein's cross> .

An important point to note with both of these is that the effect is very
small. The bending is measured in tiny fractions of a degree.

It is true that there is a small gravitational effect on light giving rise
to a red shift, but this is also very small. The correlation between
galactic redshift and distance has been measured and seems to be consistent
with the 'recession' model. Alternative causes have been proposed, tested
and explored, but have not been broadly accepted. So far, the expanding
universe model seems to be the best bet.

Google for "tired light" and you will find all the problems with this idea.

Chuck Taylor
Do you observe the moon?
Try the Lunar Observing Group
http://groups.yahoo.com/group/lunar-observing/

I'm not being exact in my terminology, but when you get a hit for one,
you'll find the rest as well.

Chuck Taylor
Do you observe the moon?
Try the Lunar Observing Group
http://groups.yahoo.com/group/lunar-observing/

Gravitational redshift does exist, but it is a very minor effect. If the
effect were significant, we would expect to measure similar redshifts for
nearby galaxies as for distant ones.

Some people have proposed a general loss of energy from light as it travels
further and further. This is called 'tired light'.

However, I understand there are instances where a spectrum shows multiple
absorption bands (caused by the light passing through one or more clusters of
galaxies between the source and us), which is hard to explain by tired-light
theories. There is also no physical model to allow light to lose energy in
this way (don't forget that light travels on null geodesics, so as far as
the light is concerned it takes no time at all to pass from the distant
object to us - there's no time for it to lose energy).

It's rather theoretical, but the formula that describes the spacetime
separation 's' between two events on an inertial (non-accelerating) body is

s^2 = (delta t)^2 - ((delta x)^2 + (delta y)^2 + (delta z)^2)/c^2

where delta t is the 'time' between the two events as seen by an external
observer, and delta x, delta y and delta z are the three spatial coordinate
separations between the two events.

This reduces to
s^2 = (delta t)^2 - (delta l / c)^2, where 'l' is the spatial separation.

For light, delta l = c delta t, so the two terms on the right hand side of
the equation are equal, and it is reasonable to say that 's', the separation
between the two events, is zero.
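
A minimal numerical check of that claim (the one-second interval is an
arbitrary choice):

    # Spacetime interval, in time units, for a pulse of light.
    c = 299_792_458.0        # speed of light, m/s
    dt = 1.0                 # arbitrary time between emission and reception, s
    dl = c * dt              # the distance light covers in that time, m
    s_squared = dt**2 - (dl / c)**2
    print(s_squared)         # 0.0 -- the interval along a light path vanishes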

As I've said it's rather theoretical and I don't expect you to take it on my
say-so, but it's quite an interesting area to study and there are several
good 'general reader' level books that explain the principles well.

I understand that light photons can be displaced in their trajectories
near a gravitationally radiant body, for example, the Sun. The
trajectories are curved by the gravity that exerts a force upon the
photons.

Are there photons that orbit the Sun in a spherical shell at "C"
velocity and at the appropriate radius from the Sun? Throughout the
centuries there must be quite a huge number of photons that have fallen
into that or other shells. To a camera wouldn't there be the appearance
of a line of light that would be due to the spherical shell of the many
photon orbits?

The "appropriate radius" for the sun's mass is quite small. Think about the
scale of the event horizon of a black hole, and what transition occurs as
you cross from one side to the other side of that radius. In the sun's case,
at that radius, you would be inside most of the sun's mass, ie, the sun
doesn't have it's mass inside of that orbit.
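
For anyone who wants the number: the circular photon orbit sits at
r = 3GM/c^2, 1.5 times the Schwarzschild radius. A minimal Python sketch,
with rounded constants:

    # Photon-sphere radius r = 3GM/c^2 for one solar mass.
    G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8              # speed of light, m/s
    M_sun = 1.989e30         # solar mass, kg
    r_photon = 3 * G * M_sun / c**2
    print(f"photon sphere: {r_photon / 1e3:.1f} km")   # ~4.4 km
    # The sun's radius is ~696,000 km, so this orbit lies deep inside the sun.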

Second no: if you had a black hole and arranged photons into this orbit,
they would be unstable. It is too precarious a balance to be maintained. And
if you could maintain it and build up a sizable amount of light in those
orbits, then that light would not be escaping toward your camera, and it
would therefore still be invisible.

Chuck Taylor
Do you observe the moon?
Try the Lunar Observing Group
http://groups.yahoo.com/group/lunar-observing/

If Einstein correctly gauged and calculated the bending of the trajectory
of a photon that is travelling near a gravitationally radiant mass, could
not the rate or amount of bending for the specific distances between the
larger mass and the photon, and also the value of the major mass, be
calculated? For example, knowing the rate of bending, the mass of the
larger object, and the velocity "C", could not the distances involved and
the radius or shape of the orbit be calculated?

Would not experimental and observed data, in that case, suffice instead
of knowing the mass of the photon? Once the radius of the orbit was
calculated, the answer to my question, "What is the radius?", would be
known.

Would not also the value of the mass of the photon be available via
mathematical calculation?

Doesn't that process of calculation necessarily result in an
understanding or discovery of an implicit premise that the photon does,
indeed, have mass?

The mass of the photon is, however, presumed to be zero. If that were so,
the radius of its orbit would be zero. Conversely, if the photon were
heavy the radius would be large. If zero, wouldn't the photons
ultimately collide with the large gravitationally radiant body and
collect on its surface in combination with electrons? Would not such a
larger object re-radiate photons at different frequencies and appear
hot? At the center of the greater mass wouldn't there possibly be one
electron that hosts a great number of photons that are moving at "C"
velocity in orbit around the electron?

What's missing here that is needed to find the orbital radius and/or
mass of the photon?

"Ralph Hertle" <***@verizon.net> wrote in message news:***@verizon.net.

Hi Ralph, quite a lot in your last post.

I'm not aware that it has been measured directly, but in principle one could
take three widely separated satellites and simply measure the angles between
them. In the vicinity of a gravitating mass, the angles won't add up to
exactly 180 degrees. This is the exact definition of a 'curved space'.

In practice, I'm not aware that this has been done directly (once again the
angles are very small) but the curvature of space allows us to calculate
the theoretical rate of precession of the orbit of Mercury. This matches the
observed rate that had been otherwise inexplicable.

I'd suggest you get a book that you can read and re-read - this isn't
something to pick up from scratch in a format like this!

The bending of light was first measured (fairly inaccurately) by Arthur
Eddington during a total solar eclipse in 1919. Stars close to the sun's
limb display an apparent shift from their catalogued positions, due to the
sun's gravity, of around 2" (figure plucked from memory).

Eddington also predicted that stars' energy comes from the fusion of
hydrogen to helium, and the Eddington limit, which balances gravitational
force against the force due to radiation pressure, is named after him.
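
The half-remembered figure can be checked against Einstein's formula for a
ray grazing the solar limb, delta = 4GM/(c^2 R); a minimal sketch with
rounded constants:

    # GR deflection of starlight at the solar limb.
    import math
    G, c = 6.674e-11, 2.998e8                 # SI units
    M_sun, R_sun = 1.989e30, 6.96e8           # kg, m
    delta_rad = 4 * G * M_sun / (c**2 * R_sun)
    delta_arcsec = math.degrees(delta_rad) * 3600
    print(f"{delta_arcsec:.2f} arcsec")       # ~1.75", close to the ~2" above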

There are already objects showing superluminal effects, with redshifts so
high that they would seem to be receding at four or five times the speed of
light!
So either Einstein was completely wrong, or the redshift is not only a speed
measure but is due to other cumulative effects.
I have too much respect for relativity to think the speed of light is not a
real limit.
But strangely, almost no one thinks about this strange redshift behaviour
and gives us a reason for it that is not science fiction.
To me, the idea of a quasar running at almost light speed is pure science
fiction too.

I assume Luigi is referring to objects documented with a red shift z > 1.
Using the normal doppler equation z = v/c, this would imply that the
velocity of the object is greater than c.

However, for high velocities, the Lorentzian form of the equation has to be
used,
z = sqrt((1+ v/c)/(1- v/c)) -1

This is the correct form for use as v approaches the speed of light and
gives the correct relationship between the emitted and observed wavelengths
for spectral lines.
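
A minimal sketch of that formula in Python, to show how z behaves as v
approaches c:

    # Relativistic (Lorentzian) Doppler redshift.
    import math

    def redshift(beta: float) -> float:
        """Redshift z for a source receding at v = beta * c."""
        return math.sqrt((1 + beta) / (1 - beta)) - 1

    for beta in (0.5, 0.9, 0.99):
        print(f"v = {beta:.2f}c  ->  z = {redshift(beta):.2f}")
    # v = 0.50c -> z = 0.73; v = 0.90c -> z = 3.36; v = 0.99c -> z = 13.11
    # z grows without bound as v -> c, so z > 1 never requires v > c.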

I know this is the standard explanation that saves everything, but may I
have some doubts?
A Godzilla monster shining like billions of stars, running 10 billion years
in the past at almost the speed of light, launching jets toward me (and
you).
What force could give it this speed? Please don't tell me it was the big
bang, because it wasn't big enough to justify the Godzillas.
Maybe you will tell me it was lambda, or a repulsive force coming from other
universes connected with ours via wormholes.

But I'm from Italy and after the Parmalat collapse I can believe anything.
:-)

I hope someday someone will explain quasars in a simpler, less
science-fiction way. Or maybe someone already has. Maybe they are not so
big, so fast, so bright and, most important of all, so far away.

What exactly did you have in mind that is bigger (since the big bang wasn't
big enough)?

Chuck Taylor
Do you observe the moon?
Try the Lunar Observing Group
http://groups.yahoo.com/group/lunar-observing/

It's a paradox: nothing can be bigger than the big bang, so I can't imagine
such a repulsive force.

Anyway, these days I have a pain in my sprained right ankle that, for me, is
bigger than the big bang.
Everything is relative.

But aren't quasars, in standard theory, objects of the remote past, maybe
the active galactic nuclei of very old galaxies?
So how can we see microquasars in our galaxy? If they are in the Milky Way
they can't be very old (or far away, which is the same thing).
Or am I missing something?

tventuri/vlbi2.html where you can see a quasar jet emission at around 9c.
Maybe a black hole nine times more massive than the entire universe. :-)

Or search in google with "superluminal" and "quasar" and find a lot of
pages.
And also see where standard cosmology is going now: toward science fiction.
I ask myself how so many scientists can fail to show even minimal doubt
about the redshift estimates of cosmic distances.
See also http://www.cnn.com/2004/TECH/space/01/08/galaxies.find/index.html
and think about it. Maybe the universe doesn't want to obey our wonderful
theories.

Luigi Caselli

Hi Luigi
No need to worry, the relativistic form of the equation allows z values
greater than 1.
There's an explanation here:
http://hyperphysics.phy-astr.gsu.edu/hbase/relativ/reldop2.html

Thanks for the reference. Could you tell me whether there is a limit on z in
the Lorentz formula too?
Maybe I could calculate it myself, but I'm a bit lazy these days.

And what do you think about the remote galaxy supercluster found at the
beginning of January?

I've not read that much about the string of galaxies, but I understand the
importance to be that it is a very large structure (300 Mly in extent) at a
time when the universe was only 3000 million years old.

It certainly sounds intriguing.

That isn't the explanation for apparent superluminal motion in quasar jets.

See, for example, http://bustard.phys.nd.edu/PH308/AGN/superluminal.html for
an explanation, since it's difficult to explain without diagrams.

Interestingly, such line-of-sight effects also boost the blob's luminosity
to the observer.

Wouldn't photons emitted from a source that was receding at, say, 0.90 C
have incredibly long wavelengths? The antennas needed would have to be
some multiple of the photon wavelength. I don't know modern antenna
science; however, I do know that the wavelength of a 1 Hz photon is one
hell of a long distance.

Human beings cannot see, or receive by means of RF or other EMF apparatus,
light that arrives at velocities other than C. That doesn't mean that
velocities other than C don't exist - it's just that humans and their
apparatus cannot see or receive those velocities. (re. Harper).
That is conjecture, and that would indeed be a startling discovery.

I should have clarified that. Harper is a friend of mine, and I included her
name more as a credit than a reference. Those ideas came from conversations
with her, and I don't believe there is a published reference available. If
you want to ask more about her thoughts on the matter, send me a personal
email and I'll see if I can get her email address for you. She is a
philosopher and Objectivist type, and she would have further explanations.

Ralph Hertle
( ralph.hertleNo-***@No-Nothingverizon.net Remove "No-Nothing". )

If you go to
http://hyperphysics.phy-astr.gsu.edu/hbase/relativ/reldop3.html#c3 there is
a very snazzy calculator to help with this.

0.9c gives a z of roughly 3.36, so wavelengths are stretched to 4.36 x their
original values.

This is significant, but all it means is that UV radiation is shifted into
the visible part of the spectrum and is detected here on earth.
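
As a concrete illustration, take hydrogen's Lyman-alpha line (chosen here
purely as an example): its rest wavelength of 1216 Angstroms, in the far
ultraviolet, lands in the green part of the visible spectrum at this
redshift:

    # Wavelength stretch for z = 3.36 (v = 0.9c).
    z = 3.36
    rest = 1216.0                      # Lyman-alpha rest wavelength, Angstrom (UV)
    observed = rest * (1 + z)
    print(f"{observed:.0f} Angstrom")  # ~5302 A: green, visible light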

This is an interesting question, but more for philosophy than physics:-

If there is a type of radiation that is posited to exist, but is also
posited to be undetectable (by any means I presume), then to what extent can
we say it exists? I'll not get involved in this question, because it seems
a fruitless debate.

Are you kidding? A discussion like that can keep netloons busy for weeks!

Chuck Taylor
Do you observe the moon?
Try the Lunar Observing Group
http://groups.yahoo.com/group/lunar-observing/

There are no "grid lines" in space. That is a graphical construct that
is used in analytical geometry, and the charted curves are used to make
graphically visible the amount, direction, or density of the
gravitational flux that is radiated from each mass that emits
gravitational existents.

So-called curvatures are graphical devices. They are epistemological
concepts that exist in the realm of ideas only. They are not metaphysical
existents; that is, the "grid lines" or "curvatures" of space are not
physically existing existents.

That does not mean that gravity does not exist. No. Gravity is
perceptibly in evidence. Gravity is an existent (or is existents -
plural), however, scientists do not yet know what the actual physical
cause of gravity is. That remains to be discovered.

The graphical devices serve to make graphically visible certain
discovered mathematical relationships and measurements of the amounts,
directions, and flux densities, of the gravitational existents that are
emitted from mass entities.

Regarding the bending of light. Once the velocity of light, say "C", is
known, the direction of the photon trajectory relative to the greater
gravitational mass and the distance from that mass are known, and the
amount of bending of the trajectory and its implied force are known, the
orbit of a photon may be calculated. Neglecting transitional paths, and
assuming a circular photon orbit, for example, once the radius of the
orbit of the photon has been calculated in the context of the velocity
"C", the mass of the photon is implied and may be calculated.

Does a photon have mass? If it had no mass there would be no orbit. All
photons would be attracted into gravitationally radiant masses. If the
photon did have a mass, the orbit would have some radius value greater than
zero. The science that I have read gives us both stories, and only one can
be factual.

The photon is an existent that either has a mass or it does not.

Hi AW - I just want to point out that Ralph's view does not represent the
standard view.

Some of his terminology is - how shall I put it - unique to him alone, so I
would withhold judgment until you have read around the subject.

Certainly, his idea that photons have mass is 'non standard', and his view
of gravitational flux is 'new'.

Social agreement does not represent the facts or cause facts to exist.
The facts of the universe are solely caused by the existence of the
existents in the universe, their properties, their potentials of
becoming what they will become according to their natures, and their
relationships to other existents. What human beings may agree upon, be
it flat Earth, curved space, ether, or flat space, or 94 octane gasoline
being safer for cars that require just 87 octane, for example, has
nothing to do with the facts. Facts are identified by means of
discovery, conceptualization, logic, and demonstrative proof, and not by
any back-patting agreement or emotional fears by social participants.

The standard view of the BB, for example, doesn't acknowledge all the
relevant facts. Concerning the declining energy levels of light photons,
the experiments of Rayleigh prove that photons diminish in energy level
in inelastic collisions with hydrogen atoms. The standard view, which
you don't identify, and which I would gather to mean the Biblical
creationist's Doppler/Hubble Big Bang theory, conveniently ignores
these facts, and also fails to integrate those facts with the other
observed data, for example with a photon-caused apparent red shift of the
spectral characteristics of light.

You are right - I do not represent the standard view. The BB theory is
supported by social agreement only these days, and not by an integration
of all the relevant facts.

Isn't it strange that innovators and rational people build new theories
and new combinations of concepts, e.g., Aristotle (a hierarchical theory of
knowledge comprised of provable definitions, and his theory of
deductive logic and correct scientific thought), Rayleigh (the energy
reduction of photons in inelastic collisions), and Einstein (GR).

I think that you understand what I have said, and that you are working
over the ideas before the public in order to socially discredit them
without actually proving your case.

Basing your ideas on the "standard view" is an example of the fallacies
of logic called ad verecundiam and ad populum. Throw in the fallacy of
ad hominem if you wish.

[ BTW, I wonder if there is a fallacy that is, "You are wrong
because my psychological well-being and inadequate self-esteem are
threatened by the facts or by rational discussion." That may be the
fallacy that underlies paranoia, and it may be reducible to one of the
other fallacies. ]

If you had read what I said, you would find that I didn't advocate
either view. I explained the paradox that exists in modern science: on
the one hand, the photon in a gravitationally bending trajectory can
have a calculated mass, while on the other hand, it is claimed that the
photon has no mass. A force, gravity, is required to divert the photon
from its otherwise straight-line path, and the straight-line path is in
combination with, or is due to, some force that is countered or exceeded
by gravity; that force may be the countering force of inertial
properties. If inertial properties did not exist, all photons would be
rapidly attracted to gravitationally radiant entities. That would
produce a true dimming of the universe - however, the observed data
prove otherwise.

I merely pointed out the contradiction of the mass and no mass views.

The concept of a "gravitational flux " is not new, and it is based on
the fact that gravitational existents are emitted from gravitationally
radiant objects according to their masses. You have certainly read of
the terms, 'flux density' and 'field density', haven't you? They
identify the accelerative force effect of certain radiant gravitational
existents at specified locations and directions relative to the
gravitationally radiant mass.

The standard view should be that the identity and properties of
gravitational existents and the cause of gravitational attraction have
not yet been discovered by scientists. We know only certain measurements
of the effects of those causes and some remarkably sophisticated
mathematical means of evaluating the measurements of the observed data.

Social agreements do not necessarily imply facts. Only rational
identifications in the context of actually existing entities necessarily
imply facts.


Units Relevant to ATOMDB

APEC models produce line and continuum emissivity tables which are used to calculate predicted line and continuum fluxes. These predicted fluxes may in turn be compared with an observed spectrum or a subset of spectral features. For lines observed at Earth with no redshift or column density, the predicted line flux for a single-temperature model with temperature $T_e$ (K) is given by

$$ F = \frac{\epsilon(T_e)}{4\pi R^2} \int N_e N_H \, dV \quad \mathrm{ph\ cm^{-2}\ s^{-1}}, $$

where epsilon(T_e) is the emissivity in ph cm^3/s, R is the distance to the source in cm, and the integral over N_e N_H dV is the emission measure in cm^-3, generally taken over a specified temperature interval. (Since emission measures for astronomical objects tend to involve large enough numbers to incur numerical difficulties, the traditional practice in X-ray astronomy is to scale the emission measure over distance squared to obtain a more tractable normalization.) For multi-temperature models the predicted flux is the sum of the flux components at each temperature.
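
A quick numerical sketch of this formula; the emissivity and emission measure below are made-up placeholder values, with the distance set to 1 kpc:

    # Predicted line flux F = eps(Te) * EM / (4 pi R^2).
    import math
    eps = 1e-17        # line emissivity, ph cm^3/s (hypothetical)
    EM = 1e58          # emission measure, integral of Ne*NH dV, cm^-3 (hypothetical)
    R = 3.086e21       # distance to the source: 1 kpc in cm
    flux = eps * EM / (4 * math.pi * R**2)
    print(f"{flux:.2e} ph/cm^2/s")    # ~8.4e-04 for these inputs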

We define the emissivity for a line transition from level k to level j as

$$ \epsilon_{kj} = \frac{N_k A_{kj}}{N_e N_H}, $$

where N_k is the level population of level k, and A_{kj} is the atomic transition probability (also known as the Einstein A value) in Hz (s^-1). N_k is related to the electron density N_e by

$$ N_k = \frac{N_k}{N_z} \times \frac{N_z}{N_Z} \times \frac{N_Z}{N_H} \times \frac{N_H}{N_e} \times N_e, $$

where N_k/N_z is the ratio of the level population to the ion population, solved through the level population rate matrix; N_z/N_Z is the charge state or ionization balance, often solved in collisional ionization equilibrium; N_Z/N_H is the elemental abundance relative to hydrogen; and N_H/N_e is the ratio of hydrogen to electron density, about 0.8 for cosmic plasmas. While this definition of emissivity is convenient for our calculations, other definitions of emissivity abound and may be more convenient in other situations.
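
A similar sketch of the population chain; every ratio below is a hypothetical placeholder except the hydrogen-to-electron ratio of about 0.8 quoted above:

    # Level population N_k from the chain of ratios.
    Ne = 1.0                 # electron density, cm^-3
    Nk_over_Nz = 1e-6        # level population / ion population (hypothetical)
    Nz_over_NZ = 0.3         # ionization balance (hypothetical)
    NZ_over_NH = 3.2e-5      # elemental abundance relative to H (hypothetical)
    NH_over_Ne = 0.8         # hydrogen-to-electron ratio for cosmic plasmas
    Nk = Nk_over_Nz * Nz_over_NZ * NZ_over_NH * NH_over_Ne * Ne
    print(f"N_k = {Nk:.2e} cm^-3")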

The continuum spectrum produced by bremsstrahlung, radiative recombination and two-photon emission is related to the same emission measure, equal to the integral over N_e N_H dV; however, it is important to note that the natural units of continuum emission (in wavelength space) are ph cm^3/s/Å, since the emission is spread out over a broad band. Thus when one is calculating the spectrum from the sum of lines and continuum, the bin size must be specified. Lines may be broadened by a number of physical processes as well as by the instrumental line spread function.