Astronomy

What is a hard spectral state vs. a soft spectral state?

In X-ray astronomy, a source is said to be in a hard or a soft spectral state. What is the meaning of the hard spectral state? What are the soft state and the hard state in spectroscopy?


Active galaxies are known to change state as seen by a change in slope of their X-ray and gamma-ray spectra. We say that a spectrum has become harder (or changed to its hard state) when the slope changes so that there are relatively more high energy photons, and it becomes softer when the ratio of low energy photons to high energy photons increases. The physics of why they change state is not yet understood.

If the high-energy spectrum is in a thermally dominated state, i.e. a Planck-like spectrum, then it is certainly soft. If it is in a steep power-law dominated state, $I_\nu \propto \nu^{-\alpha}$ with $\alpha > 1$, then it is still usually considered soft, while a flatter power law (smaller $\alpha$) is hard. But it is also a relative term, so a system cycling between any two different values of $\alpha$ will be going between its hard state and its soft state.
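
As a rough illustration of the relative-hardness idea (a sketch with arbitrary band edges, not from any particular instrument), you can integrate a power law over a soft and a hard band and compare the two fluxes:

```python
# Rough illustration (arbitrary band edges): integrate a power law
# I_nu ~ nu^(-alpha) over a "soft" and a "hard" band and compare the fluxes.
import numpy as np

def band_flux(alpha, lo, hi, n=100000):
    """Crude numerical integral of nu^(-alpha) between lo and hi."""
    nu = np.linspace(lo, hi, n)
    return np.sum(nu ** (-alpha)) * (hi - lo) / n

for alpha in (0.5, 1.0, 2.0):
    soft = band_flux(alpha, 2.0, 10.0)     # soft band (arbitrary energy units)
    hard = band_flux(alpha, 10.0, 100.0)   # hard band
    print(f"alpha = {alpha}: hard/soft flux ratio = {hard / soft:.2f}")

# A flatter spectrum (smaller alpha) puts relatively more flux in the hard band,
# i.e. the state is harder; a steeper spectrum (larger alpha) is softer.
```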


Not only in X-ray astronomy (the terms are used in chemistry and pretty much everything related to X-rays). In an X-ray spectrum, the region with photon energy above roughly 5-10 keV is called "hard" X-rays; below that it is called "soft" X-rays. Wikipedia has a nice explanation of this (see Energy Ranges): https://en.wikipedia.org/wiki/X-ray However, I find this book better: https://web.archive.org/web/20121111141255/http://ast.coe.berkeley.edu/sxreuv/


Just to add an example to what has already been said by eshaya and Larz.Astro. Here is the spectrum of the black hole binary Cygnus X-1 in its hard and its soft state. The plot is taken from Gierlinski et al. 1999.

You see that the soft state consists mostly of thermal emission below 10 keV, while the hard state is dominated by non-thermal (Comptonized) emission above that. Or, as the paper puts it:

The resulting Xγ spectrum consists of blackbody photons emitted by the disc (at low energies) and a component due to Compton upscattering of the disc photons by both thermal and nonthermal electrons in the corona.


Earthquake Hazards 201 - Technical Q&A

A list of technical questions & answers about earthquake hazards.

What is %g?

What is acceleration? peak acceleration? peak ground acceleration (PGA)?

What is spectral acceleration (SA)?

Acceleration is expressed in %g, that is, as a percentage of g, the acceleration of gravity. PGA (peak acceleration) is what is experienced by a particle on the ground, and SA is approximately what is experienced by a building, as modeled by a particle mass on a massless vertical rod having the same natural period of vibration as the building.

The mass on the rod behaves about like a simple harmonic oscillator (SHO). If one "drives" the mass-rod system at its base, using the seismic record, and assuming a certain damping to the mass-rod system, one will get a record of the particle motion which basically "feels" only the components of ground motion with periods near the natural period of this SHO. If we look at this particle seismic record we can identify the maximum displacement. If we take the derivative (rate of change) of the displacement record with respect to time we can get the velocity record. The maximum velocity can likewise be determined. Similarly for response acceleration (rate of change of velocity) also called response spectral acceleration, or simply spectral acceleration, SA (or Sa).
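
The following is a simplified numerical sketch of that procedure (not USGS software; the ground-motion record is a made-up placeholder): step a damped single-degree-of-freedom oscillator through time with the record as base acceleration and read off the peak response.

```python
# Simplified sketch: drive a damped single-degree-of-freedom oscillator with a
# ground-acceleration record and read off the peak relative displacement,
# velocity and pseudo-spectral acceleration.  The "record" here is made-up noise.
import numpy as np

def sdof_response(ground_accel, dt, period, damping=0.05):
    omega = 2.0 * np.pi / period             # natural angular frequency of the rod-mass system
    x = v = 0.0                              # relative displacement and velocity
    max_x = max_v = 0.0
    for ag in ground_accel:                  # simple semi-implicit Euler time stepping
        a = -ag - 2.0 * damping * omega * v - omega ** 2 * x
        v += a * dt
        x += v * dt
        max_x = max(max_x, abs(x))
        max_v = max(max_v, abs(v))
    sd = max_x                               # spectral displacement
    sv = max_v                               # spectral (relative) velocity
    sa = omega ** 2 * sd                     # pseudo-spectral acceleration
    return sd, sv, sa

dt = 0.01                                    # 100 samples per second
record = np.random.default_rng(0).normal(scale=0.5, size=2000)   # placeholder record
print(sdof_response(record, dt, period=0.2))  # short, stiff "building"
print(sdof_response(record, dt, period=1.0))  # taller, more flexible "building"
```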

PGA is a good index to hazard for short buildings, up to about 7 stories. To be a good index means that if you plot some measure of demand placed on a building, like interstory displacement or base shear, against PGA, for a number of different buildings for a number of different earthquakes, you will get a strong correlation.

PGA is a natural, simple design parameter since it can be related to a force; for simple design one can design a building to resist a certain horizontal force. PGV, peak ground velocity, is a good index to hazard for taller buildings. However, it is not clear how to relate velocity to force in order to design a taller building.

SA would also be a good index to hazard for buildings, but ought to be more closely related to the building behavior than peak ground motion parameters. Design might also be easier, but the relation to design force is likely to be more complicated than with PGA, because the value of the period comes into the picture.

PGA, PGV, or SA are only approximately related to building demand/design because the building is not a simple oscillator, but has overtones of vibration, each of which imparts maximum demand to different parts of the structure, each part of which may have its own weaknesses. Duration also plays a role in damage, and some argue that duration-related damage is not well-represented by response parameters.

On the other hand, some authors have shown that non-linear response of a certain structure is only weakly dependent on the magnitude and distance of the causative earthquake, so that non-linear response is related to linear response (SA) by a simple scalar (multiplying factor). This is not so for peak ground parameters, and this fact argues that SA ought to be significantly better as an index to demand/design than peak ground motion parameters.

There is no particular significance to the relative size of PGA, SA (0.2), and SA (1.0). On the average, these roughly correlate, with a factor that depends on period. While PGA may reflect what a person might feel standing on the ground in an earthquake, I don't believe it is correct to state that SA reflects what one might "feel" if one is in a building. In taller buildings, short period ground motions are felt only weakly, and long-period motions tend not to be felt as forces, but rather as disorientation and dizziness.

What is probability of exceedence (PE)?

For any given site on the map, the computer calculates the ground motion effect (peak acceleration) at the site for all the earthquake locations and magnitudes believed possible in the vicinity of the site. Each of these magnitude-location pairs is believed to happen at some average probability per year. Small ground motions are relatively likely, large ground motions are very unlikely. Beginning with the largest ground motions and proceeding to smaller, we add up probabilities until we arrive at a total probability corresponding to a given probability, P, in a particular period of time, T.

The probability P comes from ground motions larger than the ground motion at which we stopped adding. The corresponding ground motion (peak acceleration) is said to have a P probability of exceedance (PE) in T years. The map contours the ground motions corresponding to this probability at all the sites in a grid covering the U.S. Thus the maps are not actually probability maps, but rather ground motion hazard maps at a given level of probability. In the future we are likely to post maps which are probability maps. They will show the probability of exceedance for some constant ground motion. For instance, one such map may show the probability of a ground motion exceeding 0.20 g in 50 years.
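
As a toy illustration of this accumulation (made-up rates and ground motions, not the actual USGS model), one can sum annual rates from the largest ground motions downward and stop when the probability in T years reaches the target:

```python
# Toy sketch: each hypothetical magnitude-location pair contributes an annual
# rate of producing a given peak acceleration at the site; adding rates from the
# largest motions down gives the ground motion with a chosen probability of
# exceedance in T years.
import math

# (peak acceleration at the site in g, annual rate of that event) -- hypothetical
events = [(0.05, 0.20), (0.10, 0.05), (0.20, 0.01), (0.40, 0.002), (0.60, 0.0004)]

T = 50.0          # exposure time, years
target = 0.10     # desired probability of exceedance in T years

rate = 0.0
for pga, annual_rate in sorted(events, key=lambda e: -e[0]):   # largest motions first
    rate += annual_rate                     # annual rate of exceeding this level
    prob = 1.0 - math.exp(-rate * T)        # Poisson chance of at least one exceedance
    if prob >= target:
        print(f"about {pga} g has a {target:.0%} probability of exceedance in {T:.0f} years")
        break
```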

What is the relationship between peak ground acceleration (PGA) and "effective peak acceleration" (Aa), or between peak ground velocity (PGV) and "effective peak velocity" (Av) as these parameters appear on building code maps?

Aa and Av have no clear physical definition, as such. Rather, they are building code constructs, adopted by the staff that produced the Applied Technology Council (1978) (ATC-3) seismic provisions. Maps for Aa and Av were derived by ATC project staff from a draft of the Algermissen and Perkins (1976) probabilistic peak acceleration map (and other maps) in order to provide for design ground motions for use in model building codes. Many aspects of that ATC-3 report have been adopted by the current (in use in 1997) national model building codes, except for the new NEHRP provisions.

This process is explained in the ATC-3 document referenced below (p. 297-302). Here are some excerpts from that document:

  • p. 297. "At the present time, the best workable tool for describing the design ground shaking is a smoothed elastic response spectrum for single degree-of-freedom systems…
  • p. 298. "In developing the design provisions, two parameters were used to characterize the intensity of design ground shaking. These parameters are called the Effective Peak Acceleration (EPA), Aa, and the Effective Peak Velocity (EPV), Av. These parameters do not at present have precise definitions in physical terms but their significance may be understood from the following paragraphs.
  • "To best understand the meaning of EPA and EPV, they should be considered as normalizing factors for construction of smoothed elastic response spectra for ground motions of normal duration. The EPA is proportional to spectral ordinates for periods in the range of 0.1 to 0.5 seconds, while the EPV is proportional to spectral ordinates at a period of about 1 second . . . The constant of proportionality (for a 5 percent damping spectrum) is set at a standard value of 2.5 in both cases.
  • "…The EPA and EPV thus obtained are related to peak ground acceleration and peak ground velocity but are not necessarily the same as or even proportional to peak acceleration and velocity. When very high frequencies are present in the ground motion, the EPA may be significantly less than the peak acceleration. This is consistent with the observation that chopping off the spectrum computed from that motion, except at periods much shorter than those of interest in ordinary building practice has very little effect upon the response spectrum computed from that motion, except at periods much shorter than those of interest in ordinary building practice. . . On the other hand, the EPV will generally be greater than the peak velocity at large distances from a major earthquake. "
  • p. 299. "Thus the EPA and EPV for a motion may be either greater or smaller than the peak acceleration and velocity, although generally the EPA will be smaller than peak acceleration while the EPV will be larger than the peak velocity.
  • ". . .For purposes of computing the lateral force coefficient in Sec. 4.2, EPA and EPV are replaced by dimensionless coefficients Aa and Av respectively. Aa is numerically equal to EPA when EPA is expressed as a decimal fraction of the acceleration of gravity. "

Now, examination of the tripartite diagram of the response spectrum for the 1940 El Centro earthquake (p. 274, Newmark and Rosenblueth, Fundamentals of Earthquake Engineering) verifies that taking response acceleration at 5 percent damping, at periods between 0.1 and 0.5 sec, and dividing by a number between 2 and 3 would approximate peak acceleration for that earthquake. Thus, in this case, effective peak acceleration in this period range is nearly numerically equal to actual peak acceleration.

However, since the response acceleration spectrum is asymptotic to peak acceleration for very short periods, some people have assumed that effective peak acceleration is 2.5 times less than true peak acceleration. This would only be true if one continued to divide response accelerations by 2.5 for periods much shorter than 0.1 sec. But EPA is only defined for periods longer than 0.1 sec.

Effective peak acceleration could be some factor lower than peak acceleration for those earthquakes for which the peak accelerations occur as short-period spikes. This is precisely what effective peak acceleration is designed to do.

On the other hand, the ATC-3 report map limits EPA to 0.4 g even where probabilistic peak accelerations may go to 1.0 g, or larger. Thus, the EPA in the ATC-3 report map may be a factor of 2.5 less than the probabilistic peak acceleration for locations where the probabilistic peak acceleration is around 1.0 g.

The following paragraphs describe how the Aa, and Av maps in the ATC code were constructed.

The USGS 1976 probabilistic ground motion map was considered. Thirteen seismologists were invited to smooth the probabilistic peak acceleration map, taking into account other regional maps and their own regional knowledge. A final map was drawn based upon those smoothings. Ground motions were truncated at 40%g in areas where probabilistic values could run from 40%g to greater than 80%g. This resulted in an Aa map, representing a design basis for buildings having short natural periods. Aa was called "Effective Peak Acceleration."

An attenuation function for peak velocity was "draped" over the Aa map in order to produce a spatial broadening of the lower values of Aa. The broadened areas were denominated Av for "Effective Peak Velocity-Related Acceleration" for design for longer-period buildings, and a separate map drawn for this parameter.

Note that, in practice, the Aa and Av maps were obtained from a PGA map and NOT by applying the 2.5 factors to response spectra.

Note also, that if one examines the ratio of the SA(0.2) value to the PGA value at individual locations in the new USGS national probabilistic hazard maps, the value of the ratio is generally less than 2.5.

Sources of Information:

  • Algermissen, S.T., and Perkins, David M., 1976, A probabilistic estimate of maximum acceleration in rock in the contiguous United States, U.S. Geological Survey Open-File Report OF 76-416, 45 p.
  • Applied Technology Council, 1978, Tentative provisions for the development of seismic regulations for buildings, ATC-3-06 (NBS SP-510), U.S. Government Printing Office, Washington, 505 p.

What is percent damping?

In our question about response acceleration, we used a simple physical model, a particle mass on a massless vertical rod, to explain natural period. For this ideal model, if the mass is very briefly set into motion, the system will remain in oscillation indefinitely. In a real system, the rod has stiffness which not only contributes to the natural period (the stiffer the rod, the shorter the period of oscillation), but also dissipates energy as it bends. As a result, the oscillation steadily decreases in size, until the mass-rod system is at rest again. This decrease in size of oscillation we call damping. We say the oscillation has damped out.

When the damping is small, the oscillation takes a long time to damp out. When the damping is large enough, there is no oscillation and the mass-rod system takes a long time to return to vertical. Critical damping is the least value of damping for which the damping prevents oscillation. Any particular damping value we can express as a percentage of the critical damping value. Because spectral accelerations are used to represent the effect of earthquake ground motions on buildings, the damping used in the calculation of spectral acceleration should correspond to the damping typically experienced in buildings for which earthquake design is used. The building codes assume that 5 percent of critical damping is a reasonable value to approximate the damping of buildings for which earthquake-resistant design is intended. Hence, the spectral accelerations given in the seismic hazard maps are also calculated for 5 percent of critical damping.
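
A small numerical sketch of these definitions, with a hypothetical mass and stiffness, is:

```python
# Sketch with hypothetical numbers: critical damping is c_crit = 2*sqrt(k*m),
# and "percent damping" is the ratio c/c_crit.
import math

m = 1.0e5                      # kg, lumped mass of the idealized rod-mass system
k = 4.0e7                      # N/m, lateral stiffness of the rod
c_crit = 2.0 * math.sqrt(k * m)
c = 0.05 * c_crit              # 5 percent of critical damping, as assumed in the codes

period = 2.0 * math.pi * math.sqrt(m / k)
print(f"natural period = {period:.2f} s, damping = {100 * c / c_crit:.0f}% of critical")
```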

Why do you decluster the earthquake catalog to develop the Seismic Hazard maps?

The primary reason for declustering is to get the best possible estimate for the rate of mainshocks. Also, the methodology requires a catalog of independent events (Poisson model), and declustering helps to achieve independence.

Damage from the earthquake has to be repaired, regardless of how the earthquake is labeled. Some argue that these aftershocks should be counted. This observation suggests that a better way to handle earthquake sequences than declustering would be to explicitly model the clustered events in the probability model. This step could represent a future refinement. The other side of the coin is that these secondary events aren't going to occur without the mainshock. Any potential inclusion of foreshocks and aftershocks into the earthquake probability forecast ought to make clear that they occur in a brief time window near the mainshock, and do not affect the earthquake-free periods except trivially. That is, the probability of no earthquakes with M>5 in a few-year period is or should be virtually unaffected by the declustering process. Also, in the USA experience, aftershock damage has tended to be a small proportion of mainshock damage.

How do I use the seismic hazard maps?

The maps come in three different probability levels and four different ground motion parameters, peak acceleration and spectral acceleration at 0.2, 0.3, and 1.0 sec. (These values are mapped for a given geologic site condition. Other site conditions may increase or decrease the hazard. Also, other things being equal, older buildings are more vulnerable than new ones.)

The maps can be used to determine (a) the relative probability of a given critical level of earthquake ground motion from one part of the country to another, (b) the relative demand on structures from one part of the country to another, at a given probability level. In addition, (c) building codes use one or more of these maps to determine the resistance required by buildings to resist damaging levels of ground motion.

The different levels of probability are those of interest in the protection of buildings against earthquake ground motion. The ground motion parameters are proportional to the hazard faced by a particular kind of building.

Peak acceleration is a measure of the maximum force experienced by a small mass located at the surface of the ground during an earthquake. It is an index to hazard for short stiff structures.

Spectral acceleration is a measure of the maximum force experienced by a mass on top of a rod having a particular natural vibration period. Short buildings, say, less than 7 stories, have short natural periods, say, 0.2-0.6 sec. Tall buildings have long natural periods, say 0.7 sec or longer. An earthquake strong-motion record is made up of varying amounts of energy at different periods. A building's natural period indicates what spectral part of an earthquake ground-motion time history has the capacity to put energy into the building. Periods much shorter than the natural period of the building or much longer than the natural period do not have much capability of damaging the building. Thus, a map of a probabilistic spectral value at a particular period becomes an index to the relative damage hazard to buildings of that period as a function of geographic location.

Choose a ground motion parameter according to the above principles. For many purposes, peak acceleration is a suitable and understandable parameter. Choose a probability value according to the chance you want to take. One can now select a map and look at the relative hazard from one part of the country to another.

If one wants to estimate the probability of exceedance for a particular level of ground motion, one can plot the ground motion values for the three given probabilities, using log-log graph paper, and interpolate, or, to a limited extent, extrapolate for the desired probability level. Conversely, one can make the same plot to estimate the level of ground motion corresponding to a given level of probability different from those mapped.

If one wants to estimate the probabilistic value of spectral acceleration for a period between the periods listed, one could use the method reported in the Open File Report 95-596, USGS Spectral Response Maps and Their Use in Seismic Design Forces in Building Codes. (This report can be downloaded from the web-site.) The report explains how to construct a design spectrum in a manner similar to that done in building codes, using a long-period and a short-period probabilistic spectral ordinate of the sort found in the maps. Given the spectrum, a design value at a given spectral period other than the map periods can be obtained.

What if we need to know about total rates of earthquakes with M>5 including aftershocks?

Aftershocks and other dependent-event issues are not really addressable at this web site given our modeling assumptions, with one exception. The current National Seismic Hazard model (and this web site) explicitly deals with clustered events in the New Madrid Seismic Zone and gives this clustered-model branch 50% weight in the logic-tree. Even in the NMSZ case, however, only mainshocks are clustered, whereas NMSZ aftershocks are omitted. We are performing research on aftershock-related damage, but how aftershocks should influence the hazard model is currently unresolved.

The seismic hazard map values show ground motions that have a probability of being exceeded in 50 years of 10, 5 and 2 percent. What is the probability of their being exceeded in one year (the annual probability of exceedance)?

Let r = 0.10, 0.05, or 0.02, respectively. The approximate annual probability of exceedance is the ratio, r*/50, where r* = r(1+0.5r). (To get the annual probability in percent, multiply by 100.) The inverse of the annual probability of exceedance is known as the "return period," which is the average number of years it takes to get an exceedance.

Example: What is the annual probability of exceedance of the ground motion that has a 10 percent probability of exceedance in 50 years?

Answer: Let r = 0.10. The approximate annual probability of exceedance is about 0.10(1.05)/50 = 0.0021. The calculated return period is 476 years, with the true answer less than half a percent smaller.

The same approximation can be used for r = 0.20, with the true answer about one percent smaller. When r is 0.50, the true answer is about 10 percent smaller.
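
The rule of thumb is easy to check numerically; the short script below (an illustration, not an official calculation) reproduces the annual probabilities and return periods quoted above:

```python
# Rule of thumb: r* = r(1 + 0.5r), annual probability of exceedance ~ r*/T,
# return period ~ T/r*.
def annual_pe(r, T=50.0):
    r_star = r * (1.0 + 0.5 * r)
    return r_star / T

for r in (0.10, 0.05, 0.02):
    p = annual_pe(r)
    print(f"{r:.0%} in 50 years -> annual PE ~ {p:.5f}, return period ~ {1.0 / p:.0f} years")
# r = 0.10 reproduces the worked example: annual PE ~ 0.0021, return period ~ 476 years.
```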

Example: Suppose a particular ground motion has a 10 percent probability of being exceeded in 50 years. What is the probability it will be exceeded in 500 years? Is it (500/50) × 10 = 100 percent?

Answer: No. We are going to solve this by equating two approximations:

r1*/T1 = r2*/T2. Solving for r2*, and letting T1 = 50 and T2 = 500,
r2* = r1* × (T2/T1) = 0.105 × 10 = 1.05 (equivalently, 0.0021 × 500 = 1.05).
Then r2 = r2*/(1 + 0.5 r2*) = 1.05/1.525 = 0.69.
Stop now. Don't try to refine this result.

The true answer is about ten percent smaller, 0.63. For r2* less than 1.0 the approximation gets much better quickly.

For r2* = 0.50, the error is less than 1 percent.
For r2* = 0.70, the error is about 4 percent.
For r2* = 1.00, the error is about 10 percent.

Caution is urged for values of r2* larger than 1.0, but it is interesting to note that for r2* = 2.44, the estimate is only about 17 percent too large. This suggests that, keeping the error in mind, useful numbers can be calculated.

Here is an unusual, but useful example. Evidently, r2* is the number of times the reference ground motion is expected to be exceeded in T2 years. Suppose someone tells you that a particular event has a 95 percent probability of occurring in time T. For r2 = 0.95, one would expect the calculated r2 to be about 20% too high. Therefore, let calculated r2 = 1.15.

The previous calculations suggest the equation
r2calc = r2*/(1 + 0.5 r2*).
Solving for r2* gives r2* = r2calc/(1 - 0.5 r2calc) = 1.15/(1 - 0.5 × 1.15) = 1.15/0.425 = 2.7.

This implies that for the probability statement to be true, the event ought to happen on the average 2.5 to 3.0 times over a time duration T. If history does not support this conclusion, the probability statement may not be credible.
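
The inversion used in this example can be written as a one-line function (same approximation as above, shown only for illustration):

```python
# Given a calculated probability r2, solve r2 = r2*/(1 + 0.5 r2*) for r2*,
# the expected number of exceedances in the period T.
def expected_exceedances(r2_calc):
    return r2_calc / (1.0 - 0.5 * r2_calc)

print(round(expected_exceedances(1.15), 1))   # ~2.7, as in the example above
```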

The seismic hazard map is for ground motions having a 2% probability of exceedance in 50 years. Are those values the same as those for 10% in 250?

Yes, basically. This conclusion will be illustrated by using an approximate rule-of-thumb for calculating Return Period (RP).

A typical seismic hazard map may have the title, "Ground motions having 90 percent probability of not being exceeded in 50 years." The 90 percent is a "non-exceedance probability"; the 50 years is an "exposure time." An equivalent alternative title for the same map would be, "Ground motions having 10 percent probability of being exceeded in 50 years." A typical shorthand to describe these ground motions is to say that they are 475-year return-period ground motions. This means the same as saying that these ground motions have an annual probability of occurrence of 1/475 per year. "Return period" is thus just the inverse of the annual probability of occurrence (of getting an exceedance of that ground motion).

To get an approximate value of the return period, RP, given the exposure time, T, and exceedance probability, r = 1 - non-exceedance probability, NEP, (expressed as a decimal, rather than a percent), calculate:

RP = T / r*, where r* = r(1 + 0.5r). Here r* is an approximation to the value -ln(NEP).
In the above case, where r = 0.10, r* = 0.105, which is approximately -ln(0.90) = 0.10536.
Thus, approximately, when r = 0.10, RP = T / 0.105.

Consider the following table:

NEP     T (yr)    r       r*        Rule-of-thumb RP = T/r*    Exact RP (yr)
0.90      50     0.10    0.105       50/0.105  =  476.2            474.6
0.90     100     0.10    0.105      100/0.105  =  952.4            949.1
0.90     250     0.10    0.105      250/0.105  = 2381.0           2372.8

In this table, the exceedance probability is constant for different exposure times. Compare the results of the above table with those shown below, all for the same exposure time, with differing exceedance probabilities.

NEP     T (yr)    r       r*         Rule-of-thumb RP = T/r*    Exact RP (yr)
0.90      50     0.10    0.105        50/0.105   =  476.2            474.6
0.95      50     0.05    0.05125      50/0.05125 =  975.6            974.8
0.98      50     0.02    0.0202       50/0.0202  = 2475.2           2475.9

Comparison of the last entry in each table allows us to see that ground motion values having a 2% probability of exceedance in 50 years should be approximately the same as those having 10% probability of being exceeded in 250 years: The annual exceedance probabilities differ by about 4%. Corresponding ground motions should differ by 2% or less in the EUS and 1 percent or less in the WUS, based upon typical relations between ground motion and return period.
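
The two tables can be reproduced with a few lines of code (shown for illustration; the exact return period is computed as T divided by -ln(NEP)):

```python
# Rule-of-thumb vs. exact return periods for the NEP / exposure-time
# combinations tabulated above; no new data, just the arithmetic.
import math

cases = [(0.90, 50), (0.90, 100), (0.90, 250), (0.95, 50), (0.98, 50)]
for nep, T in cases:
    r = 1.0 - nep
    r_star = r * (1.0 + 0.5 * r)             # rule-of-thumb approximation to -ln(NEP)
    print(f"NEP={nep:.2f}  T={T:>3} yr  rule-of-thumb RP={T / r_star:7.1f}  "
          f"exact RP={T / -math.log(nep):7.1f}")
```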

I am trying to calculate the ground motion effect for a certain location in California. I obtained the design spectrum acceleration from your site, but I would like to identify the soil type of this location - how can I get that?

You can't find that information at our site.

We don't know any site that has a map of site conditions by National Earthquake Hazard Reduction Program (NEHRP) Building Code category. There is a map of some kind of generalized site condition created by the California Division of Mines and Geology (CDMG). The map is statewide, largely based on surface geology, and can be seen at the web site of the CDMG. It does not have latitude and longitude lines, but if you click on it, it will blow up to give you more detail, in case you can make correlations with geographic features. There is no advice on how to convert the theme into particular NEHRP site categories.

For sites in the Los Angeles area, there are at least three papers in the following publication that will give you either generalized geologic site condition or estimated shear wave velocity for sites in the San Fernando Valley, and other areas in Los Angeles. Look for papers with author/coauthor J.C. Tinsley. This is older work and may not necessarily be more accurate than the CDMG state map for estimating geologic site response.

  • Ziony, J.I., ed, 1985, Evaluating earthquake hazards in the Los Angeles region--an earth-science perspective, U.S. Geological Survey Professional Paper 1360, US Gov't Printing Office, Washington, 505 p.
  • Wills, C.J., et al., 2000, A Site-Conditions Map for California Based on Geology and Shear-Wave Velocity, Bulletin of the Seismological Society of America, Vol. 90, No. 6, Part B Supplement, December 2000, pp. S187-S208.

In general, someone using the code is expected either to get the geologic site condition from the local county officials or to have a geotechnical engineer visit the site.

What is a distance metric? Why is the choice of distance metric important in probability assessments? What distance should I use?

For earthquakes, there are several ways to measure how far away the source is. The one we use here is the epicentral distance, or the distance to the nearest point of the projection of the fault onto the Earth's surface, technically called Rjb. Even if the earthquake source is very deep, more than 50 km deep, it could still have a small epicentral distance, like 5 km. Frequencies of such sources are included in the map if they are within 50 km epicentral distance.
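
A small worked example (hypothetical geometry) shows why the choice matters: a deep source almost directly beneath a site has a small epicentral distance but a large hypocentral distance, so a 50 km cutoff applied to hypocentral distance would drop it.

```python
# Hypothetical geometry: small epicentral (and Rjb) distance, large hypocentral distance.
import math

epicentral_km = 5.0
depth_km = 50.0
hypocentral_km = math.hypot(epicentral_km, depth_km)
print(f"epicentral = {epicentral_km} km, hypocentral = {hypocentral_km:.1f} km")
# A 50 km cutoff on hypocentral distance would exclude this event;
# a cutoff on epicentral distance keeps it.
```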

Several cities in the western U.S. have experienced significant damage from earthquakes with hypocentral depth greater than 50 km. These earthquakes represent a major part of the seismic hazard in the Puget Sound region of Washington. If the probability assessment used a cutoff distance of 50 km, for example, and used hypocentral distance rather than epicentral, these deep Puget Sound earthquakes would be omitted, thereby yielding a much lower value for the probability forecast. Another example where distance metric can be important is at sites over dipping faults. The distance reported at this web site is Rjb =0, whereas another analysis might use another distance metric which produces a value of R=10 km, for example, for the same site and fault. Thus, if you want to know the probability that a nearby dipping fault may rupture in the next few years, you could input a very small value of Maximum distance, like 1 or 2 km, to get a report of this probability.

This distance (in km, not miles) is something you can control. If you are interested only in very close earthquakes, you could make this a small number like 10 or 20 km. If you are interested in big events that might be far away, you could make this number large, like 200 or 500 km. The report will tell you rates of small events as well as large, so you should expect a high rate of M5 earthquakes within 200 km or 500 km of your favorite site, for example. Most of these small events would not be felt. If an M8 event is possible within 200 km of your site, it would probably be felt even at this large a distance.

Where can I find information on seismic zones 0,1,2,3,4?

A seismic zone could be one of three things:

  1. A region on a map in which a common level of seismic design is required. This concept is obsolete.
  2. An area of seismicity probably sharing a common cause. Example: "The New Madrid Seismic Zone."
  3. A region on a map for which a common areal rate of seismicity is assumed for the purpose of calculating probabilistic ground motions.

Building code maps using numbered zones, 0, 1, 2, 3, 4, are practically obsolete. 1969 was the last year such a map was put out by this staff. The 1997 Uniform Building Code (UBC) (published in California) is the only building code that still uses such zones. Generally, over the past two decades, building codes have replaced maps having numbered zones with maps showing contours of design ground motion. These maps in turn have been derived from probabilistic ground motion maps. Probabilistic ground motion maps have been included in the seismic provisions of the most recent U.S. model building codes, such as the new "International Building code," and in national standards such as "Minimum Design Loads for Buildings and Other Structures," prepared by the American Society of Civil Engineers.

Zone maps numbered 0, 1, 2, 3, etc., are no longer used for several reasons:

  • A single map cannot properly display hazard for all probabilities or for all types of buildings. Probabilities: For very small probabilities of exceedance, probabilistic ground motion hazard maps show less contrast from one part of the country to another than do maps for large probabilities of exceedance. Buildings: Short stiff buildings are more vulnerable to close moderate-magnitude events than are tall, flexible buildings. The latter, in turn, are more vulnerable to distant large-magnitude events than are short, stiff buildings. Thus, the contrast in hazard for short buildings from one part of the country to another will be different from the contrast in hazard for tall buildings.
  • Building codes adapt zone boundaries in order to accommodate the desire for individual states to provide greater safety, less contrast from one part of the state to another, or to tailor zones more closely to natural tectonic features. Because of these zone boundary changes, the zones do not have a deeper seismological meaning and render the maps meaningless for applications other than building codes. An example of such tailoring is given by the evolution of the UBC since its adaptation of a pair of 1976 contour maps. First, the UBC took one of those two maps and converted it into zones. Then, through the years, the UBC has allowed revision of zone boundaries by petition from various western states, e.g., elimination of zone 2 in central California, removal of zone 1 in eastern Washington and Oregon, addition of a zone 3 in western Washington and Oregon, addition of a zone 2 in southern Arizona, and trimming of a zone in central Idaho.

Older (1994, 1997) versions of the UBC code may be available at a local or university library. A redrafted version of the UBC 1994 map can be found as one of the illustrations in a paper on the relationship between USGS maps and building code maps.


Spectral evolution of the Atoll source 4U 1728-34 with RXTE and INTEGRAL: evidence for hard X-ray tail

X-Ray Astronomy 2009: Present Status, Multi-wavelength Approach and Future Perspectives: Proceedings of the International Conference (AIP Conference Proceedings), Vol. 1248, 2010, p. 213-214.

Research output : Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

We report the temporal and spectral results of the INTEGRAL and RXTE 2006-2007 observation campaign on the Atoll source 4U 1728-34 (GX 354-0). The source shows, more than once, spectral evolution as revealed by the hardness-intensity diagram. The soft state is well described by Comptonization with an electron temperature of 3 keV and a high optical depth of 6. In the hard spectral state the emission extends above 100 keV and can be described by Comptonization (with a higher temperature of 10 keV) plus a power-law component with Γ of 1.8, which is evidence for non-thermal emission from the source.


Keywords: X-ray sources (astronomical); energy spectra and interactions; outflows and bipolar flows; effective temperatures and spectral classification; opacity and line formation



Spectral variability of GX 339−4 in a hard-to-soft state transition*

We report on INTEGRAL observations of the bright black hole transient GX 339−4 performed during the period 2004 August–September, including the fast transition (10 h) observed simultaneously with INTEGRAL and RXTE on August 15 and previously reported. Our data cover three different spectral states, namely the hard/intermediate state (HIMS), the soft/intermediate state (SIMS) and the high/soft state (HSS). We investigate the spectral variability of the source across the different spectral states. The hard X-ray spectrum becomes softer during the HIMS-to-SIMS transition, but it hardens when reaching the HSS. A principal component analysis demonstrates that most of the variability occurs through two independent modes: a pivoting of the spectrum around 6 keV (responsible for 75 per cent of the variance) and an intensity variation of the hard component (responsible for 21 per cent). The pivoting is interpreted as due to changes in the soft cooling photon flux entering the corona, the second mode as fluctuations of the heating rate in the corona. These results are very similar to those previously obtained for Cygnus X-1. Our analysis of the spectra of GX 339−4 shows a high-energy excess with respect to pure thermal Comptonization models in the HIMS: a non-thermal power-law component seems to be required by the data. In all spectral states the joint IBIS, SPI and JEM-X data are well represented by hybrid thermal/non-thermal Comptonization (eqpair). These fits allow us to track the evolution of each spectral component during the spectral transition. The spectral evolution seems to be predominantly driven by a reduction of the ratio of the electron heating rate to the soft cooling photon flux in the corona, lh/ls. The inferred accretion disc soft thermal emission increases by about two orders of magnitude, while the Comptonized luminosity decreases by at most a factor of 3. This confirms that the softening we observed is due to a major increase in the flux of soft cooling photons in the corona associated with a modest reduction of the electron heating rate.

Journal

Monthly Notices of the Royal Astronomical Society – Oxford University Press

Published: Oct 11, 2008

Keywords: accretion, accretion discs; black hole physics; stars: individual: GX 339−4; gamma-rays: observations; X-rays: binaries


Spectral Inversion of 43 to 22 GHz During Small Flares in a Hard State of Cyg X-3 in February 2016

We present simultaneous 22 and 43 GHz observations of the microquasar Cyg X-3 during a series of small-scale flaring activity in a hard state. At the end of January 2016, Cyg X-3 was in a state transition from a soft to a hard state after a minimum in the hard X-ray flux at 15–50 keV. We observed Cyg X-3 on 3–6 February, immediately after Cyg X-3 went into the hard state. We found a series of episodic, low-flux flaring activity of < 500 mJy in the hard state at 22 and 43 GHz, simultaneously observed for the first time. The spectral slope was negative on 3 February, while there were events of spectral inversion on 5 and 6 February, typical for a rise phase and indicating optically thick emission, with an alternative possibility of a flat spectrum on 5 February. Previously observed small flares with fluxes similar to our observation lasted less than a day. The brightness temperature argument requires the time scale of such flares to be much less than a day for the flaring activity and associated jets to be non-thermal. Therefore, the flaring activity observed on 5 and 6 February is more likely to be two separate flares of < 1 day in the rise, rather than a longer, single event.


4. Results

4.1. Light curves

Figure 1 shows the light curve of the three sources. The different states in which these sources were observed are indicated. In J1659, the nature of the light curve was reported to be fast rise and exponential decay (Kennea et al., 2011; Yamaoka et al., 2012). The source was first observed when it was already in the HIMS; transitions between the HIMS and the SIMS were observed during the later part of the outburst (Kalamkar et al., 2011). In J1753, the nature of the light curve was also reported to be fast rise and exponential decay. It should be noted that the source was in the hard state during all these observations and that this is a hard state at 'high' intensity, as it is observed during the peak of the outburst (Cadolle Bel et al., 2007; Ramadevi & Seetha, 2007; Soleri et al., 2013; Zhang et al., 2007). GX-339 was observed in the LHS, the HIMS and the SIMS, with several transitions between the HIMS and the SIMS (Motta et al., 2011). It should be noted that unlike J1659 and J1753, the light curve of GX-339 was reported to be slow rise slow decay (Debnath et al., 2010).

4.2. Spectral evolution

Figure 2 shows representative energy spectra of the three sources. The spectra are in the HIMS for J1659, in the hard state for J1753 and in the LHS for GX-339. They all show the presence of the soft disk component and hard power-law-like emission in these observations, modelled with diskbb+comptt. The disk is significantly detected in many observations in these three sources (see below).

Figure 3 shows the evolution of the disk temperature as a function of the total unabsorbed flux. In J1659, the disk is detected over the entire period of observations. The temperature initially stays somewhat constant, followed by an increase in a correlated fashion with the flux. The correlation spans the HIMS and the SIMS, although a large scatter is seen at higher disk temperatures. In GX-339, the disk is not significantly detected in the first observation (and hence not shown in Figure 3), but it is detected in all subsequent observations. The temperature does not show a large change during the LHS (as also reported by Cadolle Bel et al., 2011). The temperature begins to increase after the source enters the HIMS (flux > 1.7e-08 erg/s/cm^2). A scatter is seen above a disk temperature of 0.5 keV, corresponding to the time when the source exhibits transitions between the HIMS and the SIMS. In J1753, the disk is detected during the rise, peak and decay of the outburst till MJD 53587 (see Figure 1). In Figure 3, it appears that the temperature increases till the flux reaches its maximum, followed by a decrease in temperature during the flux decay. However, as the errors on the disk temperature are large, this cannot be said conclusively. Independent of the large errors, we observe that both the disk temperature and the flux vary over a limited range in J1753; J1659 and GX-339 span a larger range of fluxes as well as disk temperatures.

The important spectral parameters that characterise the spectral model are the disk temperature (which in our model is equal to the input seed photon temperature) and the plasma optical depth. As the XRT CCD is only sensitive to X-ray emission in the energy range below 10 keV, we are unable to independently constrain the electron temperature, which requires detection of the spectral cut-off typically present at energies in excess of 10 keV. For this reason, the electron temperature was fixed at 50 keV in all fits. This has the effect of producing power-law like high energy emission in the XRT bandpass. The optical depth and electron temperature are known to be somewhat degenerate in the comptt model, hence, we are unable to uniquely constrain the absolute value of the optical depth. For this reason, although we fit the spectra with diskbb+comptt accounting for both the components of the accretion flow, here we present the evolution of, and correlations with only the disk parameters (temperature and radius). We emphasise that our choice of electron temperature does not affect our measurement of the disk parameters.
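
For illustration, a minimal PyXspec sketch of such a fit might look as follows. This is an assumed setup, not the authors' actual scripts: the file name is a placeholder, and the electron temperature is simply frozen at 50 keV as described above.

```python
# Minimal sketch (assumed setup, not the authors' code): absorbed disk-plus-
# Comptonization fit with the comptt electron temperature frozen at 50 keV.
from xspec import AllData, Model, Fit

AllData("xrt_spectrum.pha")                 # hypothetical grouped Swift/XRT spectrum
m = Model("phabs*(diskbb + comptt)")

m.comptt.kT.values = 50.0                   # electron temperature (keV)
m.comptt.kT.frozen = True                   # fixed, as described in the text
# (in a full analysis the comptt seed-photon temperature T0 would be tied to the diskbb Tin)

Fit.statMethod = "chi"
Fit.perform()
print("Tin =", m.diskbb.Tin.values[0], "keV")
```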

4.3. Timing evolution

Figure 4 shows representative XRT power spectra from the sources in the 0.5–10 keV energy band, using the same observations as in Figure 2. The components in each power spectrum are identified as follows: The power spectrum of J1659 (HIMS) shows four components which, in the order of increasing frequency, are the low frequency noise (lfn), the 'break' component, the QPO and the broad band noise underlying the QPO, referred to as the 'hump', as identified in Kalamkar et al. (2011) with the RXTE data and in Kalamkar et al. (2014) with the Swift data. In GX-339 (LHS), based on the frequency and rms evolution properties we observe (see below), we identify the components as the lfn, the QPO (Motta et al., 2011), the hump and an additional component seen around 3 Hz. A similar component was reported during the rise of the 2002/2003 outburst (Belloni et al., 2005). As we detect this component only in GX-339, we do not study it further. The break is not detected during these 2010 XRT observations. The break was reported in very few observations during the 2002/2003 observations of GX-339 (Belloni et al., 2005). The power spectrum of J1753 is of an observation from the peak of the outburst, during which the source was in the 'hard' state. Except for the lfn (not detected in any observation), the same components are detected in J1753 as described for J1659, as identified in Kalamkar et al. (2013).

Figure 5.— Evolution of the frequency of the different variability components as a function of the total unabsorbed flux in the 0.5-10 keV energy range. The bottom panel indicates the coverage of the observations; the components are not always detected. The states covered are: J1659 - HIMS and SIMS (lfn only - large inverted triangle), GX 339 - LHS (larger sized symbols) and HIMS, and J1753 - hard state. The filled and open symbols show the frequency in the 0.5–2 keV and 2–10 keV bands, respectively.
Source     lfn           break         QPO (type-C)   hump
J1659      HIMS, SIMS    HIMS          HIMS           HIMS
J1753      -             hard state    hard state     hard state
GX-339     LHS, HIMS     -             LHS, HIMS      LHS, HIMS

Table 1.— The table represents the detections of various variability components in the three sources in different spectral states. See Section 4.1 for the discussion on spectral states and Section 4.3 for the identification of the components.

Figure 6.— Evolution of the frequency of the different variability components as a function of the disk temperature. The states covered are: J1659 - HIMS and SIMS (lfn only - large inverted triangle), GX 339 - LHS (larger sized symbols) and HIMS, and J1753 - hard state. The filled and open symbols show the frequency in the 0.5–2 keV and 2–10 keV bands, respectively.

Figure 7.— Evolution of the fractional rms amplitude of the different variability components as a function of the disk temperature. The states covered are: J1659 - HIMS and SIMS (lfn only - large inverted triangle), GX 339 - LHS (larger sized symbols) and HIMS, and J1753 - hard state. The filled and open symbols show the fractional rms amplitude in the 0.5–2 keV and 2–10 keV bands, respectively.

The detections of the various components in different spectral states in the three sources are shown in Table 1. These components are typical of the LHS and the HIMS, and have been reported in many BHBs (see, e.g., van der Klis, 2006). We detect the type-C QPO in all the sources; type-B and type-A QPOs, which are typical of the SIMS, are not detected in our data. It should be noted that not all components are seen in each source in each observation. All the components are detected in the hard and soft bands, although not always simultaneously. Interestingly, we detect the QPO and the hump components more often in the hard band, while the break and lfn components are detected more often in the soft band. Figure 5 shows the frequency evolution of the variability components as a function of the total unabsorbed flux. In BHBs, the component frequencies generally correlate with flux. We observe that:

1) The QPO frequency is strongly correlated with flux in J1659 (HIMS) and GX-339 (LHS and HIMS, see below), but in our data the correlation is not clear in J1753 (hard state). A strong correlation has been reported in J1753 with the RXTE data (Zhang et al., 2007; Ramadevi & Seetha, 2007) during the decay.

2) The hump frequency shows a strong correlation with flux for J1659 (HIMS), but this is not clearly seen in J1753 (hard state) in our data. In GX-339, the hump frequency does not exhibit a clear correlation with the flux during the observations at low flux, which are in the LHS. It increases sharply only during the two detections in the hard band which correspond to the HIMS; the frequency is higher in the hard band than in the soft band.

3) The break component in J1659 (HIMS) is detected much later along the outburst than the rest of the components and shows only two detections. The frequency shows an increase with flux only in the hard band. Correlation of the frequency with intensity has been reported with the RXTE data in the 2-60 keV range (Kalamkar et al., 2014). The frequency is higher in the hard band than in the soft band for the only simultaneous detection. Such an energy dependence of the break frequency (frequency increasing with energy) has been reported in this source (Kalamkar et al., 2014) and also in other sources (Belloni et al., 1997; Kalemci et al., 2003). In J1753 (hard state), during the peak of the outburst the break frequency does not show a clear dependence on flux, but during the flux decay (below 1.5e-08 erg/s/cm^2) the frequency decreases. The break component is not detected in GX-339.

4) The frequency of the lfn component varies in the range 0.01-0.1 Hz with no clear flux dependence over a large range of fluxes in J1659 and GX-339; in J1659 this is across the HIMS and the SIMS (there is one detection of the lfn during the SIMS), and in GX-339 during the LHS and the HIMS. The lfn is not detected in J1753.


Ultraluminous X-ray sources

ULXs were first discovered in the 1980s by the Einstein Observatory. Later observations were made by ROSAT. Great progress has been made by the X-ray observatories XMM-Newton and Chandra, which have much greater spectral and angular resolution. A survey of ULXs by Chandra shows that there is approximately one ULX per galaxy in galaxies which host ULXs (most do not). [1] ULXs are found in all types of galaxies, including elliptical galaxies, but are more common in star-forming galaxies and in gravitationally interacting galaxies. Tens of percent of ULXs are in fact background quasars; the probability for a ULX to be a background source is larger in elliptical galaxies than in spiral galaxies.

The fact that ULXs have luminosities exceeding the Eddington luminosity of stellar-mass objects implies that they are different from normal X-ray binaries. There are several models for ULXs, and it is likely that different models apply to different sources.

Beamed emission — If the emission of the sources is strongly beamed, the Eddington argument is circumvented twice: first because the actual luminosity of the source is lower than inferred, and second because the accreted gas may come from a different direction than that in which the photons are emitted. Modelling indicates that stellar mass sources may reach luminosities up to 10^40 erg/s (10^33 W), enough to explain most of the sources, but too low for the most luminous sources. If the source is stellar mass and has a thermal spectrum, its temperature should be high, temperature times the Boltzmann constant kT ≈ 1 keV, and quasi-periodic oscillations are not expected.

Intermediate-mass black holes — Black holes are observed in nature with masses of the order of ten times the mass of the Sun, and with masses of millions to billions of solar masses. The former are 'stellar black holes', the end product of massive stars, while the latter are supermassive black holes, which exist in the centers of galaxies. Intermediate-mass black holes (IMBHs) are a hypothetical third class of objects, with masses in the range of hundreds to thousands of solar masses. [2] Intermediate-mass black holes are light enough not to sink to the center of their host galaxies by dynamical friction, but sufficiently massive to be able to emit at ULX luminosities without exceeding the Eddington limit. If a ULX is an intermediate-mass black hole, in the high/soft state it should have a thermal component from an accretion disk peaking at a relatively low temperature (kT ≈ 0.1 keV) and it may exhibit quasi-periodic oscillations at relatively low frequencies.
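
The Eddington argument can be made quantitative with a back-of-the-envelope estimate (standard numbers, not taken from this article): the Eddington luminosity is roughly 1.26 × 10^38 erg/s per solar mass, so an isotropically emitting, sub-Eddington source at ULX luminosities requires a mass well above the stellar range.

```python
# Rough Eddington-limit estimate: L_Edd ~ 1.26e38 erg/s per solar mass for
# hydrogen accretion, so an isotropic, sub-Eddington source at ULX luminosities
# implies a mass well above the stellar range.
L_EDD_PER_MSUN = 1.26e38      # erg/s per solar mass

def minimum_mass_msun(luminosity_erg_s):
    """Smallest mass (solar masses) that can radiate this luminosity at or below Eddington."""
    return luminosity_erg_s / L_EDD_PER_MSUN

for L in (1e39, 1e40, 1e41):
    print(f"L = {L:.0e} erg/s -> M >= {minimum_mass_msun(L):.0f} solar masses")
```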

An argument made in favor of some sources as possible IMBHs is the analogy of their X-ray spectra to scaled-up stellar-mass black hole X-ray binaries. The spectra of X-ray binaries have been observed to go through various transition states. The most notable of these states are the low/hard state and the high/soft state (see Remillard & McClintock 2006). The low/hard state, or power-law dominated state, is characterized by an absorbed power-law X-ray spectrum with spectral index from 1.5 to 2.0 (hard X-ray spectrum). Historically, this state was associated with a lower luminosity, though with better observations with satellites such as RXTE, this is not necessarily the case. The high/soft state is characterized by an absorbed thermal component (blackbody with a disk temperature of kT ≈ 1.0 keV) and a power-law (spectral index ≈ 2.5). At least one ULX source, Holmberg II X-1, has been observed in states with spectra characteristic of both the high and low state. This suggests that some ULXs may be accreting IMBHs (see Winter, Mushotzky, Reynolds 2006).

Background quasars — A significant fraction of observed ULXs are in fact background sources. Such sources may be identified by a very low temperature (e.g. the soft excess in PG quasars).

Supernova remnants — Bright supernova (SN) remnants may perhaps reach luminosities as high as 10^39 erg/s (10^32 W). If a ULX is a SN remnant it is not variable on short time-scales, and fades on a time-scale of the order of a few years.



The correlated intensity and spectral evolution of Cygnus X-1 during state transitions

In: The Astrophysical Journal, Vol. 546, No. 2, Part 2, 10.01.2001.

Research output : Contribution to journal › Article


Using data from the All-Sky Monitor (ASM) aboard the Rossi X-Ray Timing Explorer (RXTE), we found that the 1.5-12 keV X-ray count rate of Cygnus X-1 is, on timescales from 90 s to at least 10 days, strongly correlated with the spectral hardness of the source in the soft state but is weakly anticorrelated with the latter in the hard state. The correlation shows an interesting evolution during the 1996 spectral state transition. The entire episode can be roughly divided into three distinct phases: (1) a 20 day transition phase from the hard state to the soft state, during which the correlation changes from being negative to positive, (2) a 50 day soft state with a steady positive correlation, and (3) a 20 day transition back to the hard state. The pointed RXTE observations confirmed the ASM results but revealed new behaviors of the source at energies beyond the ASM passband. We discuss the implications of our findings.



Accretion flow diagnostics with X-ray spectral timing: the hard state of SWIFT J1753.5−0127

Recent XMM–Newton studies of X-ray variability in the hard states of black hole X-ray binaries (BHXRBs) indicate that the variability is generated in the ‘standard’ optically thick accretion disc that is responsible for the multi-colour blackbody emission. The variability originates in the disc as mass-accretion fluctuations and propagates through the disc to ‘light up’ inner disc regions, eventually modulating the power-law emission that is produced relatively centrally. Both the covariance spectra and time-lags that cover the soft bands strongly support this scenario.

Here, we present a comparative spectral-timing study of XMM–Newton data from the BHXRB SWIFT J1753.5−0127 in a bright 2009 hard state with that from the significantly fainter 2006 hard state to show for the first time the change in disc spectral-timing properties associated with a global increase in both the accretion rate and the relative contribution of the disc emission to the bolometric luminosity.

We show that, although there is strong evidence for intrinsic disc variability in the more luminous hard state, the disc variability amplitude is suppressed relative to that of the power-law emission, which contrasts with the behaviour at lower luminosities where the disc variability is slightly enhanced when compared with the power-law variations. Furthermore, in the higher luminosity data the disc variability below 0.6 keV becomes incoherent with the power-law and higher energy disc emission at frequencies below 0.5 Hz, in contrast with the coherent variations seen in the 2006 data. We explain these differences and the associated complex lags in the 2009 data in terms of the fluctuating disc model, where the increase in accretion rate seen in 2009 leads to more pronounced and extended disc emission. If the variable signals are generated at small radii in the disc, the variability of disc emission can be naturally suppressed by the fraction of unmodulated disc emission arising from larger radii. Furthermore, the drop in coherence can be produced by disc accretion fluctuations arising at larger radii which are viscously damped and hence unable to propagate to the inner, power-law emitting region.

