Relation between phase and magnitude

Given the phase of a planet or satellite, I can find the illuminated area visible to us if I treat the disk as a 2D surface. But how do I find the percentage of illuminated area visible when treating it as a 3D surface, and from that the magnitude of the partially illuminated planet/satellite?

In general, you'd have to project the light from the Sun (assuming we're talking about objects in our solar system) onto the 3D surface and then project that illuminated surface onto the 2D disk seen by the observer. This can be accomplished via a ray-tracing algorithm.

This procedure will give you the fraction of the surface illuminated.

Now if you are concerned about the fraction of the illuminated surface that is reflected in a specific direction -- so that it is "viewable" from that direction -- you have to do the ray tracing in 3D from the source (the Sun) to the object and on to the viewer. There are packages you can google that will do this type of ray tracing if you can model the objects in 3D…
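For a sphere, though, no ray tracing is needed for the illuminated fraction itself: if α is the phase angle (the Sun–object–observer angle), the illuminated fraction of the visible disk is k = (1 + cos α)/2. The sketch below computes k and a crude magnitude estimate that simply dims the full-phase magnitude by k; real bodies need an empirical phase function on top of this, and the full-phase magnitude used here is an assumed input.

```python
import math

def illuminated_fraction(phase_angle_deg):
    """Fraction of the visible disk that is illuminated, for a sphere."""
    alpha = math.radians(phase_angle_deg)
    return (1.0 + math.cos(alpha)) / 2.0

def phase_magnitude(full_phase_mag, phase_angle_deg):
    """Crude magnitude estimate: dim the full-phase magnitude by the
    illuminated fraction (ignores the body's actual phase function)."""
    k = illuminated_fraction(phase_angle_deg)
    return full_phase_mag - 2.5 * math.log10(k)

# alpha = 0 deg -> fully lit (k = 1); alpha = 90 deg -> half lit (k = 0.5)
k_full = illuminated_fraction(0.0)
k_quarter = illuminated_fraction(90.0)
```

For Venus or the Moon, published phase-law corrections (which also account for surface scattering) replace the simple 2.5 log₁₀ k term.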

Phase shift magnitude and direction determine whether Siberian hamsters reentrain to the photocycle

Body temperature (Tb) or activity rhythms were monitored in male Siberian hamsters (Phodopus sungorus) housed in an LD cycle of 16 h light/day from birth. At 3 months of age, rhythms were monitored for 14 days, and then the LD cycle was phase delayed by 1, 3, or 5 h or phase advanced by 5 h in four separate groups of animals. Phase delays were accomplished via a 1- or 3-h extension of the light phase or via a 5-h extension of the dark phase. The phase advance was accomplished via a 5-h shortening of the light phase. After 2 to 3 weeks, hamsters that were phase delayed by 1 or 3 h were then phase advanced by 1 or 3 h, respectively, via a shortening of the light phase. All of the animals reentrained to phase delays of 1 or 3 h and to a 1-h phase advance; 79% reentrained to a 3-h phase advance. In contrast, only 13% of the animals reentrained to the 5-h phase advance, 13% became arrhythmic, and 74% free-ran for several weeks. After the 5-h phase delay, however, reentrainment was observed in 50% of the animals, although half of them required more than 21 days to reentrain. The response to a phase shift could not be predicted by any parameter of circadian rhythm organization assessed prior to the phase shift. These data demonstrate that a phase shift of the LD cycle can permanently disrupt entrainment mechanisms and eliminate circadian Tb and activity rhythms. Magnitude and direction of a phase shift of the LD cycle determine not only the rate but also the probability of reentrainment. Furthermore, the phase of the LD cycle at which the phase shift is made has a marked effect on the proportion of animals that reentrain. Light exposure during mid-subjective night combined with daily light exposure during the active phase may explain these phenomena.

The relation between VL and Vph

The phasor diagram for the phase voltages in a star connection is shown below. We will calculate one of the line voltages; VRY is the voltage between lines R and Y. From the figure, VRY = VR − VY.

Hence, VRY is obtained by adding (−VY) to VR. The resultant VRY is 30° ahead of VR; hence VL is 30° ahead of Vph. The magnitude of VRY can be calculated as |VRY| = 2 Vph cos 30° = √3 Vph.

Hence, the magnitude of the line voltage is √3 times the magnitude of the phase voltage.
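The phasor subtraction can be checked directly with complex arithmetic; the sketch below assumes an example 230 V phase voltage:

```python
import cmath
import math

V_ph = 230.0  # assumed example phase (line-to-neutral) voltage, volts

# Balanced star connection: phase voltages displaced by 120 degrees
V_R = cmath.rect(V_ph, 0.0)
V_Y = cmath.rect(V_ph, math.radians(-120.0))

V_RY = V_R - V_Y  # line voltage between R and Y: VR + (-VY)

magnitude = abs(V_RY)                         # = sqrt(3) * V_ph
angle_deg = math.degrees(cmath.phase(V_RY))   # = +30 deg, leading V_R
```

The computed angle confirms that the line voltage leads the corresponding phase voltage by 30°, exactly as the phasor diagram shows.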

Understanding Audio Phase

Has your mix ever sounded “not quite right” but you can’t put your finger on why? You might be experiencing phase cancellation, a phenomenon that can make certain frequencies vanish from your mix. To help you out, this Studio Basics article will help you understand phase — what it is, why it matters, and what it means to be out of phase.

The Laws of Physics

Essentially, phase refers to sound waves — or simply put, the vibration of air. When we listen to sound, what we’re hearing are changes in air pressure. Just like the ripple of a stone in water, sound is created by the movement of air. And just as in water, those movements cause a rippling effect — waves comprised of peaks and troughs. Those waves cause our eardrums to vibrate, and our brains translate that information into sound.

When we record sound, the diaphragms in our microphones essentially replicate the action of our eardrums, vibrating in accordance with those waves. The waves’ peaks cause the mic’s diaphragm to move in one direction, while their troughs generate movement in the opposite direction.

The first illustration below shows what happens when we’ve got two channels of a signal in phase. When both channels are in phase, we hear the sound at the same amplitude level at the same time in both ears.

But if one side of the stereo signal is reversed, as shown in the second illustration, the signals will cancel each other out. In fact, if we were using a pure sine wave, combining both signals out of phase would result in silence, since the sounds would literally cancel each other out.

In the real world, we normally don’t listen to pure sine waves. Since most of the music we hear and the instruments we record are a complex combination of multiple waves and harmonics, the results of phase cancellation will be equally complex.
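The pure-sine-wave case described above is easy to verify numerically. The sketch below (sample rate and frequency are arbitrary choices) sums a tone with an in-phase copy and with a polarity-inverted copy:

```python
import math

SAMPLE_RATE = 48000
FREQ = 440.0
N = 480  # 10 ms of samples

# A pure tone and a polarity-flipped copy (180-degree phase shift)
tone    = [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE) for n in range(N)]
flipped = [-s for s in tone]

# Summing the in-phase pair doubles the amplitude...
in_phase_sum  = [a + b for a, b in zip(tone, tone)]
# ...while summing the out-of-phase pair cancels to silence.
out_phase_sum = [a + b for a, b in zip(tone, flipped)]

peak_in  = max(abs(s) for s in in_phase_sum)   # ~2.0
peak_out = max(abs(s) for s in out_phase_sum)  # 0.0: complete cancellation
```

With a complex signal (many partials at different phases), the same experiment produces partial cancellation at some frequencies and reinforcement at others, which is exactly the comb-filtered sound of a phase problem.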

In the Studio

When recording, phase issues can quickly become complicated, usually becoming a problem when more than one channel is used to record a single source, such as stereo miking a guitar, multi-miking a drum set, or using a microphone/DI combo for bass. As sound waves of different frequencies reach different microphones at different times, the potential for one mic to receive a positive phase while another receives a negative is greatly increased, and the relationship between all of these waves’ phases can be unpredictable. In fact, the more mics in play, the more inevitable some sort of phase issues become.

Let’s look at a simple scenario, like a stereo recording of an acoustic guitar.

Most often, two mics will be set up, with one pointed toward the sound hole to pick up the lower frequencies, and the second mic pointed toward the neck and fingerboard to pick up the attack. Of course, the guitar’s frequency range covers several octaves, which means a wide range of different audio wavelengths. Since the mics are a fixed distance from the source, those different waves will arrive at the mics at different points.

Inevitably, one or more harmonics will end up sounding weaker than the rest. Your best practice would involve moving the mics very slightly — even a fraction of an inch can make a difference — until you achieve the best sound to your ears. Another solution would be to use a mid-side miking technique, which you can read about in our Mid Side (MS) Mic Recording Basics article.

Again, the more microphones used in a recording, the more potential for phase problems. In modern music recording, that usually points to the drum kit. Consider even a single snare drum, miked from above and beneath. Since the top and bottom heads of the drum are usually moving in directly opposing motion (when the top drum head is hit, it moves inward, causing the bottom head to move outward), the two mics will record signals that are directly out of phase.

Now factor in the hi-hat mic, a pair of overheads, at least one kick drum mic and one on every tom, not to mention the relationship to ambient mics, and you’ve got a sonic soup that’s ripe for phase problems. That’s why many microphones, as well as mic preamps and consoles, offer a phase flip switch. It's also why a lot of “old school” recording engineers wax nostalgic about the days when they recorded a kit with only two or three mics!

There are plenty of other “gotchas” that can introduce phase problems into your recordings. For example, a bass track recorded direct (DI) can be too clean sounding, so putting a mic on the bass amp cabinet and mixing the two sounds can give the extra “oomph” it needs — but it can also introduce phase problems.

Even certain delay settings, including pre-delays within a reverb patch, can create a delayed copy of your original signal that ends up being out of phase.

Check Your Speakers

Phase cancellation can also occur by simply wiring speakers incorrectly, inadvertently reversing the polarity of one channel. It’s surprising how many home stereos — and even project studios — have their monitors wired out of phase. In some circumstances, it may not even be apparent without careful listening. Though this is commonly referred to as “out of phase wiring,” technically-speaking it’s an issue of polarity. That said, the audible effect of this polarity reversal is the same as you get with phase cancellation.

The easiest way to check your speakers is to sum your mix to mono (more on this later). Many stereos and most mixing consoles allow you to do this, but even in stereo, there are some telltale signs of phase problems.

What does a phase problem sound like? Since phase cancellation is most apparent in low frequency sounds, the audible result of out of phase monitors is typically a thin-sounding signal with little or no bass sound. Another possible result is that the kick drum or bass guitar will move around the mix, rather than coming from a single spot.

Another common artifact of out-of-phase stereo mixes is where signals panned to the center disappear, while sounds panned hard to one side remain. Often this will be the case with a lead vocal or instrument solo — the main part will vanish, leaving only the reverb. In fact, this is how many of those old “remove the lead vocal” karaoke boxes work — they flip the phase of one side of the stereo mix, relying on the assumption that in most commercially recorded tracks, the lead vocal is panned dead center.

So what’s the fix?

As with most things, the answer is “it depends.” Assuming you identify a phase problem during the recording process, a fix is as easy as moving a mic or flipping the phase on a mic or its input channel.

When attempting to capture ambience, there's also a quick cheat: the 3:1 Rule of Mic Placement. Put simply, when using two microphones to record a source, try placing the second mic at least three times as far from the first mic as the first mic is from the source. So if the first mic is one foot from the source, the second mic should be placed at least three feet from the first mic. Using this simple 3:1 rule can minimize phase problems created by the time delay between mics.
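As a sanity check on why the 3:1 rule works, the sketch below (speed of sound and distances are assumed example values) computes the inverse-square level drop at three times the distance, and the lowest frequency cancelled when a delayed copy of a signal is summed with the original:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, room temperature

def level_drop_db(distance_ratio):
    """Inverse-square level drop for a point source in a free field."""
    return 20.0 * math.log10(distance_ratio)

def first_notch_hz(path_difference_m):
    """Lowest frequency fully cancelled when a delayed copy is summed:
    cancellation occurs when the path difference is half a wavelength."""
    return SPEED_OF_SOUND / (2.0 * path_difference_m)

# 3:1 rule: the far mic hears the source about 9.5 dB down
drop = level_drop_db(3.0)

# Example: 0.6 m of extra path puts the first notch near 286 Hz
notch = first_notch_hz(0.60)
```

Because the far mic's copy of the source arrives roughly 9.5 dB down, the comb-filter notches in the summed signal are shallow dips of a few dB rather than complete cancellations.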

Of course, if the problem doesn’t show itself until you’re mixing, it’s often possible to pull the tracks up in your DAW, zoom in close on their waveforms, and slightly nudge one track just a bit. You’d be amazed what a difference just moving a track by one or two milliseconds can make. There are also some very effective phase alignment plug-ins on the market that can really clean things up — and even serve as great creative tools — one of which is the UAD Little Labs IBP Phase Alignment Tool Plug‑In.

Sum It Up

We’ve only scratched the surface, but the bottom line is that phase issues are a fact of life, and practically unavoidable.

The first order of business is to identify the problem. Most phase problems will not show themselves in stereo, and will only appear when you collapse your mix into a single summed channel. That’s why it’s critically important, as you build your mixes, to check them regularly in mono. Don’t wait until you’ve got a completed mix to sum it into mono. Check the basic tracks, especially drums and bass, early on in the process when the arrangement and the mix are less dense and fewer things are going on. And check it again every time you add a few more instruments, or change a track’s EQ, or add reverb.

As with many things, the sooner you catch a phase problem, the easier it will be to fix. Happy mixing!

How does the Moon's phase affect the skyglow of any given location, and how many days before or after a new Moon is a dark site not compromised?

By: Tony Flanders, July 21, 2006

How does the Moon's phase affect the skyglow of any given location? How many days before or after a new Moon is a dark site not compromised?

This photomosaic, covering about 65% of the sky, shows how unevenly distributed the Moon's glow is. The Moon itself is blocked by a shade to keep it from burning out the photo.
Tony Flanders

The answer is complicated because the Moon's glow is even more directional than light pollution. Skyglow is several times brighter near the Moon than on the opposite side of the sky. And the Moon's impact is greatly reduced when it's near the horizon.

But according to my own measurements and ones by Brian Skiff (Lowell Observatory), the sky brightness straight overhead at full Moon is roughly magnitude 18.0 per square arcsecond (18.0 mpss). That matches the skyglow on a moonless night at my home near the center of Boston, Massachusetts.

At first and last quarter, the Moon is only about a tenth this bright, yielding a sky brightness of 20.5 mpss — darker than anywhere within 40 miles of Boston's center. So in most suburban locations, a 50%-illuminated Moon has little effect unless it's close to the object that you're observing.

The natural background skyglow at a pristine site is around 22.0 mpss. That's somewhat brighter than the glow from a four-day-old (16%-illuminated) Moon.
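The mpss arithmetic above follows from the definition of the magnitude scale: a brightness ratio R corresponds to a magnitude difference of 2.5 log₁₀ R. A minimal check of the quarter-Moon figure:

```python
import math

def delta_magnitude(brightness_ratio):
    """Magnitude difference for a given brightness ratio
    (ratio > 1 means the fainter condition)."""
    return 2.5 * math.log10(brightness_ratio)

full_moon_sky = 18.0  # mpss straight overhead at full Moon (from the article)

# Quarter Moon: about one tenth the brightness -> sky 2.5 mag darker
quarter_moon_sky = full_moon_sky + delta_magnitude(10.0)  # 20.5 mpss
```

Note that larger mpss numbers mean a darker sky, which is why dividing the Moon's brightness by ten adds 2.5 to the figure.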

Quantum Optics

8.15 Quantum Correlation Functions

The photocount variance formula (8.336) relates to a quantity that can be determined experimentally and tells us that the theoretical expression for this quantity (the right-hand side of the equation) involves the expectation value of a product of four field functions, in addition to that of a product of two field functions, where the factors in the product may correspond to different space-time points (this is seen more directly from the expression of the photocount variance in terms of the field strength operator, i.e., one where N̂_D(t, t′) is replaced with Ĵ(t, t′) and the constant η_Q is appropriately rescaled).

More generally, one is led to the consideration of quantum correlation functions involving normal-ordered products of field operators evaluated at distinct space-time points, where there may be an arbitrary number of factors in the products (with the number of positive-frequency factors, however, being the same as the number of negative-frequency factors). For instance, the first-order correlation function in formula (8.331) defines (up to an appropriate scale factor) the average intensity at (r, t),

G^(1)(r t, r t) = ⟨Ê^(−)(r, t) Ê^(+)(r, t)⟩,

while the first-order field correlation between space-time points (r1, t1) and (r2, t2) is of the form

G^(1)(r1 t1, r2 t2) = ⟨Ê^(−)(r1, t1) Ê^(+)(r2, t2)⟩.

A correlation function of the second order, on the other hand, looks like

G^(2)(r1 t1, r2 t2) = ⟨Ê^(−)(r1, t1) Ê^(−)(r2, t2) Ê^(+)(r2, t2) Ê^(+)(r1, t1)⟩.

These field correlation functions define the coherence characteristics of an optical field, where, as in the case of a classical field, correlation functions of relatively low order (mostly those of order 1 and 2) relate to field characteristics that are commonly determined experimentally. As in Eq. (8.331), all the correlation functions involve products of field operators in the normal order.

The first-order correlation in Eq. (8.339a) is determined by a device placed at the point r that measures the ensemble-averaged instantaneous intensity (compare Eqs. (8.331) and (8.339a) with the semiclassical formula (7.159)) by means of the mean photocount rate.

On the other hand, the second-order correlation function G^(2)(r1 t1, r2 t2) gives the correlation between the photocount rate at (r1, t1) and that at (r2, t2). All these quantum correlation functions are analogous to classical correlation functions of various orders. Indeed, the optical equivalence theorem allows us to interpret the quantum correlations formally in terms of corresponding correlations of an equivalent classical field in a mixed classical state defined by a distribution function in a surrogate phase space.

This is seen by expressing the electric field operators occurring in the expressions for the correlation functions in terms of the creation and the annihilation operators and then invoking the P-representation of the field state in which the expectation value is sought to be evaluated (see Section 8.10.2 ).

However, as I have already mentioned, this does not reduce the quantum correlations to classical ones. As regards the features based on the first-order correlation functions, though, there indeed is a convergence between the quantum and the classical coherence characteristics. On the other hand, the coherence characteristics based on the second-order correlation functions clearly distinguish between the classical and the quantum descriptions. I will now briefly outline what this means.
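One standard way to make this distinction concrete (a textbook illustration, not an excerpt from this text) is the normalized second-order correlation at zero delay, g^(2)(0) = ⟨n(n−1)⟩/⟨n⟩², evaluated from a photon-number distribution. A Poissonian (coherent-state) distribution gives g^(2)(0) = 1, while a one-photon Fock state gives g^(2)(0) = 0, a value no classical field can reach:

```python
import math

def g2_zero(pn):
    """g2(0) = <n(n-1)> / <n>^2 from a photon-number distribution,
    where pn[n] is the probability of detecting n photons."""
    mean_n  = sum(n * p for n, p in enumerate(pn))
    mean_nn = sum(n * (n - 1) * p for n, p in enumerate(pn))
    return mean_nn / mean_n ** 2

def poisson(mean, nmax=60):
    """Truncated Poisson distribution (coherent-state photon statistics)."""
    return [math.exp(-mean) * mean ** n / math.factorial(n) for n in range(nmax)]

g2_coherent = g2_zero(poisson(2.0))  # -> 1: the classical coherent boundary
g2_fock1    = g2_zero([0.0, 1.0])    # single-photon state -> 0: nonclassical
```

Any classical intensity distribution forces g^(2)(0) ≥ 1, so a measured value below 1 (photon antibunching) is a direct signature of the quantum description.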

Right Going (Forward Traveling) Wave

  • The gray dots represent the motion of the fluid particles in the medium; as the wave travels from left to right, the particles are temporarily displaced to the right (in the positive direction) from their equilibrium positions, returning to equilibrium after the wave has passed.
  • The black plot and arrow represent the horizontal displacement of the fluid particle, originally at the green equilibrium position, as the wave passes by. A large vertical value for the plot indicates a large positive horizontal displacement (to the right) from equilibrium.
  • The red arrow and plot represent the particle velocity. When the particles are moving to the right (in the positive direction), the velocity is positive and the arrow points to the right. When the particles are moving to the left (in the negative direction, back toward equilibrium), the velocity is negative and the arrow points to the left.
  • The blue plot and words represent the pressure. As the wave travels to the right and the particles begin to be displaced in the positive direction (to the right) by different amounts, the particles at the leading edge of the wave are bunched together (compression) and the pressure is positive. As the wave passes and the particles begin to move left (returning to their equilibrium positions), the particles are more spread out (rarefaction) and the pressure is negative.

The first still frame at right shows that the particle (whose equilibrium location is identified by the green dot and dashed line) is being displaced in the positive direction, as evidenced by the black arrow pointing to the right. At this instant, the particle is moving to the right with its maximum velocity: the red arrow points to the right, and the value of the particle velocity plot is a maximum at the green equilibrium location. The neighboring particles (before and after the green equilibrium position) have been displaced to the right and are now bunched together, so the pressure associated with the green equilibrium location is a maximum.

The second still frame represents the time at which the particle (originally at the green equilibrium location) has reached its maximum displacement. At this instant, the particle velocity is zero. Also, the spacing between the displaced particle (originally at the green dot, but now displaced to the right to the black dot at the tip of the arrow) and its displaced neighbors is the same as when the particles were all at their equilibrium locations, so the pressure is zero.

The third still frame represents the time at which the particle (originally at the green equilibrium location) is still displaced in the positive direction (the black arrow still points to the right), but the particle is moving to the left with a negative velocity as it returns to its equilibrium position. The particle is now further away from its neighbors than the equilibrium spacing (rarefaction), so the pressure is negative.
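The phase relationships in these frames can be checked against the standard 1D plane-wave expressions: for a right-going wave with displacement ξ = A sin(kx − ωt), the particle velocity is u = ∂ξ/∂t and the acoustic pressure is p = −ρc² ∂ξ/∂x, which makes p and u exactly in phase (p = ρcu). A sketch with arbitrary example parameters:

```python
import math

A, k, omega = 1.0, 2.0, 4.0   # amplitude, wavenumber, angular frequency
rho = 1.2                      # assumed fluid density
c = omega / k                  # wave speed

def displacement(x, t):
    """xi = A sin(kx - wt), a right-going wave."""
    return A * math.sin(k * x - omega * t)

def velocity(x, t):
    """u = d(xi)/dt."""
    return -A * omega * math.cos(k * x - omega * t)

def pressure(x, t):
    """p = -rho * c^2 * d(xi)/dx (linear acoustics)."""
    return -rho * c * c * A * k * math.cos(k * x - omega * t)

# For a right-going wave, p = rho * c * u at every point and time,
# so pressure and particle velocity peak together, while displacement
# lags them by a quarter cycle.
x, t = 0.3, 0.7
```

For a left-going wave the same algebra gives p = −ρcu, so pressure and velocity would instead be exactly out of phase.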

Phase Shifts and Interference/Diffraction Patterns

To see why relative phase shift is important, consider the superposition of two identical waves that have a relative phase shift of π:

Destructive interference of waves (solid red and dashed red) at a relative phase shift of π, giving the net result of zero (blue) everywhere.

These waves are called out of phase to denote the fact that the phase shift puts the peaks of one wave exactly opposite the peaks of the other. The result of the superposition is that the positive and negative peaks cancel, yielding zero everywhere, which is called destructive interference.

If two waves are in phase, however, the peaks line up. This always occurs when the relative phase shift is zero, but also effectively occurs for small phase shifts. The result is constructive interference, where the peaks of the result are at a height given by the sum of the two original peaks:

Constructive interference of two waves (solid red and dashed red) that are perfectly in phase, giving a result of larger amplitude (blue).

Below, some examples of how superposition of waves at different phase shifts cause important interference and diffraction effects in physics are explored.

Photons corresponding to light of wavelength λ are fired at a barrier with two thin slits separated by a distance d, as shown in the diagram below. After passing through the slits, they hit a screen at a distance D away, with D ≫ d, and the point of impact is measured. Remarkably, both experiment and the theory of quantum mechanics predict that the number of photons measured at each point along the screen follows a complicated series of peaks and troughs called an interference pattern, as below. The photons must somehow exhibit the wave behavior of a relative phase shift to be responsible for this phenomenon. Find the condition for which maxima of the interference pattern occur on the screen.

Left: actual experimental two-slit interference pattern of photons, exhibiting many small peaks and troughs. Right: schematic diagram of the experiment as described above [6].


Since D ≫ d, the angle from each of the slits is approximately the same and equal to θ. If y is the vertical displacement to an interference peak from the midpoint between the slits, it is therefore true that:

D tan θ ≈ D sin θ ≈ Dθ = y.

Furthermore, there is a path difference ΔL between the two slits and the interference peak. Light from the lower slit must travel ΔL further to reach any particular spot on the screen, as in the diagram below:

Light from the lower slit must travel further to reach the screen at any given point above the midpoint, causing the interference pattern.

The condition for constructive interference is that the path difference ΔL is exactly equal to an integer number of wavelengths. The phase shift of light traveling over an integer number n of wavelengths is exactly 2πn, which is the same as no phase shift and therefore gives constructive interference. From the above diagram and basic trigonometry, one can write:

ΔL = d sin θ ≈ dθ = nλ.

The first equality is always true; the second is the condition for constructive interference.

Now using θ = y/D, one can see that the condition for maxima of the interference pattern, corresponding to constructive interference, is:

nλ = dy/D,

i.e. the maxima occur at the vertical displacements of:

y = nλD/d.
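With the small-angle result y = nλD/d, the maxima positions follow immediately; the numbers below use assumed example values for λ, d, and D:

```python
wavelength = 550e-9   # green light, meters (assumed example)
d = 0.25e-3           # slit separation, meters (assumed example)
D = 1.0               # slit-to-screen distance, meters (assumed example)

# Maxima at y_n = n * wavelength * D / d (small-angle approximation),
# converted to millimeters for readability
maxima_mm = [n * wavelength * D / d * 1000 for n in range(4)]
# n = 0, 1, 2, 3 -> 0.0, 2.2, 4.4, 6.6 mm
```

Note how the fringe spacing λD/d grows as the slits are brought closer together, which is why visible-light interference needs sub-millimeter slit separations to be seen by eye.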

When light shines on a thin film like a soap bubble, an interference pattern results. This is because the light that reflects off the top surface of the thin film has a small phase shift from the light that reflects back out off the bottom surface of the thin film, which has traveled an extra distance related to the thickness of the film (see the diagram below).

Thin-film interference on a soap bubble [7]. The color depends on the thickness of the bubble; for monochromatic light the pattern would be one of light and dark bands.

Schematic diagram of thin-film interference. Some light entering at angle $\theta_1$ reflects off the top surface, incurring a $\pi$ phase shift. The rest of the light enters the film at an angle dictated by Snell's law, reflects off the bottom, and exits again with a phase shift relative to the originally reflected wave.

To complicate things, when light reflects off a medium of higher index of refraction, Maxwell's equations require that the phase of the light shift by $\pi$.

If the thin film is of thickness $d$, find the condition for destructive interference, in terms of $d$, the wavelength $\lambda$ of the light, the index of refraction $n$ of the film, and the angle $\theta_1$ of incidence with respect to the normal, when light entering from air shines on the film. Note that the index of refraction of the film is greater than that of air (for which $n_{\text{air}} = 1$).


For destructive interference, the total extra distance traveled (scaled by the index of refraction) must be an integer number of wavelengths of the light. This is because the ray that reflects off the top surface of the film picks up a phase shift of $\pi$. If the extra distance traveled (scaled by the index of refraction) is an integer number of wavelengths, this extra phase shift puts the two rays perfectly out of phase, resulting in destructive interference. The reason for the scaling by the index of refraction is that the effective velocity of light is slower when $n \neq 1$, so more phase is accumulated by traveling the same distance (the frequency is the same, but the velocity is slower, so there is more time to accumulate phase $\Delta\phi = \omega \Delta t$).
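The optical-path idea in the paragraph above can be made concrete: light of vacuum wavelength $\lambda$ crossing a geometric distance $L$ in a medium of index $n$ accumulates phase $2\pi n L / \lambda$, as if it had traveled the optical path length $nL$ in vacuum. A minimal Python sketch (the function name and numerical values are illustrative assumptions):

```python
import math

def accumulated_phase(L, wavelength_vacuum, n):
    """Phase accumulated over geometric distance L in a medium of index n:
    2*pi*(n*L)/lambda_vacuum, i.e. the vacuum phase for optical path n*L."""
    return 2 * math.pi * n * L / wavelength_vacuum

# 1 micron of glass (n = 1.5) at 500 nm accumulates 3 full cycles of phase,
# versus only 2 cycles for the same distance in vacuum.
phase = accumulated_phase(1e-6, 500e-9, 1.5)  # = 6*pi
```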

To find the extra distance traveled in terms of $\theta_1$, first use Snell's law to find the angle $\theta_2$ at which the light enters the film:

$$\sin\theta_1 = n \sin\theta_2.$$

From the diagram, one can see that the extra distance traveled inside the film is $\Delta L_{\text{film}} = AB + BC$:

$$\Delta L_{\text{film}} = \frac{2d}{\cos\theta_2}.$$

There is an extra path difference from the distance the light that reflects off the top travels before the second ray exits the film parallel to it. This is segment $AD$ in the diagram. Some plane geometry (try it yourself!) gives the length of $AD$ as:

$$AD = 2d \tan\theta_2 \sin\theta_1.$$

The total extra path difference accounting for the index of refraction is therefore:

$$\frac{2nd}{\cos\theta_2} - 2d \tan\theta_2 \sin\theta_1 = 2nd \cos\theta_2.$$

Using the expression for $\theta_2$ in terms of $\theta_1$ from Snell's law and the fact that the $\pi$ phase shift puts the rays perfectly out of phase, one finds the condition for destructive interference, where $m$ is any integer:

$$2nd \cos\theta_2 = m\lambda \implies 2nd \cos\!\left(\sin^{-1}\!\left(\frac{\sin\theta_1}{n}\right)\right) = m\lambda.$$
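This condition can be turned into a short numerical sketch. The Python function below (a hypothetical helper, not from the text) solves the condition above for the film thickness that produces destructive interference of a given order $m$:

```python
import math

def destructive_thickness(m, wavelength, n, theta1):
    """Film thickness d satisfying 2 n d cos(theta2) = m * lambda,
    where theta2 follows from Snell's law: sin(theta1) = n sin(theta2)."""
    theta2 = math.asin(math.sin(theta1) / n)
    return m * wavelength / (2 * n * math.cos(theta2))

# At normal incidence (theta1 = 0) this reduces to d = m * lambda / (2n).
# Illustrative values: first order, 500 nm light on a soapy-water film (n ~ 1.33).
d = destructive_thickness(1, 500e-9, 1.33, 0.0)
```

At normal incidence the formula collapses to $d = m\lambda/2n$, a useful sanity check on the general expression.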

The concept of a relative phase shift is also responsible for the experimental technique of interferometry, which was used, for instance, at LIGO to discover gravitational waves. Interferometers send laser light down and back along two perpendicular tubes and measure the interference pattern where the light rays recombine. If the length of either arm is slightly longer or shorter than the other, the light picks up a small relative phase, which is measured via the interference pattern.
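As a rough sketch of the idea (assuming an idealized two-beam interferometer with equal-amplitude arms, a simplification not spelled out in the text), the normalized output intensity as a function of the round-trip path-length difference between the arms is:

```python
import math

def fringe_intensity(delta_L, wavelength):
    """Normalized two-beam interference intensity I/I0 = cos^2(phi/2),
    where phi = 2*pi*delta_L/lambda is the relative phase produced by a
    round-trip path-length difference delta_L between the two arms."""
    phi = 2 * math.pi * delta_L / wavelength
    return math.cos(phi / 2) ** 2

# Equal arms give constructive interference (I/I0 = 1); a half-wavelength
# path difference gives a dark output port.
```

The steep slope of $\cos^2$ near a dark fringe is what makes interferometers such sensitive length probes: a tiny change in $\Delta L$ produces a measurable change in intensity.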
