
Wednesday, 01 November 2017

In physics and engineering, a phasor (a portmanteau of phase vector) is a complex number representing a sinusoidal function whose amplitude (A), angular frequency (ω), and initial phase (θ) are time-invariant. It is related to a more general concept called analytic representation, which decomposes a sinusoid into the product of a complex constant and a factor that encapsulates the frequency and time dependence. The complex constant, which encapsulates the amplitude and phase, is known as a phasor or complex amplitude, and (in older texts) as a sinor or even a complexor.

A common situation in electrical networks is the existence of multiple sinusoids all with the same frequency, but different amplitudes and phases. The only difference in their analytic representations is the complex amplitude (phasor). A linear combination of such functions can be factored into the product of a linear combination of phasors (known as phasor arithmetic) and the time/frequency dependent factor that they all have in common.

The origin of the term phasor rightfully suggests that a (diagrammatic) calculus somewhat similar to that possible for vectors is possible for phasors as well. An important additional feature of the phasor transform is that differentiation and integration of sinusoidal signals (having constant amplitude, period and phase) corresponds to simple algebraic operations on the phasors; the phasor transform thus allows the analysis (calculation) of the AC steady state of RLC circuits by solving simple algebraic equations (albeit with complex coefficients) in the phasor domain instead of solving differential equations (with real coefficients) in the time domain. The originator of the phasor transform was Charles Proteus Steinmetz working at General Electric in the late 19th century.

Glossing over some mathematical details, the phasor transform can also be seen as a particular case of the Laplace transform, which additionally can be used to (simultaneously) derive the transient response of an RLC circuit. However, the Laplace transform is mathematically more difficult to apply and the effort may be unjustified if only steady state analysis is required.

Definition




Euler's formula indicates that sinusoids can be represented mathematically as the sum of two complex-valued functions:

{\displaystyle A\cdot \cos(\omega t+\theta )=A\cdot {\frac {e^{i(\omega t+\theta )}+e^{-i(\omega t+\theta )}}{2}},}

or as the real part of one of the functions:

{\displaystyle A\cdot \cos(\omega t+\theta )=\operatorname {Re} \{A\cdot e^{i(\omega t+\theta )}\}=\operatorname {Re} \{Ae^{i\theta }\cdot e^{i\omega t}\}.}

The function A·e^{i(ωt+θ)} is the analytic representation of A·cos(ωt+θ). It can be visualized as a rotating vector in a complex plane. It is sometimes convenient to refer to the entire function as a phasor, as we do in the next section. But the term phasor usually implies just the static vector Ae^{iθ}. An even more compact representation of a phasor is the angle notation: A∠θ. See also vector notation.
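To make this concrete, here is a minimal Python sketch (the values of A, θ and ω are purely illustrative) showing that the real part of the static phasor Ae^{iθ} multiplied by e^{iωt} reproduces the original cosine:

```python
import cmath, math

# Illustrative constants (not taken from the text): amplitude, phase, angular frequency.
A, theta, omega = 2.0, math.pi / 6, 2 * math.pi * 50

phasor = A * cmath.exp(1j * theta)                   # static phasor A*e^{i*theta}

for t in (0.0, 1e-3, 2.5e-3):
    rotating = phasor * cmath.exp(1j * omega * t)    # analytic representation A*e^{i(omega*t+theta)}
    sinusoid = A * math.cos(omega * t + theta)       # the real sinusoid
    assert abs(rotating.real - sinusoid) < 1e-12     # Re{...} recovers A*cos(omega*t+theta)
```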

Phasor arithmetic




Multiplication by a constant (scalar)

Multiplication of the phasor Ae^{iθ}e^{iωt} by a complex constant Be^{iφ} produces another phasor. That means its only effect is to change the amplitude and phase of the underlying sinusoid:

{\displaystyle {\begin{aligned}\operatorname {Re} \{(Ae^{i\theta }\cdot Be^{i\phi })\cdot e^{i\omega t}\}&=\operatorname {Re} \{(ABe^{i(\theta +\phi )})\cdot e^{i\omega t}\}\\&=AB\cos(\omega t+(\theta +\phi ))\end{aligned}}}

In electronics, Be^{iφ} would represent an impedance, which is independent of time. In particular, it is not the shorthand notation for another phasor. Multiplying a phasor current by an impedance produces a phasor voltage. But the product of two phasors (or squaring a phasor) would represent the product of two sinusoids, which is a non-linear operation that produces new frequency components. Phasor notation can only represent systems with one frequency, such as a linear system stimulated by a sinusoid.
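As a small illustration of this rule, the sketch below multiplies a hypothetical current phasor by a hypothetical impedance; the magnitudes multiply and the angles add, exactly as in the identity above (all numeric values are made up):

```python
import cmath, math

I = 3.0 * cmath.exp(-1j * math.radians(30))   # hypothetical current phasor: 3 A at -30 degrees
Z = complex(4.0, 3.0)                         # hypothetical impedance: 5 ohm at about +36.87 degrees

V = Z * I                                     # phasor voltage: |V| = |Z||I|, angle(V) = angle(Z) + angle(I)
print(abs(V), math.degrees(cmath.phase(V)))   # -> 15.0 and roughly 6.87 degrees
```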

Differentiation and integration

The time derivative or integral of a phasor produces another phasor. For example:

{\displaystyle \operatorname {Re} \left\{{\frac {d}{dt}}(Ae^{i\theta }\cdot e^{i\omega t})\right\}=\operatorname {Re} \{Ae^{i\theta }\cdot i\omega e^{i\omega t}\}=\operatorname {Re} \{Ae^{i\theta }\cdot e^{i\pi /2}\omega e^{i\omega t}\}=\operatorname {Re} \{\omega Ae^{i(\theta +\pi /2)}\cdot e^{i\omega t}\}=\omega A\cdot \cos(\omega t+\theta +\pi /2)}

Therefore, in phasor representation, the time derivative of a sinusoid becomes just multiplication by the constant iω = ωe^{iπ/2}.

Similarly, integrating a phasor corresponds to multiplication by 1/(iω) = e^{−iπ/2}/ω. The time-dependent factor e^{iωt} is unaffected.
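The following sketch checks both rules numerically for an arbitrary phasor (the constants are illustrative): multiplying the phasor by iω gives the derivative of the sinusoid, and dividing by iω gives its zero-mean antiderivative.

```python
import cmath, math

A, theta, omega = 1.5, 0.4, 100.0          # illustrative constants
P = A * cmath.exp(1j * theta)              # phasor of A*cos(omega*t + theta)

dP = 1j * omega * P                        # phasor of the time derivative
iP = P / (1j * omega)                      # phasor of the (zero-mean) time integral

t = 0.007
assert abs((dP * cmath.exp(1j * omega * t)).real
           - (-A * omega * math.sin(omega * t + theta))) < 1e-9   # d/dt of A*cos is -A*omega*sin
assert abs((iP * cmath.exp(1j * omega * t)).real
           - (A / omega * math.sin(omega * t + theta))) < 1e-9    # integral of A*cos is (A/omega)*sin
```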

When we solve a linear differential equation with phasor arithmetic, we are merely factoring e^{iωt} out of all terms of the equation, and reinserting it into the answer. For example, consider the following differential equation for the voltage across the capacitor in an RC circuit:

{\displaystyle {\frac {d\ v_{C}(t)}{dt}}+{\frac {1}{RC}}v_{C}(t)={\frac {1}{RC}}v_{S}(t)}

When the voltage source in this circuit is sinusoidal:

{\displaystyle v_{S}(t)=V_{P}\cdot \cos(\omega t+\theta ),}

we may substitute:

{\displaystyle v_{S}(t)=\operatorname {Re} \{V_{s}\cdot e^{i\omega t}\}}
{\displaystyle v_{C}(t)=\operatorname {Re} \{V_{c}\cdot e^{i\omega t}\},}

where phasor V_s = V_P e^{iθ}, and phasor V_c is the unknown quantity to be determined.

In the phasor shorthand notation, the differential equation reduces to:

{\displaystyle i\omega V_{c}+{\frac {1}{RC}}V_{c}={\frac {1}{RC}}V_{s}}

Solving for the phasor capacitor voltage gives:

{\displaystyle V_{c}={\frac {1}{1+i\omega RC}}\cdot V_{s}={\frac {1-i\omega RC}{1+(\omega RC)^{2}}}\cdot V_{P}e^{i\theta }}

As we have seen, the factor multiplying V_s represents the differences in amplitude and phase of v_C(t) relative to V_P and θ.

In polar coordinate form, it is:

{\displaystyle {\frac {1}{\sqrt {1+(\omega RC)^{2}}}}\cdot e^{-i\phi (\omega )},{\text{ where }}\phi (\omega )=\arctan(\omega RC).}

Therefore:

{\displaystyle v_{C}(t)={\frac {1}{\sqrt {1+(\omega RC)^{2}}}}\cdot V_{P}\cos(\omega t+\theta -\phi (\omega ))}
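The whole RC example can be reproduced in a few lines; the sketch below (component values are arbitrary) solves for the phasor V_c and confirms that it reproduces the closed-form amplitude and phase shift just derived:

```python
import cmath, math

R, C = 1e3, 1e-6                           # arbitrary example values: 1 kOhm, 1 uF
omega = 2 * math.pi * 50                   # 50 Hz source
V_P, theta = 10.0, 0.0                     # 10 V peak, zero initial phase

V_s = V_P * cmath.exp(1j * theta)          # source phasor
V_c = V_s / (1 + 1j * omega * R * C)       # phasor solution of the differential equation

gain = 1 / math.sqrt(1 + (omega * R * C) ** 2)
phi = math.atan(omega * R * C)
for t in (0.0, 1e-3, 4e-3):
    from_phasor = (V_c * cmath.exp(1j * omega * t)).real
    closed_form = gain * V_P * math.cos(omega * t + theta - phi)
    assert abs(from_phasor - closed_form) < 1e-9
```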

Addition

The sum of multiple phasors produces another phasor. That is because the sum of sinusoids with the same frequency is also a sinusoid with that frequency:

{\displaystyle {\begin{aligned}A_{1}\cos(\omega t+\theta _{1})+A_{2}\cos(\omega t+\theta _{2})&=\operatorname {Re} \{A_{1}e^{i\theta _{1}}e^{i\omega t}\}+\operatorname {Re} \{A_{2}e^{i\theta _{2}}e^{i\omega t}\}\\[8pt]&=\operatorname {Re} \{A_{1}e^{i\theta _{1}}e^{i\omega t}+A_{2}e^{i\theta _{2}}e^{i\omega t}\}\\[8pt]&=\operatorname {Re} \{(A_{1}e^{i\theta _{1}}+A_{2}e^{i\theta _{2}})e^{i\omega t}\}\\[8pt]&=\operatorname {Re} \{(A_{3}e^{i\theta _{3}})e^{i\omega t}\}\\[8pt]&=A_{3}\cos(\omega t+\theta _{3}),\end{aligned}}}

where:

{\displaystyle A_{3}^{2}=(A_{1}\cos \theta _{1}+A_{2}\cos \theta _{2})^{2}+(A_{1}\sin \theta _{1}+A_{2}\sin \theta _{2})^{2},}
{\displaystyle \theta _{3}=\arctan \left({\frac {A_{1}\sin \theta _{1}+A_{2}\sin \theta _{2}}{A_{1}\cos \theta _{1}+A_{2}\cos \theta _{2}}}\right)}

or, via the law of cosines on the complex plane (or the trigonometric identity for angle differences):

{\displaystyle A_{3}^{2}=A_{1}^{2}+A_{2}^{2}-2A_{1}A_{2}\cos(180^{\circ }-\Delta \theta )=A_{1}^{2}+A_{2}^{2}+2A_{1}A_{2}\cos(\Delta \theta ),}

where Δθ = θ_1 − θ_2. A key point is that A_3 and θ_3 do not depend on ω or t, which is what makes phasor notation possible. The time and frequency dependence can be suppressed and re-inserted into the outcome as long as the only operations used in between are ones that produce another phasor. In angle notation, the operation shown above is written:

{\displaystyle A_{1}\angle \theta _{1}+A_{2}\angle \theta _{2}=A_{3}\angle \theta _{3}.}
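In code, phasor addition is just complex addition; the sketch below (with two made-up sinusoids) computes A_3 and θ_3 both ways and shows they agree. Note that atan2 is used in place of arctan so the correct quadrant is chosen automatically.

```python
import cmath, math

A1, th1 = 3.0, math.radians(20)      # made-up sinusoid 1: A1*cos(omega*t + th1)
A2, th2 = 5.0, math.radians(-75)     # made-up sinusoid 2: A2*cos(omega*t + th2)

P3 = A1 * cmath.exp(1j * th1) + A2 * cmath.exp(1j * th2)   # phasor (complex) addition
A3, th3 = abs(P3), cmath.phase(P3)                         # magnitude A3 and angle th3

# The rectangular-component formulas quoted above give the same result.
re = A1 * math.cos(th1) + A2 * math.cos(th2)
im = A1 * math.sin(th1) + A2 * math.sin(th2)
assert abs(A3 - math.hypot(re, im)) < 1e-12
assert abs(th3 - math.atan2(im, re)) < 1e-12
```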

Another way to view addition is that two vectors with coordinates [A1 cos(ωt + θ1), A1 sin(ωt + θ1)] and [A2 cos(ωt + θ2), A2 sin(ωt + θ2)] are added vectorially to produce a resultant vector with coordinates [A3 cos(ωt + θ3), A3 sin(ωt + θ3)].

In physics, this sort of addition occurs when sinusoids interfere with each other, constructively or destructively. The static vector concept provides useful insight into questions like this: "What phase difference would be required between three identical sinusoids for perfect cancellation?" In this case, simply imagine taking three vectors of equal length and placing them head to tail such that the last head matches up with the first tail. Clearly, the shape which satisfies these conditions is an equilateral triangle, so the angle from each phasor to the next is 120° (2π/3 radians), or one third of a wavelength, λ/3. So the phase difference between each wave must also be 120°, as is the case in three-phase power.

In other words, what this shows is:

{\displaystyle \cos(\omega t)+\cos(\omega t+2\pi /3)+\cos(\omega t-2\pi /3)=0.}

In the example of three waves, the phase difference between the first and the last wave was 240 degrees, while for two waves destructive interference happens at 180 degrees. In the limit of many waves, the phasors must form a closed circle for destructive interference, so that the first phasor is nearly parallel with the last. This means that for many sources, destructive interference happens when the first and last wave differ by 360 degrees, a full wavelength λ. This is why in single-slit diffraction the first minimum occurs when light from the far edge travels a full wavelength further than the light from the near edge.
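The cancellation argument is easy to check numerically. The sketch below verifies both the three-phase identity above and the "closed circle" limit for an arbitrary number of equally spaced phasors (N = 8 is an arbitrary illustrative choice):

```python
import cmath, math

# Three identical unit phasors spaced 120 degrees apart sum to zero.
three = sum(cmath.exp(1j * 2 * math.pi * k / 3) for k in range(3))
assert abs(three) < 1e-12

# More generally, N phasors whose phases are spread uniformly over a full turn
# close back on themselves and also cancel (the "circle" limit described above).
N = 8
ring = sum(cmath.exp(1j * 2 * math.pi * k / N) for k in range(N))
assert abs(ring) < 1e-12
```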


As the single vector rotates in an anti-clockwise direction, its tip at point A traces one complete revolution of 360° or 2π radians, representing one complete cycle. If the position of the moving tip is plotted against time at successive angular intervals, a sinusoidal waveform is traced out, starting at the left with zero time. Each position along the horizontal axis indicates the time that has elapsed since zero time, t = 0. When the vector is horizontal, the tip of the vector represents the angles 0°, 180° and 360°.

Likewise, when the tip of the vector is vertical it represents the positive peak value (+Am) at 90° or π/2, and the negative peak value (−Am) at 270° or 3π/2. The time axis of the waveform then represents the angle, in either degrees or radians, through which the phasor has moved. So we can say that a phasor represents a scaled voltage or current value of a rotating vector which is "frozen" at some point in time t, for example at an angle of 30°.

Sometimes when we are analysing alternating waveforms we may need to know the position of the phasor representing the alternating quantity at some particular instant in time, especially when we want to compare two different waveforms on the same axis, for example voltage and current. We have assumed above that the waveform starts at time t = 0 with a corresponding phase angle in either degrees or radians.

But if a second waveform starts to the left or to the right of this zero point, or if we want to represent in phasor notation the relationship between the two waveforms, then we will need to take this phase difference, Φ, between the waveforms into account.
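As a brief illustration (the waveforms here are hypothetical), the phase difference Φ between two waveforms of the same frequency can be read directly from their phasors, independently of where t = 0 is placed:

```python
import cmath, math

# Hypothetical pair of waveforms: v(t) = 10*cos(wt + 30 deg), i(t) = 2*cos(wt - 15 deg).
V = 10.0 * cmath.exp(1j * math.radians(30))
I = 2.0 * cmath.exp(1j * math.radians(-15))

# Phi is just the difference of the phasor angles; here the current lags the voltage.
phi = math.degrees(cmath.phase(V) - cmath.phase(I))
print(phi)   # -> 45.0 degrees
```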

Applications




Circuit laws

With phasors, the techniques for solving DC circuits can be applied to solve AC circuits. A list of the basic laws is given below.

  • Ohm's law for resistors: a resistor introduces no time delay and therefore does not change the phase of a signal, so V = IR remains valid.
  • Ohm's law for resistors, inductors, and capacitors: V = IZ, where Z is the complex impedance.
  • In an AC circuit we have real power (P), which is a representation of the average power into the circuit, and reactive power (Q), which indicates power flowing back and forth. We can also define the complex power S = P + jQ and the apparent power, which is the magnitude of S. The power law for an AC circuit expressed in phasors is then S = VI* (where I* is the complex conjugate of I, and I and V are the RMS values of the voltage and current).
  • Kirchhoff's circuit laws work with phasors in complex form.

Given this we can apply the techniques of analysis of resistive circuits with phasors to analyze single-frequency AC circuits containing resistors, capacitors, and inductors. Multiple-frequency linear AC circuits and AC circuits with different waveforms can be analyzed to find voltages and currents by transforming all waveforms to sine wave components with magnitude and phase, then analyzing each frequency separately, as allowed by the superposition theorem.
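As a worked sketch of these laws (the component values and source voltage are invented for illustration), the snippet below computes the current and complex power in a series RLC branch using V = IZ and S = VI*:

```python
import cmath, math

R, L, C = 10.0, 50e-3, 100e-6                   # invented series RLC values
omega = 2 * math.pi * 50
V = 230.0 + 0j                                  # rms voltage phasor, taken as the reference

Z = R + 1j * omega * L + 1 / (1j * omega * C)   # series impedances simply add
I = V / Z                                       # Ohm's law with complex impedance
S = V * I.conjugate()                           # complex power: S = P + jQ

print(abs(I), math.degrees(cmath.phase(I)))     # rms current and its phase angle
print(S.real, S.imag, abs(S))                   # real power P, reactive power Q, apparent power |S|
```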

Power engineering

In analysis of three phase AC power systems, usually a set of phasors is defined as the three complex cube roots of unity, graphically represented as unit magnitudes at angles of 0, 120 and 240 degrees. By treating polyphase AC circuit quantities as phasors, balanced circuits can be simplified and unbalanced circuits can be treated as an algebraic combination of symmetrical circuits. This approach greatly simplifies the work required in electrical calculations of voltage drop, power flow, and short-circuit currents. In the context of power systems analysis, the phase angle is often given in degrees, and the magnitude in rms value rather than the peak amplitude of the sinusoid.
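A minimal sketch of the symmetrical-component decomposition mentioned above, using the operator a = 1∠120° (one of the complex cube roots of unity) and a made-up set of unbalanced phase voltages:

```python
import cmath, math

a = cmath.exp(1j * 2 * math.pi / 3)             # 1 at 120 degrees, a complex cube root of unity

# Made-up unbalanced phase voltages (rms magnitude, angle in degrees).
Va = 277 * cmath.exp(1j * math.radians(0))
Vb = 260 * cmath.exp(1j * math.radians(-125))
Vc = 290 * cmath.exp(1j * math.radians(118))

# Zero-, positive- and negative-sequence components of the unbalanced set.
V0 = (Va + Vb + Vc) / 3
V1 = (Va + a * Vb + a * a * Vc) / 3
V2 = (Va + a * a * Vb + a * Vc) / 3

for name, V in (("zero", V0), ("positive", V1), ("negative", V2)):
    print(name, round(abs(V), 1), round(math.degrees(cmath.phase(V)), 1))
```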

The technique of synchrophasors uses digital instruments to measure the phasors representing transmission system voltages at widespread points in a transmission network. Differences among the phasors indicate power flow and system stability.

Telecommunications: analog modulations

The rotating frame picture using phasors can be a powerful tool for understanding analog modulations such as amplitude modulation (and its variants) and frequency modulation.

{\displaystyle x(t)=\operatorname {Re} \left\{Ae^{j\theta }\cdot e^{j2\pi f_{0}t}\right\}}, where the term in brackets is viewed as a rotating vector in the complex plane.

The phasor has length A, rotates anti-clockwise at a rate of f_0 revolutions per second, and at time t = 0 makes an angle of θ with respect to the positive real axis.

The waveform x(t) can then be viewed as a projection of this vector onto the real axis.
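For the AM case, the rotating-frame picture says that in a frame spinning with the carrier, the signal is a fixed carrier phasor plus two sideband phasors counter-rotating at ±f_m. The sketch below (all parameters invented) checks this decomposition against the usual time-domain AM expression:

```python
import cmath, math

A, f0, fm, m = 1.0, 1e3, 50.0, 0.5     # invented carrier amplitude/frequency, tone frequency, mod index

def am_direct(t):
    # Standard AM waveform: carrier multiplied by (1 + m*cos(2*pi*fm*t)).
    return A * (1 + m * math.cos(2 * math.pi * fm * t)) * math.cos(2 * math.pi * f0 * t)

def am_phasor(t):
    # Carrier phasor plus two sideband phasors rotating at +fm and -fm in the carrier frame.
    frame = 1 + 0.5 * m * cmath.exp(1j * 2 * math.pi * fm * t) \
              + 0.5 * m * cmath.exp(-1j * 2 * math.pi * fm * t)
    return (A * frame * cmath.exp(1j * 2 * math.pi * f0 * t)).real

for t in (0.0, 1.3e-4, 7.7e-4):
    assert abs(am_direct(t) - am_phasor(t)) < 1e-12
```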

  • AM modulation: phasor diagram of a single tone of frequency f_m
  • FM modulation: phasor diagram of a single tone of frequency f_m

See also




  • In-phase and quadrature components
  • Analytic signal
    • Complex envelope
  • Phase factor, a phasor of unit magnitude

Further reading



  • Douglas C. Giancoli (1989). Physics for Scientists and Engineers. Prentice Hall. ISBN 0-13-666322-2. 
  • Dorf, Richard C.; Tallarida, Ronald J. (1993-07-15). Pocket Book of Electrical Engineering Formulas (1st ed.). Boca Raton, FL: CRC Press. pp. 152–155. ISBN 0849344735.

External links



  • Phasor Phactory
  • Visual Representation of Phasors
  • Polar and Rectangular Notation


 