Circuit Theory/Phasor Analysis

Phasor Analysis

The mathematical representations of individual circuit elements can be converted into phasor notation, and then the circuit can be solved using phasors.

Resistance, Impedance and Admittance

In phasor notation, resistance, capacitance, and inductance can all be lumped together into a single term called "impedance". The phasor used for impedance is Z, V is the voltage phasor, and I is the current phasor. Ohm's law for phasors becomes:

V = ZI

It is important to note at this point that Ohm's Law still holds true even when we switch from the time domain to the phasor domain. Impedance is still measured in units of ohms, and admittance (like conductance, its DC counterpart) is still measured in units of siemens. Let's take a closer look at this equation:

Z = V / I

If we break this up into polar notation, we get the following result:

Z = (|V| / |I|) ∠(θV − θI)

Resistors

Resistors do not affect the phase of the voltage or current, only the magnitude: ZR = R.

Capacitors

A capacitor with a capacitance of C has a phasor value:

ZC = 1 / (jωC)

where ω is the angular frequency of the source.
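The phasor form of Ohm's law maps directly onto complex arithmetic. Here is a minimal sketch using Python's built-in complex type; the component values and source are made-up examples, not from the original text:

```python
import cmath
import math

# Series RC circuit driven at angular frequency w (example values)
w = 2 * math.pi * 60       # 60 Hz source
R = 100.0                  # resistance in ohms
C = 10e-6                  # capacitance in farads

Z_R = complex(R, 0)        # resistor: purely real impedance
Z_C = 1 / (1j * w * C)     # capacitor: Z = 1/(jwC), purely imaginary
Z = Z_R + Z_C              # series impedances add

V = 120 * cmath.exp(1j * 0)   # 120 V source phasor at 0 degrees
I = V / Z                     # phasor Ohm's law: I = V / Z

mag, phase = cmath.polar(I)
print(f"|I| = {mag:.4f} A, angle = {math.degrees(phase):.2f} deg")
```

Note how the resistor contributes no phase shift (its impedance is real), while the capacitor's impedance is purely imaginary, which is exactly the phase behaviour the text describes.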

Fractional quantum Hall effect

The fractional quantum Hall effect (FQHE) is a physical phenomenon in which the Hall conductance of 2D electrons shows precisely quantised plateaus at fractional values of e²/h. It is a property of a collective state in which electrons bind magnetic flux lines to make new quasiparticles, and excitations have a fractional elementary charge and possibly also fractional statistics. In the simplest cases the plateaus occur at filling factor ν = 1/q, where q is an odd integer.

Introduction

The fractional quantum Hall effect (FQHE) is a collective behaviour in a two-dimensional system of electrons. Plateaus are observed at filling factors ν = p/q, where p and q are integers with no common factors. There were several major steps in the theory of the FQHE. Laughlin states and fractionally charged quasiparticles: this theory, proposed by Laughlin, is based on accurate trial wave functions for the ground state at fraction ν = 1/q, as well as its quasiparticle and quasihole excitations. Fractionally charged quasiparticles are neither bosons nor fermions and exhibit anyonic statistics.

Vieta's formulas

In mathematics, Vieta's formulas are formulas that relate the coefficients of a polynomial to sums and products of its roots. Named after François Viète (more commonly referred to by the Latinised form of his name, Franciscus Vieta), the formulas are used specifically in algebra.

The Laws

Basic formulas

Any general polynomial of degree n

P(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ⋯ + a₁x + a₀

(with the coefficients being real or complex numbers and aₙ ≠ 0) is known by the fundamental theorem of algebra to have n (not necessarily distinct) complex roots x₁, x₂, ..., xₙ. Vieta's formulas state that

x₁ + x₂ + ⋯ + xₙ = −aₙ₋₁/aₙ
x₁x₂⋯xₙ = (−1)ⁿ a₀/aₙ.

Equivalently stated, the (n − k)th coefficient aₙ₋ₖ is related to a signed sum of all possible subproducts of roots, taken k at a time:

∑ x_{i₁} x_{i₂} ⋯ x_{iₖ} = (−1)ᵏ aₙ₋ₖ/aₙ,   summed over 1 ≤ i₁ < i₂ < ⋯ < iₖ ≤ n,

for k = 1, 2, ..., n (where we wrote the indices iₖ in increasing order to ensure each subproduct of roots is used exactly once). The left-hand sides of Vieta's formulas are the elementary symmetric functions of the roots.

Generalization to rings

Vieta's formulas also hold for polynomials with coefficients in a commutative ring R, provided the polynomial factors into linear factors there; the quotients aₙ₋ₖ/aₙ then belong to the ring of fractions of R (or to R itself if aₙ is invertible in R).
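The signed-subproduct identity is easy to verify numerically. A small sketch, using a made-up cubic whose roots are known in advance:

```python
from itertools import combinations
from math import prod

# Example polynomial: p(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3)
coeffs = [1, -6, 11, -6]          # [a_n, ..., a_0] with n = 3
roots = [1, 2, 3]
n = len(roots)
a_n = coeffs[0]

# Check every Vieta identity: e_k(roots) == (-1)^k * a_{n-k} / a_n,
# where e_k is the k-th elementary symmetric function of the roots.
for k in range(1, n + 1):
    e_k = sum(prod(c) for c in combinations(roots, k))
    expected = (-1) ** k * coeffs[k] / a_n
    assert e_k == expected, (k, e_k, expected)
print("all Vieta identities hold")
```

The `combinations(roots, k)` call enumerates exactly the index sets i₁ < i₂ < ⋯ < iₖ from the formula, so each subproduct is counted once.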

Complex or imaginary numbers - A complete course in algebra

The defining property of i
The square root of a negative number
Powers of i
Algebra with complex numbers
The real and imaginary components
Complex conjugates

IN ALGEBRA, we want to be able to say that every polynomial equation has a solution; specifically, this one: x² + 1 = 0. That implies x² = −1. But there is no real number whose square is negative. So we invent a new unit whose square is −1:

i² = −1.

That is the defining property of the complex unit i. In other words, i = √−1. The complex number i is purely algebraic.

Example 1. 3i · 4i = 12i² = 12(−1) = −12.

Example 2. −5i · 6i = −30i² = 30.

We see, then, that the factor i² changes the sign of a product.

The square root of a negative number

If a radicand is negative, √−a where a > 0, then we can simplify it as follows:

√−a = i√a.

In other words: the square root of −a is equal to i times the square root of a.

Powers of i

Let us begin with i⁰, which is 1. Then i¹ = i, i² = −1, i³ = −i, and i⁴ = 1. And we are back at 1; the cycle of powers will repeat. Every power of i is therefore one of

1, i, −1, or −i.
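The four-step cycle of powers can be seen directly in Python, where the imaginary unit is written `1j` (a short sketch, not part of the original course):

```python
# Powers of i repeat with period 4: 1, i, -1, -i, 1, i, -1, -i, ...
i = 1j
cycle = [i ** k for k in range(8)]   # i^0 through i^7
print(cycle)

# The cycle property: i^(k+4) equals i^k for every k
assert all(i ** (k + 4) == i ** k for k in range(4))
```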

Linear complex structure

In mathematics, a complex structure on a real vector space V is an automorphism of V that squares to the minus identity, −I. Such a structure on V allows one to define multiplication by complex scalars in a canonical fashion so as to regard V as a complex vector space. Every even-dimensional real vector space can be equipped with a compatible complex structure; however, there is in general no canonical such structure.

Definition and properties

A complex structure on a real vector space V is a real linear transformation J : V → V such that J² = −id_V. Here J² means J composed with itself and id_V is the identity map on V. Given such a J, scalar multiplication by complex numbers can be defined by

(x + iy)v = xv + yJ(v)

for all real numbers x, y and all vectors v in V. Going in the other direction, if one starts with a complex vector space W then one can define a complex structure on the underlying real space by defining Jw = iw for all w in W.

More abstractly, a complex structure is an algebra representation of the complex numbers C, regarded as an associative algebra over the real numbers. This algebra is realised concretely as C = R[x]/(x² + 1), which corresponds to i² = −1. Then a representation of C is a real vector space V, together with an action of C on V (a map C × V → V).

Examples

Cⁿ

The fundamental example is R²ⁿ with the standard complex structure, under which R²ⁿ is identified with Cⁿ.
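On R², the standard complex structure is the 90-degree rotation matrix. A minimal numerical sketch (the identification of (a, b) with a + bi is the standard one; all numbers are examples):

```python
# The standard complex structure on R^2: J = [[0, -1], [1, 0]], with J^2 = -I.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

J = [[0, -1], [1, 0]]
minus_I = [[-1, 0], [0, -1]]
assert matmul(J, J) == minus_I   # J squares to minus the identity

# Scalar multiplication (x + iy)v = xv + yJ(v) agrees with ordinary complex
# multiplication when (a, b) in R^2 is identified with a + bi.
def scalar_mul(x, y, v):
    Jv = [J[0][0] * v[0] + J[0][1] * v[1], J[1][0] * v[0] + J[1][1] * v[1]]
    return [x * v[0] + y * Jv[0], x * v[1] + y * Jv[1]]

v = [3, 4]                   # represents 3 + 4i
w = scalar_mul(2, 5, v)      # (2 + 5i)(3 + 4i)
z = (2 + 5j) * (3 + 4j)
assert w == [z.real, z.imag]
print(w)  # [-14, 23], i.e. -14 + 23i
```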

Phasor

An example of a series RLC circuit and the respective phasor diagram for a specific ω.

Glossing over some mathematical details, the phasor transform can also be seen as a particular case of the Laplace transform, which additionally can be used to (simultaneously) derive the transient response of an RLC circuit.[10][8] However, the Laplace transform is mathematically more difficult to apply and the effort may be unjustified if only steady-state analysis is required.[10]

Definition

Euler's formula indicates that sinusoids can be represented mathematically as the sum of two complex-valued functions:

A·cos(ωt + θ) = A·(e^(i(ωt+θ)) + e^(−i(ωt+θ)))/2

or as the real part of one of the functions:

A·cos(ωt + θ) = Re{A·e^(iθ)·e^(iωt)}.

The term phasor can refer to either A·e^(iθ)·e^(iωt) or just the complex constant A·e^(iθ). An even more compact shorthand is angle notation: A∠θ. See also vector notation.

A phasor can be considered a vector rotating about the origin in a complex plane. The magnitude of the vector is the amplitude A, and θ represents the angle that the vector forms with the real axis at t = 0.

Phasor arithmetic

In electronics, the imaginary unit is conventionally written j rather than i, to avoid confusion with the symbol for current.
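The payoff of the phasor representation is that sinusoids of the same frequency add by adding their complex constants. A short sketch with made-up amplitudes and phases:

```python
import cmath
import math

# Phasor addition: 3*cos(wt + 30 deg) + 4*cos(wt - 60 deg) as one sinusoid.
A1, th1 = 3.0, math.radians(30)
A2, th2 = 4.0, math.radians(-60)

P = A1 * cmath.exp(1j * th1) + A2 * cmath.exp(1j * th2)  # sum of phasors
A, th = cmath.polar(P)
print(f"{A:.3f} * cos(wt + {math.degrees(th):.2f} deg)")

# Spot-check against the time-domain sum at an arbitrary instant wt = 0.7 rad
wt = 0.7
lhs = A1 * math.cos(wt + th1) + A2 * math.cos(wt + th2)
rhs = A * math.cos(wt + th)
assert abs(lhs - rhs) < 1e-9
```

These two particular phasors happen to be 90 degrees apart, so the resultant amplitude is exactly 5 (a 3-4-5 triangle in the complex plane).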

Cosine

The cosine function cos θ is one of the basic functions encountered in trigonometry (the others being the cosecant, cotangent, secant, sine, and tangent). Let θ be an angle measured counterclockwise from the x-axis along the arc of the unit circle. Then cos θ is the horizontal coordinate of the arc endpoint.

The common schoolbook definition of the cosine of an angle θ in a right triangle (which is equivalent to the definition just given) is as the ratio of the lengths of the side of the triangle adjacent to the angle and the hypotenuse, i.e.

cos θ = adjacent/hypotenuse.

A convenient mnemonic for remembering the definition of the sine, cosine, and tangent is SOHCAHTOA (sine equals opposite over hypotenuse, cosine equals adjacent over hypotenuse, tangent equals opposite over adjacent).

As a result of its definition, the cosine function is periodic with period 2π. It also obeys the identity

sin²θ + cos²θ = 1.

The definition of the cosine function can be extended to complex arguments z using the definition

cos z = (e^(iz) + e^(−iz))/2,

where e is the base of the natural logarithm. The cosine function has a fixed point at 0.739085..., the value of x for which cos x = x.
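The fixed point can be found by simply iterating the cosine, which converges from any real starting value (a small sketch; the starting value 1.0 is arbitrary):

```python
import math

# Iterating cos converges to its unique real fixed point cos(x) = x,
# approximately 0.739085 (the Dottie number).
x = 1.0
for _ in range(100):
    x = math.cos(x)
print(x)
assert abs(math.cos(x) - x) < 1e-12
```

Convergence is linear with ratio |sin(x)| ≈ 0.674 per step, so 100 iterations are far more than enough for double precision.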

Matrix (mathematics)

Each element of a matrix is often denoted by a variable with two subscripts. For instance, a2,1 represents the element at the second row and first column of a matrix A. Applications of matrices are found in most scientific fields. In every branch of physics, including classical mechanics, optics, electromagnetism, quantum mechanics, and quantum electrodynamics, they are used to study physical phenomena, such as the motion of rigid bodies.

The numbers, symbols or expressions in the matrix are called its entries or its elements. The size of a matrix is defined by the number of rows and columns that it contains. Matrices are commonly written in box brackets; an alternative notation uses large parentheses instead of box brackets. The entry in the i-th row and j-th column of a matrix A is sometimes referred to as the i,j entry, (i,j) entry, or (i,j)th entry of the matrix, and is most commonly denoted ai,j or aij. Sometimes, the entries of a matrix can be defined by a formula such as ai,j = f(i, j).
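Defining entries by a formula translates directly into code. A minimal sketch (the formula f(i, j) = i − j and the 3×3 size are example choices, with 1-based indices as in the text):

```python
# Build a matrix from an entry formula a_ij = f(i, j), using 1-based i and j.
def build_matrix(n_rows, n_cols, f):
    return [[f(i, j) for j in range(1, n_cols + 1)]
            for i in range(1, n_rows + 1)]

A = build_matrix(3, 3, lambda i, j: i - j)
print(A)  # [[0, -1, -2], [1, 0, -1], [2, 1, 0]]

# a_{2,1}: second row, first column (Python lists are 0-based, hence A[1][0])
assert A[1][0] == 1
```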

Complex Numbers

A Complex Number is a combination of a Real Number and an Imaginary Number.

Real Numbers are numbers like 1, 12.38, −0.8625, 3/4, and √2. Nearly any number you can think of is a Real Number!

Imaginary Numbers, when squared, give a negative result. Normally this doesn't happen, because:

when we square a positive number we get a positive result, and
when we square a negative number we also get a positive result (because a negative times a negative gives a positive), for example −2 × −2 = +4

But just imagine such numbers exist, because we will need them. The "unit" imaginary number (like 1 for Real Numbers) is i, which is the square root of −1, because when we square i we get −1:

i² = −1

Examples of Imaginary Numbers: 3i, 1.04i, −2.8i. And we keep that little "i" there to remind us we need to multiply by √−1.

Complex Numbers

A Complex Number is a combination of a Real Number and an Imaginary Number. Examples: 3 + 2i, 5 − i.

Can a Number be a Combination of Two Numbers? Can we make up a number from two other numbers? We do it with fractions all the time.
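Python carries this combination of two numbers as a built-in type, which makes the rules above easy to try out (a minimal illustration; the particular numbers are examples):

```python
# Python writes the imaginary unit as 1j; a complex number pairs a real
# and an imaginary part, just as the text describes.
z = 3 + 2j
assert z.real == 3 and z.imag == 2

assert (1j) ** 2 == -1        # the defining property i^2 = -1
assert (-2) * (-2) == 4       # squaring a negative real gives a positive

print(z + (5 - 1j))  # (8+1j)
```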

Complexification

Formal definition

Let V be a real vector space. The complexification of V is defined by taking the tensor product of V with the complex numbers (thought of as a two-dimensional vector space over the reals):

V_C = V ⊗_R C.

The subscript R on the tensor product indicates that the tensor product is taken over the real numbers (since V is a real vector space this is the only sensible option anyway, so the subscript can safely be omitted). As it stands, V_C is only a real vector space. However, we can make V_C into a complex vector space by defining complex multiplication as follows:

α(v ⊗ β) = v ⊗ (αβ)   for all v in V and α, β in C.

More generally, complexification is an example of extension of scalars – here extending scalars from the real numbers to the complex numbers – which can be done for any field extension, or indeed for any morphism of rings. Formally, complexification is a functor Vect_R → Vect_C, from the category of real vector spaces to the category of complex vector spaces.

Basic properties

By the nature of the tensor product, every vector in V_C can be written uniquely in the form

v₁ ⊗ 1 + v₂ ⊗ i,

where v₁ and v₂ are vectors in V; it is common to drop the tensor product symbol and write this as v₁ + iv₂.
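As a small worked example (not from the original text): a real basis of V stays a complex basis of V_C, so complexifying R² yields C², and the decomposition into v₁ ⊗ 1 + v₂ ⊗ i is just the splitting into real and imaginary coordinate parts:

```latex
% Complexifying V = R^2: a basis {e_1, e_2} of R^2 over R is also a
% basis of (R^2)_C over C, hence
(\mathbb{R}^2)_{\mathbb{C}}
  = \mathbb{R}^2 \otimes_{\mathbb{R}} \mathbb{C} \cong \mathbb{C}^2,
% and a typical element decomposes as
(a_1 + i b_1)\, e_1 + (a_2 + i b_2)\, e_2
  = \underbrace{(a_1 e_1 + a_2 e_2)}_{v_1} \otimes 1
  + \underbrace{(b_1 e_1 + b_2 e_2)}_{v_2} \otimes i .
```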

AC power

The blinking of non-incandescent city lights is shown in this motion-blurred long exposure. The AC nature of the mains power is revealed by the dashed appearance of the traces of moving lights.

Real, reactive, and apparent power

In a simple alternating current (AC) circuit consisting of a source and a linear load, both the current and voltage are sinusoidal. If the load is purely resistive, the two quantities reverse their polarity at the same time. If the load is purely reactive, then the voltage and current are 90 degrees out of phase. Practical loads have resistance, inductance, and capacitance, so both real and reactive power will flow to them. Engineers care about apparent power, because even though the current associated with reactive power does no work at the load, it heats the wires, wasting energy. Conventionally, capacitors are considered to generate reactive power and inductors to consume it. The complex power S is the vector sum of real and reactive power: S = P + jQ, where P is the real power (watts) and Q is the reactive power (volt-amperes reactive); the apparent power is the magnitude |S| (volt-amperes).
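The split into real, reactive, and apparent power falls out of one complex multiplication. A minimal sketch using the standard convention S = V·conj(I) for RMS phasors; the voltage, current, and phase lag are made-up example values:

```python
import math

# Complex power S = V * conj(I) for RMS phasors (example numbers).
V = 230 * complex(math.cos(0.0), math.sin(0.0))     # 230 V RMS at 0 rad
I = 10 * complex(math.cos(-0.5), math.sin(-0.5))    # 10 A RMS, lagging 0.5 rad

S = V * I.conjugate()
P, Q = S.real, S.imag          # real power (W) and reactive power (var)
apparent = abs(S)              # apparent power (VA)
pf = P / apparent              # power factor

print(f"P = {P:.1f} W, Q = {Q:.1f} var, |S| = {apparent:.1f} VA, pf = {pf:.3f}")
```

With a lagging (inductive) current, Q comes out positive, matching the convention that inductors consume reactive power.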
