Get Seriesous - Pt. 2

Before reading this post, you'll want to have read Fourier Transformers and Get Seriesous - Pt. 1.

When I left off on the previous post, we had a formula for the Fourier Series:

f(x) = a_0 + \sum_{n=1}^{\infty} \left[ a_n \cos(nx) + b_n \sin(nx) \right]    (1)

We had talked about the fact that the sines and cosines represented harmonics and were wondering where the coefficients come from. When you're looking at a Fourier Series that represents a signal, you can imagine that you have decomposed the signal into waves of different frequencies.

Making a Fourier Series, though, is a bit like cooking, and as such, you don't want to add the wrong amounts. The coefficients a and b tell you how much of each frequency you need, whether you need a lot or a little. It's like reading the list of ingredients on the recipe, which hopefully tells you if you need teaspoons or tablespoons (or cups or gallons...but almost certainly some non-metric unit of measure).

How are these values determined? The standard way is to assume that you know the function you are trying to decompose, which we will call f(x), and apply it to some formulas. Let's start with a_0. The formula to determine a_0 is:

a_0 = \frac{1}{T} \int_0^T f(x)\,dx .    (2)

If you've had calc, you're probably scratching your head and saying, "Huh! That looks familiar." If you haven't, you're scratching your head and saying, "Huh! That looks like a mess." (In which case, I'm surprised you've made it this far.) Either way, this integral has a particular meaning. It is actually the average of your function f(x) over the period T. You're taking the area under a curve and dividing it by its length...hence, you get an average. In circuits, you tend to think of this as a DC signal. If you're looking at some sort of time signal, you can interpret this as an offset from zero, but nothing that has a periodic component.
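If you'd like to see this with actual numbers, here's a quick sketch in Python/numpy (my own made-up signal, not anything from a book): sample a signal with a known offset over one period and take the average in (2).

    import numpy as np

    # Toy example (my own): a signal with a DC offset of 1.5 plus some sinusoids,
    # sampled over one period T = 2*pi.
    T = 2 * np.pi
    x = np.linspace(0, T, 1001)
    f = 1.5 + np.sin(x) + 0.3 * np.cos(3 * x)

    # Equation (2): the average of f over the period -- area under the curve
    # divided by its length.
    a0 = np.trapz(f, x) / T
    print(a0)   # ~1.5; the sinusoids average out to zero over a full period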

This is where it starts to get interesting...and somewhat mathy. Cosine and sine form a set of orthogonal functions. I'm not going to explain these much, but I will give you an example so you have a way to think about them. Imagine a dot on a graph. Generally, we can describe its position by a set of two numbers, x and y. The horizontal position is described by x and the vertical by y, but x is not dependent on y and y is not dependent on x. However, we need both x and y to find our dot.

Orthogonal functions are like x and y - they are independent of each other, but we need a set of them to describe something.

Having a set of orthogonal functions has some extremely interesting consequences. We have been assuming that we can take a signal and break it into sines and cosines. Because sines and cosines are orthogonal, if we take something called an inner product of two of them, it will end up being zero unless those two components are the same. We're defining an inner product as

\langle f, g \rangle = \int_0^T f(x)\,g(x)\,dx .    (3)

So if we slap a sine and a cosine into (3) for f(x) and g(x) respectively, the result of this calculation will be zero. What is even cooler is that if we instead choose two cosines from our set of harmonics, the result will also be zero unless those cosines have the same frequency. The same applies to two different sine functions: the inner product will be zero unless they have the same frequency. This is only valid when we integrate over the period, T.
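If you don't want to take my word for it, you can check these claims numerically. Here's a little Python/numpy sketch (my own made-up harmonics) that approximates the inner product in (3) for a few combinations over one period:

    import numpy as np

    T = 2 * np.pi
    x = np.linspace(0, T, 10001)

    def inner(f_vals, g_vals):
        # Equation (3): integrate f(x)*g(x) over one period (numerically).
        return np.trapz(f_vals * g_vals, x)

    print(inner(np.sin(2 * x), np.cos(3 * x)))   # ~0: a sine against a cosine
    print(inner(np.cos(2 * x), np.cos(3 * x)))   # ~0: cosines of different frequencies
    print(inner(np.cos(2 * x), np.cos(2 * x)))   # ~pi: same frequency, so it survives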

What this means is we have a handy way of finding the sinusoids that make up a signal. All we have to do is stick our signal into (3) as our f(x) and then try every possible harmonic. If that harmonic isn't part of our signal, we'll get a zero for (3). If it is, the result will be related to how strongly that harmonic contributes to the signal. In other words, it will give us our amplitude.

So now we can get our coefficients a_n and b_n:

a_n = \frac{2}{T} \int_0^T f(x) \cos(nx)\,dx    (4)

and

b_n = \frac{2}{T} \int_0^T f(x) \sin(nx)\,dx .    (5)

(Disclaimer: every book I read has its pet notation. You may see (2), (4), and (5) all reduced by a factor of 2. Also, in writing this, I have assumed a period of 2π. If the period is different, you will have to insert a factor of ω into the arguments of your trig functions, where ω = 2π/T.)
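To see the recipe idea play out, here's a Python/numpy sketch (my own toy signal, whose ingredients we already know, so we can check the answer) that plugs it into (2), (4), and (5) numerically:

    import numpy as np

    T = 2 * np.pi
    x = np.linspace(0, T, 10001)

    # Toy signal: DC offset 0.5, a sine at the fundamental with amplitude 2,
    # and a cosine at the third harmonic with amplitude 0.75.
    f = 0.5 + 2.0 * np.sin(x) + 0.75 * np.cos(3 * x)

    a0 = np.trapz(f, x) / T                             # equation (2) -> ~0.5
    print("a0 =", round(a0, 3))
    for n in range(1, 5):
        an = (2 / T) * np.trapz(f * np.cos(n * x), x)   # equation (4)
        bn = (2 / T) * np.trapz(f * np.sin(n * x), x)   # equation (5)
        print(n, round(an, 3), round(bn, 3))
    # Only b_1 (~2.0) and a_3 (~0.75) survive; every other coefficient is ~0.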

Once we have these, we can use equation (1) to make an approximation to our signal. We can't recreate the signal exactly because we'll never be able to use an infinite number of terms. However, we almost never need that many terms: the higher-order terms have smaller coefficients and can more or less be characterized as noise. Still, it won't be perfect: if there are big jumps in the signal, sometimes we'll get approximations that show the Gibbs phenomenon (small waves or ringing) as well as values that overshoot the jump. Generally, for a well-behaved signal (i.e. something somewhat sinusoidal without big jumps or kinks), it will work fairly well.
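If you want to see that ringing for yourself, here's one more Python/numpy sketch (my own choice of square wave and number of terms) that builds a truncated series for a signal with a big jump:

    import numpy as np

    T = 2 * np.pi
    x = np.linspace(0, T, 2001)
    square = np.where(x < np.pi, 1.0, -1.0)   # +1 for half the period, -1 for the rest

    # Build a truncated series; for this square wave a_0 and the a_n come out ~0,
    # so only the sine terms from equation (5) matter.
    approx = np.zeros_like(x)
    for n in range(1, 40):
        bn = (2 / T) * np.trapz(square * np.sin(n * x), x)
        approx += bn * np.sin(n * x)

    print(square.max(), approx.max())   # the approximation overshoots 1 near the jump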

At this point, we know that we can represent a signal as a combination of an offset (or DC value) and sines and cosines of various frequencies. This is known as a trigonometric Fourier series because it uses trig functions as the basis functions (aka the orthogonal functions). I started with this form because it is very easy to see what's going on. (And, if you're an electrical engineer, you'll be dealing with sinusoids a lot.) In the next post, I will do an example of a signal decomposition. After that, I will address generalized Fourier series (which aren't quite as easy to visualize - hence all the time on the trigonometric series), which will then lead us into Fourier Transforms.

Okay. I'm beat. Have fun and let me know if you have any questions thus far.
Tags: fourier series, math, signals