Fourier Transformers

Previous posts in this series:

- Get Seriesous - Pt. 1
- Get Seriesous - Pt. 2
- Get Seriesous - Pt. 3

This will hopefully be my last installment on Fourier series. Then we can move on to Fourier transforms and some signal processing. (For those of you who are wondering, I don't have any intention at this point to cover things like Laplace or z-transforms. The friend who requested my help is in geology and would like the information to examine geological signals. I realize this is just a bump in the road for electrical engineering folks, though.)

The reason I started with the trigonometric Fourier series (and I suspect this is why most textbooks take the same approach) is that they are extremely intuitive. We've been working with sines and cosines since high school. However, they are a bit more computationally intense than the exponential Fourier series, and they're limited to real signals.

I'm not going to go into the derivation of the exponential Fourier series from the trigonometric series. I apologize if this seems terse. There are, however, textbooks which have nice explanations. I'll merely say that we can get one from the other using Euler's identity:

$$e^{j\theta} = \cos(\theta) + j\sin(\theta). \qquad (1)$$

We can then use this along with our definition of the trigonometric Fourier series to find the exponential Fourier series, which is:

$$f(x) = \sum_{n=-\infty}^{\infty} D_n\, e^{jn\omega_0 x}, \qquad (2)$$

where $\omega_0 = 2\pi/T$ is the fundamental frequency and $T$ is the period of the signal.

The big changes we see here are that our trig functions have been replaced by the exponential, we have a single coefficient, and $n$ runs from $-\infty$ to $+\infty$, whereas the index for the trigonometric series started at 0.

The coefficient for the exponential series is

$$D_n = \frac{1}{T}\int_{T} f(x)\, e^{-jn\omega_0 x}\, dx, \qquad (3)$$

where the integral is taken over any one period. This looks remarkably similar to the forms for the coefficients of the trig series. It turns out that the relationship between the coefficients of the exponential series and those of the trig series isn't quite as straightforward as you'd first imagine...at least, depending on your particular reference and how it chooses its notation.
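Since there's no code in the original post, here's a minimal Python sketch of evaluating (3) numerically. The function name, the midpoint-rule choice, and the square-wave test signal are all my own additions, not from the post:

```python
import numpy as np

def exp_fourier_coeff(f, T, n, num=20000):
    """Approximate D_n = (1/T) * integral over one period of
    f(x) * exp(-j*n*w0*x) dx, with w0 = 2*pi/T, via the midpoint rule."""
    w0 = 2 * np.pi / T
    dx = T / num
    x = -T / 2 + dx * (np.arange(num) + 0.5)  # midpoints spanning one period
    return np.sum(f(x) * np.exp(-1j * n * w0 * x)) * dx / T

# Toy check: a square wave (+1 then -1) with period T = 2 has D_1 = -2j/pi
square = lambda x: np.sign(np.sin(np.pi * x))
D1 = exp_fourier_coeff(square, 2.0, 1)
```

This is how you'd compute coefficients for a measured signal that has no nice closed form, which is usually the situation with real data.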

The easiest case is that of $a_0$. It turns out that by setting $n = 0$, your equation for $D_0$ ends up being the same as that for $a_0$: both are just the average value of $f(x)$ over one period.

If you go through the trouble of rewriting (3) in terms of your trig functions, you'll end up with

$$D_n = \frac{1}{2}\left(a_n - jb_n\right)$$

and

$$D_{-n} = \frac{1}{2}\left(a_n + jb_n\right).$$
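If you'd rather check those relations numerically than push through the algebra, here's a quick sketch. The test signal is my own choice, picked so its trig coefficients can be read off by inspection:

```python
import numpy as np

# f(x) = 1 + cos(w0*x) + 2*sin(3*w0*x), so by inspection
# a_0 = 1, a_1 = 1, b_3 = 2, and every other trig coefficient is zero.
T = 2 * np.pi
w0 = 2 * np.pi / T
x = np.linspace(0, T, 100001)[:-1]  # one period, endpoint dropped
f = 1 + np.cos(w0 * x) + 2 * np.sin(3 * w0 * x)

def D(n):
    # Discrete version of (3): average of f(x)*exp(-j*n*w0*x) over one period
    return np.mean(f * np.exp(-1j * n * w0 * x))

# Expect D_0 = a_0 = 1, D_1 = (a_1 - j*b_1)/2 = 1/2, D_3 = (a_3 - j*b_3)/2 = -j
print(D(0), D(1), D(3))
```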

(I should mention that some sources use $c$ instead of $D$. That's perfectly fine, but other sources use $c$ to denote the cosine Fourier series (sometimes called the compact Fourier series), which is why I used $D_n$.)

For the last step, I'm going to go back and redo the example of the sawtooth function from the previous post. As you'll recall, we approximated the function $f(x) = x$ on $(-1, 1)$, which gives $T = 2$ and $\omega_0 = \pi$. When we plug that into (3), we get

$$D_n = \frac{1}{2}\int_{-1}^{1} x\, e^{-jn\pi x}\, dx,$$

which results in

$$D_n = \frac{j\cos(n\pi)}{n\pi} - \frac{j\sin(n\pi)}{(n\pi)^2} \quad (n \neq 0).$$

When we did this before, I pointed out the odd and even values of the function. Here, it's not as obvious what's going on, so I'll just say that we can think of each term $D_n$ matching up with $D_{-n}$ to create some sort of sinusoid using Euler's identity above.
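That pairing can be made concrete. For a real signal the coefficients come in conjugate pairs, $D_{-n} = D_n^{*}$, so each pair collapses into a real sinusoid:

$$D_n e^{jn\omega_0 x} + D_{-n} e^{-jn\omega_0 x}
= 2\,\mathrm{Re}\!\left\{ D_n e^{jn\omega_0 x} \right\}
= 2\left|D_n\right| \cos\!\left(n\omega_0 x + \angle D_n\right).$$

(This is essentially Euler's identity (1) run in reverse: the magnitude of $D_n$ sets the amplitude and its angle sets the phase.)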

Again, I evaluated a few of the coefficients (both negative and positive). Since $\sin(n\pi) = 0$ and $\cos(n\pi) = (-1)^n$, the resulting coefficients can be simplified to

$$D_n = \frac{j(-1)^n}{n\pi}.$$
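As a sanity check (my own addition, not from the original post), that closed form can be compared against a direct numerical evaluation of (3) for the sawtooth:

```python
import numpy as np

# f(x) = x on (-1, 1): T = 2, w0 = pi. Evaluate (3) by the midpoint rule
# and compare with the closed form D_n = j*(-1)^n / (n*pi).
T, w0, num = 2.0, np.pi, 200000
dx = T / num
x = -1 + dx * (np.arange(num) + 0.5)

for n in (1, 2, 3, -1, -2):
    numeric = np.sum(x * np.exp(-1j * n * w0 * x)) * dx / T
    closed = 1j * (-1) ** n / (n * np.pi)
    print(n, abs(numeric - closed))  # differences should be tiny
```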

When I plotted this versus $f(x)$ using coefficients from $-25$ to $25$, I got the following:

[Plot: the partial-sum reconstruction overlaid on $f(x) = x$]

This looks remarkably like the other plots, eh?
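Here's roughly how that reconstruction can be reproduced in Python (the plotting itself is omitted, and the grid size is my own choice):

```python
import numpy as np

# Rebuild the sawtooth from D_n = j*(-1)^n / (n*pi), summing n = -25..25.
x = np.linspace(-1, 1, 1001)
approx = np.zeros_like(x, dtype=complex)
for n in range(-25, 26):
    if n == 0:
        continue  # D_0 = 0: the sawtooth has zero average value
    approx += 1j * (-1) ** n / (n * np.pi) * np.exp(1j * n * np.pi * x)

# The D_n / D_{-n} pairs cancel the imaginary parts, leaving a real signal
approx = approx.real
# approx now tracks f(x) = x away from the endpoints
```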

Alright. We have now covered trigonometric Fourier series and discussed a bit about how you can work these into the general Fourier series (or exponential series). I personally find the trig series easier to understand because they're intuitive. Exponential series may not be, but they can at least be mathematically derived from the trig series. However, they're more general, as I mentioned above, because we can use complex values for our coefficients (like we did in the example) and are not confined to a real-valued function for f(x).

There are a couple other things I want to mention. First, I've been using the function f(x) the whole time. Usually, however, we're dealing with a signal that is a function of time. It's just a superficial change to go from x → t and f(x) → f(t). I'm mentioning this because when I get to Fourier transforms, I may choose to move back into the time domain rather than using something generally considered to be a function with spatial dependence.

Another thing I'll mention again is the Gibbs phenomenon. The ringing at the endpoints is a function of the discontinuity (i.e., the big jump) in the signal. If you're dealing with a smooth signal (something composed of sinusoids, for example, rather than something with jumps or sharp turns), you won't have to worry too much about the Gibbs phenomenon. However, there'll be a bit more on this later, because when processing your data you also have to think about the window you're using. The gist of the discussion is that sharp points and jumps will make your results messier than they need to be.
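If you want to see the Gibbs overshoot numerically rather than in a plot, here's a small sketch (my own, using the trig form of the sawtooth series): the peak near the jump overshoots by roughly 9% of the jump height and refuses to shrink as terms are added:

```python
import numpy as np

def partial_sum(x, N):
    """N-term trig partial sum for the sawtooth f(x) = x on (-1, 1)."""
    out = np.zeros_like(x)
    for n in range(1, N + 1):
        out += 2 * (-1) ** (n + 1) / (n * np.pi) * np.sin(n * np.pi * x)
    return out

# Look near the jump at x = 1: f tops out at 1, but the partial sum
# keeps peaking around 1.18 no matter how large N gets.
x = np.linspace(0.9, 1.0, 5001)
for N in (25, 100, 400):
    print(N, partial_sum(x, N).max())
```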

The final thing to keep in mind is that we have effectively created a function, in this case $D_n$, that is dependent on $n$, or if you like, on $n\omega_0$. In other words, we have a function that has a relationship to the frequency content of the signal we've approximated. This will be the main point of Fourier transforms. So with that, I think I've covered enough to move on...in a later post.