Fourier Transformers

Get Seriesous - Pt. 1

Get Seriesous - Pt. 2

Alright, post number 3 on the topic of Fourier Series. After Fourier Series, we'll go into Fourier Transforms.

I know, it's a long ride, with lots of bumps...or harmonics? (ba dum ching!) Sorry...

First, it's a good idea to see what ground we've covered. We discussed that we can take a well-behaved signal and approximate it using a Fourier Series. The Fourier Series breaks the signal into three components: an offset, a bunch of sines, and a bunch of cosines. It uses the principle of orthogonality to pick out individual components. Our job is to figure out the offset as well as the coefficients of our sines and cosines. To do this, we use the equations from part 2.

I always like to understand theory, but it's also a good idea to work through a couple examples so that we can see how this works in practice. It's also good to know what sorts of things these are used for.

In electrical engineering, you learn to run a signal through a system of electronics. You can characterize the system (via transforms) algebraically in terms of its dependence on frequency. However, frequency is defined in terms of sinusoids, so if you put another type of signal into your system (say, a square wave or a triangle wave), you may have to decompose the signal into sinusoids, determine the response to each sinusoid, and then add up the responses. (There's an easier way to do it, but we'll get to that a little later.)
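To make that superposition idea concrete, here's a little Python sketch. None of it is from the post: the first-order low-pass `H`, the cutoff, and the helper names are all made up for illustration. Each harmonic of a square wave gets scaled and phase-shifted by the system's response at that frequency, and the individual responses are summed:

```python
import cmath
import math

def H(omega, omega_c):
    """Hypothetical first-order low-pass: H(jw) = 1 / (1 + j*w/w_c)."""
    return 1.0 / (1.0 + 1j * omega / omega_c)

def square_wave_response(t, omega_c, harmonics=50, omega0=1.0):
    """Response of the filter to a unit square wave, harmonic by harmonic.

    The square wave decomposes as a sum over odd n of (4/(n*pi))*sin(n*w0*t).
    Each sinusoid passes through with gain |H| and a phase shift; summing
    the individual responses gives the total output (superposition).
    """
    total = 0.0
    for n in range(1, harmonics + 1, 2):
        amplitude = 4.0 / (n * math.pi)
        h = H(n * omega0, omega_c)
        total += amplitude * abs(h) * math.sin(n * omega0 * t + cmath.phase(h))
    return total

# With the cutoff far above every harmonic, the square wave passes
# through almost unchanged; with the cutoff far below, it's squashed.
print(square_wave_response(math.pi / 2, omega_c=1e6))
print(square_wave_response(math.pi / 2, omega_c=0.01))
```

The point isn't the particular filter; it's that once the input is written as sinusoids, the system only ever has to answer the question "what do you do to a sinusoid at frequency nω?"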

You can also use Fourier Series as a solution to some differential equations in Cartesian coordinates. The cool thing about this is you don't need to have a problem that repeats itself over space. You can sometimes find the solution in a region by assuming that the solution is periodic but that you only need the solution over one period. A prime example, usually covered in an electromagnetics course in either physics or electrical engineering, would be the solution to the "potential in a box" problem (i.e. finding a solution to Laplace's equation) when using separation of variables.

While I'd love to do a potential in a box problem and show off my mad skillz in Mathematica, I think it'll require too much discussion of topics tangential to the point of this discussion. For this post, at least, we'll stick with signal decomposition, i.e. the first example. Specifically, we'll take the example of a sawtooth wave. We'll define our wave such that f(x)=x on x=(-1,1):

We'll say that x is the horizontal axis and f(x) is the vertical axis. Because our signal is periodic, at x=1, the function will make a vertical drop from 1 to -1 and then will continue to rise off to the right. In fact, it will go off to infinity in both directions so that there will be a slanted line moving upward and to the right until it gets to +1. Then it will drop straight down to -1 and repeat itself. The period, T, of the signal is 2 (from -1 to 1).
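If you want to play along at home, here's a quick Python version of this periodic extension (the `sawtooth` helper is my own throwaway illustration, not anything from the post):

```python
import math

def sawtooth(x):
    """Periodic extension of f(x) = x on (-1, 1) with period T = 2."""
    # Shift into [0, 2), then map back down to [-1, 1).
    return ((x + 1) % 2) - 1

# Inside the base period the function is just x...
print(sawtooth(0.5))   # 0.5
# ...and it repeats every 2 units, dropping from 1 back toward -1.
print(sawtooth(2.5))   # 0.5
print(sawtooth(1.1))   # close to -0.9 (just past the drop at x = 1)
```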

Now we need to decompose this signal into the three parts mentioned before: the average (a_{o}), the cosine coefficients (a_{n}), and the sine coefficients (b_{n}).

Looking at the plot, we can already tell a couple things. First, the average value of the signal should be zero. That's because, left of zero, the curve lies below the x-axis, giving a negative area, while right of zero it lies above, giving a positive area. Both areas have the same magnitude (1/2), so the net area, and hence the average, is zero.

Just to check, we compute the value of a_{o} using

$$a_o = \frac{1}{T}\int_{-T/2}^{T/2} f(x)\,dx.$$

With the values from our function plugged into the equation, we get

$$a_o = \frac{1}{2}\int_{-1}^{1} x\,dx,$$

which, once evaluated, gives us a value of zero. This means that the average for the function is zero, which is what we suspected.
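If you'd rather check that numerically than by hand, here's a Python sketch (the post itself uses Mathematica; `average_value` is a homemade midpoint-rule integrator I'm adding purely for illustration):

```python
def average_value(f, a, b, steps=10_000):
    """Approximate (1/(b-a)) * integral of f over [a, b] by the midpoint rule."""
    h = (b - a) / steps
    total = sum(f(a + (i + 0.5) * h) for i in range(steps))
    return total * h / (b - a)

# a_o = (1/T) * integral of x over one period (-1, 1); should be zero.
a_o = average_value(lambda x: x, -1.0, 1.0)
print(a_o)  # effectively zero
```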

Next, we can see from looking at it that the function isn't even in the sense that f(x)=f(-x). That is, it is not symmetric about the y-axis. In fact, it appears to be antisymmetric, or odd. Mathematically, this means that f(-x)=-f(x). Because we know that cosine functions are even and sine functions are odd, we would intuitively expect the coefficients for the cosines (a_{n}) to be small while the coefficients for the sines (b_{n}) will be reasonably large. While this is fairly intuitive for the very simple example I've picked, it will generally not be evident with more complex signals.

Let's start by looking at the coefficients for the cosines. From the previous post, we know that the formula to determine a_{n} is

$$a_n = \frac{2}{T}\int_{-T/2}^{T/2} f(x)\cos(n\omega x)\,dx.$$

I know what you're thinking. You're wondering where the heck that ω came from. I mentioned in the previous post that if our period isn't 2π (and in this case, it's 2), we have to use ω. For this example, ω=2π/T=π. Plugging in this and the rest of the values (note that 2/T = 1 here) results in

$$a_n = \int_{-1}^{1} x\cos(n\pi x)\,dx.$$

If you're feeling really industrious, you can solve this using integration by parts. If you're not feeling industrious, you can grab your pocket CRC, and look up the relevant formula there. Or, if you're just plain lazy (like me), you can plug it into Mathematica and find out that those people who did it by hand wasted a lot of time because the result is zero.

That means that all of the coefficients for the cosines, i.e. the even components, are zero. Again, this is not totally unexpected because the signal looks odd. (Mathematically, not as in weird...)
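If you don't feel like trusting Mathematica either, a crude numerical integration tells the same story (again just a Python sketch with a homemade midpoint rule, not anything from the post):

```python
import math

def a_n(n, steps=20_000):
    """a_n = (2/T) * integral of x*cos(n*omega*x) over (-1, 1).

    Here T = 2 and omega = pi, so the 2/T factor is just 1.
    """
    h = 2.0 / steps
    total = 0.0
    for i in range(steps):
        x = -1.0 + (i + 0.5) * h
        total += x * math.cos(n * math.pi * x) * h
    return total

for n in range(1, 6):
    print(n, a_n(n))  # all effectively zero
```

The integrand x·cos(nπx) is an odd function over a symmetric interval, which is the real reason every one of these integrals vanishes.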

The last thing to do is to find the coefficients for the sine functions (the odd portion). We whip out our handy-dandy formula

$$b_n = \frac{2}{T}\int_{-T/2}^{T/2} f(x)\sin(n\omega x)\,dx,$$

which turns into

$$b_n = \int_{-1}^{1} x\sin(n\pi x)\,dx$$

once we introduce our substitutions. Using my weapon of choice (Mathematica), I find that b_{n} is in fact not zero. It ends up being

$$b_n = \frac{2\bigl(\sin(n\pi) - n\pi\cos(n\pi)\bigr)}{n^2\pi^2},$$

which looks sort of icky. Fortunately, it's not. I played around with this by sticking in values of n, 1 through 8. Here's what I got:

| n | b_{n} |
|---|--------|
| 1 | 2/π |
| 2 | -1/π |
| 3 | 2/(3π) |
| 4 | -1/(2π) |
| 5 | 2/(5π) |
| 6 | -1/(3π) |
| 7 | 2/(7π) |
| 8 | -1/(4π) |

Now, looking at this, it is clear that the odd values of n follow one pattern while the evens follow another. For the odd values, a general formula appears to be b_{n}=2/(nπ). The evens follow b_{n}=-2/(nπ). Fortunately, these differ only by sign, so we can write a simpler formula for b_{n} than what we got from Mathematica. That is,

$$b_n = \frac{2(-1)^{n+1}}{n\pi}.$$
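It's easy to check the simplified formula against brute-force numerical integration (a Python sketch; `b_n_exact` and `b_n_numeric` are names I just made up):

```python
import math

def b_n_exact(n):
    """Closed form read off from the table: b_n = 2*(-1)**(n+1) / (n*pi)."""
    return 2.0 * (-1) ** (n + 1) / (n * math.pi)

def b_n_numeric(n, steps=20_000):
    """b_n = (2/T) * integral of x*sin(n*pi*x) over (-1, 1), with 2/T = 1."""
    h = 2.0 / steps
    total = 0.0
    for i in range(steps):
        x = -1.0 + (i + 0.5) * h
        total += x * math.sin(n * math.pi * x) * h
    return total

# The two agree for every n in the table.
for n in range(1, 9):
    print(n, b_n_exact(n), b_n_numeric(n))
```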

Now we're ready to rock and roll. We have all our values for a_{o}, a_{n}, and b_{n}. In this case, the only thing that wasn't zero was b_{n}. But because we have determined all of the coefficients, we now have our Fourier series. We take our coefficients and insert them into

$$f(x) = a_o + \sum_{n=1}^{\infty} a_n\cos(n\omega x) + \sum_{n=1}^{\infty} b_n\sin(n\omega x).$$

Our resulting Fourier series ends up being

$$f(x) = \sum_{n=1}^{\infty} \frac{2(-1)^{n+1}}{n\pi}\sin(n\pi x).$$

Now we have our Fourier series, so we can call it done. It is worth noting, though, that n is supposed to go to infinity, which is a practical impossibility. The best we can do, in reality, is come up with an approximation to the signal based on a finite value of n. It's also worth noting that our signal is well behaved *except* at the end points of each period, where it jumps. This creates problems.

So let's look at the case where n=10, i.e. we take the first 10 terms of the last equation. I whipped up a plot in Mathematica showing our signal and our approximation. You'll notice that the approximation is rather wiggly. This is what is known as the Gibbs phenomenon (also called ringing).

The good news is that it improves if we use a larger value of n. When I plotted it again with n=25, the approximation along most of the plot was much better, although there is still some ringing at the endpoints.
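If you don't have Mathematica handy, the partial sums are easy to play with in Python too (the `partial_sum` helper is mine, not from the post):

```python
import math

def partial_sum(x, terms):
    """First `terms` terms of the sawtooth series:
    sum of 2*(-1)**(n+1)/(n*pi) * sin(n*pi*x) for n = 1..terms."""
    return sum(2.0 * (-1) ** (n + 1) / (n * math.pi) * math.sin(n * math.pi * x)
               for n in range(1, terms + 1))

# Away from the discontinuity, the error shrinks as we add terms...
for terms in (10, 25, 100):
    print(terms, abs(partial_sum(0.5, terms) - 0.5))
# ...but near x = 1 the overshoot (the Gibbs phenomenon) never disappears;
# it just squeezes into a narrower region around the jump.
```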

That's all for right now. I'm going to next go into general Fourier series, but certainly not tonight (or, this morning now).