
Coincidentally, I was just reading through the wiki pages on Fourier transforms the other evening, and having a fantastically hard time wrapping my head around some parts of it. This may be the wrong place to ask, but does anyone have on hand a good intuitive description of the Fourier transform (directed more towards the programmer than the data scientist)?


I don't know any data scientists, or even what one is really, but here's the EE explanation: the Fourier transform maps a signal in the time domain to the frequency domain, and vice versa. So if you have a sine wave in the time domain [say something at 3 kHz: y = sin(2π · 3000 · t)], the Fourier transform would turn that into a pair of Dirac deltas (spikes that are infinitely thin but have an area of one) at f = ±3000 Hz. Or, if you had a rectangular pulse, the Fourier transform would map that to a sinc [sinc(x) = sin(x)/x].
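You can see the sine-to-spikes picture directly with a discrete transform. A quick numpy sketch (my own example, not the commenter's; the sample rate and names are arbitrary):

```python
import numpy as np

fs = 48_000                       # sample rate in Hz (assumed for the example)
t = np.arange(fs) / fs            # one second of samples
x = np.sin(2 * np.pi * 3000 * t)  # a 3 kHz sine wave

X = np.fft.fft(x)
freqs = np.fft.fftfreq(len(x), d=1/fs)

# The spectrum is (numerically) zero everywhere except at +-3000 Hz --
# the discrete analogue of the pair of Dirac deltas.
peaks = freqs[np.abs(X) > 1.0]
print(sorted(peaks))              # [-3000.0, 3000.0]
```

The deltas show up as a single pair of nonzero bins because the sine fits an exact integer number of cycles into the sampled window.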

The Fourier transform is related to the Laplace transform, which maps signals from the time domain to the "s domain", where points are described with complex coordinates that carry information both about the frequencies of the signal's periodic components and about any transients (starting conditions, in layman's terms).


Another approach to understanding Fourier transforms at the esteemed BetterExplained: http://betterexplained.com/articles/an-interactive-guide-to-...

This is more about the actual Fourier Transform operation itself, rather than the FFT algorithm. OTOH, it has whizzy interactive animations.

The thing that made it simpler to me was the fact that an FT takes us from an oscilloscope to a spectrum analyser and back again.


If I have a time-varying signal containing a sinewave at some frequency f, and if I want to detect that signal, I could try generating a bunch of sinewaves that were changing at different rates and multiply each by the original signal of interest over time. If one of the sinewaves was close enough in frequency to the signal of interest, the sequential multiplication in the time domain would leave a nonzero result. I could call that a spectral line that identified the original signal. For those test frequencies not matched to the incoming signal, the sequential multiplications would produce a zero result.

It's sort of like tuning a radio, trying to detect a narrow incoming signal -- the signal is only detectable at a specific receiver frequency. The difference is the DFT tunes a spectrum of frequencies simultaneously, and presents its results as a set of spectral lines covering a specified frequency domain.

All the players in this story are complex numbers. This allows an unambiguous multiplication result when the signal is compared to various test frequencies -- for a single-frequency signal, the result is zero except at the frequency of interest.

The FFT is a refinement of the DFT that minimizes the amount of computation -- O(N log N) operations instead of the DFT's O(N²) -- but its results are identical to the DFT.
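The "multiply by test sinusoids and accumulate" description above is literally the DFT. A sketch of the naive version (my own code, not the commenter's), checked against numpy's FFT:

```python
import numpy as np

def dft(x):
    """Naive DFT: correlate the signal against one test frequency per bin."""
    N = len(x)
    n = np.arange(N)
    X = np.empty(N, dtype=complex)
    for k in range(N):
        probe = np.exp(-2j * np.pi * k * n / N)  # complex test sinusoid
        X[k] = np.sum(x * probe)                 # multiply and accumulate
    return X

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
assert np.allclose(dft(x), np.fft.fft(x))  # identical results, just slower
```

The loop does N multiply-accumulates for each of N bins, hence O(N²); the FFT reorganizes exactly this computation into O(N log N).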

One of the more interesting aspects of Fourier analysis is the role of the Euler formula:

http://en.wikipedia.org/wiki/Euler's_formula

Understanding Euler's formula gives one a profound insight into how Fourier and inverse Fourier transforms work. It turns out that an inverse Fourier transform is accomplished by simply changing the sign of a subexpression:

http://arachnoid.com/sage/fourier.html#Fourier_Transform

The same sign change determines whether a DFT converts from time to frequency domain or the reverse:

http://arachnoid.com/sage/fourier.html#Discrete_Fourier_Tran...
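The sign-change point is easy to demonstrate with a matrix form of the DFT (a sketch of my own, not from the linked pages; the 1/N scaling follows the usual convention):

```python
import numpy as np

def transform(x, sign):
    """DFT with exponent sign -1; inverse DFT (up to 1/N) with sign +1."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(sign * 2j * np.pi * np.outer(n, n) / N)
    return W @ np.asarray(x, dtype=complex)

x = np.array([1.0, 2.0, 3.0, 4.0])
X = transform(x, -1)              # time -> frequency
back = transform(X, +1) / len(x)  # frequency -> time: same code, flipped sign
assert np.allclose(back, x)
```

The forward and inverse transforms share every line except the sign of the exponent (and a scale factor).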

HTH


You have a time series of N points. You could use normal curve fitting to fit a constant function plus N-1 sine and cosine terms, and it turns out there is always a perfect fit, whatever your N points are. The discrete Fourier transform is "just" a faster way to compute the same result.
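You can check the curve-fitting claim directly with least squares. A sketch (my own example), using complex exponentials as the basis so the fit coefficients can be compared to numpy's FFT:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(16)  # any N points
N = len(x)
n = np.arange(N)

# Design matrix whose columns are the sinusoids e^{2*pi*i*k*n/N}.
# Fitting x = M @ c by ordinary least squares always has an exact solution,
# because M is invertible for any N.
M = np.exp(2j * np.pi * np.outer(n, np.arange(N)) / N)
c, *_ = np.linalg.lstsq(M, x.astype(complex), rcond=None)

assert np.allclose(M @ c, x)              # a perfect fit, whatever the points
assert np.allclose(c, np.fft.fft(x) / N)  # the same coefficients the DFT computes
```

The fit is always perfect because the N sinusoids are linearly independent, so the design matrix is square and invertible; the DFT just computes the solution in O(N log N) instead of solving a linear system.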




Here's a more mathy way of understanding it. You know how Taylor series work in Calculus? You have some differentiable function and you are able to approximate the function arbitrarily well by adding up more and more polynomial terms.

Turns out you can do the same thing with linear combinations of sines and cosines, with some slightly different conditions on the function. In particular, we need the function to be periodic. This is all that's happening: by adding up sines and cosines so they cancel out "just so," we can approximate other functions. The Fourier transform is the tool by which we calculate the coefficients of the various sines and cosines in the linear combination.
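To make the "adding up sines so they cancel just so" idea concrete, here's a sketch (my own example) approximating a square wave with its standard Fourier partial sums:

```python
import numpy as np

x = np.linspace(0, 1, 1000, endpoint=False)
square = np.sign(np.sin(2 * np.pi * x))  # the periodic function to approximate

# Fourier series of a square wave: (4/pi) * sum over odd k of sin(2*pi*k*x)/k.
def partial_sum(x, terms):
    s = np.zeros_like(x)
    for k in range(1, 2 * terms, 2):  # odd harmonics only
        s += (4 / np.pi) * np.sin(2 * np.pi * k * x) / k
    return s

# More terms -> a better approximation (away from the jump points).
err = [np.mean(np.abs(square - partial_sum(x, n))) for n in (1, 10, 100)]
assert err[0] > err[1] > err[2]
```

Each added harmonic cancels a bit more of the remaining error, which is exactly the "cancel out just so" behavior described above.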

Now, if we relax the requirement that the function be periodic, we get the "integral" form of the Fourier transform. We do this by slowly extending the permitted period to infinity, since any function on a closed interval can be treated as one period of a periodic function. This shouldn't be too surprising, as the integral is in a pretty specific sense the "continuous" version of a sum.

When people talk about "frequency domain" what they mean is the Fourier transform "at x" is the contribution of "frequency x" to the overall value of the function. I give you the frequency, you give me its contribution.

In fact, there's a very general theorem called the Stone-Weierstraß Theorem which tells us when you can approximate a function with linear combinations from a set of other functions. You can then go through a similar process to determine the extent to which each of these other functions contribute to the value of the original.

This is a lot of hand waving and will most likely upset any mathematicians in the audience. I left out mountains of details. :)

If you're comfortable with the idea of vector spaces and orthonormal bases, then the existence of the Fourier series is equivalent to the statement that linear combinations of sines and cosines have a dense span in the space of square-integrable functions, i.e. those whose squared magnitude has a finite integral over the entire real line. Yes, our vectors are functions and the sines and cosines don't quite form a basis, but they do form a "dense span," which is another way of saying that for a given function there's some linear combination that approximates it up to any level of precision we desire.

Here's another statement that wraps up Fourier transforms, Fourier series, and all that in one go. Let C([0,1]/{0,1}) be the space of functions continuous on [0,1] where we identify 0 and 1 as the same point to obtain a circle. Consider the family of functions f_n(x) = e^{2πinx} where n is an integer. Then the set {f_n : n is an integer} has a dense linear span in C([0,1]/{0,1}).

One consequence of this is that the f_n form an orthonormal basis for L^2([0,1]), the space of "square-integrable functions" on [0,1] (http://en.wikipedia.org/wiki/Square-integrable_function).
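The orthonormality of the f_n under the L^2 inner product is easy to verify numerically. A sketch (my own check, not the commenter's), approximating the integral with a Riemann sum:

```python
import numpy as np

x = np.linspace(0, 1, 10_000, endpoint=False)
dx = x[1] - x[0]

def f(n):
    return np.exp(2j * np.pi * n * x)

# <f_n, f_m> = integral over [0,1] of f_n times the conjugate of f_m
def inner(n, m):
    return np.sum(f(n) * np.conj(f(m))) * dx

assert abs(inner(3, 3) - 1) < 1e-6  # unit norm
assert abs(inner(3, 5)) < 1e-6      # distinct frequencies are orthogonal
```

The cross terms vanish because e^{2πi(n-m)x} completes whole cycles over [0,1] whenever n ≠ m.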

Remember that e^{ix} = cos(x) + i sin(x), so f_n(x) = e^{2πinx} = cos(2πnx) + i sin(2πnx), which is where the sines and cosines come from.
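Euler's formula itself can be spot-checked numerically (a trivial sketch of my own):

```python
import numpy as np

n = 3
xs = np.linspace(0, 1, 100)
lhs = np.exp(2j * np.pi * n * xs)
rhs = np.cos(2 * np.pi * n * xs) + 1j * np.sin(2 * np.pi * n * xs)
assert np.allclose(lhs, rhs)  # e^{2*pi*i*n*x} = cos + i*sin, pointwise
```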

Sorry for the fact that "pi" and "n" look so similar. It's HN's font.

Anyhow, the above may or may not have been useful, I just thought I'd throw in a more mathy explanation than one usually sees. It didn't really click for me until I was able to relate what was happening with the Fourier series and Fourier transform to how I understood vector spaces and bases.



