Interesting topic, but one thing should be made clear for people without an engineering/applied mathematics background.
The FFT is not the same as the Fourier transform. Most of the time the distinction is irrelevant, but in this case it truly does matter. The transformed signal measured with the lens is a sampling of the continuous-space Fourier transform (CSFT). The FFT computes the discrete Fourier transform (DFT). The two are not always equivalent.
I bring this up because the author uses the terms "Fourier transform" and "FFT" interchangeably.
You are correct; the FFT or DFT is not the same thing as the continuous-time Fourier transform (CFT).
This technical report[1] examining the relation between the DFT and CFT shows in eq. 20 that the DFT is just the CFT evaluated at ω = 2πk/(NT). As N (the number of pixels) becomes large, this approaches the CFT.
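If anyone wants to see that convergence numerically, here's a minimal numpy sketch (the Gaussian test signal and the conventions are mine, chosen because its CFT is known in closed form):

    import numpy as np

    # Check the claim directly: T * DFT[k] ~ CFT evaluated at w_k = 2*pi*k/(N*T)
    N, T = 4096, 0.01            # number of samples, sample spacing
    sigma = 0.5
    t = (np.arange(N) - N // 2) * T
    f = np.exp(-t**2 / (2 * sigma**2))

    # shifts put t=0 (and later w=0) where the DFT expects them
    F_dft = T * np.fft.fftshift(np.fft.fft(np.fft.ifftshift(f)))
    w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=T))

    # analytic CFT of the Gaussian: sigma*sqrt(2*pi)*exp(-sigma^2*w^2/2)
    F_cft = sigma * np.sqrt(2 * np.pi) * np.exp(-sigma**2 * w**2 / 2)

    print(abs(F_dft.real - F_cft).max())   # tiny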
I left this detail out; I wanted to put this in terms the reader knew.
If you just left out one "F" from "FFT" it would be more precise and less confusing. Is there any reason to bring the Fast Fourier Transform algorithm up at all?
Also, this ignores that Fourier optics only holds in the paraxial approximation (i.e. the small-angle approximation). I'd put the rule of thumb somewhere around a numerical aperture of 0.3 [0] -- beyond that, polarization effects start to come into play, and beyond 0.5-0.6 it becomes an issue in, for example, microscopy.
[0] http://en.wikipedia.org/wiki/Numerical_aperture -- Similar to f-number from photography, and describes how big the aperture of the lens is in relation to its focal length. Specifically, it's the sine of the angle from the optical axis to the ray extending from the focal point to the edge of the lens, for a single lens system.
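For concreteness, the definition, plus the small-angle link to the photographic f-number N = f/D (this approximation assumes air, n = 1):

    \mathrm{NA} = n \sin\theta, \qquad \mathrm{NA} \approx \frac{1}{2N}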
From experience, you are being overly conservative. For one thing, polarization effects depend on the rigidity of your surface and the bandwidth of your light source, and the correction is minimal. Most importantly, oil immersion at 1.4 NA is common and nobody complains.
>> Also, this ignores that Fourier optics
There is another good point here, though: after making the Born approximation you get this kind of dish shape in the Fourier transform, but on the other hand the dish is compensated on both sides of the lens.
Ah yeah, to be fair it's not a _huge_ correction, but in what I do (single-mode confocal microscopy) it definitely becomes an issue around NA=0.3 when trying to characterize spot sizes and collection efficiency functions. Namely, the overlap integrals can change significantly if your source is also polarized. Also, I'm probably more sensitive to these things than most since I'm at the single-photon level usually.
One interesting point is why the far-field amplitude of light diffracted by the lens is a spatial Fourier transform. The best explanation I know of is in Chapter 21 of the Feynman Lectures on Physics (vol. 2), accessible online (thanks to Caltech) at http://www.feynmanlectures.caltech.edu/II_21.html#Ch21-S3
In short: moving charges create electromagnetic radiation (light). Since light travels at a finite speed, when a wiggle of light reaches us, we're actually seeing the imprint of the motion of the source charges at an earlier point in time. This earlier time is clearly related to how far the observer is from the source: if the distance is r, then at time t we are seeing the wiggle of the source charge at the earlier time t - r/c. For a collection of source charges we need to integrate the delayed source potential over the volume of the source, so it's intuitively clear why that would turn into a Fourier transform (made concrete below).
There are technicalities about the far field and the 1/r fall-off, which I've glossed over; you can find full details in Chapter 21. The final line is a gem: 'You will not, then, be surprised to find that the laws of electricity and magnetism are already correct for Einstein's relativity. We will not have to “fix them up,” as we had to do for Newton's laws of mechanics.'
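To make that last step a little more concrete (my notation, for a single frequency ω = kc, with n the unit vector towards the observer): in the far field the distance from a source point r' to the observer is approximately R - n·r', so the delayed-potential integral factors into a spherical wave times a spatial Fourier transform of the source density, evaluated at kn:

    \lvert \mathbf{r} - \mathbf{r}' \rvert \approx R - \hat{\mathbf{n}} \cdot \mathbf{r}'
    \quad \Longrightarrow \quad
    A(\mathbf{r}, t) \;\propto\; \frac{e^{i(kR - \omega t)}}{R}
    \int \rho(\mathbf{r}')\, e^{-i k \, \hat{\mathbf{n}} \cdot \mathbf{r}'} \, d^3 r'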
Wow, this brings back some memories! I spent a long time doing research in this area. You can do very cool stuff by realizing that a spatial light pattern propagates as a Fourier transform with a "spherical" term in the integral. A perfect convex lens at the focal plane cancels this term, hence the Fourier transform; and if the laser propagates a sufficiently large distance, the term vanishes on its own, so a lensless, diffractive projector will always be in focus.
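For reference, that "spherical" term is the quadratic phase factor inside the Fresnel diffraction integral (standard form; u is the field in the aperture plane, z the propagation distance):

    U(x, y) \;\propto\; \frac{e^{ikz}}{i \lambda z} \, e^{\frac{ik}{2z}(x^2 + y^2)}
    \iint u(\xi, \eta)\,
    e^{\frac{ik}{2z}(\xi^2 + \eta^2)}\,
    e^{-\frac{ik}{z}(x \xi + y \eta)} \, d\xi \, d\eta

A thin lens of focal length f multiplies the field by exp(-ik(ξ² + η²)/2f), which cancels that factor exactly at z = f; and in the Fraunhofer limit z ≫ k(ξ² + η²)/2 it becomes negligible on its own -- the "sufficiently large distance" case.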
Note that you need a coherent, planar light source like an expanded laser beam.
My favorite example of using a lens for an FT is creating a dark-field image. You just put a small beam block in the center of a pupil plane and block all of the low-frequency components. You end up with bright lines only at the edges of things, which are the high-frequency components of the image.
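You can play the same trick in software, which makes a nice sanity check (a minimal sketch; the 9x9 "beam block" size is arbitrary):

    import numpy as np

    # Dark-field imaging in software: zero out the low spatial frequencies
    # at the centre of the (shifted) FFT -- the "pupil plane" -- and see
    # that mostly edges survive. The image is a square on a dark background.
    img = np.zeros((256, 256))
    img[96:160, 96:160] = 1.0

    F = np.fft.fftshift(np.fft.fft2(img))    # pupil plane
    cy, cx = 128, 128
    F[cy-4:cy+5, cx-4:cx+5] = 0              # the beam block
    dark = np.abs(np.fft.ifft2(np.fft.ifftshift(F)))**2

    # a horizontal cut through the middle: intensity spikes at the edges
    print(dark[128, 90:105].round(4))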
Also "google images" isn't an image source, and just because you found something on the internet doesn't mean it is free to use.
There are already several comments here from people who are knowledgeable on the topic, so I might not be fully correct in the following:
I think X-ray crystallography applies similar techniques to work backwards from an image of diffracted light to the crystallized molecular structure that created it. For example, you have some compound, say DNA, and want to know its molecular shape. I think one of the methods that Rosalind Franklin and company used was to take crystallized DNA, shoot X-rays at it, and study the resulting diffraction pattern(s) to determine that DNA had to be a helical structure with atoms bonding at such-and-such angles. And if that's not clear enough, I hope it doesn't escape you that this immediately suggests a mechanism for DNA replication, hah.
That's hastily written and just ties together random facts from undergrad, so feel free to correct/add/disprove at will; I'm sure there are some commenters who know way more, in much more detail. I do miss the full-time learning days!
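For what it's worth, a toy numpy version of the idea: in the simplest (kinematic) model the measured diffraction pattern is |FT of the electron density|^2, and for anything crystalline that's a grid of sharp Bragg peaks (the toy 2-D "crystal" below is mine, obviously not DNA):

    import numpy as np

    # A perfect 2-D lattice of point scatterers, period 16 pixels
    density = np.zeros((256, 256))
    density[::16, ::16] = 1.0

    # "Diffraction pattern" = squared magnitude of the Fourier transform
    pattern = np.abs(np.fft.fft2(density))**2

    # All the energy sits on the reciprocal lattice: every 256/16 = 16 pixels
    peaks = np.argwhere(pattern > 0.5 * pattern.max())
    print(peaks[:4])                         # [[0 0] [0 16] [0 32] [0 48]]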
Yes, the FFT is an algorithm: a fast way of computing the discrete Fourier transform on a computer.
Speaking of optics: in the famous slit experiment, where you can see the diffraction patterns of light passing through one or more slits, the far-field pattern is the Fraunhofer pattern, the 2D Fourier transform of the slit: http://physics.stackexchange.com/questions/94852/why-is-the-...
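That's easy to verify numerically, too -- the far-field intensity of a single slit should be a sinc^2 (a quick sketch; grid size and slit width are arbitrary):

    import numpy as np

    # Single slit of width `slit` samples in an N-sample aperture plane
    N, slit = 4096, 64
    aperture = np.zeros(N)
    aperture[N//2 - slit//2 : N//2 + slit//2] = 1.0

    # Fraunhofer intensity = |FT of the aperture|^2, normalised
    I = np.abs(np.fft.fftshift(np.fft.fft(aperture)))**2
    I /= I.max()

    # Compare with sinc^2 (np.sinc(x) = sin(pi x)/(pi x))
    x = np.fft.fftshift(np.fft.fftfreq(N)) * slit
    print(abs(I - np.sinc(x)**2).max())      # small; exact in the continuum limit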
I think the most memorable thing I learned in quantum mechanics is that quantum mechanical scattering is also exactly a Fourier transform. Suddenly Bragg's law became much more sensible.
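Concretely (with the caveats the reply below spells out): in the first Born approximation the scattering amplitude is the Fourier transform of the potential, evaluated at the momentum transfer q = k' - k:

    f(\mathbf{q}) \;\propto\; \int V(\mathbf{r})\, e^{-i \mathbf{q} \cdot \mathbf{r}} \, d^3 r

Bragg's law then drops out because the Fourier transform of a periodic potential is non-zero only on the reciprocal lattice.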
No, this holds only under the first Born approximation, in the far field, and without the 'dish' shape intrinsic to propagation. A lot of the time this model doesn't work. Also, now you are in momentum space, which doesn't map immediately to the x, y, z coordinates you might have started with. And then there are the evanescent fields that the Ewald sphere didn't include...
Hehe, well, thanks for the correction. Yes on all of these things, I'm sure.
I should have said "the exact solution for the most basic justifiable approximation". Honestly, it's been 9 years since I last did quantum seriously, so that was the extent that I remembered.
Deriving Bragg's law in front of my quals committee and refusing to use "mirror planes" despite it being a Materials Science Ph.D. was just viscerally satisfying. It may not have been absolutely correct math, but at least it was way more justifiable than the typical accepted "solution" in my field =)
Ben Krasnow also did a good piece on this, showing a system that he built with a 4f correlator. Of course, his piece is full of things that only look buildable at home if you have infinite resources on hand ... but it really does drive home just how possible it is to build this system, and it provides a very intuitive view of how it works.
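For anyone who wants to play without the optics bench, here's roughly what a 4f correlator computes, in numpy (a sketch with a matched filter; the details of Krasnow's actual setup differ):

    import numpy as np

    # 4f correlator in software: lens 1 = FFT, a filter in the Fourier
    # plane, lens 2 = another FFT. A matched filter (conjugate of the
    # target's spectrum) turns the output into a cross-correlation, with
    # a bright peak wherever the target appears in the scene.
    scene = np.zeros((128, 128))
    scene[40:48, 40:48] = 1.0                # the target, hidden at (40, 40)
    target = np.zeros_like(scene)
    target[:8, :8] = 1.0

    S = np.fft.fft2(scene)                   # lens 1
    H = np.conj(np.fft.fft2(target))         # matched filter
    out = np.abs(np.fft.ifft2(S * H))**2     # lens 2 (up to a flip/scale)

    print(np.unravel_index(out.argmax(), out.shape))   # (40, 40)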
A very interesting application of this point of view was a projector proposed by Light Blue Optics. The projector would in principle be nearly 100% efficient: it would essentially steer the light to the desired locations, rather than blocking it where it was not desired.
Unfortunately that near-100%-efficiency wasn't quite realised in practice, for several reasons. I think I can describe some of them without giving away anything commercially sensitive:
1. The spatial light modulator we used didn't give anything like the 180 degrees of phase rotation outlined there, which means that a large fraction of the light landing on it passed straight through.
2. Having only two phase states leads, as mentioned on that page, to a conjugate image with as much light in it as the one we actually want (there's a small numerical sketch of this effect below).
3. All that random noise does indeed average out nicely when you have many "subframes". But since there's no such thing as negative light, parts of the image that are meant to be black will inevitably end up something other than black. So there's a loss of contrast, which means that some of the light in the image isn't really doing you much good.
4. The optical design has a bunch of lenses and mirrors and things in it, and every surface is an opportunity to lose a little bit of light.
The actual optical efficiency figure was, let's say, somewhat less than 100%. (Also, for every frame we displayed we had to compute a lot of Fourier transforms, and the compute hardware takes power too. Which wouldn't matter for large mains-powered projectors, but is more of an issue when you're trying to make a small low-power device for mobile use.)
We had some next-generation technology in the works that would (if brought to completion) have fixed most of these issues and produced better efficiency along with better image quality -- but then we made the (very sensible) decision to get out of the picoprojection market completely.
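Point 2 is easy to see numerically, if anyone is curious: binarising the phase makes the hologram real-valued, and the FFT of a real array is Hermitian-symmetric, so the conjugate image is baked in. A toy sketch (all the numbers are arbitrary):

    import numpy as np

    # One "subframe": target amplitude with random phase, backed through
    # an inverse FFT to get the ideal hologram, then quantised to two
    # phase states (0 and pi, i.e. +1 and -1).
    rng = np.random.default_rng(0)
    target = np.zeros((128, 128))
    target[20:40, 70:110] = 1.0              # off-centre so the twin is visible

    field = target * np.exp(2j * np.pi * rng.random(target.shape))
    holo = np.fft.ifft2(field)
    binary = np.where(holo.real >= 0, 1.0, -1.0)

    replay = np.abs(np.fft.fft2(binary))**2  # what you'd see on the screen

    # twin[y, x] = replay[-y, -x]: the point-reflected conjugate image
    twin = np.roll(np.flip(replay), 1, axis=(0, 1))
    print(replay[20:40, 70:110].sum(), twin[20:40, 70:110].sum())  # equal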
Fascinating, I've wondered a few times over the past year or so why the ideas were shelved.
Each of your points is very illuminating. It's such a nice idea that it's a shame it turned out not to be worth pursuing from a business point of view.
Perhaps some day somebody will take up those next-generation ideas!