Good discussion.
There is one huge obstacle in this discussion: TERMINOLOGY.
Specifically, the term frequency “RESOLUTION.”
Let’s get over that obstacle once and for all. People use that term differently. I used the term “resolution” wrong compared to how most in the vibration world would use it (my usage comes from the instrumentation world, where resolution is how finely you can read a result). In my defense, although I used the wrong term, I did explain exactly what I meant.
Many of the authors use that term to mean the ability to separate closely spaced frequencies. The basis for all the statements that zero padding does not improve resolution is that it does not improve the ability to separate frequencies (the same goes for the other techniques). I agree! (*) I have said so from the beginning.
* IRstuff makes a good point: zero padding probably does give some help for the task of separating closely spaced frequencies, based on signal-to-noise considerations (excluding those spaced less than a bin width apart, which can never be separated by any means). And while it’s a good point he brought up, it’s not the point I’ve been trying to make and not what I’m going to discuss here. I’m going to try to explain better what I’ve been saying all along.
The benefit I’m referring to is NOT separating closely spaced frequencies. It is getting the best possible estimate of a given frequency from the given data, for all frequencies that can be improved. This excludes frequency estimates that are impossible to improve from the original data (frequencies separated by less than one bin width). For the sake of argument I’ll also exclude frequencies that are close but more than a bin width apart, because those are a grey area leading inevitably to more discussion and not necessary to make my point. I am talking about frequencies that are spaced far apart. It is STILL important to me to estimate those far-apart frequencies as accurately as we can.

Why? This was illustrated by the example of 11 Jul 12 21:41, culminating in the numbers 1.000, 2.001, 3.007, 4.002, which is very typical of the real-life experience that tells us it’s important to have the best available estimate for ALL frequencies in the spectrum. To review: all numbers in that example are expressed as multiples of running speed. We had running speed harmonics plus a bpfo frequency occurring at 3.014. I am assuming the 3.014, closely spaced to 3.000, cannot be reliably separated into two peaks. If we had analyzed this spectrum with the original FFT bin centers (NOT using the 3 tools), then expressed all the peaks as multiples of the estimated running speed peak, we might get a series of peaks something like 1.000, 2.005, 3.008, 4.010 (just an example; it assumes the first peak gets estimated low so the others come out as high multiples). The outlier in that pattern is not really obvious. When we apply our techniques, we improve the estimates of peaks 1, 2, and 4 and end up with a pattern more like 1.000, 2.001, 3.007, 4.002. Now the outlier in the pattern stands out much more prominently, and we can more easily recognize the 3.007 as a deviation from the pattern which deserves further investigation. (There are likely other clues higher in the frequency spectrum to look for.)

This is not a contrived example; rather it is illustrative of the technique I use almost every time I analyze a spectrum. I use the software’s peak label feature and also its order label feature, such that each (interpolated) peak frequency is shown as a multiple of the (interpolated) fundamental. I look at how close the harmonics are to exact integers and use this as a basis to identify peaks that may not fall in the pattern. And by the way, even though we call it an order label, the same feature can be used with fundamental frequencies other than running speed.
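For anyone who wants to play with this, below is a minimal NumPy sketch of the idea (my own toy code, not any vendor’s software; all signal parameters are invented for illustration): a synthetic record with harmonics at 1x, 2x, 4x plus a bpfo-like tone at 3.014x, peaks refined by parabolic interpolation on dB magnitude (one common flavor of the interpolation technique), then labeled as orders of the interpolated fundamental.

    import numpy as np

    # Hypothetical machine: running-speed harmonics at 1x, 2x, 4x plus a
    # bpfo-like tone at 3.014x.  All numbers here are made up for illustration.
    fs, T, f0 = 1000.0, 4.0, 29.7        # sample rate [Hz], record length [s], run speed [Hz]
    t = np.arange(0, T, 1/fs)
    x = sum(np.sin(2*np.pi*m*f0*t) for m in (1, 2, 4)) + 0.5*np.sin(2*np.pi*3.014*f0*t)

    X = np.abs(np.fft.rfft(x*np.hanning(len(x))))
    XdB = 20*np.log10(X + 1e-12)         # interpolate on dB magnitude
    df = fs/len(x)                       # bin width [Hz]

    # crude local-maximum peak picker, good enough for a clean synthetic signal
    peaks = [k for k in range(1, len(X) - 1)
             if X[k-1] < X[k] > X[k+1] and X[k] > 0.1*X.max()]

    # parabolic (3-point) interpolation -> fractional-bin peak location
    def interp_bin(k):
        a, b, c = XdB[k-1], XdB[k], XdB[k+1]
        return k + 0.5*(a - c)/(a - 2*b + c)

    fpk = [interp_bin(k)*df for k in peaks]
    fund = min(fpk)                      # lowest peak taken as the fundamental
    for f in fpk:
        print(f"{f:8.3f} Hz -> order {f/fund:6.3f}")

The printed orders should come out near 1.000, 2.000, 3.014, 4.000, so the non-integer peak stands out from the pattern, which is the whole point of the order-label habit described above.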
Now that we have established that there is value to precisely identifying the frequency of peaks other than those closely spaced, let me get to my point.
There is leakage created by the windowing process. None of the three techniques I’ve mentioned does anything to reduce it (because they do not change the length of the time window, hence do not change the width of the frequency-domain sinc that gets convolved with the true spectrum). When we combine this spreading with whatever noise is present, we can describe the resulting uncertainty simplistically as an uncertainty band.
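That sinc statement is easy to check numerically. Here is a small sketch (parameters invented: a 100.3 Hz off-bin tone, rectangular window) measuring the half-magnitude width of the spectral peak: the width tracks the record length T and is essentially untouched by the zero-pad factor.

    import numpy as np

    def lobe_width(T, pad, fs=1000.0, f=100.3):
        # half-magnitude width [Hz] of the peak of a rectangular-windowed tone
        x = np.sin(2*np.pi*f*np.arange(0, T, 1/fs))
        N = pad*len(x)                            # zero-padded FFT length
        X = np.abs(np.fft.rfft(x, n=N))
        k = X.argmax()
        lo = k - np.argmax(X[k::-1] < 0.5*X[k])   # first bin below half, going down
        hi = k + np.argmax(X[k:] < 0.5*X[k])      # first bin below half, going up
        return (hi - lo)*fs/N

    for T, pad in [(1.0, 16), (1.0, 64), (2.0, 16)]:
        print(f"T = {T:.0f} s, pad x{pad}: lobe width ~ {lobe_width(T, pad):.2f} Hz")

Going from 16x to 64x padding barely changes the measured width (only the measurement granularity improves), while doubling T cuts the width roughly in half.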
Here’s what the three techniques can’t do:
They CAN’T reduce the width of that uncertainty band
Here’s what the three techniques can do:
They CAN tell us where the center of that uncertainty band should be
(And typically the center of the uncertainty band is also our most accurate estimate. A quick check of this second claim follows.)
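The second claim is easy to demonstrate with the same invented 100.3 Hz tone: as the zero-pad factor grows, the raw peak bin converges on the true frequency, even though (per the sketch above) the lobe width never narrows.

    import numpy as np

    fs, T, f_true = 1000.0, 1.0, 100.3            # same made-up off-bin tone as above
    x = np.sin(2*np.pi*f_true*np.arange(0, T, 1/fs))

    for pad in (1, 4, 16, 64):
        N = pad*len(x)
        X = np.abs(np.fft.rfft(x, n=N))
        f_est = X.argmax()*fs/N                   # raw peak bin, no interpolation
        print(f"pad x{pad:2d}: peak estimate {f_est:9.4f} Hz (true {f_true} Hz)")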
In the case of the DTFT and zero padding, these tools can provide virtually unlimited ability to locate where the center of the uncertainty band should be.
With zero padding, more zeros get us closer to that ideal. With frequency interpolation, our estimate of the peak is moved toward the same ideal point that would be predicted by the other two techniques.
This last paragraph applies where peaks are not closely spaced; closely spaced peaks deserve a separate discussion. The usefulness is in getting the best possible estimate of these peaks from the available time record when more data is not available.
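To make that concrete, here is one more sketch (same invented tone, Hann window this time) comparing the two routes: heavy zero padding versus parabolic interpolation on the dB magnitude of the unpadded spectrum. Both should land on essentially the same point, far closer to the true frequency than the raw bin center.

    import numpy as np

    fs, T, f_true = 1000.0, 1.0, 100.3       # made-up tone, well away from other peaks
    x = np.sin(2*np.pi*f_true*np.arange(0, T, 1/fs)) * np.hanning(int(fs*T))

    # Route 1: heavy zero padding, then take the raw peak bin of the dense spectrum
    N = 64*len(x)
    Xp = np.abs(np.fft.rfft(x, n=N))
    f_pad = Xp.argmax()*fs/N

    # Route 2: parabolic interpolation on the dB magnitude of the UNPADDED spectrum
    Y = 20*np.log10(np.abs(np.fft.rfft(x)) + 1e-12)
    k = Y.argmax()
    p = 0.5*(Y[k-1] - Y[k+1])/(Y[k-1] - 2*Y[k] + Y[k+1])
    f_int = (k + p)*fs/len(x)

    print(f"raw bin center : {k*fs/len(x):8.3f} Hz")
    print(f"zero padding   : {f_pad:8.4f} Hz")
    print(f"interpolation  : {f_int:8.4f} Hz   (true: {f_true} Hz)")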
=====================================
(2B)+(2B)' ?