Noway2
Electrical
- Apr 15, 2005
For the past couple of days, when time allows, I have been attempting to analyze the behavior of a PLL implemented in software that is used in some of our products. The PLL is modeled after a linear PLL in that it multiplies the incoming (sine) wave with a reference sine wave, filters the result, and then uses that result to drive a numerically controlled oscillator (NCO) that generates the reference wave.
In the filter portion, the software takes the error signal and multiplies it by a proportional constant (call it K1), and multiplies the running summation of the error signal (the integration) by another constant (call it K2). The two products are then summed together. In other words, the error signal is passed through a PI compensator and the result is used to drive the NCO.
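To make that concrete, here is a rough per-sample sketch in Python of what I understand the software to be doing (the names and structure are my own illustration, not the actual production code):

```python
def pi_update(error, acc, K1, K2):
    """One sample of the PI compensator as I understand it.

    acc holds the running summation of the error signal (the integration).
    Returns the value that drives the NCO and the updated accumulator.
    """
    acc = acc + error                  # running summation of the error
    drive = K1 * error + K2 * acc      # proportional term + integral term
    return drive, acc


# example: feed a constant error and watch the integral term ramp up
acc = 0.0
for _ in range(5):
    drive, acc = pi_update(0.1, acc, K1=0.05, K2=0.001)
    print(drive)
```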
If this were a continuous-time system, I would model the 'filter' block as K1 + K2/s. However, this is not a continuous-time system, as it is working off of sampled data. What I have are the (discrete-time) constants K1 and K2; what I would like is an analytical model that I can work with in a tool such as Matlab or Octave.
In this case, I am having trouble seeing how I would model the integrator in the PI block. If I recall correctly, a discrete-time integrator is modeled as something like (T/2) * (z+1)/(z-1), assuming the use of the bilinear transform. Would I simply model this as K2*T*(z+1) / (2*(z-1))? Similarly, since the NCO is typically modeled as an integrator, I assume I would use a similar model? The phase detector I am treating as a unity-gain coefficient in this application, but my question is how to handle the other components.
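For what it is worth, this is the kind of model I am picturing, written as a quick Python/scipy check rather than Matlab. The sample period, the gain values, the T scaling on the NCO, and the choice of z/(z-1) for both accumulators are placeholders and assumptions on my part, which is exactly the part I am unsure about:

```python
import numpy as np
from scipy import signal

T  = 1.0e-4    # sample period (placeholder value)
K1 = 0.05      # proportional constant (placeholder)
K2 = 0.001     # integral constant (placeholder)

# PI block: K1 + K2 * z/(z-1), i.e. treating the running sum as an accumulator.
#   C(z) = (K1*(z-1) + K2*z) / (z-1) = ((K1+K2)*z - K1) / (z-1)
C = signal.TransferFunction([K1 + K2, -K1], [1, -1], dt=T)

# NCO treated as another accumulator (frequency command summed into phase),
# scaled by the sample period; whether the T scaling belongs here is one of
# the things I am not sure about.
N = signal.TransferFunction([T, 0], [1, -1], dt=T)

# Unity-gain phase detector, so the open loop is just C(z)*N(z).
num = np.polymul(C.num, N.num)
den = np.polymul(C.den, N.den)

# Closed loop with unity feedback: G/(1+G); the poles tell me about stability
# and (roughly) how the loop should respond.
cl_poles = np.roots(np.polyadd(den, num))
print("closed-loop poles:", cl_poles)
print("inside the unit circle:", np.all(np.abs(cl_poles) < 1.0))
```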
I am interested in modeling this system because of the results of some experiments I ran, performing an iterative analysis with a PC program. I found that if I fed in a clean sine wave, the system tracked cleanly, but if I added a small amount of random noise to the input signal to simulate real sampling, the output (from the PI block) would jitter significantly in frequency; significantly in this case means about 10% of the nominal value.
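For reference, the experiment was essentially this kind of sample-by-sample loop (a simplified Python sketch with made-up parameter values and a quadrature reference, not the actual PC program or the shipping constants):

```python
import numpy as np

fs = 10_000.0             # sample rate (made up for illustration)
f_in = 60.0               # input frequency (made up)
f_center = 60.0           # NCO center frequency
K1, K2 = 0.05, 0.001      # PI constants (placeholders, not the shipping values)
noise_amp = 0.01          # small random noise added to a unit-amplitude sine

rng = np.random.default_rng(0)
n = np.arange(100_000)
x = np.sin(2*np.pi*f_in*n/fs) + noise_amp*rng.standard_normal(n.size)

phase, acc = 0.0, 0.0
freq_log = np.empty(n.size)

for i, sample in enumerate(x):
    ref = np.cos(phase)           # reference in quadrature with the input at lock
    err = sample * ref            # multiplying phase detector (the double-frequency
                                  # term is left for the loop to average out)
    acc += err                    # running summation (integration)
    drive = K1*err + K2*acc       # PI compensator output
    f_inst = f_center + drive     # PI output steers the NCO frequency
    phase += 2*np.pi*f_inst/fs    # NCO accumulates frequency into phase
    freq_log[i] = f_inst

# Look at how much the commanded NCO frequency jitters once the loop settles.
settled = freq_log[n.size//2:]
print("mean NCO frequency:", settled.mean())
print("peak-to-peak jitter:", settled.max() - settled.min())
```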
My boss believed that the PLL worked perfectly on the basis that we have been shipping it for many years, but now I am challenging that assertion. I would like to either prove or disprove my assertion, hence my desire to model the system. If it turns out that the PLL is too noise sensitive, I would like to be able to work with a model to come up with a better solution.
To repeat, my question is: given the sample-based constants and the fact that the integration runs as a sum, how would I model this block to get a closed-form solution, assuming one exists?