#1
I am trying to estimate the confidence limits for measurement of white noise passed through a band-limited filter. In the first instance, can we consider the filter to be an ideal low pass filter.

The noise voltage can be thought of as a stream of instantaneous values with a Gaussian distribution, mean of zero, and standard deviation equal to the RMS voltage. If I take samples of this waveform, I should be able to calculate the noise power (given the resistance). The noise power is proportional to the variance of these samples, and the constant of proportionality is 1/R.

Shannon's Information Theory says to me that I need to sample the waveform at at least double the highest frequency of any component (the break point of the low pass filter).

It seems to me that what I am doing in statistical terms is taking a limited set of samples and using it to estimate the population variance (and hence the noise power in a resistor). So, I can never be absolutely certain that my set of samples will give the same variance as the population that I sampled. I should expect that on repeated measurement of the same source there will be variation, and that a component of that variation is the chance selection of the set of samples on which the estimate was based. It seems reasonable to assume that taking more samples should give me higher confidence that my estimate is closer to the real phenomenon, the population variance.

So, I am looking for a predictor of the relationship between sample variance (the "measured" power) and population variance (the "actual" power), number of samples, and confidence level. The statistic Chi^2=(N-1)*S^2/sigma^2 (where S^2 is the sample variance and sigma^2 is the population variance) seems a possible solution. The distribution of Chi^2 is well known. I have plotted values for the confidence limits indicated by that approach; the plot is at http://www.vk1od.net/fsm/RmsConfidenceLimit01.gif .

The x axis value of number of samples relates to the minimum number of samples to capture the information in the filtered output (in the sense of Shannon), ie bandwidth*2. It seems to me that this should also apply to a noise source that has been passed through a bandpass filter (with a lower break point > 0), so long as the sampling rate is sufficient for the highest frequency, but the number of samples used for the graph lookup is bandwidth*2.

I understand that there are other sources of error; this note is focused on the choice of an appropriate number of samples (or integration time) to manage the variation in the sampling process due to chance.

Am I on the right track? Comments appreciated.

Owen

PS: This is not entirely off topic; I am measuring ambient noise from a receiving system, and antenna performance assessment is the purpose.
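[Editor's note: the Chi^2 approach described above can be sketched numerically. The following is a minimal illustration, not Owen's actual code; the function names are invented here, and the chi-square quantile uses the Wilson-Hilferty approximation so that only the Python standard library is needed.]

```python
import math
from statistics import NormalDist

def chi2_ppf(p, df):
    """Wilson-Hilferty approximation to the chi-square quantile function.
    Accurate to a small fraction of a percent for the large df used here."""
    z = NormalDist().inv_cdf(p)
    return df * (1.0 - 2.0 / (9.0 * df) + z * math.sqrt(2.0 / (9.0 * df))) ** 3

def power_confidence_limits_db(n, confidence=0.95):
    """Two-sided confidence limits, in dB, on the true noise power sigma^2
    given a measured sample variance S^2 from n independent samples.
    From Chi^2 = (N-1)*S^2/sigma^2, sigma^2/S^2 lies between
    (n-1)/chi2_hi and (n-1)/chi2_lo with the stated confidence."""
    alpha = 1.0 - confidence
    df = n - 1
    lo = chi2_ppf(alpha / 2.0, df)
    hi = chi2_ppf(1.0 - alpha / 2.0, df)
    return 10.0 * math.log10(df / hi), 10.0 * math.log10(df / lo)

# With 10,000 independent samples the 95% limits work out near +/-0.12 dB.
lo_db, hi_db = power_confidence_limits_db(10_000)
```

Plotting these limits against n reproduces the kind of curve linked above: the interval tightens roughly as 1/sqrt(n).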
#2
On Sat, 08 Jul 2006 22:08:20 GMT, Owen Duffy wrote:
> I am trying to estimate the confidence limits for measurement of white
> noise passed through a limited band filter. In the first instance, can
> we consider the filter to be an ideal low pass filter.

Hi Owen,

This will possibly be your greatest source of error: the clipping of the spectrum. In Fourier analysis the operation is called "windowing", and there is a world of window shapes that offer either excellent frequency resolution at the cost of amplitude accuracy, or the t'other way 'round. In that sense the window shape deviates from an "ideal" filter response, but then an "ideal" filter response (infinite skirt) does not guarantee accuracy either. Blackman and Tukey, in their seminal work "The Measurement of Power Spectra" (1958), assert that "a realistic white noise spectrum must be effectively band-limited by an asymptotic falloff at least as fast as 1/f²."

Consider the discussion at:
http://www.lds-group.com/docs/site_d...%20Windows.pdf

> Shannon's Information Theory says to me that I need to sample the
> waveform at at least double the highest frequency of any component
> (the break point of the low pass filter).

That's the Nyquist sampling rate, at slightly more than double Fmax. Shannon predicts the bit error rate for a given signal to noise ratio.

> It seems to me that what I am doing in statistical terms is taking a
> limited set of samples and using it to estimate the population
> variance (and hence the noise power in a resistor).

Seeing that the RMS voltage is applied fully to the resistance, shouldn't that be signal + noise power in the resistor? The noise in this sense only describes the deviation from the distribution's shape.

> I have plotted values for the confidence limits indicated by that
> approach; the plot is at
> http://www.vk1od.net/fsm/RmsConfidenceLimit01.gif .
> The x axis value of number of samples relates to the minimum number of
> samples to capture the information in the filtered output (in the
> sense of Shannon), ie bandwidth*2.

When I've done brute force noise reduction through ever increasing samples, it always appeared to follow a square law relationship.

> Am I on the right track?

What are you using as a source of noise?

73's
Richard Clark, KB7QHC
#3
On Sat, 08 Jul 2006 17:10:34 -0700, Richard Clark wrote:

> That's the Nyquist sampling rate, at slightly more than double Fmax.
> Shannon predicts the bit error rate for a given signal to noise ratio.

Ok.

> Seeing that the RMS voltage is applied fully to the resistance,
> shouldn't that be signal + noise power in the resistor? The noise in
> this sense only describes the deviation from the distribution's shape.

In this case, the KTB noise due to the load's own resistance is so small as to be insignificant and does not require a correction to be applied.

> When I've done brute force noise reduction through ever increasing
> samples, it always appeared to follow a square law relationship.

I expected that, and the radioastronomy folk seem to work on that basis from Dicke's work, if I understand it correctly.

> What are you using as a source of noise?

Noise from the real world, which I understand is not exactly white, but I figure that if I understand the behaviour from a white noise point of view, the answer will be very close for noise that resembles white noise.

The noise is audio output from an SSB receiver (operating below the AGC gain compression threshold) that, if you like, is acting as a linear downconverter with a narrow pass band filter. Typically the passband is 300-2400Hz. The sampling is done in a PC sound card at a rate of 11kHz. The application here is using these samples to synthesise a "true RMS voltmeter".

The question is how many samples, or how long an integration time at a given bandwidth, is required to reduce the likely contribution of chance in the sampling process to below, say, 0.1dB at a confidence level of, say, 90% (in a two tailed test).

Thanks for your response Richard.

Owen
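[Editor's note: the closing question (how many samples for 0.1 dB at 90%) can be estimated with the large-n normal approximation to the chi-square statistic. This is a sketch, not Owen's method; the 2100 Hz bandwidth is taken from the 300-2400 Hz passband described above, and the function name is invented here.]

```python
import math
from statistics import NormalDist

def independent_samples_needed(error_db=0.1, confidence=0.90):
    """Approximate number of independent samples n such that the two-sided
    confidence interval on measured power stays within +/- error_db.
    Uses the large-n normal approximation S^2/sigma^2 ~ 1 +/- z*sqrt(2/n)."""
    z = NormalDist().inv_cdf(1.0 - (1.0 - confidence) / 2.0)
    frac = 10.0 ** (error_db / 10.0) - 1.0  # allowed fractional power error
    return math.ceil(2.0 * (z / frac) ** 2)

n = independent_samples_needed(0.1, 0.90)
bandwidth = 2400 - 300          # Hz, the SSB audio passband from the post
t = n / (2.0 * bandwidth)       # integration time at 2*B independent samples/s
# n comes out near 10,000 independent samples, i.e. roughly 2.4 seconds.
```

Note that the 11 kHz sound card rate oversamples the 2100 Hz passband, so only about 2*B*t of the raw samples are statistically independent; the calculation above is in terms of independent samples.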
#4
On Sun, 09 Jul 2006 00:44:26 GMT, Owen Duffy wrote:
> In this case, the KTB noise due to the load's own resistance is so
> small as to be insignificant and does not require a correction to be
> applied.

Hi Owen,

I was not thinking of thermal noise (actually, I've been quite deeply involved in studying phonon interaction, but not to this purpose). Rather, I was offering that if you integrate under the curve of the Gaussian distribution, and compare to the computed/measured noise power, then and only then through that difference would you resolve error. Of course, this may be a description of what you have already offered in previous discussion.

> Noise from the real world, which I understand is not exactly white,
> but I figure that if I understand the behaviour from a white noise
> point of view, the answer will be very close for noise that resembles
> white noise.

I've used both biased Zeners and weakly illuminated Photomultiplier Tubes. The PMT doesn't offer much power, but it is flat out to the 100s of MHz. Another precision method is to load each output of a ring counter (as big a ring as possible) with a random selection of resistor values and feed them into a summing junction. This is especially useful in the AF region.

73's
Richard Clark, KB7QHC
#5
Owen, I can't understand your problem. Could you condense it?

I would just take every measurement to be correct at the time it was made and stop worrying about it. Have some faith in your measuring instruments. Variation in the presence of noise can be expected. If you want to be more accurate, sit and watch the meter for 30 seconds and make a mental average. If the noise statistics are stable then you will obtain the same answer five minutes later. This will give you confidence in the measurements, which is what you are looking for.

"Statistics" is amongst the most useful of the many branches of mathematics. But it comes to an end when trying to estimate the confidence to be placed in setting confidence limits. If you have a few months to spare, refer to the works of Sir Ronald Aylmer Fisher, the greatest of all statisticians. He was involved with genetics, medicine, agriculture, weather, engineering, etc.

(You remind me of Gosset and the "t" distribution. Early in the 20th century Gosset was a chemist/mathematician working in the quality control department of the famous Guinness brewery in Dublin. In his work he derived the distribution of "t", which allowed confidence limits to be set up for the normal distribution based on measurements of small samples themselves. He realised he had invented and mathematically proved a long-wanted, important, practical procedure. But his powerful employer could not allow chemistry and mathematics to be associated with yeast, hops and all the other natural flavoured ingredients in their beer, so they barred him from publishing a learned paper on the subject under his own name. So he used the nom-de-plume "Student". Ever since then his statistical distribution has been known amongst scientists, engineers and everyone involved with statistics as "Student's t". Guinness, untainted by Gosset, is still a popular drink in English and Irish pubs.)

Student's "t" will very likely appear in the solution to your problem, whatever it is.
----
Reg.
#6
On Sun, 9 Jul 2006 02:26:44 +0100, "Reg Edwards" wrote:

> Owen, I can't understand your problem. Could you condense it? I would
> just take every measurement to be correct at the time it was made and
> stop worrying about it.

I am designing the instrument. I am exploring the number of samples required to reduce the effect of chance on the measurement result (in respect of the sampling issue) to an acceptable figure.

> If you have a few months to spare, refer to the works of Sir Ronald
> Aylmer Fisher, the greatest of all statisticians.

Somehow, I guessed this would turn to alcohol!

> Guinness, untainted by Gosset, is still a popular drink in English and
> Irish pubs.

Well, despite my name and Irish father, there is not a skerrick of Irish in me, or Guinness for that matter.

> Student's "t" will very likely appear in the solution to your problem,
> whatever it is.

Student's t distribution is the probability distribution of the mean of a sample of normally distributed random numbers. The mean of noise is 0 (unless it contains DC, which is not noise). The Chi-square distribution (which I proposed in the original post) is the probability distribution of the variance of a sample of normally distributed random numbers. The variance of noise voltage is RMS^2, which is proportional to power, and so is of interest.

Owen
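[Editor's note: the distinction drawn above, the sample mean of noise estimating zero while the sample variance estimates RMS^2, is easy to check by simulation. A quick sketch; the RMS value and random seed are arbitrary choices made here for illustration.]

```python
import math
import random

random.seed(1)
rms = 2.0        # assumed RMS voltage of the noise source (arbitrary)
n = 100_000

# Zero-mean Gaussian noise samples, standard deviation = RMS voltage.
samples = [random.gauss(0.0, rms) for _ in range(n)]

# The sample mean estimates 0 (no DC component): Student's t territory.
mean = sum(samples) / n

# The sample variance estimates RMS^2, the power-like quantity:
# chi-square territory.
variance = sum((x - mean) ** 2 for x in samples) / (n - 1)
measured_rms = math.sqrt(variance)
```

Repeating this with many shorter sample sets shows the scatter of `variance` about RMS^2 that the chi-square confidence limits describe.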
#7
> I am designing the instrument. I am exploring the number of samples
> required to reduce the effect of chance on the measurement result (in
> respect of the sampling issue) to an acceptable figure.

The first thing to do is calibrate the instrument against a standard noise source. Immediately, the uncertainty in the standard is transferred to the instrument, plus some more uncertainty due to the manner in which the standard and instrument are associated. Does the instrument read in watts, decibels, or what?

The second thing to do is to be verbally and numerically more precise about "reducing the effect of chance on the measurement result to an acceptable figure". At the outset you should define the acceptable figure. What effects? In what units is the acceptable figure?

It is then not a difficult matter to decide the number of measurements, by taking samples, to give a predetermined level of confidence in the average or mean.

But I have the feeling you are over-flogging the issue. You don't really have a problem.
----
Reg.
#8
On Sun, 9 Jul 2006 21:33:27 +0100, "Reg Edwards" wrote:

> The first thing to do is calibrate the instrument against a standard
> noise source. Immediately, the uncertainty in the standard is
> transferred to the instrument, plus some more uncertainty due to the
> manner in which the standard and instrument are associated.

Reg, I think you have missed my point. Because of the random nature of white noise, an attempt to measure the noise source by sampling the noise for a short period introduces an error due to the sampling process. That sampling error is related to the quantity of "information" gathered by the sampling process, ie the length of "integration" or number of samples. The issue is not about absolute calibration; it is about one source of error in measuring a white noise source, and quantification of bounds on that error to a level of confidence.

> The second thing to do is to be verbally and numerically more precise
> about "reducing the effect of chance on the measurement result to an
> acceptable figure".

I am sorry if that is wordy, but I think it is precise in expressing the problem. To give a specific application, suppose that I want to do a receiver system performance test by comparing noise from one cosmic noise source with quiet sky, and I expect the variation with my G/T to be 0.5dB.

> At the outset you should define the acceptable figure. What effects?
> In what units is the acceptable figure?

The acceptable figure will depend on the application; I am trying to understand the principle.

> It is then not a difficult matter to decide the number of
> measurements, by taking samples, to give a predetermined level of
> confidence in the average or mean. But I have the feeling you are
> over-flogging the issue. You don't really have a problem.

So, coming back to the application above, I note that successive measurements of the same white noise source passed through a limited bandwidth filter vary from measurement to measurement, and that variation is related to the length of "integration" time or number of samples used for each measurement. In trying to understand this relationship, I explored the use of the Chi-square distribution as discussed in my initial posting.

In looking for more information on that relationship, I found Dicke being quoted with an estimate of the sensitivity of a radiometer: the minimum detectable signal is the one for which the mean deflection of the output indicator is equal to the standard deviation of the fluctuations about the mean deflection of the indicator. He is quoted as saying:

  mean(delta-T) = (Beta * Tn) / (delta-v * t)^0.5

where delta-T is the minimum detectable signal; Beta is a constant of proportionality that depends on the receiver and is usually in the range 1 to 2; Tn is the receiving system noise temperature; delta-v is the pre-detection receiver bandwidth; and t is the post detection integration time constant. (I do not have a derivation of Dicke's formula.)

This suggests that an estimate of the error (in dB) due to the sampling process is 10*log(1 + Beta/(delta-v * t)^0.5).

I have plotted the above expression at Beta=2 over the plots that I did based on the Chi-square distribution; they are at http://www.vk1od.net/fsm/RmsConfidenceLimit03.gif . You will see that the Dicke (Beta=2) line follows (ie it pretty much obscures by overwriting) my Chi-square based 95% confidence line. It appears that the two methods arrive at similar answers. Dicke's Beta seems to be determined empirically. Varying Beta has the same effect as changing the confidence level in my Chi-square based estimator.

Owen

PS: Still remains relevant to antennas; I am measuring the performance of a receiver system, which includes the antenna and all noise sources.
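[Editor's note: the observed agreement between Dicke at Beta=2 and the 95% chi-square line has a simple explanation. With n = 2*delta-v*t independent samples, the chi-square (normal approximation) limit z*sqrt(2/n) reduces to z/sqrt(delta-v*t), so Beta plays the same role as the normal deviate z, and z = 1.96, approximately 2, at 95% confidence. A sketch comparing the two, with function names and the example bandwidth chosen here for illustration:]

```python
import math
from statistics import NormalDist

def dicke_error_db(beta, bandwidth_hz, t_sec):
    """Dicke's estimate of the sampling error: 10*log(1 + Beta/(dv*t)^0.5)."""
    return 10.0 * math.log10(1.0 + beta / math.sqrt(bandwidth_hz * t_sec))

def chi2_error_db(confidence, bandwidth_hz, t_sec):
    """Upper confidence limit from the chi-square approach, using the
    large-n normal approximation with n = 2 * bandwidth * t samples."""
    n = 2.0 * bandwidth_hz * t_sec
    z = NormalDist().inv_cdf(1.0 - (1.0 - confidence) / 2.0)
    return 10.0 * math.log10(1.0 + z * math.sqrt(2.0 / n))

# Dicke with Beta=2 vs the 95% chi-square limit, 2100 Hz bandwidth, 1 s:
# the two agree to within a few thousandths of a dB.
d = dicke_error_db(2.0, 2100.0, 1.0)
c = chi2_error_db(0.95, 2100.0, 1.0)
```

This reproduces the near-perfect overlay of the two lines described in the plot above.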
#9
On Sun, 09 Jul 2006 23:20:39 GMT, Owen Duffy wrote:
> mean(delta-T) = (Beta * Tn) / (delta-v * t)^0.5
>
> where delta-T is the minimum detectable signal; Beta is a constant of
> proportionality that depends on the receiver and is usually in the
> range 1 to 2; Tn is the receiving system noise temperature; delta-v is
> the pre-detection receiver bandwidth; and t is the post detection
> integration time constant.

I should have mentioned that I don't understand why the Beta factor would vary from receiver to receiver. It seems that it was determined empirically. If the indicating instrument was a meter pointer observed by a person, perhaps Beta captured the observer effects more than the equipment.

Owen