I am trying to estimate the confidence limits for measurement of white noise passed through a band-limited filter. In the first instance, consider the filter to be an ideal low pass filter.

The noise voltage can be thought of as a stream of instantaneous values with a Gaussian distribution, a mean of zero, and a standard deviation equal to the RMS voltage. If I take samples of this waveform, I should be able to calculate the noise power (given the resistance): the noise power is proportional to the variance of the samples, and the constant of proportionality is 1/R. Shannon's sampling theorem tells me that I need to sample the waveform at a rate of at least double the highest frequency component (the break point of the low pass filter).

It seems to me that what I am doing, in statistical terms, is taking a limited set of samples and using it to estimate the population variance (and hence the noise power in a resistor). So I can never be absolutely certain that my set of samples will give the same variance as the population I sampled. I should expect that, on repeated measurement of the same source, there will be variation, and that a component of that variation is the chance selection of the set of samples on which the estimate was based. It seems reasonable to assume that taking more samples should give me higher confidence that my estimate is closer to the real phenomenon, the population variance.

So, I am looking for a predictor of the relationship between sample variance (the "measured" power), population variance (the "actual" power), number of samples, and confidence level. The statistic Chi^2 = (N-1)*S^2/sigma^2 (where S^2 is the sample variance and sigma^2 is the population variance) seems a possible solution, and the distribution of Chi^2 is well known. I have plotted the confidence limits indicated by that approach; the plot is at http://www.vk1od.net/fsm/RmsConfidenceLimit01.gif . The x-axis value, the number of samples, is the minimum number of samples needed to capture the information in the filtered output (in the sense of Shannon), i.e. 2 * bandwidth per second of observation.

It seems to me that this should also apply to a noise source that has been passed through a bandpass filter (with a lower break point above 0), so long as the sampling rate is sufficient for the highest frequency, but with the number of samples used for the graph lookup being 2 * bandwidth per second, rather than the actual sampling rate.

I understand that there are other sources of error; this note is focused on the choice of an appropriate number of samples (or integration time) to manage the variation in the sampling process due to chance.

Am I on the right track? Comments appreciated.

Owen

PS: This is not entirely off topic. I am measuring ambient noise from a receiving system, and antenna performance assessment is the purpose.
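PPS: A minimal sketch of the Chi^2 confidence limit calculation described above, in Python with scipy. The bandwidth and integration time values are just assumptions for illustration, and the function name is mine.

import numpy as np
from scipy.stats import chi2

def power_ratio_limits(n_samples, confidence=0.95):
    # Confidence limits on (measured power)/(actual power), i.e. S^2/sigma^2,
    # from Chi^2 = (N-1)*S^2/sigma^2 with N-1 degrees of freedom.
    alpha = 1.0 - confidence
    dof = n_samples - 1
    low = chi2.ppf(alpha / 2.0, dof) / dof
    high = chi2.ppf(1.0 - alpha / 2.0, dof) / dof
    return low, high

# Assumed noise bandwidth and integration time; the number of independent
# samples is N = 2 * bandwidth * integration time.
bandwidth_hz = 2000.0
integration_s = 0.1
n = int(2 * bandwidth_hz * integration_s)

low, high = power_ratio_limits(n, confidence=0.95)
print("N = %d: measured/actual power between %.2f dB and %.2f dB (95%% confidence)"
      % (n, 10 * np.log10(low), 10 * np.log10(high)))

To bound the actual power from a measured power, divide the measured power by high and low respectively; the same ratios apply either way around.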