Confidence limits for noise measurement
(a thread from rec.radio.amateur.antenna)

#1 - July 8th 06, 11:08 PM - Owen Duffy


I am trying to estimate the confidence limits for measurement of white
noise passed through a limited band filter.

In the first instance, can we consider the filter to be an ideal low
pass filter.

The noise voltage can be thought of as a stream of instantaneous values
with Gaussian distribution, mean of zero, and standard deviation equal
to the RMS voltage.

If I take samples of this waveform, I should be able to calculate the
noise power (given the resistance). The noise power is proportional to
the variance of these samples, and the constant of proportionality is
1/R.
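
For concreteness, a minimal sketch of that computation in Python (the
load resistance, noise level and sample count here are illustrative
assumptions, not part of the measurement itself):

import numpy as np

rng = np.random.default_rng(1)
R = 50.0                            # assumed load resistance, ohms
v = rng.normal(0.0, 1e-3, 10000)    # simulated noise samples, 1 mV RMS

# For zero-mean noise the variance equals the mean square voltage, so
# the power estimate is simply the sample variance divided by R.
p_est = np.var(v) / R
print(p_est)                        # close to (1e-3)^2 / 50 = 20 nW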

Shannon's Information Theory says to me that I need to sample the
waveform at least at double the highest frequency of any component
(the break point of the low pass filter).

It seems to me that what I am doing in statistical terms is taking a
limited set of samples and using it to estimate the population
variance (and hence the noise power in a resistor).

So, I can never be absolutely certain that my set of samples will give
the same variance as the population that I sampled.

I should expect that on repeated measurement of the same source there
will be variation, and that a component of that variation is the
chance selection of the set of samples on which the estimate was
based.

It seems reasonable to assume that taking more samples should give me
higher confidence that my estimate is closer to the real phenomenon,
the population variance.

So, I am looking for a predictor of the relationship between sample
variance (the "measured" power) and population variance (the "actual"
power), number of samples, and confidence level.

The statistic Chi^2 = (N-1)*S^2/sigma^2 (where S^2 is the sample
variance and sigma^2 is the population variance) seems a possible
solution. The distribution of Chi^2 is well known.
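
As a sketch of that statistic in use, assuming SciPy is available (the
sample count and confidence level below are illustrative):

import numpy as np
from scipy.stats import chi2

def power_conf_limits_db(n, conf=0.95):
    # Two-sided confidence limits, in dB relative to the measured
    # power, for a variance estimated from n independent samples.
    # (N-1)*S^2/sigma^2 is chi-square with N-1 degrees of freedom,
    # so sigma^2 lies between dof*S^2/chi2_hi and dof*S^2/chi2_lo.
    alpha = 1.0 - conf
    dof = n - 1
    lo = 10 * np.log10(dof / chi2.ppf(1.0 - alpha / 2.0, dof))
    hi = 10 * np.log10(dof / chi2.ppf(alpha / 2.0, dof))
    return lo, hi

print(power_conf_limits_db(1000))   # about (-0.37, +0.39) dB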

So I have plotted values for the confidence limits indicated by that
approach, the plot is at
http://www.vk1od.net/fsm/RmsConfidenceLimit01.gif . The x axis value
of number of samples relates to the minimum number of samples to
capture the information in the filtered output (in the sense of
Shannon), i.e. bandwidth*2 samples per second of observation.

It seems to me that this should also apply to a noise source that has
been passed through a bandpass filter (with a lower break point > 0),
so long as the sampling rate is sufficient for the highest frequency,
but that the number of samples used for the graph lookup is
bandwidth*2.
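
The bookkeeping is then as follows; a small sketch (sample rate,
bandwidth and observation time all assumed for illustration):

fs = 11000.0        # assumed ADC sample rate, Hz
bw = 2100.0         # assumed filter noise bandwidth, Hz
t = 1.0             # assumed observation time, s

raw = fs * t        # samples actually taken: 11000
eff = 2.0 * bw * t  # effectively independent samples (in the sense
print(raw, eff)     # of Shannon) for the graph lookup: 4200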

I understand that there are other sources of error, this note is
focused on choice of appropriate number of samples (or integration
time) to manage the variation in the sampling process due to chance.

Am I on the right track?

Comments appreciated.

Owen

PS: This is not entirely off topic; I am measuring ambient noise from
a receiving system, and antenna performance assessment is the purpose.
--
#2 - July 9th 06, 01:10 AM - Richard Clark

On Sat, 08 Jul 2006 22:08:20 GMT, Owen Duffy wrote:


> I am trying to estimate the confidence limits for measurement of white
> noise passed through a limited band filter.
>
> In the first instance, can we consider the filter to be an ideal low
> pass filter.


Hi Owen,

This will possibly be your greatest source of error, the clipping of
the spectrum. In Fourier Analysis, the operation is called
"windowing" and there is a world of window shapes that offer either
excellent frequency resolution at the cost of amplitude accuracy, or
the t'other way 'round. Whatever the window shape, it deviates from
an "ideal" filter response, but then an "ideal" filter response
(infinite skirt) does not guarantee accuracy.

Blackman and Tukey in their seminal work, "The Measurement of Power
Spectra" (1958) assert that
"a realistic white noise spectrum must be effectively
band-limited by an asymptotic falloff at least as fast as 1/f²."

Consider the discussion at:
http://www.lds-group.com/docs/site_d...%20Windows.pdf

> Shannon's Information Theory says to me that I need to sample the
> waveform at least at double the highest frequency of any component
> (the break point of the low pass filter).


That's the Nyquist sampling rate, at slightly more than double Fmax.
Shannon predicts the bit error rate for a given signal-to-noise ratio.

> It seems to me that what I am doing in statistical terms is taking a
> limited set of samples and using it to estimate the population
> variance (and hence the noise power in a resistor).


Seeing that the RMS voltage is applied fully to the resistance,
shouldn't that be signal + noise power in a resistor? The noise in
this sense only describes the deviation from the distribution's shape.

> So I have plotted values for the confidence limits indicated by that
> approach, the plot is at
> http://www.vk1od.net/fsm/RmsConfidenceLimit01.gif . The x axis value
> of number of samples relates to the minimum number of samples to
> capture the information in the filtered output (in the sense of
> Shannon), i.e. bandwidth*2 samples per second of observation.


When I've done brute force noise reduction through ever-increasing
samples, it always appeared to follow a square-law relationship.
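
A quick simulation reproduces that square-root behaviour (all figures
here are illustrative):

import numpy as np

rng = np.random.default_rng(0)
for n in (100, 1000, 10000):
    # spread of the power estimate across 500 trials of n samples each
    est = [np.var(rng.normal(0.0, 1.0, n)) for _ in range(500)]
    print(n, np.std(est))   # falls roughly as 1/sqrt(n)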

> Am I on the right track?


What are you using as a source of noise?

73's
Richard Clark, KB7QHC
#3 - July 9th 06, 01:44 AM - Owen Duffy

On Sat, 08 Jul 2006 17:10:34 -0700, Richard Clark wrote:

> On Sat, 08 Jul 2006 22:08:20 GMT, Owen Duffy wrote:
>
>> I am trying to estimate the confidence limits for measurement of white
>> noise passed through a limited band filter.
>>
>> In the first instance, can we consider the filter to be an ideal low
>> pass filter.
>
> Hi Owen,
>
> This will possibly be your greatest source of error, the clipping of
> the spectrum. In Fourier Analysis, the operation is called
> "windowing" and there is a world of window shapes that offer either
> excellent frequency resolution at the cost of amplitude accuracy, or
> the t'other way 'round. Whatever the window shape, it deviates from
> an "ideal" filter response, but then an "ideal" filter response
> (infinite skirt) does not guarantee accuracy.
>
> Blackman and Tukey in their seminal work, "The Measurement of Power
> Spectra" (1958) assert that
> "a realistic white noise spectrum must be effectively
> band-limited by an asymptotic falloff at least as fast as 1/f²."
>
> Consider the discussion at:
> http://www.lds-group.com/docs/site_d...%20Windows.pdf




>> Shannon's Information Theory says to me that I need to sample the
>> waveform at least at double the highest frequency of any component
>> (the break point of the low pass filter).
>
> That's the Nyquist sampling rate, at slightly more than double Fmax.
> Shannon predicts the bit error rate for a given signal-to-noise ratio.


Ok.


>> It seems to me that what I am doing in statistical terms is taking a
>> limited set of samples and using it to estimate the population
>> variance (and hence the noise power in a resistor).
>
> Seeing that the RMS voltage is applied fully to the resistance,
> shouldn't that be signal + noise power in a resistor? The noise in
> this sense only describes the deviation from the distribution's shape.


In this case, the KTB noise due to the load's own resistance is so
small as to be insignificant and not require a correction to be applied.


>> So I have plotted values for the confidence limits indicated by that
>> approach, the plot is at
>> http://www.vk1od.net/fsm/RmsConfidenceLimit01.gif . The x axis value
>> of number of samples relates to the minimum number of samples to
>> capture the information in the filtered output (in the sense of
>> Shannon), i.e. bandwidth*2 samples per second of observation.
>
> When I've done brute force noise reduction through ever-increasing
> samples, it always appeared to follow a square-law relationship.


I expected that, and the radio astronomy folk seem to work on that
basis from Dicke's work, if I understand it correctly.


>> Am I on the right track?
>
> What are you using as a source of noise?


Noise from the real world which I understand is not exactly white, but
I figure that if I understand the behavior from a white noise point of
view, the answer will be very close for noise that resembles white
noise.

The noise is audio output from an SSB receiver (operating below AGC
gain compression threshold) that, if you like, is acting as a linear
downconverter with a narrow passband filter. Typically, the passband
is 300-2400 Hz. The sampling is done in a PC sound card at a rate of
11 kHz. The application here is using these samples to synthesise a
"true RMS voltmeter".

The question is how many samples, or how long an integration time at a
given bandwidth, is required to reduce the likely contribution of
chance to the sampling process below, say, 0.1 dB, at a confidence
level of, say, 90% (in a two-tailed test).
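
Assuming the Chi-square model from my first post, the required number
of independent samples (and hence integration time) can be found by a
simple search; a sketch with SciPy (the 2100 Hz noise bandwidth
matches the passband above):

import numpy as np
from scipy.stats import chi2

target_db, conf, bw = 0.1, 0.90, 2100.0

def worst_limit_db(n):
    dof = n - 1
    lo = 10 * np.log10(dof / chi2.ppf(1.0 - (1.0 - conf) / 2.0, dof))
    hi = 10 * np.log10(dof / chi2.ppf((1.0 - conf) / 2.0, dof))
    return max(abs(lo), abs(hi))

n = 2
while worst_limit_db(n) > target_db:
    n = int(n * 1.1) + 1            # coarse geometric search
print(n, "independent samples =", n / (2.0 * bw), "s integration")
# lands a little above 10000 samples, i.e. roughly 2.5 s here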

Thanks for your response Richard.

Owen
--
#4 - July 9th 06, 02:04 AM - Richard Clark

On Sun, 09 Jul 2006 00:44:26 GMT, Owen Duffy wrote:

>>> It seems to me that what I am doing in statistical terms is taking a
>>> limited set of samples and using it to estimate the population
>>> variance (and hence the noise power in a resistor).
>>
>> Seeing that the RMS voltage is applied fully to the resistance,
>> shouldn't that be signal + noise power in a resistor? The noise in
>> this sense only describes the deviation from the distribution's shape.
>
> In this case, the KTB noise due to the load's own resistance is so
> small as to be insignificant and not require a correction to be
> applied.


Hi Owen,

I was not thinking of thermal noise (actually, I've been quite deeply
involved in studying phonon interaction, but not to this purpose).
Rather, I was offering that if you integrate under the curve of the
Gaussian distribution, and compare to the computed/measured noise
power, then and only then through that difference would you resolve
error. Of course, this may be a description of what you have already
offered in previous discussion.

>> What are you using as a source of noise?


> Noise from the real world which I understand is not exactly white, but
> I figure that if I understand the behavior from a white noise point of
> view, the answer will be very close for noise that resembles white
> noise.
>
> The noise is audio output from an SSB receiver (operating below AGC
> gain compression threshold) that, if you like, is acting as a linear
> downconverter with a narrow passband filter. Typically, the passband
> is 300-2400 Hz. The sampling is done in a PC sound card at a rate of
> 11 kHz. The application here is using these samples to synthesise a
> "true RMS voltmeter".


I've used both biased Zeners and weakly illuminated Photomultiplier
Tubes. The PMT doesn't offer much power, but it is flat out to the
100s of MHz. Another precision method is to load each output of a
ring counter (as big a ring as possible) with a random selection of
resistor values and feed them into a summing junction. This is
especially useful in the AF region.

73's
Richard Clark, KB7QHC
#5 - July 9th 06, 02:26 AM - Reg Edwards

Owen, I can't understand your problem. Could you condense it? I
would just take every measurement to be correct at the time it was
made and stop worrying about it.

Have some faith in your measuring instruments. Variation in the
presence of noise can be expected. If you want to be more accurate,
sit and watch the meter for 30 seconds and make a mental average. If
the noise statistics are stable then you will obtain the same answer
five minutes later. This will give you confidence in the measurements,
which is what you are looking for.

"Statistics" is amongst the most useful of the many branches of
mathematics. But it comes to an end when trying to estimate the
confidence to be placed in setting confidence limits. If you have a
few months to spare, refer to the works of Sir Ronald Arthur Fisher,
the greatest of all Statisticians. He was involved with genetics,
medicine, agriculture, weather, engineering, etc.

(You remind me of Gosset and the "t" distribution. Early in the 20th
century Gosset was a chemist/mathematician working in the quality
control department of the famous Guinness brewery in Dublin. In his
work he derived the distribution of "t" which allowed confidence
limits to be set up for the normal distribution based on the
measurements on small samples themselves. He realised he had invented
and mathematically proved a long-wanted, important, practical
procedure. But his powerful employer could not allow chemistry and
mathematics to be associated with yeast, hops and all the other
natural flavoured ingredients in their beer, so they barred him from
publishing a learned paper on the subject under his own name. So he
used the nom-de-plume "Student". Ever since then, his statistical
distribution has been known amongst scientists, engineers and everyone
involved with statistics as "Student's t".

Guinness, untainted by Gosset, is still a popular drink in English
and Irish pubs.)

Student's "t" will very likely appear in the solution to your problem,
whatever it is.
----
Reg.




#6 - July 9th 06, 03:32 AM - Owen Duffy

On Sun, 9 Jul 2006 02:26:44 +0100, "Reg Edwards" wrote:

> Owen, I can't understand your problem. Could you condense it? I
> would just take every measurement to be correct at the time it was
> made and stop worrying about it.


I am designing the instrument. I am exploring the number of samples
required to reduce the effect of chance on the measurement result (in
respect of the sampling issue) to an acceptable figure.


> Have some faith in your measuring instruments. Variation in the
> presence of noise can be expected. If you want to be more accurate,
> sit and watch the meter for 30 seconds and make a mental average. If
> the noise statistics are stable then you will obtain the same answer
> five minutes later. This will give you confidence in the measurements,
> which is what you are looking for.
>
> "Statistics" is amongst the most useful of the many branches of
> mathematics. But it comes to an end when trying to estimate the
> confidence to be placed in setting confidence limits. If you have a
> few months to spare, refer to the works of Sir Ronald Aylmer Fisher,
> the greatest of all statisticians. He was involved with genetics,
> medicine, agriculture, weather, engineering, etc.


Somehow, I guessed this would turn to alcohol!

> (You remind me of Gosset and the "t" distribution. Early in the 20th
> century Gosset was a chemist/mathematician working in the quality
> control department of the famous Guinness brewery in Dublin. In his
> work he derived the distribution of "t" which allowed confidence
> limits to be set up for the normal distribution based on the
> measurements on small samples themselves. He realised he had invented
> and mathematically proved a long-wanted, important, practical
> procedure. But his powerful employer could not allow chemistry and
> mathematics to be associated with yeast, hops and all the other
> natural flavoured ingredients in their beer, so they barred him from
> publishing a learned paper on the subject under his own name. So he
> used the nom-de-plume "Student". Ever since then, his statistical
> distribution has been known amongst scientists, engineers and
> everyone involved with statistics as "Student's t".
>
> Guinness, untainted by Gosset, is still a popular drink in English
> and Irish pubs.)


Well, despite my name and Irish father, there is not a skerrick of
Irish in me, or Guinness for that matter.


Student's "t" will very likely appear in the solution to your problem,
whatever it is.


Student's t distribution is the probability distribution of the mean
of a sample of normally distributed random numbers, when the standard
deviation is estimated from the sample itself.

The mean of noise is 0 (unless it contains DC, which is not noise).

The Chi-square distribution (which I proposed in the original post) is
the probability distribution of the (scaled) variance of a sample of
normally distributed random numbers.

The variance of the noise voltage is the RMS voltage squared, which is
proportional to power, and so is of interest.
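
A quick numerical check of that last identity (pure simulation,
figures illustrative):

import numpy as np

v = np.random.default_rng(2).normal(0.0, 2.0, 100000)  # zero-mean noise
rms = np.sqrt(np.mean(v * v))
print(np.var(v), rms ** 2)   # both close to 4.0: variance = RMS^2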

Owen
--
#7 - July 9th 06, 09:33 PM - Reg Edwards


> I am designing the instrument. I am exploring the number of samples
> required to reduce the effect of chance on the measurement result (in
> respect of the sampling issue) to an acceptable figure.

========================================
The first thing to do is calibrate the instrument against a standard
noise source. Immediately, the uncertainty in the standard is
transferred to the instrument - plus some more uncertainty due to the
manner in which the standard and instrument are associated.

Does the instrument read in watts, decibels, or what?

The second thing to do is to be verbally and numerically more precise
about "to reduce the effect of chance on the measurement result to an
acceptable figure."

At the outset you should define the acceptable figure. What effects?
In what units is the acceptable figure?

It is then not a difficult matter to decide the number of
measurements, by taking samples, to give a predetermined level of
confidence in the average or mean. But I have the feeling you are
over-flogging the issue. You don't really have a problem.
----
Reg.


#8 - July 10th 06, 12:20 AM - Owen Duffy

On Sun, 9 Jul 2006 21:33:27 +0100, "Reg Edwards" wrote:


>> I am designing the instrument. I am exploring the number of samples
>> required to reduce the effect of chance on the measurement result (in
>> respect of the sampling issue) to an acceptable figure.
>
> ========================================
> The first thing to do is calibrate the instrument against a standard
> noise source. Immediately, the uncertainty in the standard is
> transferred to the instrument - plus some more uncertainty due to the
> manner in which the standard and instrument are associated.


Reg, I think you have missed my point. Because of the random nature of
white noise, an attempt to measure the noise source by sampling the
noise for a short period introduces an error due to the sampling
process. That sampling error is related to the quantity of
"information" gathered by the sampling process, i.e. the length of
"integration" or the number of samples.

The issue is not about absolute calibration, it is about one source of
error in measuring a white noise source, and quantification of bounds
on that error to a level of confidence.


> Does the instrument read in watts, decibels, or what?
>
> The second thing to do is to be verbally and numerically more precise
> about "to reduce the effect of chance on the measurement result to an
> acceptable figure."


I am sorry if that is wordy, but I think it is precise in expressing
the problem.

To give a specific application, suppose that I want to do a receiver
system performance test by comparing noise from one cosmic noise
source with quiet sky, and I expect the variation with my G/T to be
0.5 dB.


> At the outset you should define the acceptable figure. What effects?
> In what units is the acceptable figure?


The acceptable figure will depend on the application; I am trying to
understand the principle.

> It is then not a difficult matter to decide the number of
> measurements, by taking samples, to give a predetermined level of
> confidence in the average or mean. But I have the feeling you are
> over-flogging the issue. You don't really have a problem.


So, coming back to the application above, I note that successive
measurements of the same white noise source passed through a limited
bandwidth filter have variation from measurement to measurement, and
that variation is related to the length of the "integration" time, or
the number of samples, used for each measurement.

In trying to understand this relationship, I explored the use of the
Chi-square distribution as discussed in my initial posting.

In looking for more information on that relationship, I found Dicke
being quoted with an estimate of the sensitivity of a radiometer as
the minimum detectable signal being the one in which the mean
deflection of the output indicator is equal to the standard deviation
of the fluctuations about the mean deflection of the indicator. He is
quoted as saying:

mean(delta-T) = (Beta * Tn) / (delta-v * t)^0.5

where delta-T is the minimum detectable signal; Beta is a constant of
proportionality that depends on the receiver and is usually in the
range 1 to 2; Tn is the receiving system noise temperature; delta-v is
the pre-detection receiver bandwidth; and t is the post detection
integration time constant.

(I do not have a derivation of Dicke's formula.)

This suggests that an estimate of the error (in dB) due to the
sampling process is 10*log(1 + Beta/(delta-v * t)^0.5).
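
A rough numerical check of that expression against my Chi-square based
limits, assuming SciPy (the bandwidth-time products are illustrative):

import numpy as np
from scipy.stats import chi2

beta = 2.0
for bt in (100.0, 1000.0, 10000.0):     # delta-v * t
    n = 2.0 * bt                        # samples in the Shannon sense
    dicke_db = 10 * np.log10(1.0 + beta / np.sqrt(bt))
    chi_db = 10 * np.log10((n - 1) / chi2.ppf(0.025, n - 1))
    print(bt, round(dicke_db, 3), round(chi_db, 3))
# the two columns track each other closely, as the overlaid plots show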

I have plotted the above expression at Beta=2 over the plots that I
did based on the Chi-square distribution, they are at
http://www.vk1od.net/fsm/RmsConfidenceLimit03.gif . You will see that
the Dicke (Beta=2) line follows my Chi-square based 95% confidence
line so closely that it pretty much obscures it by overwriting. It
appears that the two methods arrive at similar answers.

Dicke's Beta seems to be determined empirically. Varying Beta has the
same effect as changing the confidence level in my Chi-square based
estimator.

Owen

PS: This still remains relevant to antennas; I am measuring the
performance of a receiver system, which includes the antenna and all
noise sources.
--
#9 - July 10th 06, 12:56 AM - Owen Duffy

On Sun, 09 Jul 2006 23:20:39 GMT, Owen Duffy wrote:


> mean(delta-T) = (Beta * Tn) / (delta-v * t)^0.5
>
> where delta-T is the minimum detectable signal; Beta is a constant of
> proportionality that depends on the receiver and is usually in the
> range 1 to 2; Tn is the receiving system noise temperature; delta-v is
> the pre-detection receiver bandwidth; and t is the post detection
> integration time constant.


I should have mentioned that I don't understand why the Beta factor
would vary from receiver to receiver. It seems that it was determined
empirically. If the indicating instrument was a meter pointer observed
by a person, perhaps Beta might have captured the observer effects
more than the equipment.

Owen
--
#10 - August 21st 06, 04:09 AM - J. Mc Laughlin (Mac, N8TT)

Dear Owen and Richard:
I have arrived at this set very late and the hour is late. However,
many, many years ago I was involved with the radio astronomy task of
measuring (always two major numbers involved) the flux from a strong source.
A reasonably predictable antenna gain was effected by using a very long horn
antenna with a rectangular (it might have been square) cross section. The
antenna was placed in a gully so that the source would pass through the beam
once a sidereal day.

The scheme used to construct the antenna was innovative and non-trivial.

A classic comparison measurement was effected. Are not all
measurements comparisons? A dummy load was kept in ice water protected
by a condom. A switching scheme was used to switch between the dummy
and the antenna with an offset. I called it the HILLRAMS receiver.
(High Isolation Low Loss Radio Astronomy Microwave Switched Receiver)

Since the bandwidth was the same for both sources, once a day we
measured how much stronger the source was than the noise produced by a
zero centigrade source. All analog.

At Ohio State I did something similar with, probably for the first
time, actual digitizing that went to a computer (punched paper tape!).
As I recall, I did worry about sample rate, but it was much faster
than any changes being observed because a heavy LPF was used. (With
the slow computers in use, I needed to avoid overdesigning the rate
too much.) It seems to me that if you have any reasonably fast filter
fall-off, 11 kHz is plenty fast enough. But then, I am not too sure
that I understand your concern and I am starting to ramble (though I
am stone sober).

73 Mac N8TT


--
J. Mc Laughlin; Michigan U.S.A.
Home:

