#11 - December 23rd 08, 01:53 AM - posted to rec.radio.amateur.antenna - NEC Evaluations

On Mon, 22 Dec 2008 20:15:09 -0500, "J. Mc Laughlin"
wrote:

Almost fifty years ago, I led a team who measured field strengths in the 100
to 250 MHz range (FM and TV broadcast transmitters) to verify (qualify) the
propagation model.


Hi Mac, and season's greetings,

Can you relate the specifics of the measurement? At a minimum, what
you would deem to be your best accuracy compared to an absolute
standard, or to a relative standard (instrumentation, not
computational).

73's
Richard Clark, KB7QHC
#12 - December 23rd 08, 03:38 AM - posted to rec.radio.amateur.antenna - NEC Evaluations

Dear Richard:

It was almost 50 years ago when the models were rather new.....

More background: the terrain was hilly - far from smooth earth - and
path profiles were a critical part of the information, along with the
inherent uncertainties of using "analog" maps and the assumption of
almost-straight-line propagation. (An aside: we found examples of
unpredictable propagation along string-like valleys that were aligned with
transmitters, but the protected site was in a bowl-like valley.) (I saw one
family in a valley using a rhombic antenna to receive TV signals. Their son
had been in the Signal Corps.)

We were using state-of-the-art Empire measuring systems (run off of a
portable gasoline generator) that were calibrated with an impulse generator
at each measurement. We selected paths that were similar to the expected
paths of interfering transmitters. In other words, the paths were
more-or-less normal to ridge lines not along string-like valleys.

One more qualification: one path was found to exhibit knife-edge
diffraction, discovered through the precaution of taking measurements
spaced a few meters apart at each data point. It was absolutely classic,
but that data was not used because the protected site did not have such
sharp ridges at its periphery.

With those qualifications, my best recollection is that measurements and
predictions agreed to within something like 3 or 4 dB. I doubt that
repeating those measurements with a GPS receiver, a digital topographic
map, averaging of near-straight-line paths, and a computer to do the
arithmetic would be any better.

Another note: Because of the expected sensitivity to interference at
the site, I would drive over a few hills, erect a dipole in trees, and work
my father on HF from the back seat of my car. No cell phones in those
days, and long distance was a big deal, too.

Let us know how your studies are going. Warm regards, Mac N8TT
--
J. McLaughlin; Michigan, USA
Home:
#13 - December 23rd 08, 12:20 PM - posted to rec.radio.amateur.antenna - NEC Evaluations


Richard Clark wrote:
...what you would deem to be your best accuracy compared
to an absolute standard, or to a relative standard (instrumentation,
not computational).

______________

You weren't asking me, but you may still be interested in the link
below, which leads to a good presentation of this by NIST. A table
on Page 3 there shows a measurement uncertainty at the NIST test
facilities of ±1/4 to ±1 dB, depending on the DUT and the frequency
range.

Field intensity measurements made using uncontrolled path conditions
are more a measure of the propagation environment and the pattern/
location of the receive antenna than they are of the absolute
performance of the transmitting antenna system. Such measurement
errors can be gross, and difficult to quantify.

http://ts.nist.gov/MeasurementServic...d/im-34-4b.pdf

RF
#14 - December 23rd 08, 03:25 PM - posted to rec.radio.amateur.antenna - NEC Evaluations

Dear Richard Fry:
Thank you for the 1985 reference, which I had not seen before. Too many
IEEE groups exist!

A student and I built a closed-loop system much like that shown in
Figure 15 and used it by the mid-1970s to subject DUTs to at least 200
V/m at frequencies up to about 200 MHz. This was for automated evaluation
of the EMC of relatively small DUTs and was the prototype of a much larger
system, implemented by a major manufacturer, that allowed the testing of
entire cars. This was well before PCs, but after 488-bus signal sources and
wattmeters were available. We had confidence to about 1 dB because of the
tight correlation with a short voltage probe extending into the TEM cell.
Unfortunately, the small effective volume of the TEM cell precluded
measurements of antennas. The large room at NBS allowed them to measure
antennas, and I saw them measuring a large UHF antenna with a near-field
probe in the early 1970s.

Jumping to HF antennas of 0.5 WL size or more: I am convinced that, even
with a helicopter being used to measure a pattern, one can have more
confidence in the result of the intelligent use of NEC4 than in any
measurements.

The measurements made in the late 50s (to gain confidence in VHF
propagation models) involved cherry-picking the paths to correspond with
the goal of understanding propagation of possible interference into the
radio-astronomy site. They also involved averaging a series of measurements
taken within a few meters of each other. The measurement sites were all
very rural and free of significant reflecting surfaces.

Warm regards, Mac N8TT
--
J. McLaughlin; Michigan, USA
Home:
#15 - December 23rd 08, 05:40 PM - posted to rec.radio.amateur.antenna - NEC Evaluations

On Tue, 23 Dec 2008 04:20:26 -0800 (PST), Richard Fry
wrote:

A table
on Page 3 there shows a measurement uncertainty at the NIST test
facilities of ±1/4 to ±1 dB, depending on the DUT and the frequency
range.


Actually, ±1 dB would be the most likely instrumentation error (±¼ dB
could never be achieved); matching error would compound that; the
antenna would add another ±1 dB; and the path would scramble that
further if the measurement is not performed in an anechoic chamber or
on a calibrated range.

Mac's test system (the fig. 15 arrangement he reports in other
correspondence) would accumulate up to the several dB he reported
earlier. It would exhibit very good relative accuracy, but absolute
accuracy would be off by several dB, as he has already offered in prior
correspondence. Path problems would have to be hammered out on their own.

73's
Richard Clark, KB7QHC


#16 - December 23rd 08, 06:29 PM - posted to rec.radio.amateur.antenna - NEC Evaluations

Richard Clark wrote:
On Tue, 23 Dec 2008 04:20:26 -0800 (PST), Richard Fry
wrote:

A table
on Page 3 there shows a measurement uncertainty at the NIST test
facilities of ±1/4 to ±1 dB, depending on the DUT and the frequency
range.


Actually, ±1 dB would be the most likely instrumentation error (±¼ dB
could never be achieved); matching error would compound that; the
antenna would add another ±1 dB; and the path would scramble that
further if the measurement is not performed in an anechoic chamber or
on a calibrated range.


At HF and VHF, you should be able to do power measurements to a tenth of
a dB, with moderate care. (obviously, you'd have to deal with
measuring the mismatch, etc.). A run of the mill power meter should
give you 5% accuracy (0.2 dB) without too much trouble. A 8902
measuring receiver can do substantially better. Even at microwave
frequencies, better than 0.1 dB uncertainty (2 sigma) is possible with
free-space measurements (e.g. from an orbiting satellite to a ground
station), with all the uncertainties stacked up (atmospheric, radome
loss, antenna, electronics, etc.), although this is decidedly non-trivial.

As mentioned, site effects or chamber uncertainties might contribute more.

A typical anechoic chamber might have -20dB worst case reflections from
the walls, and -40dB as more typical. A single scattering path will
then contribute an uncertainty (worst case) of 1%, or 0.04 dB, although
modern measurement technique (using multiple probe positions) can
quantify this error and remove it, assuming the UUT and equipment are
stable enough over the measurement period.
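
For anyone who wants to reproduce that kind of estimate, here is a minimal
sketch in Python (nothing assumed beyond the reflection levels quoted
above) of the error one stray path introduces into a received-power
reading, shown both as a simple power sum and as the coherent worst case
where the direct and reflected fields add or cancel:

    import math

    def ripple_bounds_db(reflection_db):
        """Error in a received-power reading caused by one stray path
        that is `reflection_db` below the direct path."""
        r = 10 ** (reflection_db / 20.0)          # field (voltage) ratio
        incoherent = 10 * math.log10(1 + r * r)   # reflected power simply adds
        coherent_hi = 20 * math.log10(1 + r)      # fields add in phase
        coherent_lo = 20 * math.log10(1 - r)      # fields cancel
        return incoherent, coherent_hi, coherent_lo

    for level_db in (-20, -40):
        inc, hi, lo = ripple_bounds_db(level_db)
        print(f"{level_db} dB path: power-sum {inc:+.3f} dB, "
              f"coherent {hi:+.3f}/{lo:+.3f} dB")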

The TEM cell is nice because it gives you a way to create a calibrated
field to characterize your probe.


Mac's test system (the fig. 15 arrangement he reports in other
correspondence) would accumulate up to the several dB he reported
earlier. It would exhibit very good relative accuracy, but absolute
accuracy would be off by several dB, as he has already offered in prior
correspondence. Path problems would have to be hammered out on their own.

73's
Richard Clark, KB7QHC

#17 - December 23rd 08, 08:39 PM - posted to rec.radio.amateur.antenna - NEC Evaluations

On Tue, 23 Dec 2008 10:29:05 -0800, Jim Lux
wrote:

At HF and VHF, you should be able to do power measurements to a tenth of
a dB, with moderate care. (obviously, you'd have to deal with
measuring the mismatch, etc.). A run of the mill power meter should
give you 5% accuracy (0.2 dB) without too much trouble. A 8902
measuring receiver can do substantially better.


Nothing astonishes me more than the simple dash-off notes that claim
power measurement is a snap. I can well imagine, Jim, that you don't
do these measurements with traceability to the limits you suggest.

For the other readers:

We will specifically start with the 8902 measuring receiver. A
premier instrument indeed, but it falls FAR short of actually
measuring power without a considerable body of necessary
instrumentation (well illustrated by Mac's observation found in that
fig. 15 already cited). Most claimants peer at one line in a spec
sheet and figure that is the end of the discussion. Glances elsewhere
begin to reveal the actual accuracy obtainable through the chain of
errors that accumulate. For instance, with a 1 mW input in the VHF
band:

Internal power standard: ±1.2% and we have yet to look at the
measurement head's error contribution. The so-called "run of the mill
power meters" are drawing close, too close to this precision set's
expensive quality such that their estimation of 5% is already suspect
quality.

Scale error demands a full-scale indication simply to keep the error
contribution down to 0.1% (a 1/10th-scale indication would jump that
error to 1%) ±1 digit.

Input SWR with the HP 11792 is rated at 1.15 at worst (I've measured
far better matches); against that same source's 1.05 SWR this adds 0.4%
error. If you are not measuring power at the specific frequency of
the internal source, add more error, averaging upwards toward 2%.

Things build up from here, for just one instrument and its RF head, to a
worst-case valuation of 5% to 6% error. This further undermines the
vaunted 5% accuracy claimed for "run of the mill power meters".

Of course, in this computation of error, neophytes are tempted to
employ the RMS estimation. This clearly reveals those untested in the
arts, where bench techs who do their best understand that the RSS
estimation is what pays their salary. Taking a step above skilled bench
work to that of a calibration lab, you buy all the error at face
value (hence the term "worst case" used by the professionals employed
in this art).
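
For readers who want to see how much the two accumulation rules differ,
here is a minimal sketch in Python; the 1.2%, 0.1%, and 0.4% figures are
the terms cited above, and any further contributors would simply be added
to the list:

    import math

    # Illustrative error terms, in percent, taken from the figures above.
    terms = {
        "internal power standard": 1.2,
        "scale error (full-scale reading)": 0.1,
        "mismatch against the internal source": 0.4,
    }

    worst_case = sum(terms.values())                      # cal-lab practice: face value
    rss = math.sqrt(sum(v * v for v in terms.values()))   # root-sum-square estimate

    def pct_to_db(pct):
        # express a percent power uncertainty as dB
        return 10 * math.log10(1 + pct / 100.0)

    print(f"worst case: {worst_case:.2f}%  ({pct_to_db(worst_case):.3f} dB)")
    print(f"RSS:        {rss:.2f}%  ({pct_to_db(rss):.3f} dB)")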

THEN we turn our attention to the rest of the bench, which holds the
remaining components that support the measurement of a power level, and
accuracy begins to slide drastically. I've been there, and I've been
trained to reduce the variables - not an easy task, and one that the
march of time has NOT improved. Mismatch error climbs like the
Himalayas if you don't employ line conditioners (which bring their own
mismatch) and isolators (which bring their own mismatch) and so on
down the proverbial line towards the source being measured (that
antenna everyone knows has X amount of power coming from it).

For those who are stunned by this bajillion dollar solution giving
them 14% best accuracy (and RSS at that) results, confer with:
http://www.home.agilent.com/upload/c...EPSG085840.pdf
and observe the commentary for slide 36.

See if you can cook up a method that doesn't hammer you into the
ground. I can anticipate some:

1. Throw a box car of money at the problem;

2. Buy lab time at NIST;

3. Write a report that runs to book length (I've carried most of that
water by providing the link above) - or xerox the book that already
exists: "Microwave Theory and Applications," Stephen F. Adam;

4. Do it with precision components employing best practices to the
best achievable accuracy - you will need further instruction into best
practices, however;

5. Ignore reality.

Only the last two options are achievable by the ordinary Ham. To
claim that "someone else" can do it better and is thus achievable is
sophistry serving ego in an argument.

73's
Richard Clark, KB7QHC
#18 - December 31st 08, 04:50 PM - posted to rec.radio.amateur.antenna - NEC Evaluations

On Dec 23, 12:39 pm, Richard Clark wrote:
On Tue, 23 Dec 2008 10:29:05 -0800, Jim Lux
wrote:

At HF and VHF, you should be able to do power measurements to a tenth of
a dB, with moderate care. (obviously, you'd have to deal with
measuring the mismatch, etc.). A run of the mill power meter should
give you 5% accuracy (0.2 dB) without too much trouble. A 8902
measuring receiver can do substantially better.


Nothing astonishes me more than the simple dash-off notes that claim
power measurement is a snap. I can well imagine, Jim, that you don't
do these measurements with traceability to the limits you suggest.


In point of fact, I *do* make measurements like that, and as I said,
it requires "moderate care" and good technique and instrumentation. A
random diode measured with your $5 Harbor Freight DMM isn't going to
hack it. Neither is most of the stuff sold to hams. It is hardly a
"snap", but it *is* within the reach of someone at home with a lot of
time and care to substitute for expensive gear and calibrations
(basically, you have to do your own calibration).



For the other readers:

We will specifically start with the 8902 measuring receiver. A
premier instrument indeed, but it falls FAR short of actually
measuring power without a considerable body of necessary
instrumentation (well illustrated by Mac's observation found in that
fig. 15 already cited). Most claimants peer at one line in a spec
sheet and figure that is the end of the discussion. Glances elsewhere
begin to reveal the actual accuracy obtainable through the chain of
errors that accumulate. For instance, with a 1 mW input in the VHF
band:

Internal power standard: ±1.2% and we have yet to look at the
measurement head's error contribution. The so-called "run of the mill
power meters" are drawing close, too close to this precision set's
expensive quality such that their estimation of 5% is already suspect
quality.


Standard power measuring head on an Agilent power meter is better than
5% at HF, probably in the range of 1% for one head in comparison
measurements over a short time. The 8902 is sort of a special case,
but can do very accurate relative measurements. FWIW, the 8902
calibrates out the measurement head effects.



Scale error demands a full-scale indication simply to keep the error
contribution down to 0.1% (a 1/10th-scale indication would jump that
error to 1%) ±1 digit.

This oversimplifies a bit. Typically, you'll have some uncertainty
that is proportional to the signal measured (e.g. mismatch will affect
the signal the same way regardless of level) and some that is
absolute, independent of the signal level (e.g. the analog noise in
the voltmeter). As you say, bigger signals are easier to measure
precisely.. the real limiting factor is the accuracy with which you
know the attenuation of the attenuators you're using to get the steps.

With regard to mismatch, if you're interested in tenth dB accuracies,
you're going to have to measure the mismatch and account for it. It's
not that hard, just tedious. The typical power meter head doesn't
change its Z very much, so once you've measured YOUR head and keep
the data around, you're good to go for the future. (and do your tests
at the same temperature, don't use the head for a door stop, etc.)
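
A small sketch of that bookkeeping in Python, using nothing beyond the
textbook mismatch formulas: with only the magnitudes of the reflection
coefficients known, the phase is unknown and you carry the full mismatch
window; once the complex values have been measured, you can apply the
correction and keep only its (much smaller) residual uncertainty. The
figures correspond to an SWR 1.15 head against an SWR 1.05 source and are
illustrative only:

    import cmath
    import math

    def mismatch_window_db(mag_src, mag_load):
        """Mismatch uncertainty when only |Gamma| values are known (phase unknown)."""
        p = mag_src * mag_load
        return 20 * math.log10(1 + p), 20 * math.log10(1 - p)

    def mismatch_correction_db(gamma_src, gamma_load):
        """Correction that can be applied once the complex Gammas are measured."""
        return -20 * math.log10(abs(1 - gamma_src * gamma_load))

    # SWR 1.15 head (|Gamma| ~ 0.070) against an SWR 1.05 source (|Gamma| ~ 0.024)
    hi, lo = mismatch_window_db(0.024, 0.070)
    print(f"phase unknown: {hi:+.3f} / {lo:+.3f} dB window")

    # With (hypothetical) measured complex values, correct and move on:
    corr = mismatch_correction_db(0.024 * cmath.exp(1j * 0.5),
                                  0.070 * cmath.exp(-1j * 1.2))
    print(f"phase known: apply {corr:+.3f} dB, keep only its residual uncertainty")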

As far as calculating uncertainties.. you bet.. it's not just stacking
em up. But that's true of ANY precision measurement, so if one is
quoting better than half dB numbers (i.e. if you give any digits to
the right of the decimal point) one should be able to back it up with
the uncertainty analysis (which is all described on the NIST website
and in the tech notes). This isn't hard, it's just tedious. But the
whole thing about high quality amateur measurements is you're trading
off your time to do tedious extra measurements and analysis in
exchange for not sending a cal-lab a check.

The how to do it is all out there. What was "state of the art" for a
national laboratory in 1970 is fairly straightforward garage work
these days, and, a heck of a lot easier because you've got inexpensive
automation for making the measurements and inexpensive computer power
for doing the calibration calculations and uncertainty analysis.

The slide 36 discussion refers to measuring a signal at -110dBm, which
I would venture to say is well below the levels that most hams will be
interested in measuring. And, they are talking about where the source
Z is unconstrained. In a typical ham situation, these things probably
aren't the case. If you were interested in measuring, for instance,
the loss of a piece of coax or the output of a 0dBm buffer amplifier
to a tenth of a dB, that's a whole lot easier than a -110dBm signal
from some probe into a 8902. The context of this discussion was
making measurements of antennas, and for that, one can normally
arrange to have decent signal levels, etc. OR, one is interested in
relative measurements, rather than absolute calibration. It's a whole
lot easier to measure a 0.1 dB difference between two signals.

You suggested 5 alternatives:
See if you can cook up a method that doesn't hammer you into the
ground. I can anticipate some:

1. Throw a box car of money at the problem;
Or, throw some time at the problem; this is the classic ham tradeoff: "I don't have money, but I do have time." It's no different than grinding your own telescope mirrors, building your own Yagi or wire antenna, etc.



2. Buy lab time at NIST;
That's the money thing (and it doesn't require boxcar loads; perhaps a kilobuck or two, and for some folks it's worth it, although I can't see any amateur radio need. I can see doing it as part of a hobby involving precision, like the folks on time-nuts who operate multiple Cs clocks and build very high performance quartz oscillators for the thrill of getting to 1E-14 or 1E-16 Allan deviation. Folks who do home nuclear fusion also might avail themselves of pro cal services for their neutron detectors, because there isn't a convenient way of doing home cals, unlike for RF power, where it's at least possible)


3. Write a report that runs to book length (I've carried most of that
water by providing the link above) - or xerox the book that already
exists: "Microwave Theory and Applications," Stephen F. Adam;

Or any of a variety of sources. One doesn't need a book for this, but one does need some care in technique and some background knowledge. It's like reading John Strong's book on building scientific instruments (back in the 40s, one built one's physics experimental gear and calibrated it yourself)


4. Do it with precision components employing best practices to the
best achievable accuracy - you will need further instruction into best
practices, however;

5. Ignore reality.

---

#19 - January 3rd 09, 12:59 AM - posted to rec.radio.amateur.antenna - NEC Evaluations

On Wed, 31 Dec 2008 08:50:28 -0800 (PST), wrote:

On Dec 23, 12:39 pm, Richard Clark wrote:
Nothing astonishes me more than the simple dash-off notes that claim
power measurement is a snap. I can well imagine, Jim, that you don't
do these measurements with traceability to the limits you suggest.

In point of fact, I *do* make measurements like that,


I note you slip loose from the constraint of "traceability." Doing
measurements "like that" is vastly different in outcome and holds
accuracy claims like a sieve holds water.

moving on to some very telling points offered in rebuttal to obtaining
0.1dB accurate power determinations:
... it *is* within the reach of someone at home with a lot of
time and care to substitute for expensive gear and calibrations
(basically, you have to do your own calibration).

And again with:
(back in the 40s, one built one's physics experimental gear and calibrated it yourself)

where both reveal a disastrously circular logic of what could only be
called "self determination" with a very tenuous grasp of accuracy.

It is pleasing to the Ham community to appeal to self reliance, and
construction projects, but this doesn't bootstrap accuracy. I can see
any number of gaping holes where standards are lacking and ignored
entirely.

Yes, it is both commendable and achievable to obtain precision (or its
half-brother, resolution), but this is not the absolute accuracy of a
0.1dB power determination.

Standard power measuring head on an Agilent power meter is better than
5% at HF, probably in the range of 1% for one head in comparison
measurements over a short time. The 8902 is sort of a special case,
but can do very accurate relative measurements. FWIW, the 8902
calibrates out the measurement head effects.


I have already cited accuracies and errors that conflict with your
supposition. You are taking characteristics in isolation and citing
them as being representative of the whole scope of determination of
power to a stated accuracy. The single example of your stating:
probably in the range of 1% for one head in comparison
measurements over a short time.

is relative accuracy, not absolute accuracy, and the relative accuracy
point is arguable by your own language of "probably." Probability is
expressed in numbers in Metrology. Basically you have injected an
additional error into the mix. In my short recitation of the
accumulation of error, I omitted this discussion as your original
off-hand statement was far outweighed by the preponderance of error
from a single common example. I chose to allow the cited literature
to describe the complete formal METHOD of determination to reveal the
full scope of accumulations of error rather than recite them here.

Scale error demands a full-scale indication simply to keep the error
contribution down to 0.1% (a 1/10th-scale indication would jump that
error to 1%) ±1 digit.

This oversimplifies a bit.


I fail to see how this "simplicity" constitutes an objection that
nullifies the accumulation of errors.

the real limiting factor is the accuracy with which you
know the attenuation of the attenuators you're using to get the steps.


And that too has been covered.

With regard to mismatch, if you're interested in tenth dB accuracies,


I am interested in tenth dB accuracy, aren't you? Let's recall where
this began:
On Tue, 23 Dec 2008 10:29:05 -0800, Jim Lux wrote:

At HF and VHF, you should be able to do
power measurements to a tenth of a dB, with moderate care.


This statement has now been dismissed by your inclusion (supporting my
observation) of mismatch error - which you subsequently diminish:

you're going to have to measure the mismatch and account for it. It's
not that hard, just tedious.


Accounting for mismatch does not correct it. The error it contributes
remains. This is not an actuarial gimmick of Enron bookkeeping. The
off-hand hard/tedious baggage appears to be another objection without
substance.

The typical power meter head doesn't change its Z very much,


And yet it still is NOT the Z you would like it to be, except by some
margin of error. Change is not the issue, absolute value is. This
appears to be yet another manufactured objection that points out error
only to dismiss it with a cavalier diminution of "very much."
Metrology doesn't accept adjectives in place of measurables.

Let's return to the claim, however. In fact, a typical power meter
head DOES change its Z, and it is by this very physical reality that
it performs power determination. You may be relying on a specific and
rather atypical head to support your argument. As you offer nothing
in the way of your typical head's design to support another off-hand
comment, we will have to wait for that coverage.

so once you've measured YOUR head and keep
the data around, you're good to go for the future. (and do your tests
at the same temperature, don't use the head for a door stop, etc.)


No, you do not make traceable measurements. Your statement of
futurity is an illusion only. Within the context of HP's fine
craftsmanship, it is a fairly safe illusion, but not into the
unlimited future. The calibration cycle for an RF head AND its
reference source (two sources of error) is 3 to 6 months where that
calibration data would be amended and changed to follow the natural
variation in characteristics. A skilled bench tech might trust the
instrument out for several years for relative loss measurements, but
absolute power determinations will have long lost their credentials.

If you don't care for the credentials of absolute power determination,
then you shouldn't be arguing the precision of accuracy.

There is a world of difference between absolute and relative accuracy.

As far as calculating uncertainties.. you bet.. it's not just stacking
em up. But that's true of ANY precision measurement, so if one is
quoting better than half dB numbers (i.e. if you give any digits to
the right of the decimal point) one should be able to back it up with
the uncertainty analysis (which is all described on the NIST website
and in the tech notes). This isn't hard, it's just tedious.


This repetition of the hard/tedious mantra has the odd appearance of
an objection diminishing the importance of established procedures of
power determination. It reduces the profession of precision
electronics to the repetitive motions of a trained monkey.

By their parts, hard and tedious, the METHODS of making accurate power
determination may lend that appearance to the unskilled worker. The
METHODS of making accurate power determination have a minimum
architecture that defines the limits of accuracy. The METHODS of
making accurate power determination have a minimum number of steps to
keep error low. Substituting a lot of architecture and adding a lot
of steps necessarily increases error. The optimum METHOD and the
ad-hoc method may each appear to be simple but tedious, but the
quality of their results, and hence their accuracy, are not equal by
that crude metric of adjectives. Throwing more gear and procedure at
the problem, simply because it is freely available (rather than costly
but necessary), compounds error outrageously.

To suggest that the HP8902 can be replaced by the combination of
accumulated obsolete gear and elbow grease invites profound problems
if you don't know the limitation of the METHOD. The so-called "hard
and tedious" part of it contains a considerable body of knowledge and
experience. I have never encountered its discussion in this group
beyond these two adjectives and few correspondents rarely belly up to
the bench to even measure Q (which has its own body of instruction).

But the
whole thing about high quality amateur measurements is you're trading
off your time to do tedious extra measurements and analysis in
exchange for not sending a cal-lab a check.


This, again, is an appeal to substituting just-plain-hard-work for
accuracy. You fail to show any correlation to standards or their
necessity. The effort you describe may well pay off in superlative
precision; again, investing in resolution without paying for the cost
of accuracy.

The how to do it is all out there. What was "state of the art" for a
national laboratory in 1970 is fairly straightforward garage work
these days, and, a heck of a lot easier because you've got inexpensive
automation for making the measurements and inexpensive computer power
for doing the calibration calculations and uncertainty analysis.


This is nothing more than sentimentality.

The slide 36 discussion refers to measuring a signal at -110dBm, which
I would venture to say is well below the levels that most hams will be
interested in measuring. And, they are talking about where the source
Z is unconstrained. In a typical ham situation, these things probably
aren't the case.


This appears to be yet another objection: "probably aren't the case."

The METHOD dominates power determination accuracy. This is a fact of
life. The METHOD mandates the level at which it will be measured.
This is not about personal choice and preferences. More than several
instances offered in the reference as an example of error analysis is
described as a "special case." I see none in the literature that
returns us to a sample antenna in an high power RF field, but I see
plenty of METHODS that very nearly approximate it, and substantiate
considerable (compared to 0.1dB) accumulations of error.

If you were interested in measuring, for instance,
the loss of a piece of coax or the output of a 0dBm buffer amplifier
to a tenth of a dB, that's a whole lot easier than a -110dBm signal
from some probe into a 8902. The context of this discussion was
making measurements of antennas, and for that, one can normally
arrange to have decent signal levels, etc. OR, one is interested in
relative measurements, rather than absolute calibration. It's a whole
lot easier to measure a 0.1 dB difference between two signals.


This was your choice of instrumentation; this was your choice of power
determination. I have provided an incontrovertible example of how
your off-hand assertion failed from your own choices. It doesn't get
remarkably better however you decide to amend the conditions and those
amendments certainly won't come close to your original off-hand
observation of power determination of 0.1dB or less error.

For my simple analysis, I accepted your choice of instrument, I
selected the optimal power level, I had to infer your choice of band -
selected to favor you in the least error. I then compounded the
errors with a generous choice towards the LESSER RSS error that still
resolved to more than 0.1dB error in power determination. This
encompassed nothing other than the Instrument without discussion of
the necessary components that supported the mystery power coming from
the antenna in question. Their inclusion could have been fudged
outrageously to devastate the 0.1dB power determination by huge leaps.
I chose, instead, to travel a different path seeing that no one was
going to embrace that tar baby.

The analysis in Slide 36 illustrates a more complete METHOD and you
complain of range. It is but one of many METHODS. This METHOD
encompasses the entire scope of determination, and METHOD defines the
limits of accuracy. You have not offered us a better METHOD that thus
reduces the limits of accuracy to 0.1dB power determination or better
as your off-hand comment promised.

* * * * * * * * * * * * * * *

At this point, as an aside, I will offer some subtle, but important
definitions from the field of Metrology that weigh heavily on making
highly resolved, precise, accurate measurements. First off, those
very words I ended the last sentence with (and more):

Resolution - this is the number of digits in your determination.

Precision - this is also the resolution, but to the extent that it is
repeatable. If you take 10 measurements of power and get readings
that repeat for only 2 of the 7 digits of resolution you have, your
precision is only 3 places at best. Resolution does not equal
quality. Basically, the remaining 4 places of resolution are noise,
and their display is added for the entertainment of intellectual
tourists who do not understand science.

Accuracy - this is precision, but only where it agrees with the actual
value. That is to say, you can successfully get 10 measurements that
repeat their determinations to 0.5%, and yet be 100% off of the actual
value. Your resolution is 0.1 ppm, your precision is 1%, and your
accuracy may be -50/+100%. The resolution is now certified crapola.

Relative Accuracy - this is the same as the accuracy statement above,
where you are comparing two measurables (two different powers, which
then becomes attenuation). It is typically the accuracy of ratios and
substitution. It is easier to achieve to higher resolution and your
standard need not be traceable, but it should be stable. If you don't
care what the actual value is, but only how much it changed, you may
find you can claim 0.1ppm error (insofar as this measurement is
repeatable to the same digit). There is a large body of
instrumentation that so qualifies.

Absolute Accuracy - this is the same as Relative Accuracy against a
known standard. It is typically the accuracy of determining an
unknown against a physical or electrical standard such as the Ampere,
Volt, Farad, Meter (and so on), and by abstraction, Power (you have to
measure two things where the accuracy has to degrade by virtue of
additional variables).
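
To put numbers behind those distinctions, here is a small simulated
example in Python (every value is invented purely for illustration): an
instrument that displays seven digits, repeats to a few parts per
thousand, and carries a gross calibration bias.

    import random
    import statistics

    random.seed(1)
    true_value = 1.000000      # the actual quantity, in watts, say
    bias = 0.25                # a gross calibration error: reads 25% high
    noise_sigma = 0.002        # repeatability of the instrument

    readings = [round(true_value * (1 + bias) + random.gauss(0, noise_sigma), 7)
                for _ in range(10)]

    print("readings (7 digits of resolution):", readings)
    print(f"precision (spread of readings): {statistics.stdev(readings):.4f}")
    print(f"accuracy  (offset from actual): {statistics.mean(readings) - true_value:+.4f}")
    # Plenty of digits and decent repeatability -- yet 25% away from the actual value.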

Some may appeal that their Relative Accuracy has the merit of Absolute
Accuracy - until you ask them to measure the actual, absolute value of
their ad-hoc standard. I have built primary standards and measured
them to 7 places. Over several years they migrated in value by 2 of
those least significant digits, but only in comparison to standards
that had "aged" and been calibrated at a national primary standards
laboratory. By themselves, I could have fooled myself (and perhaps
others less sophisticated) that they were absolutely accurate to the
extent of the number of digits my instruments could resolve.

73's
Richard Clark, KB7QHC
#20 - January 4th 09, 12:50 AM - posted to rec.radio.amateur.antenna - NEC Evaluations

On Jan 2, 4:59 pm, Richard Clark wrote:
On Wed, 31 Dec 2008 08:50:28 -0800 (PST), wrote:
On Dec 23, 12:39 pm, Richard Clark wrote:
Nothing astonishes me more than the simple dash-off notes that claim
power measurement is a snap. I can well imagine, Jim, that you don't
do these measurements with traceability to the limits you suggest.


In point of fact, I *do* make measurements like that,


I note you slip loose from the constraint of "traceability." Doing
measurements "like that" is vastly different in outcome and holds
accuracy claims like a sieve holds water.


Traceability only applies when you need absolute measurements, and
there you will need something as a standard, since most RF power
measurements are really more of a "transfer" measurement. Since power
is basically an energy measurement, all manner of calorimetric
approaches will work (or radiometric, if you're working in
microwaves); it then comes down to how accurately you can measure and
hold temperature. It comes back to what's a reasonable thing for a
ham to have that can serve as a local standard. Time/frequency are
clearly the easiest to get to high precision (1E-10 is straightforward
these days with a GPS-disciplined oscillator); voltage is a bit
tougher: 1 ppm would be very, very good for hams, and 1e-4 seems
plausible with decent high-quality voltage references that cost $100.
Temperature to 0.1 degree should be doable, so that gives you a part
in 3000, roughly.
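
As a rough cross-check on those figures (the fractional uncertainties are
the ones quoted above; the conversion to dB is just arithmetic, and
treating each fraction directly as a power uncertainty is a
simplification, since a voltage-derived fraction roughly doubles in power
terms):

    import math

    standards = {
        "GPS-disciplined oscillator (frequency)": 1e-10,
        "hobby-grade voltage reference": 1e-4,
        "temperature held to 0.1 deg (part in 3000)": 1.0 / 3000,
    }

    for name, frac in standards.items():
        db = 10 * math.log10(1 + frac)   # equivalent power uncertainty in dB
        print(f"{name:45s} {frac:.1e} -> {db:.6f} dB")

    target_frac = 10 ** (0.1 / 10) - 1   # what a 0.1 dB goal allows
    print(f"0.1 dB goal allows a fractional power error of {target_frac:.3f} (~2.3%)")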

A ham seriously interested in 0.1 dB measurements will probably be
able to scrounge up something to use as a transfer standard and
scrounge up a way to get it calibrated. For instance, Noise/Com used
to offer a discounted calibration service for sources based on their
noise diodes. Once you've got a standard, if you take care of it, then
you can use it for lots of things.

The original question had to do with accuracy of measurements vs NEC,
and those would be relative, and, I maintain, not too tough to do to
0.1 dB, because you're making comparison measurements with the same
sensor, at pretty much the same level, etc.




moving on to some very telling points offered in rebuttal to obtaining
0.1dB accurate power determinations:
... it *is* within the reach of someone at home with a lot of
time and care to substitute for expensive gear and calibrations
(basically, you have to do your own calibration).

And again with:
(back in the 40s, one built one's physics experimental gear and calibrated it yourself)


where both reveal a disastrously circular logic of what could only be
called "self determination" with a very tenuous grasp of accuracy.


All calibrations, whether in a cal lab or your garage, start somewhere.
It's a question of how much trouble you are going to go to for that
first standard. Do you do it calorimetrically and use the water triple
point and boiling point as references? Do you trust a good calibrated
DVM? 0.1dB means about 2% in power, which is not exactly gnat's-eyelash
precision (e.g. measuring temperature to 1 degree C out of a change of
100 degrees is 1%).



Standard power measuring head on an Agilent power meter is better than
5% at HF, probably in the range of 1% for one head in comparison
measurements over a short time. The 8902 is sort of a special case,
but can do very accurate relative measurements. FWIW, the 8902
calibrates out the measurement head effects.


I have already cited accuracies and errors that conflict with your
supposition. You are taking characteristics in isolation and citing
them as being representative of the whole scope of determination of
power to a stated accuracy. The single example of your stating:
probably in the range of 1% for one head in comparison
measurements over a short time.


Sure.. if you are concerned about 0.1dB, then you're going to need to
calculate for yourself, and not take an offhand assertion. That said,
I still think that 0.1dB is reasonable after you take into account all
the uncertainties (and eliminate things that add to the error.. mate/
demate, temperature changes, equipment changes, etc.).

I am interested in tenth dB accuracy, aren't you? Let's recall where
this began:

On Tue, 23 Dec 2008 10:29:05 -0800, Jim Lux wrote:


At HF and VHF, you should be able to do
power measurements to a tenth of a dB, with moderate care.


This statement has now been dismissed by your inclusion (supporting my
observation) of mismatch error - which you subsequently diminish:

you're going to have to measure the mismatch and account for it. It's
not that hard, just tedious.


Accounting for mismatch does not correct it. The error it contributes
remains. This is not an actuarial gimmick of Enron bookkeeping. The
off-hand hard/tedious baggage appears to be another objection without
substance.


No.. if you KNOW the mismatch, it's not an uncertainty anymore. There
is an uncertainty in the amount of mismatch, but in a relative
measurement (e.g. received power from a probe on an antenna range..
the original question) the mismatch doesn't change from measurement to
measurement, so it doesn't contribute uncertainty to the measurement.
Likewise, if you actually measure the reflected power (e.g. in a VNA)
then you don't have to use the "power uncertainty due to mismatch"
equation which assumes that the reflection coefficient is of unknown
angle. Yes, the reflected power measurement will have an uncertainty,
but that is a smaller contributor to the overall uncertainty than the
"unknown phase of reflection" uncertainty.



The typical power meter head doesn't change its Z very much,


And yet it still is NOT the Z you would like it to be, except by some
margin of error. Change is not the issue, absolute value is. This
appears to be yet another manufactured objection that points out error
only to dismiss it with a cavalier diminution of "very much."
Metrology doesn't accept adjectives in place of measurables.


For a relative measurement the source and load Zs are constant, and
whatever mismatch there is will be the same for all measurements (e.g.
in an IF substitution measurement, it's the Z looking into the
measurement system's attenuator or amplifier). If you're trying to
measure the output of a source with varying Z, then, of course, the
mismatch will affect the net amount of power crossing the reference
plane into the sensor, and you'll need to account for that. But there are lots
and lots of cases where a ham might make measurements to 0.1dB where
the source Z is constant. (say, making noise temperature measurements
of the sun with an antenna, or measuring the received signal from a
distant transmitter.. same antenna, same physical location, same
receiving system.. the measurement system isn't changing)


Let's return to the claim, however. In fact, a typical power meter
head DOES change its Z, and it is by this very physical reality that
it performs power determination. You may be relying on a specific and
rather atypical head to support your argument. As you offer nothing
in the way of your typical head's design to support another off-hand
comment, we will have to wait for that coverage.


The datasheet for the head gives this. If you're looking at the
thermistor/thermocouple mount style head, the Z looking into the head
is basically that of the load resistor, which, if held at a constant
temperature (constant = within a few degrees), I doubt changes more
than a fraction of a percent. A diode head (like the 8481, 8487, etc.
for HP/Agilent meters) is also going to be pretty good. Agilent
claims the increase in uncertainty for ALL causes over an extended
temperature range (0-50) versus the specified 25 +/- 5 is something like
0.9%. Obviously, if you want to dot the i's and cross the t's, then you
can actually measure it, but you'd only need to characterize it once
(that's the moderate-care thing: a lot of good metrology is just
record keeping).

There's also a change in Z with frequency (an issue if your transfer
cal standard is at a different frequency than your measurement
frequency; not the case in a relative measurement on an antenna
range), but again, you can either take the worst case in the data
sheet, or measure it yourself. That's sort of the difference between
modern VNAs and old style measurements. The modern VNA uses a set of
cal standards that has properties determined by its mechanical
construction (e.g. short, open, thru) and does the arithmetic for
you. Even the $1000 N2PK and TAPR VNAs do this. As long as you're
at the same temperature, it should be good.
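
As a sketch of the arithmetic those instruments do for you, here is a
generic three-term, one-port correction in Python. The short, open, and
load are treated as ideal (-1, +1, 0), which a real cal-kit definition
refines, and the raw readings are invented for illustration:

    import numpy as np

    def solve_one_port(gm_short, gm_open, gm_load):
        """Solve the three error terms from raw readings of ideal standards.

        Model: Gm = e00 + Ga*(e10e01 - e00*e11) + Ga*Gm*e11, linear in the unknowns.
        """
        A = np.array([[1, -1, -gm_short],   # short: Ga = -1
                      [1, +1, +gm_open],    # open:  Ga = +1
                      [1,  0,  0]],         # load:  Ga =  0
                     dtype=complex)
        b = np.array([gm_short, gm_open, gm_load], dtype=complex)
        e00, d, e11 = np.linalg.solve(A, b)
        e10e01 = d + e00 * e11
        return e00, e11, e10e01

    def correct(gm_dut, e00, e11, e10e01):
        """Invert the error model to recover the DUT's actual reflection coefficient."""
        return (gm_dut - e00) / (e10e01 + e11 * (gm_dut - e00))

    # Hypothetical raw readings, purely for illustration:
    terms = solve_one_port(-0.95 + 0.02j, 0.97 + 0.01j, 0.01 - 0.02j)
    print(correct(0.30 + 0.10j, *terms))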


so once you've measured YOUR head and keep
the data around, you're good to go for the future. (and do your tests
at the same temperature, don't use the head for a door stop, etc.)


No, you do not make traceable measurements. Your statement of
futurity is an illusion only. Within the context of HP's fine
craftsmanship, it is a fairly safe illusion, but not into the
unlimited future. The calibration cycle for an RF head AND its
reference source (two sources of error) is 3 to 6 months where that
calibration data would be amended and changed to follow the natural
variation in characteristics. A skilled bench tech might trust the
instrument out for several years for relative loss measurements, but
absolute power determinations will have long lost their credentials.


I think that reasonable folks could differ on the concept of "required
calibration cycle" and "aging life"... It's not like the thermocouple
or load resistor in a power meter head has an aging process that
causes it to suddenly go out of cal after 6 months. More likely, the
cycle is a good blend of economics and the expected variation from a
typical rough and tumble bench environment. Most of the time, the
calibration cycle is more to make sure that the device hasn't
"broken". If your device is using something like a crystal
oscillator, then there IS an aging thing to worry about.

You have raised an interesting question, though, so I'll have to go
ask the folks at the cal lab to see if we have any long term data on,
say, a 848x head to see what sort of aging or changes there are.




This repetition of the hard/tedious mantra has the odd appearance of
an objection diminishing the importance of established procedures of
power determination. It reduces the profession of precision
electronics to the repetitive motions of a trained monkey.


No, I think you misunderstand "hard" in this context. It doesn't
require any special new thinking to do accurate power measurements.
The methodology is well known, as are the error sources, and the
evaluation of uncertainty. The "hard" part is reducing the
uncertainties (i.e. the equipment design) in the first place or in
choosing a measurement method that tends to cancel errors (e.g. Dicke
switch radiometers use the same sensor for both reference and unknown
measurement, eliminating sensor/sensor uncertainty, at the expense of
the uncertainty due to the switch). The tedious part is in being
careful, doing repeated measurements, controlling the environment, and
then grinding the math.


that crude metric of adjectives. Throwing more gear and procedure at
the problem, simply because it is freely available (rather than costly
but necessary), compounds error outrageously.


I'd say "may" compound error.


This, again, is an appeal to substituting just-plain-hard-work for
accuracy. You fail to show any correlation to standards or their
necessity. The effort you describe may well pay off in superlative
precision; again, investing in resolution without paying for the cost
of accuracy.


Or, perhaps, getting your accuracy from standards you DO have handy,
rather than relying on the instrument's internal transfer standard. An
example might be using thermal hot/cold loads to calibrate a
radiometer rather than a calibrated diode noise source (where the
diode was calibrated somewhere else against a thermal standard). If
you don't have the diode (or the wherewithal to send it to NIST for
calibration on their radiometer), then perhaps hard work and time can
get you a comparison against something you CAN measure accurately
(boiling LN2, boiling water, etc.).
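
A minimal sketch of the hot/cold-load arithmetic implied there (the
classic Y-factor method; the load temperatures and power readings below
are placeholders, not measured data):

    def receiver_noise_temp(p_hot, p_cold, t_hot, t_cold):
        """Receiver noise temperature from two thermal loads (Y-factor method)."""
        y = p_hot / p_cold                      # linear power ratio, not dB
        return (t_hot - y * t_cold) / (y - 1)

    # Placeholder numbers: boiling-water (373 K) and boiling-LN2 (77 K) loads.
    t_rx = receiver_noise_temp(p_hot=1.45e-12, p_cold=0.62e-12,
                               t_hot=373.0, t_cold=77.0)
    print(f"receiver noise temperature ~ {t_rx:.0f} K")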


Z is unconstrained. In a typical ham situation, these things probably
aren't the case.


This appears to be yet another objection: "probably aren't the case."


The original post had to do with comparing measured antenna patterns
against NEC models. That IS the case there.




This was your choice of instrumentation; this was your choice of power
determination. I have provided an incontrovertible example of how
your off-hand assertion failed from your own choices. It doesn't get
remarkably better however you decide to amend the conditions and those
amendments certainly won't come close to your original off-hand
observation of power determination of 0.1dB or less error.


I note that a later page in the same presentation shows absolute power
measurement worst case uncertainty at 30 MHz of 0.02dB over a power
range of -10 to -70 dBm (slide 47).. I just picked the 8902
arbitrarily as an example of something that I know can do relative
measurements to this sort of accuracy (as opposed to, say, a Bird 43
watt meter, which cannot) to refute your original statement
(paraphrased) that measurements to better than 0.5 dB were
impractical.

Some may appeal that their Relative Accuracy has the merit of Absolute
Accuracy - until you ask them to measure the actual, absolute value of
their ad-hoc standard. I have built primary standards and measured
them to 7 places. Over several years they migrated in value by 2 of
those least significant digits, but only in comparison to standards
that had "aged" and been calibrated at a national primary standards
laboratory. By themselves, I could have fooled myself (and perhaps
others less sophisticated) that they were absolutely accurate to the
extent of the number of digits my instruments could resolve.

--
So, you measured to 1E-7, and over several years, they changed by
1E-5? That's a whole heck of a lot better than 1E-2 (which is what
0.1dB implies). Those 3 orders of magnitude are why I think it's
reasonable and plausible for hams to make measurements to 0.1dB.

--

http://hdl.handle.net/2014/18497 describes one measurement system I
designed, built and calibrated. That paper was aimed at a more
general audience and doesn't give much of the uncertainty analysis,
but it can be found in the several dissertations based on the
calibration station. There's nothing special in this system that
couldn't be duplicated by a ham for HF or VHF use. Certainly, the NIST
Type IV power meter (used in the system to measure the level of the
reference source used to calibrate the receiver chain) is something
eminently doable for ham use (See Larsen's paper from 1975) and mostly
depends on a "really good" DVM for its accuracy.

Jim