Thread: NEC Evaluations
  #19
January 3rd 09, 12:59 AM, posted to rec.radio.amateur.antenna
Richard Clark

On Wed, 31 Dec 2008 08:50:28 -0800 (PST), wrote:

On Dec 23, 12:39 pm, Richard Clark wrote:
Nothing astonishes me more than the simple dash-off notes that claim
power measurement is a snap. I can well imagine, Jim, that you don't
do these measurements with traceability to the limits you suggest.

In point of fact, I *do* make measurements like that,


I note you slip loose from the constraint of "traceability." Doing
measurements "like that" is vastly different in outcome and holds
accuracy claims like a sieve holds water.

Moving on to some very telling points offered in rebuttal to obtaining
0.1dB-accurate power determinations:
... it *is* within the reach of someone at home with a lot of
time and care to substitute for expensive gear and calibrations
(basically, you have to do your own calibration).

And again with:
(back in the 40s, one built one's physics experimental gear and calibrated it yourself)

where both reveal a disastrously circular logic of what could only be
called "self-determination" with a very tenuous grasp of accuracy.

It is pleasing to the Ham community to appeal to self-reliance and
construction projects, but this doesn't bootstrap accuracy. I can see
any number of gaping holes where standards are lacking or ignored
entirely.

Yes, it is both commendable and achievable to obtain precision (or its
half-brother, resolution), but this is not the absolute accuracy of a
0.1dB power determination.

Standard power measuring head on an Agilent power meter is better than
5% at HF, probably in the range of 1% for one head in comparison
measurements over a short time. The 8902 is sort of a special case,
but can do very accurate relative measurements. FWIW, the 8902
calibrates out the measurement head effects.


I have already cited accuracies and errors that conflict with your
supposition. You are taking characteristics in isolation and citing
them as being representative of the whole scope of determination of
power to a stated accuracy. The single example of your stating:
probably in the range of 1% for one head in comparison
measurements over a short time.

is relative accuracy, not absolute accuracy, and the relative accuracy
point is arguable by your own language of "probably." Probability is
expressed in numbers in Metrology. Basically you have injected an
additional error into the mix. In my short recitation of the
accumulation of error, I omitted this discussion as your original
off-hand statement was far outweighed by the preponderance of error
from a single common example. I chose to allow the cited literature
to describe the complete formal METHOD of determination to reveal the
full scope of accumulations of error rather than recite them here.
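
To put numbers to those percentages, a back-of-the-envelope sketch of
my own, using nothing but the figures you quoted:

    # Convert a percent error in a power reading to its dB equivalent.
    import math

    def pct_to_db(pct):
        # Worst-case dB error for a reading that is high by pct percent.
        return 10 * math.log10(1 + pct / 100.0)

    print(round(pct_to_db(5), 3))   # ~0.212 dB for the "better than 5%" spec
    print(round(pct_to_db(1), 3))   # ~0.043 dB for the "probably 1%" figure

The 5% absolute specification alone is worth roughly 0.2dB, twice the
promised budget, before a single additional error term is admitted.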

Scale error demands a full-scale indication simply to keep the error
contribution down to 0.1% ±1 digit (a 1/10th-scale indication would
jump that error to 1%).

This oversimplifies a bit.


I fail to see how this "simplicity" constitutes an objection that
nullifies the accumulation of errors.
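
For the reader following along, the arithmetic behind that scale-error
statement (a sketch assuming a meter specified, as is usual, as a
percentage of full scale plus a digit):

    # A fixed percent-of-full-scale specification, restated as a
    # percentage of the actual reading.
    def scale_error_pct_of_reading(reading, full_scale, spec_pct_of_fs=0.1):
        return spec_pct_of_fs * full_scale / reading

    print(scale_error_pct_of_reading(1.0, 1.0))   # full-scale reading:  0.1%
    print(scale_error_pct_of_reading(0.1, 1.0))   # 1/10-scale reading:  1.0%
    # The +/-1 digit term rides on top of this and likewise grows, as a
    # fraction of the reading, the farther down the scale the needle sits.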

the real limiting factor is the accuracy with which you
know the attenuation of the attenuators you're using to get the steps.


And that too has been covered.

With regard to mismatch, if you're interested in tenth dB accuracies,


I am interested in tenth dB accuracy, aren't you? Let's recall where
this began:
On Tue, 23 Dec 2008 10:29:05 -0800, Jim Lux wrote:

At HF and VHF, you should be able to do
power measurements to a tenth of a dB, with moderate care.


This statement has now been dismissed by your inclusion (supporting my
observation) of mismatch error - which you subsequently diminish:

you're going to have to measure the mismatch and account for it. It's
not that hard, just tedious.


Accounting for mismatch does not correct it. The error it contributes
remains. This is not an actuarial gimmick of Enron bookkeeping. The
off-hand hard/tedious baggage appears to be another objection without
substance.

The typical power meter head doesn't change its Z very much,


And yet it still is NOT the Z you would like it to be, except to within
some margin of error. Change is not the issue; absolute value is. This
appears to be yet another manufactured objection that points out error
only to dismiss it with a cavalier diminution of "very much."
Metrology doesn't accept adjectives in place of measurables.

Let's return to the claim, however. In fact, a typical power meter
head DOES change its Z, and it is by this very physical reality that
it performs power determination. You may be relying on a specific and
rather atypical head to support your argument. As you offer nothing
in the way of your typical head's design to support another off-hand
comment, we will have to wait for that coverage.
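
For the reader who wants a number in place of "very much," this is how
the pair of reflection coefficients sets the mismatch uncertainty (a
textbook sketch; the VSWR figures below are illustrative, not
measurements of anyone's head):

    import math

    def mismatch_uncertainty_db(gamma_source, gamma_load):
        # Upper and lower mismatch uncertainty limits, in dB, for power
        # delivered from a source into a load such as a power-meter head.
        m = abs(gamma_source) * abs(gamma_load)
        return 20 * math.log10(1 + m), 20 * math.log10(1 - m)

    # Illustrative values only: a source at 1.5:1 VSWR (|Gamma| = 0.20)
    # and a head at 1.15:1 VSWR (|Gamma| = 0.07).
    upper, lower = mismatch_uncertainty_db(0.20, 0.07)
    print(round(upper, 2), round(lower, 2))   # about +0.12 dB and -0.12 dB

Two perfectly respectable VSWRs bracket more than the entire 0.1dB
budget before the head's calibration factor or the meter's own
uncertainty is even considered.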

so once you've measured YOUR head and keep
the data around, you're good to go for the future. (and do your tests
at the same temperature, don't use the head for a door stop, etc.)


No, you do not make traceable measurements. Your statement of
futurity is an illusion only. Within the context of HP's fine
craftsmanship, it is a fairly safe illusion, but not into the
unlimited future. The calibration cycle for an RF head AND its
reference source (two sources of error) is 3 to 6 months, over which
that calibration data would be amended and changed to follow the
natural variation in characteristics. A skilled bench tech might trust
the instrument for several years of relative loss measurements, but
absolute power determinations will have long lost their credentials.

If you don't care for the credentials of absolute power determination,
then you shouldn't be arguing the precision of accuracy.

There is a world of difference between absolute and relative accuracy.

As far as calculating uncertainties.. you bet.. it's not just stacking
em up. But that's true of ANY precision measurement, so if one is
quoting better than half dB numbers (i.e. if you give any digits to
the right of the decimal point) one should be able to back it up with
the uncertainty analysis (which is all described on the NIST website
and in the tech notes). This isn't hard, it's just tedious.


This repetition of the hard/tedious mantra has the odd appearance of
an objection diminishing the importance of established procedures of
power determination. It reduces the profession of precision
electronics to the repetitive motions of a trained monkey.

By their parts, hard and tedious, the METHODS of making accurate power
determination may lend that appearance to the unskilled worker. The
METHODS of making accurate power determination have a minimum
architecture that defines the limits of accuracy. The METHODS of
making accurate power determination have a minimum number of steps to
keep error low. Substituting a lot of architecture and adding a lot
of steps necessarily increases error. The optimum METHOD and the
ad-hoc method may each appear to be simple but tedious, but the
quality of their results, and hence their accuracy, are not equal by
that crude metric of adjectives. Throwing more gear and procedure at
the problem because it is freely available (rather than costly but
necessary) compounds error outrageously.
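
Since "not just stacking em up" has been raised, here is the difference
between the two common tallies (a sketch; the individual terms are
placeholders, not a budget for any particular instrument):

    import math

    def worst_case_db(terms):
        return sum(terms)                            # straight linear stacking

    def rss_db(terms):
        return math.sqrt(sum(t * t for t in terms))  # root-sum-square

    # Placeholder terms in dB: e.g. mismatch, cal factor, linearity, reference
    terms = [0.12, 0.05, 0.04, 0.03]
    print(round(worst_case_db(terms), 2))   # 0.24 dB
    print(round(rss_db(terms), 2))          # 0.14 dB, kinder, yet still past 0.1 dB

Either way, the total only grows as hardware and steps are added; the
RSS is merely the kinder of the two bookkeepings.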

To suggest that the HP8902 can be replaced by the combination of
accumulated obsolete gear and elbow grease invites profound problems
if you don't know the limitations of the METHOD. The so-called "hard
and tedious" part of it contains a considerable body of knowledge and
experience. I have never encountered its discussion in this group
beyond these two adjectives, and few correspondents ever belly up to
the bench even to measure Q (which has its own body of instruction).

But the
whole thing about high quality amateur measurements is you're trading
off your time to do tedious extra measurements and analysis in
exchange for not sending a cal-lab a check.


This, again, is an appeal to substituting just-plain-hard-work for
accuracy. You fail to show any correlation to standards or their
necessity. The effort you describe may well pay off in superlative
precision; again, investing in resolution without paying for the cost
of accuracy.

The how to do it is all out there. What was "state of the art" for a
national laboratory in 1970 is fairly straightforward garage work
these days, and, a heck of a lot easier because you've got inexpensive
automation for making the measurements and inexpensive computer power
for doing the calibration calculations and uncertainty analysis.


This is nothing more than sentimentality.

The slide 36 discussion refers to measuring a signal at -110dBm, which
I would venture to say is well below the levels that most hams will be
interested in measuring. And, they are talking about where the source
Z is unconstrained. In a typical ham situation, these things probably
aren't the case.


This appears to be yet another objection: "probably aren't the case."

The METHOD dominates power determination accuracy. This is a fact of
life. The METHOD mandates the level at which it will be measured.
This is not about personal choice and preferences. More than several
instances offered in the reference as examples of error analysis are
described as "special cases." I see none in the literature that
returns us to a sample antenna in a high-power RF field, but I see
plenty of METHODS that very nearly approximate it, and they
substantiate considerable (compared to 0.1dB) accumulations of error.

If you were interested in measuring, for instance,
the loss of a piece of coax or the output of a 0dBm buffer amplifier
to a tenth of a dB, that's a whole lot easier than a -110dBm signal
from some probe into an 8902. The context of this discussion was
making measurements of antennas, and for that, one can normally
arrange to have decent signal levels, etc. OR, one is interested in
relative measurements, rather than absolute calibration. It's a whole
lot easier to measure a 0.1 dB difference between two signals.


This was your choice of instrumentation; this was your choice of power
determination. I have provided an incontrovertible example of how
your off-hand assertion failed from your own choices. It doesn't get
remarkably better however you decide to amend the conditions, and those
amendments certainly won't come close to your original off-hand
observation of power determination to 0.1dB or less error.

For my simple analysis, I accepted your choice of instrument, selected
the optimal power level, and had to infer your choice of band -
chosen to favor you with the least error. I then compounded the
errors with a generous choice towards the LESSER RSS error, which
still resolved to more than 0.1dB error in power determination. This
encompassed nothing other than the Instrument without discussion of
the necessary components that supported the mystery power coming from
the antenna in question. Their inclusion could have been fudged
outrageously to devastate the 0.1dB power determination by huge leaps.
I chose, instead, to travel a different path seeing that no one was
going to embrace that tar baby.

The analysis in Slide 36 illustrates a more complete METHOD and you
complain of range. It is but one of many METHODS. This METHOD
encompasses the entire scope of determination, and METHOD defines the
limits of accuracy. You have not offered us a better METHOD that thus
reduces the limits of accuracy to 0.1dB power determination or better
as your off-hand comment promised.

* * * * * * * * * * * * * * *

At this point, as an aside, I will offer some subtle, but important
definitions from the field of Metrology that weigh heavily on making
highly resolved, precise, accurate measurements. First off, those
very words I ended the last sentence with (and more):

Resolution - this is the number of digits in your determination.

Precision - this is also the resolution, but to the extent that it is
repeatable. If you take 10 measurements of power and get readings
that repeat in only 3 of the 7 digits of resolution you have, your
precision is only 3 places at best. Resolution does not equal
quality. Basically, the remaining 4 places of resolution are noise
and their display is added for the entertainment of intellectual
tourists who do not understand science.

Accuracy - this is precision, but only where it agrees with actual
value. That is to say, you can successfully get 10 measurements that
repeat their determinations to 0.5%, and yet be 100% off of the actual
value. Your resolution is 0.1ppm, your precision is 1%, and your
accuracy may be -50/+100%. The resolution is now certified crapola.

Relative Accuracy - this is the same as the accuracy statement above,
where you are comparing two measurables (two different powers, which
then becomes attenuation). It is typically the accuracy of ratios and
substitution. It is easier to achieve at higher resolution, and your
standard need not be traceable, but it should be stable. If you don't
care what the actual value is, but only how much it changed, you may
find you can claim 0.1ppm error (insofar as this measurement is
repeatable to the same digit). There is a large body of
instrumentation that so qualifies.

Absolute Accuracy - this is the same as Relative Accuracy against a
known standard. It is typically the accuracy of determining an
unknown against a physical or electrical standard such as the Ampere,
Volt, Farad, Meter (and so on), and by abstraction, Power (you have to
measure two things where the accuracy has to degrade by virtue of
additional variables).
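
A numerical illustration of the first three terms (synthetic readings,
invented purely to show the distinction):

    import random, statistics

    random.seed(1)
    true_value = 1.0000000     # the actual power, in watts
    bias = 0.45                # a large systematic error: the accuracy problem
    noise = 0.005              # the repeatability limit: the precision problem

    # An instrument that happily displays 7 digits regardless (resolution).
    readings = [round(true_value + bias + random.gauss(0, noise), 7)
                for _ in range(10)]

    print(readings)                               # 7 digits of resolution
    print(round(statistics.stdev(readings), 4))   # ~0.005: ~3 digits repeat
    print(round(statistics.mean(readings) - true_value, 4))  # ~0.45 off

Ten readings that agree to a fraction of a percent, displayed to seven
places, and every one of them some 45% away from the true value:
resolution, precision, and accuracy, in that order.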

Some may appeal that their Relative Accuracy has the merit of Absolute
Accuracy - until you ask them to measure the actual, absolute value of
their ad-hoc standard. I have built primary standards and measured
them to 7 places. Over several years they migrated in value by 2 of
those least significant digits, but only in comparison to standards
that had "aged" and been calibrated at a national primary standards
laboratory. By themselves, I could have fooled myself (and perhaps
others less sophisticated) that they were absolutely accurate to the
extent of the number of digits my instruments could resolve.

73's
Richard Clark, KB7QHC