On Tue, 23 Dec 2008 10:29:05 -0800, Jim Lux wrote:

At HF and VHF, you should be able to do power measurements to a tenth of a dB, with moderate care (obviously, you'd have to deal with measuring the mismatch, etc.). A run of the mill power meter should give you 5% accuracy (0.2 dB) without too much trouble. An 8902 measuring receiver can do substantially better.

Nothing astonishes me more than the simple dash-off notes that claim power measurement is a snap. I can well imagine, Jim, that you don't do these measurements with traceability to the limits you suggest.

For the other readers: we will start specifically with the 8902 measuring receiver. A premier instrument indeed, but it falls FAR short of actually measuring power without a considerable body of necessary instrumentation (well illustrated by Mac's observation found in that fig. 15 already cited). Most claimants peer at one line in a spec sheet and figure that is the end of the discussion. Glances elsewhere begin to build up the actual accuracy obtainable through the chain of errors that accumulate. For instance, with a 1 mW input in the VHF band:

Internal power standard: ±1.2%, and we have yet to look at the measurement head's error contribution. The so-called "run of the mill power meters" are drawing close, too close, to this expensive precision set's quality, which already makes their 5% estimate suspect.

Scale error demands a full-scale indication simply to keep the error contribution down to 0.1% (a 1/10th-scale indication would jump that error to 1%), plus ±1 digit.

Input SWR with the HP 11792 is rated at 1.15 at worst (I've measured far better matches); against that same source's 1.05 SWR, this adds 0.4% error.

If you are not measuring power at the specific frequency of the internal source, add more error, averaging upwards of 2%.

Things build up from here, for just one instrument and its RF head, to a worst-case valuation of 5% to 6% error. This further trashes the vaunted 5% accuracy of those "run of the mill power meters."

Of course, in this computation of error, neophytes are tempted to employ the RMS estimation. This clearly reveals those untested in the arts, where bench techs who do their best understand that the RSS estimation is what pays their salary. Taking a step above skilled bench work to that of a calibration lab, you buy all the error at face value (hence the term "worst case" used by the professionals employed in this art).

THEN we turn our attention to the rest of the bench that holds the remaining components supporting the measurement of a power level, and accuracy begins to slide drastically. I've been there, and I've been trained to reduce the variables - not an easy task, and one that the march of time has NOT improved. Mismatch error climbs like the Himalayas if you don't employ line conditioners (which bring their own mismatch) and isolators (which bring their own mismatch) and so on down the proverbial line toward the source being measured (that antenna everyone knows has X amount of power coming from it).

For those who are stunned by this bajillion-dollar solution giving them 14% best accuracy (and RSS at that), confer with:

http://www.home.agilent.com/upload/c...EPSG085840.pdf

and observe the commentary for slide 36.
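To make the arithmetic concrete, here is a short Python sketch of the two combination rules (worst-case sum and RSS) and of the mismatch term computed from the two SWRs above. The four error terms are only the ones itemized in this post; a real budget carries more (the head's calibration factor, drift, and so on), so treat the printed totals as illustrative, not as a complete 8902 budget.

import math

def swr_to_gamma(swr):
    """Magnitude of the reflection coefficient for a given VSWR."""
    return (swr - 1.0) / (swr + 1.0)

def mismatch_uncertainty(swr_source, swr_load):
    """Worst-case mismatch uncertainty (fraction of power): 2*|Gs|*|Gl|."""
    return 2.0 * swr_to_gamma(swr_source) * swr_to_gamma(swr_load)

def worst_case(errors):
    """Worst-case combination: straight sum of the error magnitudes."""
    return sum(abs(e) for e in errors)

def rss(errors):
    """Root-sum-square combination of the same errors."""
    return math.sqrt(sum(e * e for e in errors))

def fraction_to_db(fraction):
    """Express a fractional power error in dB."""
    return 10.0 * math.log10(1.0 + fraction)

# Figures quoted above; this list is illustrative, not a complete budget.
mismatch = mismatch_uncertainty(1.15, 1.05)   # ~0.0034, the ~0.4% cited
errors = [
    0.012,      # internal power standard, +/-1.2%
    0.001,      # scale error at a full-scale indication, 0.1%
    mismatch,   # source-to-head mismatch, ~0.4%
    0.02,       # working away from the reference frequency, ~2%
]

print(f"mismatch   : {mismatch * 100:.2f} %")
print(f"worst case : {worst_case(errors) * 100:.2f} %  ({fraction_to_db(worst_case(errors)):.2f} dB)")
print(f"RSS        : {rss(errors) * 100:.2f} %  ({fraction_to_db(rss(errors)):.2f} dB)")

These four terms alone combine to roughly 3.6% worst case (about 0.16 dB) and about 2.4% RSS; the head and instrument terms not itemized here are what push the full worst-case figure toward the 5% to 6% quoted above.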
See if you can cook up a method that doesn't hammer you into the ground. I can anticipate some:

1. Throw a box car of money at the problem;
2. Buy lab time at NIST;
3. Write a report that runs to book length (I've carried most of that water by providing the link above) - or xerox the book that already exists: "Microwave Theory and Applications," Stephen F. Adam;
4. Do it with precision components, employing best practices to the best achievable accuracy - you will need further instruction in best practices, however;
5. Ignore reality.

Only the last two options are achievable by the ordinary Ham. To claim that "someone else" can do it better, and that it is thus achievable, is sophistry serving ego in an argument.

73's
Richard Clark, KB7QHC