#1
Dear Group:

What a delight it is to see a computer doing the calculations for VHF propagation. Almost fifty years ago, I led a team that measured field strengths in the 100 to 250 MHz range (FM and TV broadcast transmitters) to verify (qualify) the propagation model. Of course, I used a slide rule and log tables to perform the calculations, and manually extracted path profiles from topographic maps. The goal was to place confidence in the model for estimating expected interference levels at a radio-astronomy site located in a valley. The result from extensive field measurements and data reduction was that we could be confident in the model. I recall also doing some comparisons of predicted and measured strengths involving scattering (over quite long distances) in the VHF range, with good correlation. IONCAP, and its predecessors and successors, I have used to good effect for almost as many years.

In short, the developed propagation methods have been proven by me, and many others, to provide reasonably small uncertainties. Of course, the critical element is knowing which tool to use. That, I believe, is part of the point brought forward by Richard Fry and others. Put yet another way: any damn fool can (now) put numbers into a computer and get numbers back out of the computer; experience and judgment are needed for significance to accrue to the results of such calculations. Central to all of the propagation models is the need to understand what the antenna and its environment actually do.

I am also delighted that several of you are providing the education to the silent so that they do not fall into the traps that are always present.

Warm regards and season's greetings,

Mac N8TT
--
J. McLaughlin; Michigan, USA
Home:

"Richard Fry" wrote in message ...
> On Dec 22, 11:13 am, "Frank" wrote:
>> In this example the vertical half-wave dipole, with the base 30 ft
>> above an average ground, on 147.3 MHz, shows a field strength at
>> ground level of 0.418 uV/m from 30 W into the antenna. And,
>> obviously, at 50 km.
>
> Here is another method (Longley-Rice) for calculating the field
> intensity produced at the receive site by your model. The NEC
> approach is less accurate than L-R for long path lengths (due to
> earth curvature) and for specific terrain contours.
>
> In your model, the path loss calculated using L-R is about 68.8 dB
> more than the free-space loss. The peak, free-space field produced
> by a 1/2-wave, linear dipole radiating 30 watts over a 50 km path is
> about 770 uV/m. This voltage reduction of 68.8 dB is a field
> multiplier of about 0.00036, so the 770 uV/m field is reduced to
> about 0.28 uV/m -- a bit less than your NEC model predicts.
> Agreement probably would be better over shorter paths (as long as no
> specific terrain profile needed to be applied), and worse for longer
> paths.
>
> In the L-R example I set the path over the middle of Lake Michigan
> in order to get a smooth-earth contour, such as used in NEC models.
>
> This all just illustrates that analyses made using NEC and any other
> method need to consider the limits inherent in their algorithms with
> respect to the physical reality being analyzed.
>
> http://i62.photobucket.com/albums/h8...strialPath.gif
>
> RF
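For readers who want to check the arithmetic Fry quotes above, here is a minimal sketch in Python. It assumes a lossless half-wave dipole (linear gain 1.64, i.e. 2.15 dBi) and simply applies the 68.8 dB Longley-Rice excess loss from his post; it is illustrative only, not a propagation tool.

```python
import math

def free_space_field(p_watts, gain_linear, dist_m):
    """Peak free-space field strength in V/m: E = sqrt(30 * P * G) / d."""
    return math.sqrt(30.0 * p_watts * gain_linear) / dist_m

P = 30.0    # watts into the antenna (from the post)
G = 1.64    # linear gain of a lossless half-wave dipole -- assumed here
d = 50e3    # 50 km path (from the post)

e_fs = free_space_field(P, G, d)       # ~7.7e-4 V/m, i.e. ~770 uV/m
mult = 10.0 ** (-68.8 / 20.0)          # 68.8 dB as a field multiplier, ~0.00036
print(e_fs * 1e6, mult, e_fs * mult * 1e6)   # ~770, ~0.00036, ~0.28 uV/m
```

The three printed values reproduce the 770 uV/m free-space field, the 0.00036 multiplier, and the 0.28 uV/m result stated in the post.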
#2
On Mon, 22 Dec 2008 20:15:09 -0500, "J. Mc Laughlin" wrote:

> Almost fifty years ago, I led a team that measured field strengths in
> the 100 to 250 MHz range (FM and TV broadcast transmitters) to verify
> (qualify) the propagation model.

Hi Mac, and season's greetings,

Can you relate the specifics of the measurement? At a minimum, what would you deem to be your best accuracy compared to an absolute standard, or to a relative standard (instrumentation, not computational)?

73's
Richard Clark, KB7QHC
#3
Dear Richard:

It was almost 50 years ago, when the models were rather new.

More background: the terrain was hilly -- far from smooth earth -- and path profiles were a critical part of the information, along with the inherent uncertainties of using "analog" maps and the assumption of almost-straight-line propagation. (An aside: we found examples of unpredictable propagation along string-like valleys that were aligned with transmitters, but the protected site was in a bowl-like valley.) (I saw one family in a valley using a rhombic antenna to receive TV signals. Their son had been in the Signal Corps.)

We were using state-of-the-art Empire measuring systems (run off a portable gasoline generator) that were calibrated with an impulse generator at each measurement. We selected paths that were similar to the expected paths of interfering transmitters. In other words, the paths were more-or-less normal to ridge lines, not along string-like valleys.

One more qualification: one path was found to have knife-edge diffraction, discovered by the caution of taking measurements spaced a few meters apart at each data point. It was absolutely classic, but that data was not used because the protected site did not have such sharp ridges at its periphery.

With those qualifications, my best recollection is that measurements and predictions were within something like 3 or 4 dB. I doubt that repeating those measurements with a GPS receiver, digital topographic maps, averaging of near-straight-line paths, and a computer to do the arithmetic would be any better.

Another note: because of the expected sensitivity to interference at the site, I would drive over a few hills, erect a dipole in trees, and work my father on HF from the back seat of my car. No cell phones in those days... long distance was a big deal too.

Let us know how your studies are going.

Warm regards,

Mac N8TT
--
J. McLaughlin; Michigan, USA
Home:

"Richard Clark" wrote in message ...
> Can you relate the specifics of the measurement? At a minimum, what
> would you deem to be your best accuracy compared to an absolute
> standard, or to a relative standard (instrumentation, not
> computational)?
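The "absolutely classic" result Mac describes is the textbook single knife-edge diffraction case. For context, here is a minimal sketch using the widely published ITU-R P.526 approximation for knife-edge loss; the formula is standard, but the path geometry below is invented for illustration and is not from Mac's measurements.

```python
import math

def knife_edge_loss_db(h_m, d1_m, d2_m, wavelength_m):
    """Single knife-edge diffraction loss in dB (ITU-R P.526 approximation).
    h_m: height of the edge above the direct ray (negative if below);
    d1_m, d2_m: distances from each terminal to the edge, in meters."""
    v = h_m * math.sqrt(2.0 * (d1_m + d2_m) / (wavelength_m * d1_m * d2_m))
    if v <= -0.78:
        return 0.0  # path essentially unobstructed
    return 6.9 + 20.0 * math.log10(math.sqrt((v - 0.1) ** 2 + 1.0) + v - 0.1)

# Hypothetical 150 MHz path: a ridge 20 m above the ray, 10 km and 5 km out.
wavelength = 3e8 / 150e6
print(round(knife_edge_loss_db(20.0, 10e3, 5e3, wavelength), 1), "dB")  # ~9 dB
```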
#4
Richard Clark wrote:
> ...what you would deem to be your best accuracy compared to an
> absolute standard, or to a relative standard (instrumentation, not
> computational).

You weren't asking me, but still you may be interested in the link below, which leads to a good presentation of this by NIST. A table on page 3 there shows a measurement uncertainty at the NIST test facilities of ±1/4 to ±1 dB, depending on the DUT and the frequency range.

Field intensity measurements made using uncontrolled path conditions are more a measure of the propagation environment and the pattern/location of the receive antenna than they are of the absolute performance of the transmitting antenna system. Such measurement errors can be gross, and difficult to quantify.

http://ts.nist.gov/MeasurementServic...d/im-34-4b.pdf

RF
#5
Dear Richard Fry:

Thank you for the 1985 reference, which I had not seen before. Too many IEEE groups exist!

A closed-loop system much like that shown in Figure 15 was built by me and a student, and was in use by the mid-70s to subject DUTs to at least 200 V/m at frequencies up to about 200 MHz. This was for automated evaluation of the EMC of relatively small DUTs, and it was the prototype of a much larger system, implemented by a major manufacturer, that allowed the testing of entire cars. This was well before PCs, but after IEEE-488 signal sources and wattmeters were available. We felt confident to about 1 dB because of the tight correlation with a short voltage probe extending into the TEM cell. Unfortunately, the small effective volume of the TEM cell precluded measurements of antennas.

The large room at NBS allowed them to measure antennas, and I saw them measuring a large UHF antenna with a near-field probe in the early 1970s.

Jumping to HF antennas of 0.5 WL size or more: I am convinced that even with a helicopter being used to measure a pattern, one can have more confidence in the result of the intelligent use of NEC4 than in any measurements.

The measurements made in the late 50s (to gain confidence with VHF propagation models) involved cherry-picking the paths to correspond with the goal of understanding propagation of possible interference into the radio-astronomy site. They also involved averaging a series of measurements taken within a few meters of each other. The measurement sites were all very rural and free of significant reflecting surfaces.

Warm regards,

Mac N8TT
--
J. McLaughlin; Michigan, USA
Home:

"Richard Fry" wrote in message ...
> A table on page 3 there shows a measurement uncertainty at the NIST
> test facilities of ±1/4 to ±1 dB, depending on the DUT and the
> frequency range. Field intensity measurements made using uncontrolled
> path conditions are more a measure of the propagation environment and
> the pattern/location of the receive antenna than they are of the
> absolute performance of the transmitting antenna system.
#6
On Tue, 23 Dec 2008 04:20:26 -0800 (PST), Richard Fry wrote:

> A table on page 3 there shows a measurement uncertainty at the NIST
> test facilities of ±1/4 to ±1 dB, depending on the DUT and the
> frequency range.

Actually, ±1 dB would be the most likely figure for instrumentation error (±1/4 dB could never be achieved); matching error would compound that; the antenna would add another ±1 dB; the path would scramble that further if the work were not performed in an anechoic chamber or on a calibrated range.

Mac's test system (from the Figure 15 he reports in other correspondence) would accumulate up to the several dB he reported earlier. It would exhibit very good relative accuracy, but absolute accuracy would carry several dB of error, as he has already offered in prior correspondence. Path problems would have to be hammered out on their own.

73's
Richard Clark, KB7QHC
#7
Richard Clark wrote:
> Actually, ±1 dB would be the most likely figure for instrumentation
> error (±1/4 dB could never be achieved); matching error would
> compound that; the antenna would add another ±1 dB; the path would
> scramble that further if the work were not performed in an anechoic
> chamber or on a calibrated range.

At HF and VHF, you should be able to do power measurements to a tenth of a dB with moderate care (obviously, you'd have to deal with measuring the mismatch, etc.). A run-of-the-mill power meter should give you 5% accuracy (0.2 dB) without too much trouble. An 8902 measuring receiver can do substantially better.

Even at microwave frequencies, better than 0.1 dB uncertainty (2 sigma) is possible with free-space measurements (e.g. from an orbiting satellite to a ground station), with all the uncertainties stacked up (atmospheric, radome loss, antenna, electronics, etc.), although this is decidedly non-trivial.

As mentioned, site effects or chamber uncertainties might contribute more. A typical anechoic chamber might have -20 dB worst-case reflections from the walls, and -40 dB as more typical. A single scattering path will then contribute an uncertainty (worst case) of 1%, or 0.04 dB, although modern measurement technique (using multiple probe positions) can quantify this error and remove it, assuming the UUT and equipment are stable enough over the measurement period.

The TEM cell is nice because it gives you a way to create a calibrated field to characterize your probe.
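The percent-to-dB conversions Jim uses (5% is roughly 0.2 dB; 1% is roughly 0.04 dB), and the conversion from a reflection level to a field perturbation, can be made explicit. A minimal sketch; nothing here goes beyond the arithmetic already in the post.

```python
import math

def power_ratio_to_db(ratio):
    """Convert a power ratio (e.g. 1.05 for +5%) to dB."""
    return 10.0 * math.log10(ratio)

def reflected_field_fraction(level_db):
    """Fractional field amplitude of a path level_db below the direct ray."""
    return 10.0 ** (level_db / 20.0)

print(round(power_ratio_to_db(1.05), 2))   # 0.21 dB -- "5% accuracy (0.2 dB)"
print(round(power_ratio_to_db(1.01), 3))   # 0.043 dB -- "1%, or 0.04 dB"
print(reflected_field_fraction(-40.0))     # 0.01 -> a ~1% field perturbation
```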
#8
On Tue, 23 Dec 2008 10:29:05 -0800, Jim Lux wrote:

> At HF and VHF, you should be able to do power measurements to a tenth
> of a dB with moderate care (obviously, you'd have to deal with
> measuring the mismatch, etc.). A run-of-the-mill power meter should
> give you 5% accuracy (0.2 dB) without too much trouble. An 8902
> measuring receiver can do substantially better.

Nothing astonishes me more than the simple dash-off notes that claim power measurement is a snap. I can well imagine, Jim, that you don't do these measurements with traceability to the limits you suggest.

For the other readers: we will specifically start with the 8902 measuring receiver. A premier instrument indeed, but it falls FAR short of actually measuring power without a considerable body of necessary instrumentation (well illustrated by Mac's observation found in that Figure 15 already cited). Most claimants peer at one line in a spec sheet and figure that is the end of the discussion. Glances elsewhere begin to build up the actual accuracy obtainable through the chain of errors that accumulate. For instance, with a 1 mW input in the VHF band:

The internal power standard contributes ±1.2%, and we have yet to look at the measurement head's error contribution. The so-called "run-of-the-mill power meters" are drawing close, too close, to this precision set's expensive quality, such that their claimed 5% is already of suspect quality. Scale error demands a full-scale indication simply to keep that error contribution down to 0.1% (a 1/10th-scale indication would jump it to 1%), plus ±1 digit. Input SWR with the HP 11792, rated at 1.15 at worst (I've measured far better matches), against that same source's 1.05 SWR adds 0.4% error. If you are not measuring power at the specific frequency of the internal source, add more error, averaging onwards to 2%. Things build up from here, for just one instrument and its RF head, to a worst-case valuation of 5% to 6% error. This further trashes the "run-of-the-mill power meters'" vaunted 5% accuracies.

Of course, in this computation of error, neophytes are tempted to employ the RMS estimation. This clearly reveals those untested in the arts; bench techs who do their best understand that the RSS estimation is what pays their salary. Taking a step above skilled bench work, to that of a calibration lab, you buy all the error at face value (hence the term "worst case" used by the professionals employed in this art).

THEN we turn our attention to the rest of the bench that holds the remaining components supporting the measurement of a power level, and accuracy begins to slide drastically. I've been there, and I've been trained to reduce the variables -- not an easy task, and one that the march of time has NOT improved. Mismatch error climbs like the Himalayas if you don't employ line conditioners (which bring their own mismatch) and isolators (which bring their own mismatch) and so on down the proverbial line towards the source being measured (that antenna everyone knows has X amount of power coming from it).

For those who are stunned by this bajillion-dollar solution giving them 14% best accuracy (and RSS at that), confer with:

http://www.home.agilent.com/upload/c...EPSG085840.pdf

and observe the commentary for slide 36. See if you can cook up a method that doesn't hammer you into the ground. I can anticipate some:

1. Throw a boxcar of money at the problem;
2. Buy lab time at NIST;
3. Write a report that runs to book length (I've carried most of that water by providing the link above) -- or xerox the book that already exists: "Microwave Theory and Applications," Stephen F. Adam;
4. Do it with precision components employing best practices to the best achievable accuracy -- you will need further instruction in best practices, however;
5. Ignore reality.

Only the last two options are achievable by the ordinary ham. To claim that "someone else" can do it better and that it is thus achievable is sophistry serving ego in an argument.

73's
Richard Clark, KB7QHC
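Clark's 0.4% mismatch figure and his RSS-versus-worst-case distinction can both be checked numerically. A minimal sketch: the 2·|Γ1|·|Γ2| mismatch-uncertainty formula is the standard textbook expression, and the list of error terms combined at the end is illustrative, loosely drawn from the percentages quoted in his post.

```python
import math

def gamma(swr):
    """Reflection-coefficient magnitude corresponding to a given SWR."""
    return (swr - 1.0) / (swr + 1.0)

def mismatch_uncertainty_pct(swr_a, swr_b):
    """Worst-case mismatch power uncertainty, ~2*|Ga|*|Gb|, in percent."""
    return 200.0 * gamma(swr_a) * gamma(swr_b)

# Source SWR 1.05 against the HP 11792 head's worst-case SWR of 1.15:
print(round(mismatch_uncertainty_pct(1.05, 1.15), 2))   # ~0.34% -> "adds 0.4%"

# Combining independent percent errors: worst case adds them outright;
# RSS takes the root-sum-of-squares -- the distinction the post draws.
errors = [1.2, 0.1, 0.4, 2.0]   # illustrative terms, loosely from the post
print(sum(errors))                                      # worst case: 3.7%
print(round(math.sqrt(sum(e * e for e in errors)), 2))  # RSS: ~2.37%
```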
#9
On Dec 23, 12:39 pm, Richard Clark wrote:

> Nothing astonishes me more than the simple dash-off notes that claim
> power measurement is a snap. I can well imagine, Jim, that you don't
> do these measurements with traceability to the limits you suggest.

In point of fact, I *do* make measurements like that, and as I said, it requires "moderate care" and good technique and instrumentation. A random diode measured with your $5 Harbor Freight DMM isn't going to hack it. Neither is most of the stuff sold to hams. It is hardly a "snap", but it *is* within the reach of someone at home with a lot of time and care to substitute for expensive gear and calibrations (basically, you have to do your own calibration).

> The so-called "run-of-the-mill power meters" are drawing close, too
> close, to this precision set's expensive quality, such that their
> claimed 5% is already of suspect quality.

The standard power-measuring head on an Agilent power meter is better than 5% at HF, probably in the range of 1% for one head in comparison measurements over a short time. The 8902 is sort of a special case, but it can do very accurate relative measurements. FWIW, the 8902 calibrates out the measurement-head effects.

> Scale error demands a full-scale indication simply to keep that error
> contribution down to 0.1% (a 1/10th-scale indication would jump it to
> 1%), plus ±1 digit.

This oversimplifies a bit. Typically, you'll have some uncertainty that is proportional to the signal measured (e.g. mismatch will affect the signal the same way regardless of level) and some that is absolute, independent of the signal level (e.g. the analog noise in the voltmeter). As you say, bigger signals are easier to measure precisely; the real limiting factor is the accuracy with which you know the attenuation of the attenuators you're using to get the steps. A short numeric sketch of this two-term error model follows this post.

With regard to mismatch: if you're interested in tenth-dB accuracies, you're going to have to measure the mismatch and account for it. It's not that hard, just tedious. The typical power-meter head doesn't change its Z very much, so once you've measured YOUR head and keep the data around, you're good to go for the future (and do your tests at the same temperature, don't use the head for a door stop, etc.).

As far as calculating uncertainties: you bet, it's not just stacking 'em up. But that's true of ANY precision measurement, so if one is quoting better-than-half-dB numbers (i.e. if you give any digits to the right of the decimal point), one should be able to back it up with the uncertainty analysis (which is all described on the NIST website and in the tech notes). This isn't hard, it's just tedious. But the whole thing about high-quality amateur measurements is that you're trading off your time to do tedious extra measurements and analysis in exchange for not sending a cal lab a check. The how-to-do-it is all out there. What was "state of the art" for a national laboratory in 1970 is fairly straightforward garage work these days, and a heck of a lot easier, because you've got inexpensive automation for making the measurements and inexpensive computer power for doing the calibration calculations and uncertainty analysis.

The slide 36 discussion refers to measuring a signal at -110 dBm, which I would venture to say is well below the levels that most hams will be interested in measuring. And they are talking about a case where the source Z is unconstrained. In a typical ham situation, these things probably aren't the case. If you were interested in measuring, for instance, the loss of a piece of coax or the output of a 0 dBm buffer amplifier to a tenth of a dB, that's a whole lot easier than a -110 dBm signal from some probe into an 8902. The context of this discussion was making measurements of antennas, and for that one can normally arrange to have decent signal levels, etc., OR one is interested in relative measurements rather than absolute calibration. It's a whole lot easier to measure a 0.1 dB difference between two signals.

You suggested five alternatives:

> 1. Throw a boxcar of money at the problem;

Or throw some time at the problem. This is the classic ham tradeoff: "I don't have money, but I do have time." It's no different than grinding your own telescope mirrors, building your own Yagi or wire antenna, etc.

> 2. Buy lab time at NIST;

That's the money thing (and it doesn't require boxcar loads; perhaps a kilobuck or two, and for some folks it's worth it, although I can't see any amateur radio need. I can see doing it as part of a hobby involving precision, like the folks on time-nuts who operate multiple Cs clocks and build very high performance quartz oscillators for the thrill of getting to 1E-14 or 1E-16 Allan deviation. Folks who do home nuclear fusion also might avail themselves of pro cal services for their neutron detectors, because there isn't a convenient way of doing home cals, unlike for RF power, where it's at least possible).

> 3. Write a report that runs to book length -- or xerox the book that
> already exists: "Microwave Theory and Applications," Stephen F. Adam;

Or any of a variety of sources. One doesn't need a book for this, but one does need some care in technique and some background knowledge. It's like reading John Strong's book on building scientific instruments (back in the 40s, one built one's physics experimental gear and calibrated it oneself).

> 4. Do it with precision components employing best practices to the
> best achievable accuracy; 5. Ignore reality.
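Jim's split between error terms proportional to the signal and an absolute floor, referenced above, is the standard two-term uncertainty model and is easy to make concrete. A minimal sketch; the 1% proportional coefficient and 1 uW floor are invented for illustration and are not taken from any instrument mentioned in this thread.

```python
import math

def reading_uncertainty(signal, prop_frac, abs_floor):
    """1-sigma uncertainty of a reading: a term proportional to the signal
    (e.g. mismatch) combined by RSS with an absolute floor (e.g. noise)."""
    return math.sqrt((prop_frac * signal) ** 2 + abs_floor ** 2)

# Hypothetical meter: 1% proportional error, 1 uW absolute noise floor.
for p_uw in (1000.0, 100.0, 10.0):
    u = reading_uncertainty(p_uw, 0.01, 1.0)
    print(p_uw, "uW ->", round(100.0 * u / p_uw, 1), "% uncertainty")
# Output shows why bigger signals measure better: 1.0%, 1.4%, then 10.0%.
```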