From: Wes Stewart
Date: December 5th 05, 06:39 PM, posted to rec.radio.amateur.antenna
Subject: how to measure antenna impedance?

On Sun, 04 Dec 2005 07:57:55 -0800, dansawyeror wrote:

> Wes,
>
> Your answer to the question about bidirectional couplers was that they do
> not compensate for phase shift. Let me ask it again:
>
> Do the measuring ports of a bidirectional coupler accurately represent or
> preserve the relative phases of the signals?
>
> To put it another way: is the phase shift of the driving and reflected
> signals changed by the same amount?
>
> Thanks - Dan kb0qil


I'm not sure I understand the question(s), but in the case of a vector
reflectometer using a dual directional coupler, maybe this will help.

Here is a dual directional coupler.


     Reverse                          Forward
        |                                |
        |                                |
        |----------R          R----------|
             X                     X
Input --A--------------------------------B--C  Load


Let's say that at frequency F the coupling factor (X) is -10 dB and, to
keep it simple, that there is no phase shift between point B and the
forward port or between point A and the reverse port.

So a wave propagating in the forward direction (Input -- Load)
induces a signal at the forward port that is 10 dB below the input at
0 degrees phase with respect to point B. A wave propagating in the
opposite direction has the same relationship at the reverse port; 10
dB down and 0 degrees phase with respect to point A.
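
If it helps to see the arithmetic, here's a quick Python sketch of that
coupling relationship. Treating the coupler as a single complex multiplier
with the -10 dB, 0-degree values above is the same simplifying assumption:

    import cmath, math

    k = 10 ** (-10 / 20)              # -10 dB voltage coupling factor, ~0.316

    wave_at_B = 1.0 + 0j              # forward wave at point B
    fwd_port = k * wave_at_B          # forward-port sample: 10 dB down, 0 deg
    print(f"{20 * math.log10(abs(fwd_port)):.1f} dB, "
          f"{math.degrees(cmath.phase(fwd_port)):.1f} deg")  # -10.0 dB, 0.0 deg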

A to Reverse and B to Forward -might- track reasonably well in both
magnitude and phase, but in this case, it's immaterial.

Because the distances B-A and C-B are not zero, there will be a
frequency-dependent phase difference between A, B, and C.

When we calibrate using a short on the load port, here's what happens.

The signal at the forward port becomes the reference, i.e., unity
amplitude and 0 degrees phase.

The short creates a 100% reflection with a -180-degree phase shift. This
signal propagates back down the main line to the source, which is
assumed to be a perfect match, so there is no re-reflection. A -10 dB
sample (unity, relative to our forward-port reference) is coupled to the
reverse port, with a phase shift, theta(F), determined by the electrical
length of the path C - B - A.
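
Here's a sketch of that reverse-port sample, with the -10 dB coupling
divided out so that unity means "equal to the forward reference." The line
length l and velocity v are made-up numbers purely for illustration; only
theta(F)'s dependence on them matters:

    import cmath, math

    def reverse_sample(f_hz, l=0.07, v=3e8):
        theta = 2 * math.pi * f_hz * l / v           # theta(F) from path C-B-A
        gamma_short = cmath.rect(1.0, math.pi)       # 100% reflection, 180 deg
        return gamma_short * cmath.exp(-1j * theta)  # unity coupled sample

    s = reverse_sample(100e6)
    print(abs(s), math.degrees(cmath.phase(s)))      # magnitude 1.0, not 180 deg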

Unless we are lucky enough to be Lotto winners, the signal at the
reverse port -will not- be 1 ∠ -180°. So our calibration routine must do
whatever math is necessary to make the ratio of the reverse-port signal
to the forward-port reference equal 1 ∠ -180°. This fudge factor is then
applied to all subsequent measurements to "correct" the data.
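
In other words, the fudge factor is just one complex number per
calibration: whatever you must multiply the measured short by to get the
ideal 1 ∠ -180°. A sketch, using the same made-up reverse_sample() model:

    import cmath, math

    def reverse_sample(f_hz, l=0.07, v=3e8):         # same made-up model
        return cmath.rect(1.0, math.pi) * cmath.exp(-2j * math.pi * f_hz * l / v)

    measured = reverse_sample(100e6)                 # not exactly 1 at 180 deg
    cal = cmath.rect(1.0, math.pi) / measured        # fudge factor at 100 MHz

    corrected = cal * measured                       # apply it to a measurement
    print(abs(corrected),
          math.degrees(cmath.phase(corrected)))      # 1.0 @ 180 deg (= -180 deg)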

Now to address (I think) your question. If we change frequencies,
theta(F) changes and the fudge factor no longer corrects for it.
While the coupling factors might track, it is of little consolation
because the calibration is good only at the frequency where it was
performed. Automatic network analyzers perform calibration at each
test frequency, or at least at enough points to interpolate between.
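
To see why, reuse the 100 MHz fudge factor at other frequencies in the
same made-up model; only at the calibration frequency does the corrected
short still read 180 degrees:

    import cmath, math

    def reverse_sample(f_hz, l=0.07, v=3e8):         # same made-up model
        return cmath.rect(1.0, math.pi) * cmath.exp(-2j * math.pi * f_hz * l / v)

    cal_100 = cmath.rect(1.0, math.pi) / reverse_sample(100e6)

    for f in (100e6, 150e6, 200e6):
        stale = cal_100 * reverse_sample(f)          # reused 100 MHz cal
        print(f"{f/1e6:.0f} MHz: corrected short reads "
              f"{math.degrees(cmath.phase(stale)):+.1f} deg")
    # 100 MHz reads +180.0 deg; 150 and 200 MHz drift away from 180 deg.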