
Bob Myers July 6th 07 03:02 AM

AM electromagnetic waves: 20 KHz modulation frequency on an astronomically-low carrier frequency
 

"John Fields" wrote in message
...

You missed my point, which was that in a mixer (which the ear is,
since its amplitude response is nonlinear) as the two carriers
approach each other the difference frequency will go to zero and the
sum frequency will go to the second harmonic of either carrier,
making it largely appear to vanish into the fundamental.


Sorry, John - while the ear's amplitude response IS nonlinear, it
does not act as a mixer. "Mixing" (multiplication) occurs when
a given nonlinear element (in electronics, a diode or transistor, for
example) is presented with two signals of different frequencies.
But the human ear doesn't work in that manner - there is no single
nonlinear element which is receiving more than one signal.

Frequency discrimination in the ear occurs through the resonant
frequencies of the 20-30,000 fibers which make up the basilar
membrane within the cochlea. Each fiber responds only to those
tones which are at or very near its resonant frequency. While
the response of each fiber to the amplitude of the signal is nonlinear,
no mixing occurs because each responds, in essence, only to a
single tone. A model for the hearing process might be 30,000 or
so non-linear meters, each seeing the output of a very narrow-band
bandpass filter covering a specific frequency within the audio
range. There is clearly no mixing, at least as the term is commonly
used in electronics, going on in such a situation, even though there
is non-linearity in some aspect of the system's response.

Audible "beats" are perceived not because there is mixing going on
within the ear, but instead are due to cycles of constructive and
destructive
interference going on in the air between the two original tones.
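
As a quick illustration of that point, here is a minimal numpy sketch
(not from the original post; the 440/443 Hz tones and 8 kHz sample rate
are arbitrary choices): adding two equal-amplitude tones a few hertz
apart gives a loudness envelope that rises and falls at the difference
frequency, yet the spectrum of the sum contains only the two original
lines.

import numpy as np

fs = 8000.0                      # sample rate, Hz (arbitrary)
t = np.arange(0, 2.0, 1.0 / fs)  # two seconds of signal

f1, f2 = 440.0, 443.0            # two tones 3 Hz apart, as when tuning a guitar
s = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)   # plain addition

# Loudness envelope, taken as the peak level in 20 ms blocks:
block = int(0.02 * fs)
env = np.abs(s[: len(s) // block * block]).reshape(-1, block).max(axis=1)
print(f"envelope swings between about {env.min():.2f} and {env.max():.2f}")
# roughly 0.2 and 2.0, repeating every 1/3 s, i.e. at the 3 Hz difference rate

# Spectrum of the sum: only the two original lines, nothing at 3 Hz or 883 Hz.
spec = np.abs(np.fft.rfft(s)) / len(s)
freqs = np.fft.rfftfreq(len(s), 1.0 / fs)
print("spectral lines (Hz):", freqs[spec > 0.1])   # [440. 443.]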

Bob M.



isw July 6th 07 04:59 AM

AM electromagnetic waves: 20 KHz modulation frequency on an astronomically-low carrier frequency
 
In article ,
John Fields wrote:

On Thu, 05 Jul 2007 10:00:33 -0700, isw wrote:

In article ,
John Fields wrote:

On Thu, 05 Jul 2007 00:06:02 -0700, isw wrote:

In article ,
John Fields wrote:

On Wed, 04 Jul 2007 09:11:58 -0700, isw wrote:

In article ,
"Ron Baker, Pluralitas!" wrote:

You win. :)

When I conceived the problem I was thinking
cosines actually. In which case there are no
phase shifts to worry about in the result.

I also forgot the half amplitude factor.

While it might not be obvious, the two cases I
described are basically identical. And this
situation occurs in real life, i.e. in radio signals,
oceanography, and guitar tuning.

The beat you hear during guitar tuning is not modulation; there is no
non-linear process involved (i.e. no multiplication).

---
That's not true.

The human ear has a logarithmic amplitude response and the beat note
(the difference frequency) is generated there. The sum frequency is
too, but when unison is achieved it'll be at precisely twice the
frequency of either fundamental and won't be noticed.

Now you get to explain why the beat is measurable with instrumentation,
and can be viewed in the waveform of a high-quality recording.

---
Simple. The process isn't totally linear, starting with the musical
instrument itself, so some heterodyning will inevitably occur which
will be detected by the measuring instrumentation.


That would suggest that there could be "low IM" instruments which would
be very difficult to tune, since they would produce undetectably small
beats;


---
Not at all. Since tuning is the act of comparing the acoustic
output of a musical instrument to a reference, the "IM" of the
instrument would be relatively unimportant, with a totally linear
device giving the best output. For tuning, anyway. Then, the
output of the instrument and the reference would be mixed, in the
ear, with zero beat indicating when the instrument's output matched
the reference.
---

in fact that does not happen. It would also suggest that it would
be difficult or impossible to create beats between two
very-low-distortion signal generators, which is also not the case.


---
That is precisely the case. Connect the outputs of two zero
distortion signal generators so they add, like this, in a perfect
opamp, (View in Courier)


+-----+                        +--------+   +---------+   +-----+
| SG1 |---[R]--+----[R]----+---| POWER  |---| SPEAKER |---| EAR |
+-----+        |           |   |  AMP   |   +---------+   +-----+
               |     +V    |   +--------+
+-----+        |     |     |
| SG2 |---[R]--+-----|-\   |   +----------+
+-----+              |  >--+---| SPECTRUM |
               +-----|+/       | ANALYZER |
               |     |         +----------+
              GND   -V

and the spectrum analyzer will resolve the signals as two separate
spectral lines,


And when the two frequencies are very close to being equal, the spectrum
analyzer will only be able to resolve one frequency, whose amplitude
will vary between a maximum and zero at a rate which is precisely
related to the difference between the two frequencies. If you get an
analyzer with finer resolution, I can always reduce the difference
frequency sufficiently to produce the described effect, which does not
in any way require a nonlinear process.
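
A small numpy sketch of that resolution argument (illustrative only; the
100/101 Hz tones, 1 kHz sample rate, and window lengths stand in for the
analyzer settings): with a short observation window the two tones merge
into a single line, and lengthening the window separates them again,
with no nonlinearity anywhere in the chain.

import numpy as np

fs = 1000.0
f1, f2 = 100.0, 101.0                        # two tones 1 Hz apart

def dip_ratio(window_seconds):
    """Magnitude at the midpoint frequency relative to the tone peaks.

    Near 1: the window is too short to resolve the tones (one merged line).
    Near 0: two distinct lines with a deep valley between them.
    """
    t = np.arange(0, window_seconds, 1.0 / fs)
    s = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
    w = np.hanning(len(s))

    def mag(f):                              # DFT evaluated at one frequency
        return abs(np.sum(s * w * np.exp(-2j * np.pi * f * t)))

    return mag((f1 + f2) / 2) / max(mag(f1), mag(f2))

print(round(dip_ratio(0.5), 2))    # close to 1: ~2 Hz resolution, tones merge
print(round(dip_ratio(10.0), 2))   # close to 0: ~0.1 Hz resolution, two lines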


Other than the nonlinearity of the air (which is very small for
"ordinary" SPL, there's no mechanism to cause IM between two different
instruments, although beats are still generated. The beat is simply a
vector summation of two nearly identical signals; no modulation needs to
take place.


---
I understand your point and, while it may be true, the
incontrovertible fact remains that the ear is a non-linear detector
and will generate sidebands when it's presented with multiple
frequencies.


OK, but off subject. We were discussing whether a "zero beat" while
tuning an instrument requires a non-linear process (i.e. "real"
modulation). It does not.

What remains to be done then, is the determination of whether the
beat effect is due to heterodyning, or vector summation, or both.


Yup. And since the beat is easily observable using instrumentation of
measurably high linearity, whether or not ears have some IM is of no
matter. In fact, I agree that IM is produced in ears; just not at
significant levels for anything short of pathological SPL -- upwards of
120 dB, say.

Or consider this: At true "zero beat" with the signals exactly 180
degrees out, no energy is available for any non-linear process to act on.


---
Or any other process for that matter, except the conversion of that
acoustic energy into heat. That is, with the signals 180° out of
phase and precisely the same amplitude, didn't you mean?


Yes. The 180 degree situation is just a special case that very obviously
produces a change in output level in a linear environment. IOW it shows
that a linear combination of two nearly equal tones will cause a "beat"
in amplitude.

Then go on to show why all other multi-frequency-component signals (e.g.
a full orchestra) don't produce similar intermodulation effects in ears
under normal conditions.

---
They do


Well, no, mostly they don't, until you get to really high SPL.


---
That's not true. Why do you think some harmonies sound better than
others? Because the heterodyning occurring at those frequencies
causes complementary sidebands to be generated which sound good, and
that happens at most SPL's because of the ear's nonlinear
characteristics.


For your argument to be true, there should be harmonies that can be
shown to "sound better" when played at a lower SPL (or better,
auditioned through a passive acoustical attenuator). Avoiding
pathological sound levels, I am not aware of any such thing ever being
demonstrated. Do you have any examples?

In fact, I believe it is the case that in "musical frequency space"
virtually every IM product of significance, regardless of where it
arises, is considered unpleasant.


and why don't you try being a little less of a pompous ass?


Exposing claims to conditions they have difficulty with is a good way to
understand why those claims are invalid -- so long as the claimant
actually explains what's going on, and doesn't just make up answers that
fit the previously stated beliefs.


---
I wasn't talking about making and/or debating claims, I was talking
about your smartass "Now you get to explain" and "Then go on to show
why" cracks.


And I still don't think you have adequately explained the things I was
referring to.

Do you have any references?

Isaac

isw July 6th 07 05:03 AM

AM electromagnetic waves: 20 KHz modulation frequency on an astronomically-low carrier frequency
 
In article ,
Rich Grise wrote:

On Tue, 03 Jul 2007 22:42:20 -0700, isw wrote:

After you get done talking about modulation and sidebands, somebody
might want to take a stab at explaining why, if you tune a receiver to
the second harmonic (or any other harmonic) of a modulated carrier (AM
or FM; makes no difference), the audio comes out sounding exactly as it
does if you tune to the fundamental? That is, while the second harmonic
of the carrier is twice the frequency of the fundamental, the sidebands
of the second harmonic are *not* located at twice the frequencies of the
sidebands of the fundamental, but rather precisely as far from the
second harmonic of the carrier as they are from the fundamental.


Have you ever actually observed this effect?


Sure. (In a previous life, I designed AM and FM transmitters for RCA).
Just get a short-wave radio, locate yourself fairly close to a standard
AM transmitter, and tune to the harmonics. You'll find, in every case,
that the audio sounds just the same as if you were listening to the
fundamental.

Works for FM, too, but the situation is somewhat more complex.

Isaac

isw July 6th 07 05:09 AM

AM electromagnetic waves: 20 KHz modulation frequency on an astronomically-low carrier frequency
 
In article ,
John Fields wrote:

On Thu, 05 Jul 2007 13:48:04 -0700, Jim Kelley
wrote:

John Fields wrote:
On Wed, 04 Jul 2007 09:11:58 -0700, isw wrote:


In article ,
"Ron Baker, Pluralitas!" wrote:


The beat you hear during guitar tuning is not modulation; there is no
non-linear process involved (i.e. no multiplication).


---
That's not true.


But it is true.

The human ear has a logarithmic amplitude response and the beat note
(the difference frequency) is generated there.


The ear does happen to have a logarithmic amplitude response as a
function of frequency, but that has nothing to do with this
phenomenon.


---
Regardless of the frequency response characteristics of the ear, its
response to amplitude changes _is_ logarithmic.

For instance:

 CHANGE  |  APPARENT CHANGE
 IN SPL  |  IN LOUDNESS
---------+------------------------
   3 dB  |  Just noticeable
   5 dB  |  Clearly noticeable
  10 dB  |  Twice or half as loud
  20 dB  |  4 times or 1/4 as loud
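
A tiny sketch of the rule of thumb behind this table (my own
illustration, using the common approximation of one loudness doubling
per 10 dB; the table itself is only approximate):

def loudness_ratio(delta_spl_db):
    # Rough "twice as loud per 10 dB" rule implied by the table above.
    return 2 ** (delta_spl_db / 10.0)

for change in (3, 5, 10, 20):
    print(f"{change:+3d} dB -> about {loudness_ratio(change):.1f}x as loud")
# +3 dB -> 1.2x, +5 dB -> 1.4x, +10 dB -> 2.0x, +20 dB -> 4.0x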

---

(It relates only to the aural sensitivity of the ear at
different frequencies.) What the ear responds to is the sound pressure
wave that results from the superposition of the two waves. The effect
in air is measurable with a microphone as well as by ear. The same
thing can be seen purely electrically in the time domain on an
oscilloscope, and does appear exactly as Ron Baker described in the
frequency domain on a spectrum analyzer.

The sum frequency is
too, but when unison is achieved it'll be at precisely twice the
frequency of either fundamental and won't be noticed.


The ear does not hear the sum of two waves as the sum of the
frequencies, but rather as the sum of their instantaneous amplitudes.
When the pitches are identical, the instantaneous amplitude varies
with time at the fundamental frequency. When they are identical and
in-phase, the instantaneous amplitude varies at the fundamental
frequency with twice the peak amplitude.


---
You missed my point, which was that in a mixer (which the ear is,
since its amplitude response is nonlinear) as the two carriers
approach each other the difference frequency will go to zero and the
sum frequency will go to the second harmonic of either carrier,
making it largely appear to vanish into the fundamental.
---

When the two pitches are different, the sum of the instantaneous
amplitudes at a fixed point varies with time at a frequency equal to
the difference between pitches.


---
But the resultant waveform will be distorted and contain additional
spectral components if that summation isn't done linearly. This is
precisely what happens in the ear when equal changes in SPL don't
result in equal outputs to the 8th cranial nerve.
---

This does have an envelope-like
effect, but it is a different effect than the case of amplitude
modulation. In this case we actually have two pitches, each with
constant amplitude, whereas with AM we have only one pitch, but with
time varying amplitude.


---
That's not true. In AM we have two pitches, but one is used to
control the amplitude of the other, which generates the sidebands.
---

The terms in the trig identity are open to a bit of misinterpretation.
At first glance it does look as though we have a wave sin(a+b) which
is being modulated by a wave sin(a-b). But what we have is a more
complex waveform than a pure sine wave with a modulated amplitude.


---
No, it's much simpler since you haven't created the sum and
difference frequencies and placed them in the spectrum.
---

There exists no sine wave with a frequency of a+b in the frequency
spectrum of beat modulated sine waves a and b. As has been noted
previously, this is the sum of two waves not the product.


---
"Beat modulated" ??? LOL, if you're talking about the linear
summation of a couple of sine waves, then there is _no_ modulation
of any type taking place and the instantaneous voltage (or whatever)
out of the system will be the simple algebraic sum of the inputs
times whatever _linear_ gain there is in the system at that instant.


Absolutely correct. And as that "simple algebraic sum" varies with time,
which it will as the phases of the two signals slide past each other, it
produces the tuning "beat" we've been talking about. Totally linear.

Isaac

isw July 6th 07 05:12 AM

AM electromagnetic waves: 20 KHz modulation frequency on an astronomically-low carrier frequency
 
In article ,
"Bob Myers" wrote:

Bob M.


(Personal message; sorry, but e-mail wouldn't work.)

Hi, Bob. It's been a long time since we used to correspond on rec.audio.
Nice to hear from you.

Isaac

John Fields July 6th 07 05:38 AM

AM electromagnetic waves: 20 KHz modulation frequency on an astronomically-low carrier frequency
 
On Thu, 05 Jul 2007 18:37:21 -0700, Jim Kelley
wrote:

John Fields wrote:

You missed my point, which was that in a mixer (which the ear is,
since its amplitude response is nonlinear) as the two carriers
approach each other the difference frequency will go to zero and the
sum frequency will go to the second harmonic of either carrier,
making it largely appear to vanish into the fundamental.


Hi John -

Given two sources of pure sinusoidal tones whose individual amplitudes
are constant, is it your claim that you have heard the sum of the two
frequencies?


---
I think so.

A year or so ago I did some casual experiments with pure tones being
fed simultaneously into individual loudspeakers to which I listened,
and I recall that I heard tones which were higher pitched than
either of the lower-frequency signals. Subjective, I know, but
still...

A microphone with an amplitude response following that of the human
ear might do better.

Interestingly, this afternoon I did the zero-beat thing with 1kHz
being fed to one loudspeaker and a variable frequency oscillator
being fed to a separate loudspeaker, with me as the detector.

I also connected each oscillator to one channel of a Tektronix
2215A, inverted channel B, set the vertical amps to "ADD", and
adjusted the frequency of the VFO for near zero beat as shown on the
scope.

Sure enough, I heard the beat even though it came from different
sources, but I couldn't quite get it down to DC even with the
scope's trace at 0V.

Close, though, and as it turned out it wasn't the zero output
amplitude as shown by the scope which made the difference, it was
the amplitude of the signals which got to my ear(s). As fate would
have it, I have two ears, with some distance between them, so
perfect cancellation in one left some uncancelled signal in the
other, precluding what otherwise might have been perfect silence.
Except, perhaps, for the heterodynes.

Anyway, I'm off to the 75th reunion of the Panama Canal Society and
the 50th reunion of the Cristobal High School Class of '57 in
Orlando, so I'll see y'all when I get back on Sunday, GLW.


--
JF

Ron Baker, Pluralitas![_2_] July 6th 07 06:01 AM

AM electromagnetic waves: 20 KHz modulation frequency on an astronomically-low carrier frequency
 

"isw" wrote in message
...
In article ,
"Ron Baker, Pluralitas!" wrote:

"isw" wrote in message
...
In article ,
"Ron Baker, Pluralitas!" wrote:

"isw" wrote in message
...

snip


After you get done talking about modulation and sidebands, somebody
might want to take a stab at explaining why, if you tune a receiver to
the second harmonic (or any other harmonic) of a modulated carrier (AM
or FM; makes no difference), the audio comes out sounding exactly as it
does if you tune to the fundamental? That is, while the second harmonic
of the carrier is twice the frequency of the fundamental, the sidebands
of the second harmonic are *not* located at twice the frequencies of the
sidebands of the fundamental, but rather precisely as far from the
second harmonic of the carrier as they are from the fundamental.

Isaac

Whoa. I thought you were smoking something but
my curiosity is piqued.
I tried shortwave stations and heard no harmonics.
But that could be blamed on propagation.
There is an AM station here at 1.21 MHz that is S9+20 dB.
Tuned to 2.42 MHz. Nothing. Generally the lowest
harmonics should be strongest. Then I remembered
that many types of non-linearity favor odd harmonics.
Tuned to 3.63 MHz. Holy harmonics, Batman.
There it was, and the modulation was not multiplied!
Voices sounded normal pitch. When music was
played the pitch was the same on the original and
the harmonic.

One clue is that the effect comes and goes rather
abruptly. It seems to switch in and out rather
than fade in and out. Maybe the coming and going
is from switching the audio material source?

This is strange. If a signal is multiplied then the sidebands
should be multiplied too.
Maybe the carrier generator is generating a
harmonic and the harmonic is also being modulated
with the normal audio in the modulator.
But then that signal would have to make it through
the power amp and the antenna. Possible, but
why would it come and go?
Strange.

Hint: Modulation is a "rate effect".

Isaac


Please elaborate. I am so eager to hear the
explanation.


The sidebands only show up because there is a rate of change of the
carrier -- amplitude or frequency/phase, depending; they aren't
separate, stand-alone signals. Since the rate of change of the amplitude
of the second harmonic is identical to that of the fundamental, the
sidebands show up the same distance away, not twice as distant.
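
A short numpy sketch of that point (mine; the 100 Hz carrier, 5 Hz
modulating tone, and square-law distortion are arbitrary illustrative
stand-ins for whatever generates the harmonic): build an AM signal,
square it to create energy at the second harmonic, and list the spectral
lines near 2*fc. The dominant sidebands sit at 2*fc +/- fm, the same
audio offset as around the fundamental, not at +/- 2*fm.

import numpy as np

fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)

fc, fm, m = 100.0, 5.0, 0.5          # carrier, modulating tone, modulation index
am = (1 + m * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)

# A square-law nonlinearity (purely a stand-in) produces a second harmonic.
distorted = am ** 2

spec = np.abs(np.fft.rfft(distorted)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)

near_2fc = (freqs > 2 * fc - 4 * fm) & (freqs < 2 * fc + 4 * fm)
for f, a in zip(freqs[near_2fc], spec[near_2fc]):
    if a > 1e-3:
        print(f"{f:6.1f} Hz   amplitude {a:.4f}")
# Strongest lines: 195, 200, 205 Hz -- i.e. 2*fc +/- fm, not 2*fc +/- 2*fm --
# so audio detected at the harmonic has the same pitch as at the fundamental.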

Isaac


That doesn't explain why the effect would
come and go.
But once again you have surprised me.
Your explanation of the non-multiplied sidebands,
while qualitative and incomplete, is sound.
It looks to me as if the triple-frequency sidebands
are there, but the basic sidebands dominate,
especially at lower modulation indices.




Ron Baker, Pluralitas![_2_] July 6th 07 06:13 AM

AM electromagnetic waves: 20 KHz modulation frequency on an astronomically-low carrier frequency
 

"isw" wrote in message
...
In article ,
"Ron Baker, Pluralitas!" wrote:

"isw" wrote in message
...
In article ,
"Ron Baker, Pluralitas!" wrote:


snip


While it might not be obvious, the two cases I
described are basically identical. And this
situation occurs in real life, i.e. in radio signals,
oceanography, and guitar tuning.

The beat you hear during guitar tuning is not modulation; there is
no
non-linear process involved (i.e. no multiplication).

Isaac

In short, the human auditory system is not linear.
It has a finite resolution bandwidth. It can't resolve
two tones separated by a few hertz as two separate tones.
(But if they are separated by 100 Hz they can easily
be distinguished without hearing a beat.)

Two tones 100 Hz apart may or may not be perceived separately; depends
on a lot of other factors. MP3 encoding, for example, depends on the
ear's (very predictable) inability to discern tones "nearby" to other,
louder ones.


I'll remember that the next time I'm tuning
an MP3 guitar.


The same effect can be seen on a spectrum analyzer.
Give it two frequencies separated by 1 Hz. Set the
resolution bandwidth to 10 Hz. You'll see the peak
rise and fall at 1 Hz.
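
Here is a sketch of exactly that observation (mine, with arbitrary
numbers: tones at 100 and 101 Hz, and 0.1 s analysis frames standing in
for a 10 Hz resolution bandwidth). Successive FFT frames show a single
unresolved peak whose height rises and falls at the 1 Hz difference
rate, in a perfectly linear system.

import numpy as np

fs = 1000.0
f1, f2 = 100.0, 101.0                 # 1 Hz apart
t = np.arange(0, 2.0, 1.0 / fs)
s = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

frame = int(0.1 * fs)                 # 0.1 s frames -> roughly 10 Hz resolution
for k in range(0, len(s) - frame, frame):
    seg = s[k:k + frame] * np.hanning(frame)
    peak = np.abs(np.fft.rfft(seg)).max()
    print(f"t = {k / fs:3.1f} s   peak = {peak:5.1f}")
# The peak swells and collapses once per second -- the 1 Hz difference rate.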

Yup. And the spectrum analyzer is (hopefully) a very linear system,
producing no intermodulation of its own.

Isaac


What does a spectrum analyzer use to arrive at
amplitude values? An envelope detector?
Is that linear?


I'm sure there's more than one way to do it, but I feel certain that any


Which of them is linear?

competently designed unit will not add any signals of its own to what it
is being used to analyze.

Isaac




John Fields July 6th 07 06:24 AM

AM electromagnetic waves: 20 KHz modulation frequency on an astronomically-low carrier frequency
 
On Thu, 5 Jul 2007 20:02:15 -0600, "Bob Myers"
wrote:


"John Fields" wrote in message
...

You missed my point, which was that in a mixer (which the ear is,
since its amplitude response is nonlinear) as the two carriers
approach each other the difference frequency will go to zero and the
sum frequency will go to the second harmonic of either carrier,
making it largely appear to vanish into the fundamental.


Sorry, John - while the ear's amplitude response IS nonlinear, it
does not act as a mixer.


---
Sorry, Bob. If the ear's amplitude response is nonlinear, it has no
choice _but_ to act as a mixer.
---

"Mixing" (multiplication) occurs when
a given nonlinear element (in electronics, a diode or transistor, for
example) is presented with two signals of different frequencies.
But the human ear doesn't work in that manner - there is no single
nonlinear element which is receiving more than one signal.


---
Not true.

Just look at the tympanic membrane, for example.

Consider it a drumhead stretched across a restraining ring and it
becomes obvious that the excursion of its center with respect to the
pressure exerted on its surface won't be constant for _any_ range of
sound pressure levels it experiences.

Consequently, when it's hit with two different frequencies, its
displacement will vary non-linearly with the pressures they exert
and sidebands will be generated.
---
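
A small sketch of the general mechanism being claimed (my own toy
example; the quadratic term is purely illustrative and is not offered as
a model of the eardrum): push the sum of two tones through any
memoryless nonlinearity and sum and difference products appear that the
linear sum does not contain.

import numpy as np

fs = 8000.0
t = np.arange(0, 1.0, 1.0 / fs)
f1, f2 = 600.0, 440.0
x = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

# Toy memoryless nonlinearity: a small quadratic term added to the input.
y = x + 0.2 * x ** 2

def lines(sig, thresh=0.01):
    spec = np.abs(np.fft.rfft(sig)) / len(sig)
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    return sorted(float(f) for f in freqs[spec > thresh])

print("linear sum:     ", lines(x))   # [440, 600] only
print("after nonlinear:", lines(y))   # adds 0 (DC), 160 (difference), 880, 1040 (sum), 1200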

Frequency discrimination in the ear occurs through the resonant
frequencies of the 20-30,000 fibers which make up the basilar
membrane within the cochlea. Each fiber responds only to those
tones which are at or very near its resonant frequency. While
the response of each fiber to the amplitude of the signal is nonlinear,
no mixing occurs because each responds, in essence, only to a
single tone. A model for the hearing process might be 30,000 or
so non-linear meters, each seeing the output of a very narrow-band
bandpass filter covering a specific frequency within the audio
range. There is clearly no mixing, at least as the term is commonly
used in electronics, going on in such a situation, even though there
is non-linearity in some aspect of the system's response.

Audible "beats" are perceived not because there is mixing going on
within the ear, but instead are due to cycles of constructive and
destructive interference going on in the air between the two original
tones.


---
Not necessarily.

More on Sunday.



--
JF

Ron Baker, Pluralitas![_2_] July 6th 07 06:27 AM

AM electromagnetic waves: 20 KHz modulation frequency on an astronomically-low carrier frequency
 

"Don Bowey" wrote in message
...
On 7/5/07 12:00 AM, in article ,
"Ron Baker, Pluralitas!" wrote:


"Don Bowey" wrote in message
...
On 7/4/07 8:42 PM, in article ,
"Ron
Baker, Pluralitas!" wrote:


"Don Bowey" wrote in message
...
On 7/4/07 10:16 AM, in article
,
"Ron Baker, Pluralitas!" wrote:


"Don Bowey" wrote in message
...
On 7/4/07 7:52 AM, in article
,
"Ron
Baker, Pluralitas!" wrote:

snip


cos(a) * cos(b) = 0.5 * (cos[a+b] + cos[a-b])

Basically: multiplying two sine waves is
the same as adding the (half amplitude)
sum and difference frequencies.

No, they aren't the same at all, they only appear to be the same before
they are examined. The two sidebands will not have the correct phase
relationship.

What do you mean? What is the "correct"
relationship?


One could, temporarily, mistake the added combination for a full
carrier with independent sidebands, however.




(For sines it is
 sin(a) * sin(b) = 0.5 * (cos[a-b] - cos[a+b])
                 = 0.5 * (sin[a-b+90degrees] - sin[a+b+90degrees])
                 = 0.5 * (sin[a-b+90degrees] + sin[a+b-90degrees])
)
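
A quick numeric check of the identities quoted above (my addition),
evaluated at arbitrary angles:

import numpy as np

rng = np.random.default_rng(0)
a, b = rng.uniform(0, 2 * np.pi, size=2)      # two arbitrary angles

print(np.isclose(np.cos(a) * np.cos(b),
                 0.5 * (np.cos(a + b) + np.cos(a - b))))   # True

print(np.isclose(np.sin(a) * np.sin(b),
                 0.5 * (np.cos(a - b) - np.cos(a + b))))   # True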

--
rb





When AM is correctly accomplished (a single voiceband signal is
modulated

The questions I posed were not about AM. The
subject could have been viewed as DSB but that
wasn't the specific intent either.

What was the subject of your question?


Copying from my original post:

Suppose you have a 1 MHz sine wave whose amplitude
is multiplied by a 0.1 MHz sine wave.
What would it look like on an oscilloscope?
What would it look like on a spectrum analyzer?

Then suppose you have a 1.1 MHz sine wave added
to a 0.9 MHz sine wave.
What would that look like on an oscilloscope?
What would that look like on a spectrum analyzer?




So the first (1) is an AM question and the second (2) is a non-AM
question...


What is the difference between AM and DSB?
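
For the record, here is a numpy sketch comparing the two cases Ron posed
above (mine; everything is scaled down by a factor of 10^4 so a modest
sample rate suffices: a 100 Hz cosine multiplied by a 10 Hz cosine
versus the sum of 110 Hz and 90 Hz cosines, using the cosine convention
Ron later specified). On a spectrum analyzer both show lines only at 90
and 110 Hz (the product has no line at 100 Hz), and in the time domain
the two waveforms are identical apart from the factor of one half in the
identity.

import numpy as np

fs = 10000.0
t = np.arange(0, 1.0, 1.0 / fs)

# Case 1: a 100 Hz "carrier" multiplied by a 10 Hz tone (scaled-down 1 MHz x 0.1 MHz)
prod = np.cos(2 * np.pi * 100 * t) * np.cos(2 * np.pi * 10 * t)

# Case 2: the sum of 110 Hz and 90 Hz tones (scaled-down 1.1 MHz + 0.9 MHz)
summ = np.cos(2 * np.pi * 110 * t) + np.cos(2 * np.pi * 90 * t)

def lines(sig):
    spec = np.abs(np.fft.rfft(sig)) / len(sig)
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    return {float(f): round(2 * float(a), 3) for f, a in zip(freqs, spec) if a > 0.01}

print("product:", lines(prod))        # {90.0: 0.5, 110.0: 0.5} -- no line at 100 Hz
print("sum:    ", lines(summ))        # {90.0: 1.0, 110.0: 1.0}

print(np.allclose(2 * prod, summ))    # True: cos(a)*cos(b) = 0.5*(cos(a+b) + cos(a-b))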




