RadioBanter

RadioBanter (https://www.radiobanter.com/)
-   Antenna (https://www.radiobanter.com/antenna/)
-   -   Noise figure paradox (https://www.radiobanter.com/antenna/141934-noise-figure-paradox.html)

Joel Koltner[_2_] March 24th 09 05:19 PM

Noise figure paradox
 
"Jim Lux" wrote in message
...
For example, an Agilent N5181 looks like the noise floor is
around -160dBc/Hz well away from the carrier (e.g. 10MHz). That's probably
representative of the overall noise floor with the carrier at some level
like 0dBm. If we take that level, then it's 14 dB above kTB of -174 dBm/Hz


Thanks. Presumably then if you dial the carrier power down much below -14dBm
you're then going to have a -174dBm/Hz noise floor due to the output
attenuators as Owen mentioned...

---Joel
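As a sanity check on the numbers above, here is the arithmetic in a few lines of Python (the 0 dBm carrier and -160 dBc/Hz floor are the thread's illustrative values, not datasheet figures):

```python
import math

# Thermal noise floor kTB in a 1 Hz bandwidth, expressed in dBm/Hz.
k = 1.380649e-23          # Boltzmann constant, J/K
T = 290.0                 # standard reference temperature, K
kTB_dBm_Hz = 10 * math.log10(k * T / 1e-3)

# Far-from-carrier noise of -160 dBc/Hz on a 0 dBm carrier gives an
# absolute noise density of -160 dBm/Hz.
carrier_dBm = 0.0
floor_dBc_Hz = -160.0
abs_floor_dBm_Hz = carrier_dBm + floor_dBc_Hz

print(round(kTB_dBm_Hz, 1))                       # about -174.0
print(round(abs_floor_dBm_Hz - kTB_dBm_Hz, 1))    # about 14.0 dB above kTB
```

Dialing the carrier down much below -14 dBm would then leave the thermal floor, not the phase-noise floor, as the limit.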



Joel Koltner[_2_] March 24th 09 07:17 PM

Noise figure paradox
 
"Owen Duffy" wrote in message
...
Joel, you misunderstood my calc.


Yeah, Ian pointed that out to me. My apologies...



Richard Clark March 24th 09 11:02 PM

Noise figure paradox
 
On Mon, 23 Mar 2009 23:39:42 GMT, Owen Duffy wrote:

My observation is that convention is to use the antenna connector or w/g
flange as a reference point for such calcs. It may even be laid down in
standards... but I am not sure. Someone else may know?


Hi Owen,

What you describe is typically called the "reference plane" in
metrology. This is a term that is found in many standard methods of
RF measurement. Most often it is a point that is neutral to the
introduction of new variables (and concomitant error).  To achieve
this neutrality, it must be an access point that is reproducible -
hence the association with the connector or flange as these are
controlled points of access. Connectors can be measured separately to
validate their contribution to error and variability as they can
typically be mated to instrumentation whose own connectors have been
validated by more rigorous means.

There are other issues with the reference plane, one of which is heat
transfer through it which should ring bells here.

73's
Richard Clark, KB7QHC

Joel Koltner[_2_] March 24th 09 11:08 PM

Noise figure paradox
 
Hi Richard,
"Richard Clark" wrote in message
...
Deep space communications proceeds many dB below the noise floor
enabled through technology that has become ubiquitous in cell phones -
Spread Spectrum.


Aka, "averaging" (for the direct-sequence style of spread spectrum, ala CDMA
or GPS).

Still, AFAIK all that averaging can do is to effectively lower the noise floor
at the expense of bandwidth (effectively data rate), and the more benefit
you'd like to get from that averaging, the more important your sample points
become, which gets back to needing as little phase noise as possible on your
oscillators. Is there some other angle here?

---Joel



Richard Clark March 24th 09 11:18 PM

Noise figure paradox
 
On Mon, 23 Mar 2009 15:28:51 -0700, "Joel Koltner"
wrote:

I realized awhile back that noise is the primary factor that limits how far
you can transmit a signal and still recover it successfully.  (Granted, these
days it's often phase noise in oscillators rather than the noise figures in
amplifiers that determines this, but still.)


Hi Joel,

This is an antiquated consideration limited to amplitude modulation,
the same species as noise.  I suppose there are noise products that
fall into the phase/frequency category that lay claim to "primary
factor," but that is a rather limited appeal.

Deep space communications proceeds many dB below the noise floor
enabled through technology that has become ubiquitous in cell phones -
Spread Spectrum. I have developed pulsed measurement applications for
which any single pulse has a poor S+N/N, but through repetition
improves S+N/N response with the square root increase of samples
taken.

73's
Richard Clark, KB7QHC
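Richard's square-root-of-samples improvement can be sketched numerically; the pulse amplitude and noise level below are arbitrary illustrative values:

```python
import random

# Coherent averaging of N repeated pulses: the pulse adds coherently
# while uncorrelated noise adds only in power, so the amplitude SNR
# improves by roughly sqrt(N).
random.seed(1)
N = 400                   # number of repetitions
signal = 0.1              # pulse amplitude, well below the noise
noise_rms = 1.0           # rms noise per sample

acc = 0.0
for _ in range(N):
    acc += signal + random.gauss(0.0, noise_rms)
avg = acc / N

# Noise on the average has rms of about noise_rms / sqrt(N) = 0.05,
# so a pulse with single-shot SNR of 0.1 becomes clearly visible.
print(avg)   # close to 0.1
```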

Harold E. Johnson March 24th 09 11:47 PM

Noise figure paradox
 
Deep space communications proceeds many dB below the noise floor
enabled through technology that has become ubiquitous in cell phones -
Spread Spectrum. I have developed pulsed measurement applications for
which any single pulse has a poor S+N/N, but through repetition
improves S+N/N response with the square root increase of samples
taken.

73's
Richard Clark, KB7QHC


And others call it autocorrelation?

W4ZCB



Richard Clark March 25th 09 01:45 AM

Noise figure paradox
 
On Tue, 24 Mar 2009 23:47:52 GMT, "Harold E. Johnson"
wrote:

Deep space communications proceeds many dB below the noise floor
enabled through technology that has become ubiquitous in cell phones -
Spread Spectrum. I have developed pulsed measurement applications for
which any single pulse has a poor S+N/N, but through repetition
improves S+N/N response with the square root increase of samples
taken.

73's
Richard Clark, KB7QHC


And others call it autocorrelation?


Which?

73's
Richard Clark, KB7QHC

Richard Clark March 25th 09 01:54 AM

Noise figure paradox
 
On Tue, 24 Mar 2009 16:08:46 -0700, "Joel Koltner"
wrote:

Still, AFAIK all that averaging can do is to effectively lower the noise floor
at the expense of bandwidth (effectively data rate), and the more benefit
you'd like to get from that averaging, the more important your sample points
become, which gets back to needing as little phase noise as possible on your
oscillators. Is there some other angle here?


That was a curious objection to a solution answering a problem as it
was specifically stated. Are there angles to showing noise being
overcome by several means when you offered none?

noise is the primary factor that limits how far
you can transmit a signal and still recover it successfully.


Again, "qualified" statements such as that should be able to support
themselves with quantifiables.

What "noise" were you speaking about when through the course of this
thread it has most often been confined to kTB rather than, say, cross-talk,
splatter, spurs, whistlers, howlers, jamming, and a host of others?

What constitutes "successfully?" Is this a personal sense of well
being, or is it supported by a metric?

What constitutes "far"ness? At one point you offered from hilltop to
hilltop with antennas combining to 60dB directivity (but curiously
expressed as SNR). Lacking any quantifiable of what size hills,
shouting can be competitive at some range, and reasonably assured of
being understood to the first order of approximation without need for
averaging.

Spread Spectrum is so ubiquitous that waiting on anticipated exotic
failures of phase noise, on the face of an overwhelming absence of
problems, is wasted time indeed. Communication problems via the Cell
technology are more likely packets going astray on the network than
through the air. Perhaps it is with the clock chips of network
routers where phase noise fulfills this perception of sampling error -
but even there, collision is more likely and the system merely
abandons retries for the sake of economy not for overwhelming noise.

As to sampling error via the net. Time was when 16x over-sampling for
RS-232 was the norm. Going to one sample and transmitting at the base
frequency didn't help until quadrature and more complex phase systems
emerged. Same infrastructure, same noise, faster and more efficient
(not less) communication. Shannon wrote about the bit error rate in
the presence of noise, modulation techniques have improved throughput,
not lowered it.

73's
Richard Clark, KB7QHC

JIMMIE March 25th 09 06:00 AM

Noise figure paradox
 
On Mar 24, 9:45*pm, Richard Clark wrote:
On Tue, 24 Mar 2009 23:47:52 GMT, "Harold E. Johnson"

wrote:
Deep space communications proceeds many dB below the noise floor
enabled through technology that has become ubiquitous in cell phones -
Spread Spectrum. *I have developed pulsed measurement applications for
which any single pulse has a poor S+N/N, but through repetition
improves S+N/N response with the square root increase of samples
taken.


73's
Richard Clark, KB7QHC


And others call it autocorrelation?


Which?

73's
Richard Clark, KB7QHC


Radar people, for one; it's also known as pulse-pair radar, where data from
multiple returns are compared. The data can be from multiple hits on a
target using the same radar or the data can come from multiple radars.
MDS level improvement below the noise level can be achieved. It's also
used for transmitting data. One other specific use I am familiar with
involves transmission of radar data via radio. So the radar uses it as
well as the mode of transmission of the radar data from the radar to
the user.




Jimmie

Richard Clark March 25th 09 07:29 AM

Noise figure paradox
 
On Fri, 20 Mar 2009 19:46:53 -0700, "Joel Koltner"
wrote:

Say I have an antenna that I know happens to provide an SNR
of 60dB...


Returning to one of the few quantifiables, it would be instructive to
judge why it is so astonishing as a point to begin a dive into the
discussion of noise figure. In other posts related to deep space
probes' abilities to recover data from beneath the noise floor, much
less cell phones to operate in a sea of congestion, I encountered the
economic objection that such methods cost too much - expense of
bandwidth.

Well, not having seen anything more than yet another qualification -
how much is "too much?" It is time to draw back and ask how much is
enough? What would NOT be too expensive? Replacing qualitative
objections with quantitative objections sometimes evokes a horse laugh
when the magnitude of the qualitative issue ceases to exhibit much
quality.

I won't open this round of enquiry with exotic Spread Spectrum which
portends the objection of phase issues with clocks (even knowing that
such modulation techniques automatically incorporate slipping to
adjust for just such problems). Instead I will slip back some 60
years to the seminal paper published by Claude Shannon who figured
this all out (with H.W. Bode) and quote some metrics for various
coding (modulation) schemes. Search for "Communication in the
Presence of Noise." When you google, search in its image data space
for the cogent chart that I will be drawing on, below. Obtaining the
paper may take more effort (or simply email me for a copy).

Starting with BPSK and a S+N/N of roughly 10.5 dB, the bit error rate
is one bad bit in one million bits. This is probably the most
plug-ordinary form of data communication coming down the pike; so one
has to ask:
"is this good enough?"
If not, then "SNR of 60dB" is going to have to demand some really
astonishing expectations to push system designers to ante up the
additional 49.5 dB.

Well, let's say those astonishing expectations are as wild as
demanding proof that you won't contribute to global warming if you
chip an ice cube off of a glacier - such are the issues of scale when
you chug the numbers for BPSK.

OK, so as to not melt down the planet, we step up the complexity of
modulation to better than the last solution for "good enough." Let's
take the Voyager probes of the deep planets where at a S+N/N of 2.53
dB (in what is called 8 dB coding gain) the same error rate of 1 bit
in 1 million is achieved. One has to ask:
"is this good enough?"
If not, then "SNR of 60dB" is going to have to demand some really
astronomical expectations.

OK, perhaps this is a problem demanding really deep pockets that
exceed the several $Trillion being spent on the past 8 years of
Reaganomic neglect. (Why else pound the desk for that extra 57 dB?)
Let's go the full distance to the Shannon limit. It will give us that
same 1 bit error for every 1,000,000 at -1.5 dB S+N/N. If this isn't
below the noise floor, then the problem demanding 60 dB will never
find the solution to positively answer:
"is this good enough?"

73's
Richard Clark, KB7QHC
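The numbers Richard quotes can be roughly reproduced from the standard uncoded-BPSK relation BER = Q(sqrt(2*Eb/N0)); note that the textbook Shannon-limit figure works out to about -1.6 dB, close to the -1.5 dB cited:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Uncoded BPSK at roughly 10.5 dB Eb/N0.
ebno_db = 10.5
ebno = 10 ** (ebno_db / 10)
ber = q_func(math.sqrt(2 * ebno))
print(ber)   # on the order of 1e-6, i.e. one bad bit per million

# The ultimate Shannon limit on Eb/N0 is ln(2), about -1.6 dB.
ebno_min_db = 10 * math.log10(math.log(2))
print(round(ebno_min_db, 2))
```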

Richard Clark March 25th 09 07:41 AM

Noise figure paradox
 
On Tue, 24 Mar 2009 23:00:14 -0700 (PDT), JIMMIE
wrote:

On Mar 24, 9:45*pm, Richard Clark wrote:
On Tue, 24 Mar 2009 23:47:52 GMT, "Harold E. Johnson"

wrote:
Deep space communications proceeds many dB below the noise floor
enabled through technology that has become ubiquitous in cell phones -
Spread Spectrum. *I have developed pulsed measurement applications for
which any single pulse has a poor S+N/N, but through repetition
improves S+N/N response with the square root increase of samples
taken.


73's
Richard Clark, KB7QHC


And others call it autocorrelation?


Which?

73's
Richard Clark, KB7QHC


Radar people, for one; it's also known as pulse-pair radar, where data from
multiple returns are compared. The data can be from multiple hits on a
target using the same radar or the data can come from multiple radars.
MDS level improvement below the noise level can be achieved. It's also
used for transmitting data. One other specific use I am familiar with
involves transmission of radar data via radio. So the radar uses it as
well as the mode of transmission of the radar data from the radar to
the user.




Jimmie


My question of Which? was directed to Harold's broad brush painting
two different illustrations. Spread spectrum incorporates cross
correlation through slipping the gold code to find a flash. My design
performed a form of forced auto correlation (much like your radar
example, perhaps) but reduced noise as a function of that noise being
uncorrelated to the pulse.

Perhaps this is all saying the same thing at a very fundamental level.
However, I would guess this all hinges on the reduction of noise
following the square root of the ratio of the sample counts.
Conceptually, the distinction between auto or cross correlation is
really of minor consequence.

73's
Richard Clark, KB7QHC
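The cross correlation Richard describes can be sketched as follows, assuming the code is already aligned (in a real despreader the receiver slips the code until this peak appears); the chip count and amplitudes are illustrative:

```python
import random

# Correlate received samples against a known +/-1 spreading code.
# The code terms add coherently; uncorrelated noise averages toward
# zero as 1/sqrt(N), the same square-root law as pulse averaging.
random.seed(7)
N = 1023
code = [random.choice([-1, 1]) for _ in range(N)]       # pseudo-random chips
rx = [0.2 * c + random.gauss(0.0, 1.0) for c in code]   # per-chip SNR well below 0 dB

corr = sum(r * c for r, c in zip(rx, code)) / N
print(corr)   # near 0.2: the signal recovered from under the noise
```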

J. Mc Laughlin March 25th 09 12:28 PM

Noise figure paradox
 
Dear Group:

Three conclusions/observations:
1. Noise temperature is the unambiguous way of specifying/describing the
noise performance of a receiver.

2. The paper by Costas in December 1959 Proc of IRE is also valuable to
this discussion. Be sure to read the follow-up comments.

3. I heard with my own ears Shannon observe that, from an engineering point
of view, if one did not have an occasional transmission error one was using
a wasteful amount of power. Shannon was a Michigan boy. 60 dB SNR??? Not
in fly-over land.

73, Mac N8TT

--
J. McLaughlin; Michigan, USA
Home:
"Richard Clark" wrote in message
...
On Fri, 20 Mar 2009 19:46:53 -0700, "Joel Koltner"
wrote:

Say I have an antenna that I know happens to provide an SNR
of 60dB...


Returning to one of the few quantifiables, it would be instructive to
judge why it is so astonishing as a point to begin a dive into the
discussion of noise figure. In other posts related to deep space
probe's abilities to recover data from beneath the noise floor, much
less cell phones to operate in a sea of congestion, I encountered the
economic objection that such methods cost too much - expense of
bandwidth.

Well, not having seen anything more than yet another qualification -
how much is "too much?" It is time to draw back and ask how much is
enough? What would NOT be too expensive? Replacing qualitative
objections with quantitative objections sometimes evokes a horse laugh
when the magnitude of the qualitative issue ceases to exhibit much
quality.

I won't open this round of enquiry with exotic Spread Spectrum which
portends the objection of phase issues with clocks (even knowing that
such modulation techniques automatically incorporate slipping to
adjust for just such problems). Instead I will slip back some 60
years to the seminal paper published by Claude Shannon who figured
this all out (with H.W. Bode) and quote some metrics for various
coding (modulation) schemes. Search for "Communication in the
Presence of Noise." When you google, search in its image data space
for the cogent chart that I will be drawing on, below. Obtaining the
paper may take more effort (or simply email me for a copy).

Starting with BPSK and a S+N/N of roughly 10.5 dB, the bit error rate
is one bad bit in one million bits. This is probably the most
plug-ordinary form of data communication coming down the pike; so one
has to ask:
"is this good enough?"
If not, then "SNR of 60dB" is going to have to demand some really
astonishing expectations to push system designers to ante up the
additional 49.5 dB.

Well, let's say those astonishing expectations are as wild as
demanding proof that you won't contribute to global warming if you
chip an ice cube off of a glacier - such are the issues of scale when
you chug the numbers for BPSK.

OK, so as to not melt down the planet, we step up the complexity of
modulation to better than the last solution for "good enough." Let's
take the Voyager probes of the deep planets where at a S+N/N of 2.53
dB (in what is called 8 dB coding gain) the same error rate of 1 bit
in 1 million is achieved. One has to ask:
"is this good enough?"
If not, then "SNR of 60dB" is going to have to demand some really
astronomical expectations.

OK, perhaps this is a problem demanding really deep pockets that
exceed the several $Trillion being spent on the past 8 years of
Reaganomic neglect. (Why else pound the desk for that extra 57 dB?)
Let's go the full distance to the Shannon limit. It will give us that
same 1 bit error for every 1,000,000 at -1.5 dB S+N/N. If this isn't
below the noise floor, then the problem demanding 60 dB will never
find the solution to positively answer:
"is this good enough?"

73's
Richard Clark, KB7QHC




Joel Koltner[_2_] March 25th 09 04:44 PM

Noise figure paradox
 
Hi Richard,

"Richard Clark" wrote in message
...
That was a curious objection to a solution answering a problem as it
was specifically stated. Are there angles to showing noise being
overcome by several means when you offered none?


My means were "reduce the noise figure of the amplifiers in your front-end"
and "reduce the phase noise of your oscillators/PLLs/etc." "Averaging the
input" is a clear winner here too.

What "noise" were you speaking about when through the course of this
thread it has most often been confined to kTB than, say, cross-talk,
splatter, spurs, whistlers, howlers, jamming, and a host of others?


For the sake of this thread, it's been just thermal and oscillator noise since
these are -- AIUI -- what limit traditional analog (AM/FM/PM) communication
systems. Most of the rest of what you've listed are certainly real-world
problems, but they're (hopefully) somewhat transient in nature and are --
AIUI -- often all lumped into a single "fade margin" when designing an
end-to-end system. E.g., the transmission medium is often modeled with
something no more complex than, say, the Friis equation and Rayleigh fading.
I do realize that in the real world things like spurs or splatter can end up
being very expensive (frequency changes, high-order/high-power filters, etc.)
if you're co-locating your radio with many others on a hilltop -- I've been
told that if you take a run-of-the-mill radio to a place like Mt. Diablo in
California, many of them just fall over from front end overload and cease to
function at all.

What constitutes "successfully?" Is this a personal sense of well
being, or is it supported by a metric?


Usually something like a 12dB SINAD standard is used for analog modulation
schemes or a 1e-3 bit-error rate for digital modulation techniques (before any
error correction coding is applied).

Spread Spectrum is so ubiquitous that waiting on anticipated exotic
failures of phase noise, on the face of an overwhelming absence of
problems, is wasted time indeed.


It's not ubiquitous on amateur radio, though.

But yeah, commercially it certainly is, and my understanding is that phase
noise in oscillators in a Big Deal for cell sites, requiring much more
stringent standards than what a 2m/440 HT's oscillator is likely to provide.
The network timing of cell sites is sync'd to atomic clocks via
GPS-disciplined oscillators as well.

As to sampling error via the net. Time was when 16x over-sampling for
RS-232 was the norm.


I've met many RS-232 routines that don't do any over-sampling at all -- I've
even written a few. :-)  For most applications the SNR of an RS-232 signal is
typically well in excess of 20dB if you don't exceed the original specs for
cable length or bit rate.  (Granted, at least historically before RS-232
started falling out of use, it was probably one of the most "abused"
electrical interconnect standards in the world, and 16x oversampling certainly
would let you go further than simple-minded receivers with no oversampling.)

---Joel



Joel Koltner[_2_] March 25th 09 06:11 PM

Noise figure paradox
 
Hi Richard,

"Richard Clark" wrote in message
...
In other posts related to deep space
probe's abilities to recover data from beneath the noise floor, much
less cell phones to operate in a sea of congestion, I encountered the
economic objection that such methods cost too much - expense of
bandwidth.


I don't think anyone stated they cost "too much," just that there is a cost in
increased bandwidth, and bandwidth isn't free.

In general the spread spectrum processing gain is proportional to the
bandwidth increase over what the original data stream would require without
any spreading.

Well, not having seen anything more than yet another qualification -
how much is "too much?"


Definitely depends on "the market." You can bet the cell phone developers
have sophisticated models of possible radios and the channel and associate
with each piece a cost (e.g., bandwidth = $xx/Hz, improving close-in phase
noise of master oscillator = $xx/dBc, etc.), and then run a lot of simulations
to try to make the average cost of each bit as low as possible. Of course,
there are many variables that are impossible to ascertain precisely such as
how quick the uptake of new cell services (e.g., 3G data) will be in a given
area (as this drives how many towers you put there initially and how quickly
you roll out more), how fast fab yields will improve (lowering your costs and
improving RF performance), etc.

Starting with BPSK and a S+N/N of roughly 10.5 dB, the bit error rate
is one bad bit in one million bits. This is probably the most
plug-ordinary form of data communication coming down the pike; so one
has to ask:
"is this good enough?"
If not, then "SNR of 60dB" is going to have to demand some really
astonishing expectations to push system designers to ante up the
additional 49.5 dB.


Why Richard, I'm starting to think you don't spend thousands of dollars per
meter on your speaker cables. :-) Hey, see this:
http://www.noiseaddicts.com/2008/11/...st-audiophile/ -
- $7,000/m speaker cables! Includes, "Spread Spectrum Technology!" :-)

That being said, back in the analog broadcast TV days (oh, wait, not all of
them are gone yet, but they will be soon), I believe that "studio quality"
NTSC is considered to be 50dB SNR (for the video), whereas people would start
to notice the noise if the received signal's SNR had dropped below 30ish dB,
and 10dB produces an effectively unwatchable picture.  This reinforces your
point that "good enough" is highly subjective depending on how the
"information" transmitted is actually used.

You make a good point that the Shannon limit gives a good quantitative measure
of how you go about trading off bandwidth for SNR (effectively power if your
noise is fixed by, e.g., atmospheric noise coming into an antenna).  Shannon
doesn't give any hint as to how to achieve the limits specified, although I've
read that with fancy digital modulation techniques and "turbo"
error-correcting codes, one can come very close to the limit.
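The bandwidth-for-SNR trade can be made concrete with the Shannon capacity formula C = B*log2(1 + SNR); the 3 kHz / 30 dB telephone-channel figures below are illustrative:

```python
import math

def capacity_bps(bw_hz, snr_db):
    """Shannon channel capacity C = B * log2(1 + SNR) in bits/s."""
    return bw_hz * math.log2(1 + 10 ** (snr_db / 10))

# A telephone-like channel: 3 kHz at 30 dB SNR.
c_narrow = capacity_bps(3e3, 30.0)
print(round(c_narrow))       # roughly 30 kbit/s

# Spreading: what SNR buys the same capacity in 10x the bandwidth?
snr_needed_db = 10 * math.log10(2 ** (c_narrow / 30e3) - 1)
print(round(snr_needed_db, 2))   # about 0 dB -- signal equal to the noise
```

The same capacity can be bought with wide bandwidth at low SNR or narrow bandwidth at high SNR, which is the economics being argued over in this thread.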

---Joel



Joel Koltner[_2_] March 25th 09 06:19 PM

Noise figure paradox
 
"J. Mc Laughlin" wrote in message
.. .
2. The paper by Costas in December 1959 Proc of IRE is also valuable to
this discussion. Be sure to read the follow-up comments.


Is that available publicly anywhere?

3. I heard with my own ears Shannon observe that, from an engineering point
of view, if one did not have an occasional transmission error one was using
a wasteful amount of power. Shannon was a Michigan boy. 60 dB SNR??? Not
in fly-over land.


I think the counterpoint is that, particularly in mobile environments, you
often needed huge fade margins, e.g., 20-40dB wasn't uncommon for pager
systems. Hence in systems designed to have, say, an "average" of 30dB SNR
(same audio quality as the telephone system, assuming 3kHz bandwidth as well),
it wouldn't be surprising to occasionally find you're actually getting 60dB
SNR in the most ideal scenario.

Although perhaps designing for an average of 30dB SNR is a little high for a
paging system... anyone know? (I'm thinking 20dB might be a bit more
realistic.)

---Joel



Joel Koltner[_2_] March 25th 09 06:20 PM

Noise figure paradox
 
"Joel Koltner" wrote in message
...
Is that available publicly anywhere?


What I really meant here was, "Is that available *to download from the
Internet* publicly anywhere?"



Joel Koltner[_2_] March 25th 09 06:22 PM

Noise figure paradox
 
That's great information, Jim, thanks!



Joel Koltner[_2_] March 25th 09 09:13 PM

Noise figure paradox
 
Hi Richard,

"Richard Clark" wrote in message
...
I don't think anyone stated they cost "too much," just that there is a cost
in
increased bandwidth, and bandwidth isn't free.

Um, this last statement seems to be hedging by saying the same thing
in reverse order.


No, they really are different.  What costs too much for me might very well not
too much for the military or NASA, for instance.

It would be more compelling if you simply stated the cost for ANY
market.


The original example was meant to be more of a "textbook" problem, hence the
lack of elaboration on the specifics of the "market" involved.

I would suspect that "studio quality" observes other characteristics
of the signal.


Agreed, I would too.

A multipath reception could easily absorb a
considerable amount of interfering same-signal to abysmal results. It
would take a very sophisticated "noise" meter to perform the correct
S+N/N.


Yep, very true -- I think this is why you see people legitimately complaining
about the quality of their cable TV even though the cable installation tech
whips out his SINAD meter and verifies it meets spec; the quality of a
transmission can't always be boiled down to just one number.

The "Turbo" codes are achievable in silicon with moderate effort. A
work going back a dozen years or more can be found at:
http://sss-mag.com/G3RUH/index2.html


Great link, thanks!

---Joel



Richard Clark March 25th 09 09:30 PM

Noise figure paradox
 
On Wed, 25 Mar 2009 11:11:48 -0700, "Joel Koltner"
wrote:

Hi Richard,

"Richard Clark" wrote in message
.. .
In other posts related to deep space
probe's abilities to recover data from beneath the noise floor, much
less cell phones to operate in a sea of congestion, I encountered the
economic objection that such methods cost too much - expense of
bandwidth.


I don't think anyone stated they cost "too much," just that there is a cost in
increased bandwidth, and bandwidth isn't free.


Um, this last statement seems to be hedging by saying the same thing
in reverse order.

Well, not having seen anything more than yet another qualification -
how much is "too much?"


Definitely depends on "the market."


It would be more compelling if you simply stated the cost for ANY
market. Qualified statements are suitable for Madison Avenue to sell
cheese, but it doesn't make for an informed cost-based decision.

That being said, back in the analog broadcast TV days (oh, wait, not all of
them are gone yet, but they will be soon), I believe that "studio quality"
NTSC is considered to be 50dB SNR (for the video), whereas people would start
to notice the noise if the received signal's SNR had dropped below 30ish dB,
and 10dB produces an effectively unwatchable pictures. This reinforces your
point that "good enough" is highly subjective depending on how the
"information" transmitted is actually used.


I would suspect that "studio quality" observes other characteristics
of the signal. A multipath reception could easily absorb a
considerable amount of interfering same-signal to abysmal results. It
would take a very sophisticated "noise" meter to perform the correct
S+N/N.

You make a good point that the Shannon limit gives a good quantitative measure
of how you go about trading off bandwidth for SNR (effectively power if your
noise is fixed by, e.g., atmospheric noise coming into an antenna).  Shannon
doesn't give any hint as to how to achieve the limits specified, although I've
read that with fancy digital modulation techniques and "turbo"
error-correcting codes, one can come very close to the limit.


The "Turbo" codes are achievable in silicon with moderate effort. A
work going back a dozen years or more can be found at:
http://sss-mag.com/G3RUH/index2.html
(consult the adjoining pages for fuller discussion)

73's
Richard Clark, KB7QHC

Joel Koltner[_2_] March 25th 09 09:45 PM

Noise figure paradox
 
"Richard Clark" wrote in message
...
Which is no more complex than setting 4 register bits - I wouldn't
call that a "routine," however.
-- I've
even written a few. :-)

Why more than one? Were the rest undersampling routines?


These were software RS-232 receivers, so you make use of whatever timers, edge
interrupts, etc. that you have sitting around to first detect the start bit, load up
a timer to then trigger in (what should be) the middle of the bit time for the
sample, etc. I've written pretty much the same routines a small handful of
times on different CPUs and in different languages.

The first ones I wrote were on ~1MIP CPUs in assembly and were limited to
about 2400bps full-duplex if you were also trying to run a reasonably
responsive terminal emulator (e.g., wanted to still have 50% of the CPU
available for the emulator), whereas more recently I've written them on ~20MIP
CPUs in C and can easily do 9600bps full-duplex with only a small impact on
CPU usage.

---Joel
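A minimal sketch of the software receive approach Joel describes -- wait for the start-bit edge, then sample in the middle of each bit time -- written here in Python for brevity (his actual routines were in assembly and C, driven by timers and edge interrupts):

```python
def rx_byte(line, ticks_per_bit):
    """Decode one 8N1 frame from a list of line samples (1 = idle/mark)."""
    # Find the falling edge that marks the start bit.
    edge = next(i for i in range(1, len(line))
                if line[i - 1] == 1 and line[i] == 0)
    byte = 0
    for bit in range(8):
        # Sample at the middle of each data bit, LSB first.
        t = edge + ticks_per_bit // 2 + (bit + 1) * ticks_per_bit
        byte |= line[t] << bit
    return byte

# Build a test frame for 0x5A: idle, start(0), 8 data bits LSB-first,
# stop(1), with each bit held for 4 "ticks".
bits = [1, 0] + [(0x5A >> b) & 1 for b in range(8)] + [1]
line = [s for bit in bits for s in [bit] * 4]
print(hex(rx_byte(line, 4)))   # 0x5a
```

Mid-bit sampling with no oversampling works because, as noted above, the SNR on an in-spec RS-232 link is enormous; the only real enemy is clock drift over the frame.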



Richard Clark March 25th 09 10:31 PM

Noise figure paradox
 
On Wed, 25 Mar 2009 09:44:45 -0700, "Joel Koltner"
wrote:

As to sampling error via the net. Time was when 16x over-sampling for
RS-232 was the norm.


I've met many RS-232 routines that don't do any over-sampling at all


Which is no more complex than setting 4 register bits - I wouldn't
call that a "routine," however.

-- I've
even written a few. :-)


Why more than one? Were the rest undersampling routines?

Fuzzy-232? That copyrighted form of communication is an information
network layer supporting Cecil's (r) "standing wave current" (c)
explanation with answers that appear first, tailor-fitted to the
strawman question that follows - otherwise known as the Sub-optimal
Conjugated Hypothesis Information Transform (SCHIT) routine found in
quantum babbelizers everywhere. Discarding random bytes improves the
intelligibility and will whiten teeth.

73's
Richard Clark, KB7QHC

J. Mc Laughlin March 25th 09 11:22 PM

Noise figure paradox
 
Dear Joel Koltner (no call sign):

I know of no site where the classic paper may be downloaded. The paper
had a significant influence on how people thought about modulation and
frequency allocation. "Shannon, Poison (I can not think how to spell his
name) and the Radio Amateur" is the title of the paper. A good library
should be able to get you a copy. The same issue had a paper on small,
loaded cavities, which became the norm for front end selectivity in VHF
communication receivers.

Regards, Mac N8TT

--
J. McLaughlin; Michigan, USA
Home:
"Joel Koltner" wrote in message
...
"Joel Koltner" wrote in message
...
Is that available publicly anywhere?


What I really meant here was, "Is that available *to download from the
Internet* publicly anywhere?"





Joel Koltner[_2_] March 25th 09 11:29 PM

Noise figure paradox
 
Thanks Mac, I'll take a look next time I'm near a university library. (I'm in
southern Oregon and there aren't any engineering schools down here...)

---Joel (KE7CDV)



Richard Clark March 26th 09 12:07 AM

Noise figure paradox
 
On Wed, 25 Mar 2009 14:45:33 -0700, "Joel Koltner"
wrote:

These were software RS-232 receivers, so you make use of whatever timers, edge
interrupts, etc. that you have sitting around to first catch the start bit, then
load up a timer to trigger in (what should be) the middle of the bit time for
the sample, etc. I've written pretty much the same routines a small handful of
times on different CPUs and in different languages.
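The scheme Joel describes -- spot the start-bit edge, then sample at the
center of each bit cell -- can be sketched as a Python simulation. The 8N1
frame format and the 16x oversampling figure are my assumptions (the 16x
carried over from earlier in the thread), not anything Joel specified:

```python
OVERSAMPLE = 16  # samples per bit cell (assumed 16x, per earlier in the thread)

def encode_8n1(byte, oversample=OVERSAMPLE):
    """Build an idle-high sample stream for one 8N1 frame (LSB first)."""
    bits = [1, 0]                                # idle line, then start bit
    bits += [(byte >> i) & 1 for i in range(8)]  # 8 data bits, LSB first
    bits += [1, 1]                               # stop bit plus trailing idle
    return [b for b in bits for _ in range(oversample)]

def decode_8n1(samples, oversample=OVERSAMPLE):
    """Catch the start-bit falling edge, then sample each bit-cell midpoint."""
    edge = next(i for i in range(1, len(samples))
                if samples[i - 1] == 1 and samples[i] == 0)
    byte = 0
    for bit in range(8):
        # data bit n's center is 1.5 bit times past the edge, plus n bit times
        idx = edge + oversample // 2 + (bit + 1) * oversample
        byte |= samples[idx] << bit
    return byte

assert decode_8n1(encode_8n1(0x55)) == 0x55
```

The timer-interrupt version on a real CPU does the same arithmetic; only the
"wait for the midpoint" step becomes a hardware timer reload instead of an
index computation.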


Hi Joel,

Pretty deep in the basement, there. Sounds like the way Apple used to
run their floppy disc, read it, and write it with a minimum of parts
(and expense).

73's
Richard Clark, KB7QHC

gwatts March 26th 09 12:27 AM

Noise figure paradox
 
Joel Koltner wrote:
"Joel Koltner" wrote in message
...
Is that available publicly anywhere?


What I really meant here was, "Is that available *to download from the
Internet* publicly anywhere?"


Yes.

To get more than the abstract for free you have to be an IEEE member and
a member of MTTS or otherwise subscribed to the online system.

For the abstract start at:
http://ieeexplore.ieee.org/xpl/Recen...number=4547924

and for $29...
http://ieeexplore.ieee.org/guide/g_tools_apo.jsp

Happy trails!
- Galen, W8LNA

Jim-NN7K[_2_] March 26th 09 01:10 AM

Noise figure paradox
 
Joel-- think you might be impressed with the collection at K. Falls
(Oregon Institute of Technology); they run EE courses there! Jim NN7K

Joel Koltner wrote:
Thanks Mac, I'll take a look next time I'm near a university library. (I'm in
southern Oregon and there aren't any engineering schools down here...)

---Joel (KE7CDV)



Jim Lux March 26th 09 01:12 AM

Noise figure paradox
 
J. Mc Laughlin wrote:
Dear Joel Koltner (no call sign):

I know of no site where the classic paper may be downloaded. The paper
had a significant influence on how people thought about modulation and
frequency allocation. "Shannon, Poison (I can not think how to spell his
name) and the Radio Amateur" is the title of the paper.


Actually, the title is "Poisson, Shannon, and the Radio Amateur",
Proceedings of the IRE, Dec 1959, Vol 47, Issue 12, pages 2058-2068. The
abstract is:


Congested band operation as found in the amateur service presents an
interesting problem in analysis which can only be solved by statistical
methods. Consideration is given to the relative merits of two currently
popular modulation techniques, SSB and DSB. It is found that in spite of
the bandwidth economy of SSB this system can claim no over-all advantage
with respect to DSB for this service. It is further shown that there are
definite advantages to the use of very broadband techniques in the
amateur service. The results obtained from the analysis of the radio
amateur service are significant, for they challenge the intuitively
obvious and universally accepted thesis that congestion in the radio
frequency spectrum can only be relieved by the use of progressively
smaller transmission bandwidths obtained by appropriate coding and
modulation techniques. In order to study the general problem of spectrum
utilization, some basic results of information theory are required. Some
of the significant work of Shannon is reviewed with special emphasis on
his channel capacity formula. It is shown that this famous formula, in
spite of its deep philosophical significance, cannot be used
meaningfully in the analysis and design of practical, present day
communications systems. A more suitable channel capacity formula is
derived for the practical case. The analytical results thus obtained are
used to show that broadband techniques have definite merit for both
civil and military applications.

Phil Karn (KA9Q) had some comments:
http://www.ka9q.net/vmsk/shannon.html

Jim Lux March 26th 09 01:24 AM

Noise figure paradox
 
Joel Koltner wrote:
Hi Richard,

"Richard Clark" wrote in message
...
In other posts related to deep space
probe's abilities to recover data from beneath the noise floor, much
less cell phones to operate in a sea of congestion, I encountered the
economic objection that such methods cost too much - expense of
bandwidth.


I don't think anyone stated they cost "too much," just that there is a cost in
increased bandwidth, and bandwidth isn't free.

In general the spread spectrum processing gain is proportional to the
bandwidth increase over what the original data stream would require without
any spreading.



For very low level signals spread spectrum doesn't necessarily buy you
much. If you use 10 times the BW, you have 10 times the noise, so your
received SNR is worse by a factor of 10dB. But you get 10dB of
processing gain when you despread, and your output SNR is the same as it
was before.

Of course, you consumed some electrical power on both ends to spread and
despread things. Wideband amplifiers are less efficient than narrowband
ones, as well, and saturated amplifiers are more efficient than
non-saturated amplifiers.

In general, the most efficient (considering power consumed on both ends)
transmission is a very narrow band signal, where the bandwidth is just
wide enough to contain the required data rate.

This drives you to things like BPSK, GMSK, and QPSK. Ideally, the
signal spectrum would be a nice fat rectangular pulse.

In the deep space probe business, watts count at every step of the way.


You make a good point that the Shannon limit gives a good quantitative measure
of how you go about trading off bandwidth for SNR (effectively power, if your
noise is fixed by, e.g., atmospheric noise coming into an antenna). Shannon
doesn't give any hint as to how to achieve the limits specified, although I've
read that with fancy digital modulation techniques and "turbo"
error-correcting codes, one can come very close to the limit.
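The bandwidth-for-SNR trade Joel mentions falls straight out of Shannon's
formula, C = B*log2(1 + S/N). A quick numerical sketch, where the bandwidth
and SNR figures are arbitrary illustrations:

```python
import math

def capacity(bw_hz, snr_linear):
    """Shannon channel capacity in bit/s: C = B * log2(1 + S/N)."""
    return bw_hz * math.log2(1 + snr_linear)

# Roughly the same capacity can be bought with SNR or with bandwidth:
narrow = capacity(3e3, 1000)  # 3 kHz at 30 dB SNR  -> about 29.9 kbit/s
wide = capacity(30e3, 1.0)    # 30 kHz at 0 dB SNR  -> 30 kbit/s
```

Ten times the bandwidth at a thousandth of the SNR delivers essentially the
same channel capacity, which is the quantitative face of the spread-versus-
narrowband discussion above.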


Actually, state of the art is probably Low Density Parity Check (LDPC)
codes, as far as approaching the limit. They've become more practical
because digital logic is becoming a lot cheaper (in a nanowatts per bit
sense) to do the coding/decoding. They're also unencumbered by the
patents for turbo codes.

http://en.wikipedia.org/wiki/Low-den...ity-check_code

Jim Lux March 26th 09 01:47 AM

Noise figure paradox
 

The "Turbo" codes are achievable in silicon with moderate effort.


And the payment of a suitable fee to the folks who OWN the turbo codes
at France Telecom
http://www.francetelecom.com/en_EN/i...y/turbo_codes/
http://www.spectralicensing.com/licfaq.htm


Note also that turbo and LDPC are really suited to longer block lengths
(1000 bits and bigger). For small block lengths, codes like Hamming
might be better.

Reed-Solomon combined with Viterbi decoders of convolutional codes are
also popular.

Note that in deep space, at a bit rate of 8 bps, you might not want to
use a code with a 1000-bit codeblock.
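For the short-block regime Jim mentions, a Hamming(7,4) code is about the
simplest concrete example: 4 data bits, 3 parity bits, and any single bit
error is correctable. A minimal sketch (the bit ordering, parity bits at
positions 1, 2, and 4, is one common convention, not anything from the
thread):

```python
def hamming74_encode(data4):
    """Encode 4 data bits into 7, with parity bits at positions 1, 2, 4."""
    d1, d2, d3, d4 = data4
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(code7):
    """Recompute parities; the syndrome names any single flipped bit."""
    c = list(code7)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-indexed error position, 0 if clean
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]  # recovered data bits
```

At 8 bps a 7-bit codeword costs under a second of latency, versus minutes for
a 1000-bit LDPC block, which is the tradeoff being pointed at.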

A work going back a dozen years or more can be found at:
http://sss-mag.com/G3RUH/index2.html
(consult the adjoining pages for fuller discussion)

73's
Richard Clark, KB7QHC


Joel Koltner[_2_] March 26th 09 04:23 PM

Noise figure paradox
 
"Jim-NN7K" . wrote in message
...
Joel-- think you might be impressed with the collection at K. Falls
(Oregon Institute of Technology); they run EE courses there! Jim NN7K


Thanks, I'll have to make a trip over. Heck of a lot closer than Corvallis...



Joel Koltner[_2_] March 26th 09 04:35 PM

Noise figure paradox
 
Hi Jim,

"Jim Lux" wrote in message
...
Actually, state of the art is probably Low Density Parity Check (LDPC)
codes, as far as approaching the limit.


I hadn't heard of them; thanks for the link. Can you comment on how they
compare to Reed-Solomon codes?

I have just enough background in error-correcting codes to spout off the right
keywords in nearly-coherent Usenet posts and wait for people to point out my
errors. :-)

---Joel



J. Mc Laughlin March 26th 09 05:33 PM

Noise figure paradox
 
Dear Jim:

I am obliged to you for the title correction. My memory might not be
what it was. Thanks also for the abstract.
While the paper (again from memory and poked by the abstract) is written
in terms of SSB/DSB, a main issue is that of vector addition of copies of a
signal with noise and interference not being coherent. One ends up with a
system that is resistant to noise and interference up to a point where it
stops working.
73, Mac N8TT
--
J. McLaughlin; Michigan, USA
Home:
"Jim Lux" wrote in message
...
J. Mc Laughlin wrote:
Dear Joel Koltner (no call sign):

I know of no site where the classic paper may be downloaded. The
paper had a significant influence on how people thought about modulation
and frequency allocation. "Shannon, Poison (I can not think how to spell
his name) and the Radio Amateur" is the title of the paper.


Actually, the title is "Poisson, Shannon, and the Radio Amateur",
Proceedings of the IRE, Dec 1959, Vol 47, Issue 12, pages 2058-2068. The
abstract is:


[abstract snipped]

Phil Karn (KA9Q) had some comments:
http://www.ka9q.net/vmsk/shannon.html



J. Mc Laughlin March 26th 09 05:36 PM

Noise figure paradox
 
Dear Joel - KE7CDV:

Sounds as if Jim has you pointed to a source. When you are in the library,
do find the follow-up comments. They are a big window, as I recall, into
the mind-set of the time. 73, Mac N8TT

--
J. McLaughlin; Michigan, USA
Home:
"Jim-NN7K" . wrote in message
...
Joel-- think you might be impressed with the collection at K. Falls
(Oregon Institute of Technology); they run EE courses there! Jim NN7K

Joel Koltner wrote:
Thanks Mac, I'll take a look next time I'm near a university library.
(I'm in southern Oregon and there aren't any engineering schools down
here...)

---Joel (KE7CDV)




[email protected] March 28th 09 01:57 PM

Noise figure paradox
 
On Mar 26, 9:35 am, "Joel Koltner"
wrote:
Hi Jim,

"Jim Lux" wrote in message

...

Actually, state of the art is probably Low Density Parity Check (LDPC)
codes, as far as approaching the limit.


I hadn't heard of them; thanks for the link. Can you comment on how they
compare to Reed-Solomon codes?

I have just enough background in error-correcting codes to spout off the right
keywords in nearly-coherent Usenet posts and wait for people to point out my
errors. :-)


Much longer block length than typical R-S. Also, the way the check
bits are generated is different. Each check bit isn't formed by
combining ALL the source bits, just some of them (obviously a
different "some" for each of the check bits), hence the name Low
Density. Both are usually roughly rate 1/2.
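The "low density" idea -- each check bit XORing only *some* of the source
bits -- can be illustrated with a toy sparse parity-check layout. The tap
positions below are an arbitrary rate-1/2 toy, nothing like a real (and much
larger) LDPC matrix:

```python
# Each row lists the data-bit positions that feed one check bit.
# Real LDPC matrices have thousands of columns but stay similarly sparse.
CHECK_TAPS = [
    (0, 1),
    (1, 2),
    (2, 3),
    (0, 3),
]

def check_bits(data4):
    """Form each check bit from only its tapped data positions, not all."""
    return [sum(data4[i] for i in taps) % 2 for taps in CHECK_TAPS]

def codeword(data4):
    """Systematic rate-1/2 codeword: 4 data bits followed by 4 check bits."""
    return list(data4) + check_bits(data4)
```

The sparsity is what makes iterative (belief-propagation) decoding cheap:
each check constrains only a handful of bits, so messages stay local.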

