Noise figure paradox
On Tue, 24 Mar 2009 23:00:14 -0700 (PDT), JIMMIE wrote:

On Mar 24, 9:45 pm, Richard Clark wrote: On Tue, 24 Mar 2009 23:47:52 GMT, "Harold E. Johnson" wrote: Deep space communication proceeds many dB below the noise floor, enabled through technology that has become ubiquitous in cell phones - Spread Spectrum. I have developed pulsed measurement applications for which any single pulse has a poor S+N/N, but through repetition the S+N/N response improves with the square root of the number of samples taken. 73's Richard Clark, KB7QHC

And others call it autocorrelation?

Which? 73's Richard Clark, KB7QHC

Radar people for one; it's also known as pulse-pair radar, where data from multiple returns are compared. The data can be from multiple hits on a target using the same radar, or the data can come from multiple radars. MDS-level improvement below the noise level can be achieved. It's also used for transmitting data. One other specific use I am familiar with involves transmission of radar data via radio. So the radar uses it, as well as the mode of transmission of the radar data from the radar to the user. Jimmie

My question of "Which?" was directed to Harold's broad brush painting two different illustrations. Spread spectrum incorporates cross-correlation through slipping the Gold code to find a flash. My design performed a form of forced autocorrelation (much like your radar example, perhaps) but reduced noise as a function of that noise being uncorrelated with the pulse. Perhaps this is all saying the same thing at a very fundamental level. However, I would guess this all hinges on the reduction of noise following the square root of the ratio of the sample counts. Conceptually, the distinction between auto- and cross-correlation is really of minor consequence. 73's Richard Clark, KB7QHC
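[Editor's note: as a quick numerical check of the square-root claim traded back and forth above, here is a short Python sketch, with all values made up for illustration, that coherently averages a repeating pulse buried in uncorrelated noise. The noise amplitude falls as the square root of the number of sweeps, so the power SNR improves by 10*log10(N) dB.]

import numpy as np

rng = np.random.default_rng(0)

pulse = np.zeros(256)
pulse[100:120] = 1.0            # deterministic repeating pulse, amplitude 1
sigma = 2.0                     # noise std dev; single-sweep S+N/N is poor

def snr_db(avg):
    # Estimate SNR as pulse power over residual (post-average) noise power.
    noise = avg - pulse
    return 10 * np.log10(np.mean(pulse**2) / np.mean(noise**2))

for n in (1, 16, 256):
    sweeps = pulse + rng.normal(0.0, sigma, size=(n, pulse.size))
    avg = sweeps.mean(axis=0)   # coherent average over n sweeps
    print(f"N = {n:4d} sweeps: SNR = {snr_db(avg):6.1f} dB")

# Each 16x increase in N buys about 12 dB: the noise amplitude falls
# as sqrt(N), exactly the square-root behavior described above.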
Noise figure paradox
Hi Richard,
"Richard Clark" wrote in message ... That was a curious objection to a solution answering a problem as it was specifically stated. Are there angles to showing noise being overcome by several means when you offered none? My means were "reduce the noise figure of the amplifiers in your front-end" and "reduce the phase noise of your oscillators/PLLs/etc." "Averaging the input" is a clear winner here too. What "noise" were you speaking about when through the course of this thread it has most often been confined to kTB than, say, cross-talk, splatter, spurs, whistlers, howlers, jamming, and a host of others? For the sake of this thread, it's been just thermal and oscillator nose since these are -- AIUI -- what limit traditional analog (AM/FM/PM) communication systems. Most of the rest of what you've listed are certainly real-world problems, but they're (hopefully) somewhat transient in nature and -- as AIUI -- often all lumped into a single "fade margin" when designing the an end-to-end system. E.g., the transmission medium is often modeled with something no more complex than, say, the Friis equation and Rayleigh fading. I do realize that in the real world things like spurs or splatter can end up being very expensive (frequency changes, high-order/high-power filters, etc.) if you're co-locating your radio with many others on a hilltop -- I've been told that if you take a run-of-the-mill radio to a place like Mt. Diablo in California, many of them just fall over from front end overload and cease to function at all. What constitutes "successfully?" Is this a personal sense of well being, or is it supported by a metric? Usually something like a 12dB SINAD standard is used for analog modulation schemes or a 1e-3 bit-error rate for digital modulation techniques (before any error correction coding is applied). Spread Spectrum is so ubiquitous that waiting on anticipated exotic failures of phase noise, on the face of an overwhelming absence of problems, is wasted time indeed. It's not ubiquitous on amateur radio, though. But yeah, commercially it certainly is, and my understanding is that phase noise in oscillators in a Big Deal for cell sites, requiring much more strigent standards than what a 2m/440 HT's oscillator is likely to provide. The network timing of cell sites is sync'd to atomic clocks via GPS-disciplined oscillators system as well. As to sampling error via the net. Time was when 16x over-sampling for RS-232 was the norm. I've meet many RS-232 routines that don't do any over-sampling at all -- I've even written a few. :-) For most applications the SNR of an RS-232 signal is typically well in excess of 20dB if you don't exceed the original specs for cable length of bit rate. (Granted, as least historically before RS-232 starting falling out of use, it was probably one of the most "abused" electrical interconnect standards in the world, and 16x oversampling certainly would let you go further than simple-minded receivers with no oversampling.) ---Joel |
Noise figure paradox
Hi Richard,
"Richard Clark" wrote in message ... In other posts related to deep space probe's abilities to recover data from beneath the noise floor, much less cell phones to operate in a sea of congestion, I encountered the economic objection that such methods cost too much - expense of bandwidth. I don't think anyone stated they cost "too much," just that there is a cost in increased bandwidth, and bandwidth isn't free. In general the spread spectrum processing gain is proportional to the bandwidth increase over what the original data stream would require without any spreading. Well, not having seen anything more than yet another qualification - how much is "too much?" Definitely depends on "the market." You can bet the cell phone developers have sophisticated models of possible radios and the channel and associate with each piece a cost (e.g., bandwidth = $xx/Hz, improving close-in phase noise of master oscillator = $xx/dBc, etc.), and then run a lot of simulations to try to make the average cost of each bit as low as possible. Of course, there are many variables that are impossible to ascertain precisely such as how quickly uptake of new cell services (e.g., 3G data) will be in a given area (as this drives how many towers you put there initially and how quickly you roll out more), how fast fab yields will improve that lower your costs and improve RF performance, etc. Starting with BPSK and a S+N/N of roughly 10.5 dB, the bit error rate is one bad bit in one million bits. This is probably the most plug-ordinary form of data communication coming down the pike; so one has to ask: "is this good enough?" If not, then "SNR of 60dB" is going to have to demand some really astonishing expectations to push system designers to ante up the additional 49.5 dB. Why Richard, I'm starting to think you don't spend thousands of dollars per meter on your speaker cables. :-) Hey, see this: http://www.noiseaddicts.com/2008/11/...st-audiophile/ - - $7,000/m speaker cables! Includes, "Spread Spectrum Technology!" :-) That being said, back in the analog broadcast TV days (oh, wait, not all of them are gone yet, but they will be soon), I believe that "studio quality" NTSC is considered to be 50dB SNR (for the video), whereas people would start to notice the noise if the received signal's SNR had dropped below 30ish dB, and 10dB produces an effectively unwatchable pictures. This reinforces your point that "good enough" is highly subjective depending on how the "information" transmitted is actually used. You make a good point that the Shannon limit gives a good quantitative measure of how you go about trading off bandwidth for SNR (effectively power if your noise if fixed by, e.g., atmospheric noise coming into an antenna). Shannong doesn't give any hint as to how to achieve the limits specified, although I've read that with fancy digital modulation techniques and "turbo" error-correcting codes, one can come very close to the limit. ---Joel |
Noise figure paradox
"J. Mc Laughlin" wrote in message
... 2. The paper by Costas in the December 1959 Proc. of the IRE is also valuable to this discussion. Be sure to read the follow-up comments.

Is that available publicly anywhere?

3. I heard with my own ears Shannon observe that, from an engineering point of view, if one did not have an occasional transmission error one was using a wasteful amount of power. Shannon was a Michigan boy.

60 dB SNR??? Not in fly-over land.

I think the counterpoint is that, particularly in mobile environments, you often needed huge fade margins; e.g., 20-40 dB wasn't uncommon for pager systems. Hence in systems designed to have, say, an "average" of 30 dB SNR (the same audio quality as the telephone system, assuming 3 kHz bandwidth as well), it wouldn't be surprising to occasionally find you're actually getting 60 dB SNR in the most ideal scenario. Although perhaps designing for an average of 30 dB SNR is a little high for a paging system... anyone know? (I'm thinking 20 dB might be a bit more realistic.)

---Joel
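[Editor's note: those 20-40 dB pager margins drop straight out of the Rayleigh model mentioned earlier in the thread. Under Rayleigh fading the instantaneous SNR is exponentially distributed, so the margin needed to hold the outage probability to P is M = -1/ln(1 - P) in linear terms. A short check in Python follows; the outage targets chosen are illustrative.]

import math

# Rayleigh fading: outage P = 1 - exp(-1/M), so margin M = -1/ln(1 - P)
for p_out in (0.1, 0.01, 0.001, 0.0001):
    margin_db = -10 * math.log10(-math.log(1.0 - p_out))
    print(f"outage {p_out:7.2%}  ->  fade margin {margin_db:5.1f} dB")

# Roughly 20 dB buys 99% availability; 40 dB buys 99.99%.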
Noise figure paradox
"Joel Koltner" wrote in message
... Is that available publicly anywhere?

What I really meant here was, "Is that available *to download from the Internet* publicly anywhere?"
Noise figure paradox
That's great information, Jim, thanks!
Noise figure paradox
Hi Richard,
"Richard Clark" wrote in message ... I don't think anyone stated they cost "too much," just that there is a cost in increased bandwidth, and bandwidth isn't free. Um, this last statement seems to be hedging by saying the same thing in reverse order. No, they really are different. What costs too much for me might very not cost too much for the military or NASA, for instance. It would be more compelling if you simply stated the cost for ANY market. The original example was meant to be more of a "textbook" problem, hence the lack of elaboration on the specifics of the "market" involved. I would suspect that "studio quality" observes other characteristics of the signal. Agreed, I would too. A multipath reception could easily absorb a considerable amount of interfering same-signal to abyssmal results. It would take a very sophisticated "noise" meter to perform the correct S+N/N. Yep, very true -- I think this is why you see people legtimately complaining about the quality of their cable TV even though the cable installation tech whips out his SINAD meter and verifies it meets spec; the quality of a transmission can't always be boiled down to just one number. The "Turbo" codes are achievable in silicon with moderate effort. A work going back a dozen years or more can be found at: http://sss-mag.com/G3RUH/index2.html Great link, thanks! ---Joel |
Noise figure paradox
On Wed, 25 Mar 2009 11:11:48 -0700, "Joel Koltner" wrote:

Hi Richard, "Richard Clark" wrote in message ... In other posts related to deep space probes' abilities to recover data from beneath the noise floor, much less cell phones' ability to operate in a sea of congestion, I encountered the economic objection that such methods cost too much - expense of bandwidth.

I don't think anyone stated they cost "too much," just that there is a cost in increased bandwidth, and bandwidth isn't free.

Um, this last statement seems to be hedging by saying the same thing in reverse order.

Well, not having seen anything more than yet another qualification - how much is "too much?"

Definitely depends on "the market."

It would be more compelling if you simply stated the cost for ANY market. Qualified statements are suitable for Madison Avenue to sell cheese, but they don't make for an informed cost-based decision.

That being said, back in the analog broadcast TV days (oh, wait, not all of them are gone yet, but they will be soon), I believe "studio quality" NTSC is considered to be 50 dB SNR (for the video), whereas people would start to notice the noise if the received signal's SNR dropped below 30-ish dB, and 10 dB produces an effectively unwatchable picture. This reinforces your point that "good enough" is highly subjective depending on how the "information" transmitted is actually used.

I would suspect that "studio quality" observes other characteristics of the signal. A multipath reception could easily absorb a considerable amount of interfering same-signal to abysmal results. It would take a very sophisticated "noise" meter to perform the correct S+N/N.

You make a good point that the Shannon limit gives a good quantitative measure of how you go about trading off bandwidth for SNR (effectively power, if your noise is fixed by, e.g., atmospheric noise coming into an antenna). Shannon doesn't give any hint as to how to achieve the limits specified, although I've read that with fancy digital modulation techniques and "turbo" error-correcting codes, one can come very close to the limit.

The "Turbo" codes are achievable in silicon with moderate effort. A work going back a dozen years or more can be found at: http://sss-mag.com/G3RUH/index2.html (consult the adjoining pages for fuller discussion)

73's Richard Clark, KB7QHC
Noise figure paradox
"Richard Clark" wrote in message
... Which is no more complex than setting 4 register bits - I wouldn't call that a "routine," however.

-- I've even written a few. :-)

Why more than one? Were the rest undersampling routines?

These were software RS-232 receivers, so you make use of whatever timers, edge interrupts, etc. that you have sitting around to first find the start bit, then load up a timer to trigger in (what should be) the middle of the bit time for the sample, etc. I've written pretty much the same routines a small handful of times on different CPUs and in different languages. The first ones I wrote were on ~1 MIPS CPUs in assembly and were limited to about 2400 bps full-duplex if you were also trying to run a reasonably responsive terminal emulator (e.g., wanted to still have 50% of the CPU available for the emulator), whereas more recently I've written them on ~20 MIPS CPUs in C and can easily do 9600 bps full-duplex with only a small impact on CPU usage.

---Joel
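[Editor's note: to make the approach concrete, here is a sketch of that receive scheme reduced to a Python simulation: find the start-bit edge, then sample once at (what should be) the middle of each bit time, 8N1 framing, no oversampling. It runs over a pre-sampled waveform instead of real edge interrupts and timers, purely for illustration.]

def encode_8n1(byte, spb):
    # Idle-high line: start bit (0), 8 data bits LSB-first, stop bit (1).
    bits = [0] + [(byte >> i) & 1 for i in range(8)] + [1]
    return [level for bit in bits for level in [bit] * spb]

def decode_8n1(line, spb):
    # Scan for the falling edge, then sample at each bit-time midpoint.
    out, i = [], 0
    while i < len(line) - 10 * spb:
        if line[i] == 0:                        # start-bit edge found
            mid = i + spb // 2
            bits = [line[mid + k * spb] for k in range(10)]
            if bits[0] == 0 and bits[9] == 1:   # check start/stop framing
                out.append(sum(b << k for k, b in enumerate(bits[1:9])))
            i += 10 * spb                       # skip past this frame
        else:
            i += 1
    return bytes(out)

spb = 16   # samples per bit in the simulated waveform; only one is read per bit
line = [1] * 20 + encode_8n1(ord('K'), spb) + encode_8n1(ord('7'), spb) + [1] * 20
print(decode_8n1(line, spb))   # b'K7'

[On real hardware the scan loop becomes an edge interrupt and the midpoint reads become timer-triggered samples, which is why the CPU cost of this scheme stays low.]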