Richard Clark wrote:
> Hi Iain,
>
> This is called "conflict of interest," which discounts those same
> lecturers' and professors' credentials.

Well, I for one am not going to instantly take the view that my lecturers' credentials are not solid. Of course they're going to put a positive spin on it, mainly because their part (the physical layer and comms protocol work) has been successful so far. Trials in Sweden were a total success, albeit on a smaller scale. I'm still not convinced the idea will ever actually be realised, but it's still a very interesting one!

> You've missed the point Roy made. Adding connections (more HAPs) does
> not add more bandwidth. Those extra HAPs will be competing for the
> same (now diminishing by proportion) spectrum.

That is exactly why there is ongoing research into the various multiplexing techniques, so that many users can share the same piece of spectrum without causing *too much* interference with each other. With spreading codes and the like, the other signals just appear as a little extra background noise, or so I am led to believe; I will get the full story on this sort of stuff in the next academic year. (There is a rough sketch of the idea at the end of this post.)

Surely, though, even broadband home users will not need to exceed 10 Mbps; what would be the point? Who needs a web page served a second faster, bearing in mind bandwidth limits at the server end as well as on the end-user connection? Third-generation mobiles were not looking to exceed 5 Mbps per handset at the very most (which is a hell of a lot of data), and that is more than capable of streaming video etc., albeit at lower resolutions on the handsets. The bandwidth requirement for cellular voice calls is minimal in comparison with data: a phone line is only 64 kbps, and cellular (GSM) voice rates are lower still, yet they provide good (enough) voice reproduction! (Some rough numbers on this are at the end of the post too.)

> When there's existing hardware (after all, no one is telling the
> consumers to throw away their phones and buy HAP versions), and
> Hindenburg technology is a century old, then any proviso "there is
> still a lot of work to be done" translates into SEND MORE MONEY - a
> message tape with an infinite loop.

Basically, as I understand it, they would be looking to use current Wi-Fi, WiMAX, GSM etc. technologies, so why would there be any requirement to change hardware? The only new equipment needed for the broadband data downlinks serving internet access would be gateways with directional antennas to serve buildings and the like.

> Ask researcher1: "can I float a balloon?"
> researcher1: "Sure, no problem."
> Ask researcher2: "can I transmit and receive from a height?"
> researcher2: "Sure, no problem."
> Ask researcher3: "can I find a stabilizing platform?"
> researcher3: "Sure, no problem."
> Ask researcher4: "can more connections serve more customers?"
> researcher4: "Sure, no problem."
>
> The sum is not equal to the whole:
>
> Ask customers: "can you still hear me?"
> customers: "What the ****! My line is dead."

That is the same with any new technology! Just look at 3G services in the UK: it has taken a while to get network coverage anywhere near comparable to the existing 2G and 2.5G infrastructure. Do you suggest we just give up carrying out research into this sort of thing? Maybe we should have stuck with the original optical telegraph rather than develop methods of signalling using electricity...

I have no personal or pecuniary connection with this project, as I'm only an undergrad student, but I think dismissing it out of hand as a non-starter is a bit harsh.
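To illustrate the spreading-code point, here is a toy direct-sequence sketch (my own simplified example in Python, not any real HAP or 3G air interface: it assumes random +/-1 chip codes, perfect chip timing and a noiseless channel). Two users transmit on top of each other in the same spectrum, yet after correlating with user 1's code, user 2's signal is knocked down by the spreading factor and just looks like a little extra background noise:

```python
# Toy direct-sequence spreading demo (hypothetical parameters).
import numpy as np

rng = np.random.default_rng(0)

N_BITS = 1000   # data bits per user
SF = 64         # spreading factor (chips per bit)

# Each user gets a random +/-1 chip sequence (real systems use
# carefully designed codes, e.g. Gold or Walsh codes).
code1 = rng.choice([-1.0, 1.0], size=SF)
code2 = rng.choice([-1.0, 1.0], size=SF)

# Random +/-1 data bits for both users.
bits1 = rng.choice([-1.0, 1.0], size=N_BITS)
bits2 = rng.choice([-1.0, 1.0], size=N_BITS)

# Spread: every data bit is multiplied by the user's chip sequence.
tx1 = np.repeat(bits1, SF) * np.tile(code1, N_BITS)
tx2 = np.repeat(bits2, SF) * np.tile(code2, N_BITS)

# Both signals occupy the same spectrum at the same time.
channel = tx1 + tx2

# Receiver for user 1: correlate each bit period with code1.
rx = channel.reshape(N_BITS, SF)
decision = rx @ code1 / SF          # wanted bit plus small cross-talk
recovered = np.sign(decision)

errors = np.sum(recovered != bits1)
residual = decision - bits1         # what remains of user 2 after despreading
print(f"bit errors: {errors}/{N_BITS}")
print(f"residual interference std: {residual.std():.3f} (signal amplitude 1.0)")
```

With a spreading factor of 64, the cross-talk left after despreading is roughly 1/sqrt(64), i.e. about 0.125 of the wanted amplitude, which is why extra users on the same spectrum tend to show up as a slowly rising noise floor rather than as hard collisions.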
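And to put some numbers on the voice point: a standard wireline phone channel is 64 kbps PCM (8 kHz sampling at 8 bits per sample), while the GSM full-rate codec squeezes acceptable speech into about 13 kbps (260 bits per 20 ms frame). A quick back-of-envelope check:

```python
# Back-of-envelope voice bit-rate comparison (published figures for
# G.711 PCM and the GSM full-rate codec).
pcm_rate = 8_000 * 8        # 8 kHz sampling x 8 bits/sample = 64 kbps
gsm_fr_rate = 260 / 0.020   # 260 bits per 20 ms speech frame = 13 kbps

print(f"PCM phone line: {pcm_rate / 1e3:.0f} kbps")
print(f"GSM full rate : {gsm_fr_rate / 1e3:.0f} kbps")
print(f"ratio         : {pcm_rate / gsm_fr_rate:.1f}x less for GSM")
```

So a cellular voice call needs roughly a fifth of the bandwidth of a wireline channel, which is why voice traffic is the easy part and the broadband data downlink is where the real capacity question lies.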
It does have the potential to work; whether it ever gets deployed is another matter...

--
73, Iain M0PCB/P