On 2/24/2015 12:10 PM, Brian Morrison wrote:
> On Tue, 24 Feb 2015 16:32:21 -0000 FranK Turner-Smith G3VKI wrote:
>
>>> Bandwidth reduction for one. If you can encode and compress speech
>>> sufficiently then you can use less bandwidth in transmission.
>>
>> That's the bit I have trouble getting my head around. Back in the
>> 1970s and 1980s digital transmissions used a much greater bandwidth
>> than their analogue equivalents: sampling at 2.2 x max frequency x
>> number of bits, plus housekeeping bits, etc.
>
> Have a look at David Rowe's web site about Codec 2 and his work on it:
> http://rowetel.com
>
> Most of the codec development effort goes into voice modelling that
> allows redundant information to be thrown away without making the
> encoded speech sound too horrible when decoded, and into working out
> which bits in the encoded frame need stronger protection and which
> don't. This is especially important for the bits that encode voiced
> versus unvoiced speech, to make sure they don't get mixed up.

Other than uLaw/ALaw, voice for telephony is not compressed the way a
zip file is. As you say, these codecs model the vocal tract and send
the parameters for the sounds to be produced, along with error
information to keep the result intelligible. Sounds that aren't voice
or voice-like are reproduced poorly. This is why low-bit-rate
compression on cell phones doesn't convey music very well, and why
background noise degrades intelligibility much more than with uLaw or
ALaw compression, which simply compresses the waveform without knowing
anything about its content.

--
Rick
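To illustrate the distinction Rick draws: uLaw companding (the North American half of G.711) is just a pointwise curve applied to each sample, with no model of the content at all. A minimal sketch of the continuous mu-law curve and its inverse (mu = 255, the standard telephony value; real G.711 additionally quantises the result to 8 bits):

```python
import math

MU = 255.0  # standard mu-law parameter for telephony


def mu_law_compress(x: float) -> float:
    """Compand one sample in [-1, 1]: boost small amplitudes, squash large ones."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)


def mu_law_expand(y: float) -> float:
    """Invert the companding curve, recovering the original sample."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)
```

Because the curve is invertible sample by sample, music and background noise survive it just as well as speech does, which is exactly why it behaves so differently from a vocal-tract-model codec.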
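The bandwidth-reduction point can be made concrete with back-of-the-envelope numbers: companded telephony PCM (G.711) sends 8000 samples/s at 8 bits each, while Codec 2 has a published 1300 bit/s mode, because it transmits vocal-tract parameters instead of the waveform. A quick sketch:

```python
# Standard telephony figures for companded PCM (G.711 uLaw/ALaw)
sample_rate_hz = 8000      # telephone-band sampling rate
bits_per_sample = 8        # 8-bit companded samples

pcm_bits_per_second = sample_rate_hz * bits_per_sample  # 64000 bit/s

# One of Codec 2's published low-rate modes
codec2_bits_per_second = 1300

reduction = pcm_bits_per_second / codec2_bits_per_second
print(f"Model-based codec uses ~{reduction:.0f}x less bandwidth")
```

The roughly 50x gap is what vocal-tract modelling buys: the housekeeping-heavy "sample rate times bits" arithmetic from the 1970s no longer applies once you stop sending the waveform itself.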