#8
On Feb 18, 11:41 am, Michael Coslo wrote:
> Doug Smith W9WI wrote:
>> I wonder to what degree the improvement in Morse sending has made this kind of project more effective? The quality of transmitted Morse (in terms of spacing and element lengths being correct, and in terms of fewer errors) has improved considerably since I got my license in 1973.
>
> Certainly the ascendancy of keyers has helped, but even then, older software had some issues with noise, signal level, and adjacent signals. I'm a real dilettante on the subject, but I think that the older versions of CW decoding software relied heavily on timing to try to emulate the human brain's decoding of Morse. Trouble is, I don't think our brains work that way, because humans can decode some Morse that is sent pretty badly. But the old software could have big problems when the sender didn't use the proper space timing, or when the dashes or dots were significantly long or short. The human just adapted in real time. I think that is what the new software is starting to tackle.

Software for such time-related, adaptive decoding has been aided by technology such as flash memory and the larger memories now available in microcontrollers (as stand-alone decoders). Adaptive programming has been known in computing for at least 50 years but didn't see a resurgence until about a decade ago. The main reason for its non-use in Morse decoding is that there really isn't a big market for it outside of amateur radio.

Elsewhere, there is the speech decoder used with a very few telephone menu robots that can recognize numbers and certain letters or words. Bundled with my WordPerfect 8 upgrade word processor (slightly over 8 years old now) was 'Dragon NaturallySpeaking', which would process word sounds and convert them to text. 'NaturallySpeaking' would 'learn' the sound patterns associated with a particular voice (repetition is required for the adaptive programming to do the 'learning'), then go to a look-up table in memory and do the conversion into text.
Beyond trying it out, I found that my faster typing skills (learned over six decades) served me better... the little free microphone was useful for other things... :-)

Adaptive programming is found in some higher-level visual graphics processors used in motion picture and television production around this corner of the USA. Those allow 'in-between' frame merging of movements similar to what was done in cartoon animation in the early 1930s. [Lower-rank animators were assigned the task of making the 'in-between' drawings between a major animator's key drawings for the final inking and painting, hence the name 'in-betweeners'.] There has been MAJOR work in motion graphics software in the last couple of decades, but that is a niche activity, although a much more profitable one.

The little credit-card-sized MFJ 'morse reader' is more of a toy, since it has rather simplistic adaptive programming (the 'learning' involves only setting the approximate word rate sensed), but it is somewhat successful at that. More advanced adaptive programming would require more memory and more processing: relative space-dot-dash sensing on the fly to cope with the 'bad fists' of certain Morse senders. As of the end of 2007, Microchip has brought out several newer microcontroller models with much more memory and faster clock speeds for those wanting to experiment with useful adaptations.

It is less a comparison of 'old' versus 'new' software than an intellectual experiment in applying adaptive programming methods to such 'learned human' activities. It is probably NOT 'the way the brain works' (nobody is really certain of that anyway), but that is irrelevant to the task of determining the time-related sound patterns of on-off keyed beeps and translating them back into text that anyone who knows the western alphabet can read. It is an eminently POSSIBLE thing to do, and I'm glad that some are willing to tackle the task.

73, Len AF6AY
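[P.S. -- A minimal sketch of the "on-the-fly" adaptation idea discussed above, not taken from the MFJ unit or any real product. The trick is to re-estimate the sender's unit length from every element received, so the classifier tracks speed drift instead of trusting a fixed clock; the smoothing factor and thresholds here are assumptions of mine:]

```python
# Illustrative sketch: adaptively re-estimate the Morse "unit" length.
# Each tone duration both gets classified and nudges the running unit
# estimate, so a drifting or sloppy fist is tracked over time.

def classify_adaptive(durations, unit_ms=60.0, alpha=0.2):
    """durations: tone lengths in ms. Returns (elements, final unit estimate)."""
    out = []
    unit = unit_ms
    for dur in durations:
        if dur < 2 * unit:
            out.append(".")
            unit += alpha * (dur - unit)      # a dit is ~1 unit long
        else:
            out.append("-")
            unit += alpha * (dur / 3 - unit)  # a dah is ~3 units long
    return out, unit
```

Feed it a sender running 50% slow (dits of 90 ms against an initial 60 ms estimate) and the unit estimate climbs toward 90 ms after a few elements, where the fixed-threshold approach would keep misjudging.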