John Smith wrote:
> No, I am wrong about that being a problem... I should have looked first... Here is another guy with a problem with EZNEC 2.0 on Windows XP--however, not the same problem...
> http://lists.contesting.com/pipermai...ry/043902.html
> . . .

That posting is over three years old, and the poster has since upgraded to a Windows version. There are numerous possible problems in running DOS EZNEC under WinXP, and I've seen most of them more than once, so I can usually recommend a cure. (There are, however, a few systems that won't run it at all, most commonly due to not liking the old graphics modes.)

I'd like to think I'm in a bit better position to address problems than someone who's never used the program, and I hate to see obscure problems with EZNEC taking up space on public newsgroups. I encourage anyone who has any kind of trouble with any version of EZNEC to contact me directly. I do support the program. And I do gladly honor my unconditional satisfaction guarantee, with no time limit.

Roy Lewallen, W7EL
The C "float" type is equivalent to the VB Single and Fortran Real
variable types, which are single precision (4 bytes) -- I forget what they call the double precision real variable in C. Integers are another matter -- all have the same precision, and the only difference between different sizes is the size of number they can contain.

It's hard to believe a compiler can tell when you'll lose precision, since that depends on many factors in the course of the calculation, including the order of calculation as well as the actual variable values. I don't know what kind of "math errors" you got when you tried to use some other language, but it's not because the numerical precision is any less with one language than another. Careless programming in any language can cause errors and loss of accuracy. I'm not aware of any problem with Basic or VB with regard to variable precision or other issues with variables. I've programmed in HP, DEC, GW, Quick, and other flavors of Basic since the mid '60s, and VB since v. 4.

Every language has its strong and weak points, but for many years now mathematical calculation quality has been determined by the hardware, not the language. I suppose the language could have made a difference before the days of the coprocessor. However, my first commercial program, ELNEC, was introduced in early 1990 in coprocessor and non-coprocessor versions, and I never saw a significant difference in results between the two -- and it did some extremely intensive floating point calculations. So if there was some problem, it must have occurred before that.

As a side note, I once fell for the alleged superiority of C with regard to speed compared to Basic, and reprogrammed the calculation portion of ELNEC in Quick C. The result was that the compiler generated about 30% more code than the Basic PDS I was using, and it ran about 30% slower. Some genuine C gurus where I was working looked over the code and couldn't find anything I'd done which would cause it to run slower than optimum. So there are good and poor compilers in all languages.

This has strayed way off topic, and the OP has contacted me directly, so I'll exit this thread now.

Roy Lewallen, W7EL

John Smith wrote:

> Yes, you are correct. In "C"/C++ conversion is automatic (or generates a compiler error prompting you to "cast" to another type) if there is the slightest chance you will unintentionally lose precision... If I go to VB or Fortran I tend to get a lot of math errors (which are not caught by the compiler, but in real world use!) until I remember to compensate and control my code better... double precision is used by "C"/C++ (the "double"(integer) and "float"(floating point) variables) also (you are right, it is related to the size, in bytes(bits), of the math variable(s) in question), no problem--it is just more transparent in C. And, you are correct again, "precision" is only a matter of where you wish to "quit", and "double-double-precision" and greater are able to be done, either as a function of the compiler, hard code a routine directly in assembly language yourself, or the programmer can institute them in the high level code... Visual Basic, Fortran, COBOL (yuck!), Pascal, "C", etc., etc. are usually only a matter of syntax, style, speed and preference... "C" is just my personal preference... Years ago it was common for Basic/VB to constantly have issues with math variables (actually, changes to the functions in the OS) in each new release of Windows, I live in the past... frown
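A minimal C sketch of the single-versus-double point above (an illustration under stated assumptions, not code from ELNEC or EZNEC): 0.1 has no exact binary representation, and the representation and rounding errors compound very differently in 4-byte and 8-byte arithmetic. The iteration count is arbitrary.

/* Accumulate 0.1 ten million times in float and in double.
 * Exact answer: 1,000,000. The float total drifts visibly off;
 * the double total stays within a small fraction of a unit.    */
#include <stdio.h>

int main(void)
{
    float  fsum = 0.0f;
    double dsum = 0.0;

    for (long i = 0; i < 10000000L; i++) {
        fsum += 0.1f;   /* 24-bit significand: large rounding error */
        dsum += 0.1;    /* 53-bit significand: tiny rounding error  */
    }

    printf("float  sum: %.2f\n", fsum);
    printf("double sum: %.2f\n", dsum);
    return 0;
}

Compiled with any C compiler, the float result lands far from 1,000,000 while the double result is essentially exact -- same hardware, same algorithm, different storage precision.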
Yes, I am sure. You are surely a gentleman who would not use a newsgroup to
market products... Nor was what you suggest going on. So let's skip the petty stuff; I am sure the fellow who posted the question enjoyed the support and friendly exchange. No one would question the importance of an authority on the matter, and no one would question the use of civilized behavior here either--I suspect...

Regards,
John

"Roy Lewallen" wrote in message ...

snip
Roy Lewallen wrote:
> -- I forget what they call the double precision real variable in C.

double

-j
Back in the dark ages, when I was in school, we were "encouraged" to take a
numerical analysis course if we were interested in computers. (I was an EE major.) It was not an easy topic, but it made us well aware of the difference between correct results and computational precision. I was recently astonished to find that most computer science students have no concept of this area and even less interest in it.

These current thoughts extend to other areas:

- C is more accurate than Fortran (or Basic, or whatever)
- Obtaining "stable" numeric results means you get the same answer if you run the program twice
- C produces the fastest programs
- If C is good then C++ is better
- Using all the obscure C operators produces a better program

(Anyone remember the IBM 7030 system? The user could control the rounding direction of the floating point LSB. In that case running a program twice, with different rounding options, really was relevant.)

Bill W2WO
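Bill's IBM 7030 aside has a modern counterpart: the IEEE-754 rounding modes, reachable from C99 via <fenv.h>. Here is a minimal sketch (my illustration, not Bill's code) that runs the same accumulation rounded down and rounded up -- if the two answers straddle a wide gap, the computation is sensitive to rounding, and "getting the same answer twice" proves nothing. fesetround() and the FE_* macros are standard C99, but some compilers need a flag such as GCC's -frounding-math (and -lm at link time) before the rounding mode is reliably honored in optimized code.

#include <fenv.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON

/* Sum 0.1f a million times under the current rounding mode. */
static float accumulate(void)
{
    float sum = 0.0f;
    for (int i = 0; i < 1000000; i++)
        sum += 0.1f;
    return sum;
}

int main(void)
{
    fesetround(FE_DOWNWARD);
    float low = accumulate();

    fesetround(FE_UPWARD);
    float high = accumulate();

    fesetround(FE_TONEAREST);   /* restore the default mode */

    printf("rounded down: %f\nrounded up:   %f\n", low, high);
    return 0;
}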
Dear Bill:
I too am appalled at the abandonment of a solid numerical analysis course in engineering education.

Consider the common problem of solving a set of linear, independent algebraic equations. Students have to be shown that Cramer's rule will not work when using the (inevitable) finite resolution of a computer or calculator. Of course, some of the time Cramer's rule does work, so it is important to teach students why it does not work in general. This is relevant to antennas, where we routinely need to solve large sets of equations.

When using a computer to perform calculations, one needs to think differently about methods than in the day when one needed to use large sheets of paper and a pen. If one is to use numbers, one needs to know the limitations of the methods one uses.

73,
Mac N8TT
--
J. Mc Laughlin; Michigan U.S.A.
Home:

"Bill Ogden" wrote in message ...

snip
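A minimal sketch of Mac's Cramer's-rule point (the system is hypothetical, chosen so the exact solution is x = y = 1): on a nearly singular 2x2 system, single precision cannot resolve the tiny determinant, and the computed answer falls apart. Double precision happens to survive this particular example, but a worse-conditioned system defeats it the same way -- which is why pivoted elimination or LU factorization is taught instead.

/* Solve  [1  1      ][x]   [2       ]
 *        [1  1+1e-7 ][y] = [2 + 1e-7]   by Cramer's rule.
 * Exact solution: x = 1, y = 1.                                */
#include <stdio.h>

int main(void)
{
    /* Single precision: 1e-7 is below the spacing between floats
     * near 1.0, so the determinant comes out wrong, and the
     * right-hand side 2 + 1e-7 rounds to exactly 2.0.          */
    float  af = 1.0f, bf = 1.0f, cf = 1.0f, df = 1.0f + 1.0e-7f;
    float  p = 2.0f, q = 2.0f + 1.0e-7f;
    float  detf = af * df - bf * cf;
    printf("float : x = %f  y = %f\n",
           (p * df - bf * q) / detf, (af * q - cf * p) / detf);

    /* Double precision resolves the determinant and recovers
     * x = y = 1 to many digits.                                */
    double a = 1.0, b = 1.0, c = 1.0, d = 1.0 + 1.0e-7;
    double r = 2.0, s = 2.0 + 1.0e-7;
    double det = a * d - b * c;
    printf("double: x = %f  y = %f\n",
           (r * d - b * s) / det, (a * s - c * r) / det);
    return 0;
}

On typical IEEE hardware the float line prints something like x = 2, y = 0 -- not even close -- while the double line prints x = 1, y = 1.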
This doesn't really matter anymore in the U.S., but it is important that other countries do not abandon it. We no longer rely on home-grown engineers as we have in the past, since a simple telephone call offshore meets our economic needs.

Art

"J. Mc Laughlin" wrote in message ...

snip
"Bill Ogden" wrote in message
...

> Back in the dark ages, when I was in school, we were "encouraged" to take a numerical analysis course if we were interested in computers. (I was an EE major.) It was not an easy topic, but it made us well aware of the difference between correct results and computational precision. I was recently astonished to find that most computer science students have no concept of this area and even less interest in it.

Realistically, 90+% of CS students are going to end up in jobs programming web pages, databases, and other applications where it just isn't going to matter. There just isn't time in the curriculum these days to cover everything... For that matter, these days something like understanding the effects of finite precision in integer arithmetic, and how it relates to fixed-point DSP calculations, is probably applicable to a larger number of students! (OK, OK, I'd be the first to admit that college courses have been dumbed down over the years as well, but this is a direct reflection of the fact that industry just doesn't _think_ it needs that many engineers who DO know the 'hard core' bits...)

> These current thoughts extend to other areas:
> - C is more accurate than Fortran (or Basic, or whatever)

Arbitrary statement (on the student's part).

> - Obtaining "stable" numeric results means you get the same answer if you run the program twice

Ditto.

> - C produces the fastest programs

There is some truth to this, perhaps if only because so much more work (as far as I can tell) has been done on C optimizers than on optimizers for other languages. Perhaps a better statement would be, "With novice programmers, C tends to produce the fastest programs."

> - If C is good then C++ is better

C++ does have a lot of nice benefits over regular old C. The last time I programmed in FORTRAN it was FORTRAN 77, but I can only imagine that Fortran 90 has some nice improvements over FORTRAN 77 as well. (And Delphi is purportedly a nice improvement on Pascal, etc...)

> - Using all the obscure C operators produces a better program

Uggh.

---Joel
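A minimal sketch of the fixed-point issue Joel mentions (the Q15 format is standard DSP practice, but the function names here are illustrative, not from any particular library): multiplying two Q15 values -- 16-bit integers scaled by 2^15 -- needs a 32-bit intermediate product, and the choice of how to shift it back down, truncating versus rounding, determines how fast quantization error accumulates.

#include <stdint.h>
#include <stdio.h>

/* Q15 multiply, truncating: the shift discards the low bits,
 * biasing every result downward.                               */
static int16_t q15_mul_trunc(int16_t a, int16_t b)
{
    return (int16_t)(((int32_t)a * b) >> 15);
}

/* Q15 multiply, rounding: add half an LSB before shifting.     */
static int16_t q15_mul_round(int16_t a, int16_t b)
{
    return (int16_t)((((int32_t)a * b) + (1 << 14)) >> 15);
}

int main(void)
{
    int16_t gain = 32767;            /* ~0.99997 in Q15 */
    int16_t t = 16384, r = 16384;    /* 0.5 in Q15      */

    /* Apply the near-unity gain 100 times, as a recursive filter
     * might. Truncation leaks amplitude on every pass.           */
    for (int i = 0; i < 100; i++) {
        t = q15_mul_trunc(t, gain);
        r = q15_mul_round(r, gain);
    }
    printf("truncated: %d   rounded: %d   (started at 16384)\n", t, r);
    return 0;
}

Truncation walks the value down by one count per pass (16284 after 100 passes) while rounding holds it at 16384 -- exactly the kind of finite-precision effect that matters more to a DSP programmer than to a web developer.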
"J. Mc Laughlin" wrote in message
...

> I too am appalled at the abandonment of a solid numerical analysis course in engineering education. Consider the common problem of solving a set of linear, independent algebraic equations. Students have to be shown that Cramer's rule will not work when using the (inevitable) finite resolution of a computer or calculator.

I was never shown that, but I do remember it being drilled into our heads that Cramer's rule was the bogosort of linear system solving -- just about the least efficient means you could possibly choose -- and that it existed primarily because it can be useful to have a closed-form solution to a system of equations.

Numeric analysis of linear systems is an incredibly in-depth topic, as far as I can tell. Books such as SIAM's "Numerical Linear Algebra" spend hundreds of pages going over it all.

---Joel
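The "bogosort" comparison is easy to make concrete. A back-of-envelope sketch (order-of-magnitude counts, my arithmetic rather than anything from the book): evaluating each determinant by the Leibniz formula sums n! terms of n factors each, and Cramer's rule needs n+1 such determinants, versus roughly 2n^3/3 operations for Gaussian elimination.

#include <stdio.h>

int main(void)
{
    for (int n = 2; n <= 12; n += 2) {
        double fact = 1.0;
        for (int k = 2; k <= n; k++)
            fact *= k;                        /* n!                */
        double cramer = (n + 1) * n * fact;   /* (n+1) determinants,
                                                 ~n*n! ops apiece  */
        double gauss  = 2.0 * n * n * n / 3.0;
        printf("n = %2d   Cramer ~ %.2e ops   elimination ~ %.2e ops\n",
               n, cramer, gauss);
    }
    return 0;
}

By n = 12 the naive Cramer count is already past 10^10 operations against roughly a thousand for elimination, and an antenna-modeling matrix can easily run to hundreds of unknowns.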
On Mon, 25 Apr 2005 17:47:55 -0700, "Joel Kolstad"
wrote:

> > - C produces the fastest programs
>
> There is some truth to this, perhaps if only because so much more work (as far as I can tell) has been done on C optimizers than for other languages. Perhaps a better statement would be, "With novice programmers, C tends to produce the fastest programs."

Hi Joel,

I skipped this groaner the first time through. You could program in almost any language to the same speed of performance if you simply focused on the 5% bottleneck and coded it in assembler. Nearly every "optimizer" consists of saving a lazy programmer's bacon when they sloppily write poor control structures and assignment statements. It should be called a de-babelizer.

73's
Richard Clark, KB7QHC