  #1   April 22nd 05, 01:25 AM
Roy Lewallen

John Smith wrote:
I could be wrong, and Roy will correct me if I am, but EZNEC seems to be
written in Visual Basic, or similar... might you need the run-time libraries
for an older edition?


No, that's not the problem -- it's due to DOS not being able to properly
determine the size of a large amount of RAM. And the DOS versions were
written with the MS BASIC Professional Development System, not Visual
Basic. Windows versions of EZNEC (v. 3.0 and 4.0) are written in Visual
Basic, except the calculating engines and a few speed-critical main
program routines which are written in Fortran.

And, he (Roy) mentions "double percision"--a nasty reality of basic (and
some Fortran compilers also), which seems to confirm my suspicions...


Double precision isn't a "nasty reality" -- it's simply a way of storing
floating point variables. Normal precision floating point variables are
stored in four byte words, and consequently have a resolution of about
seven significant decimal digits. Double precision variables require 8
bytes and have about 15 significant digits of resolution. Fortran
additionally has a complex data type which requires twice as much
storage space, since each variable of that type has two parts. Some
compilers have additional, higher precisions available. The program
author can choose which data type to use for each individual variable.
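As a quick illustration of the resolutions Roy describes (my sketch, not from the original post), a Python float is a double-precision value; round-tripping it through a 4-byte word simulates storing it in a "normal precision" variable:

```python
import struct

def to_single(x):
    # Round-trip a Python float (which is double precision) through a
    # 4-byte IEEE 754 single-precision word -- the same storage a
    # "normal precision" BASIC or Fortran REAL variable uses.
    return struct.unpack('f', struct.pack('f', x))[0]

pi = 3.141592653589793          # good to ~15 significant digits
pi_s = to_single(pi)
print(f"double: {pi:.15f}")
print(f"single: {pi_s:.15f}")   # correct only to ~7 digits
```

The single-precision copy agrees with the double only through the first seven or so significant digits, exactly the difference between the 4-byte and 8-byte formats.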

Roy Lewallen, W7EL
  #2   April 22nd 05, 02:00 AM
John Smith

Yes, you are correct. In C/C++, conversion is automatic (or generates a
compiler warning prompting you to "cast" to another type) if there is the
slightest chance you will unintentionally lose precision...
If I go to VB or Fortran I tend to get a lot of math errors (which are not
caught by the compiler, but show up in real-world use!) until I remember to
compensate and control my code better... C/C++ has double precision too
(the "float" and "double" floating-point types), and you are right, it is
related to the size, in bytes, of the variable in question -- it is just
more transparent in C.
And, you are correct again: "precision" is only a matter of where you wish to
"quit", and "double-double precision" and beyond can be done, either as a
function of the compiler, by hand-coding a routine directly in assembly
language, or by implementing it in the high-level code...
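A minimal sketch (mine, not from the thread) of precision beyond hardware double implemented in a high-level language; Python's stdlib decimal module stands in here for a software "double-double" library:

```python
from decimal import Decimal, getcontext

# Hardware doubles give 15-16 significant digits; a software
# arbitrary-precision type can go as far as you like.
getcontext().prec = 50              # ask for 50 significant digits
third = Decimal(1) / Decimal(3)
print(third)                        # 0.333... carried to 50 digits
```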
Visual Basic, Fortran, COBOL (yuck!), Pascal, C, etc., are usually only a
matter of syntax, style, speed and preference... C is just my personal
preference...

Years ago it was common for Basic/VB to have issues with math variables
(actually, changes to the functions in the OS) with each new release of
Windows -- I live in the past... frown

Warmest regards,
John

"Roy Lewallen" wrote in message
...
snip

  #3   April 22nd 05, 03:36 AM
Roy Lewallen

The C "float" type is equivalent to the VB Single and Fortran Real
variable types, which are single precision (4 bytes) -- I forget what
they call the double precision real variable in C. Integers are another
matter -- all have the same precision, and the only difference between
different sizes is the size of number they can contain. It's hard to
believe a compiler can tell when you'll lose precision, since it depends
on many factors in the course of the calculation including the order of
calculation, as well as the actual variable values. I don't know what
kind of "math errors" you got when you tried to use some other language,
but it's not because the numerical precision is any less with one
language than another. Careless programming in any language can cause
errors and loss of accuracy.
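To illustrate Roy's point (my example, not his): the result of a floating-point calculation really does depend on the order of operations, so a compiler cannot know in advance whether precision will be lost:

```python
# Floating-point addition is not associative: the same three numbers
# summed in a different order give different answers.  Precision loss
# depends on the course of the calculation and the actual values,
# not just on the declared types.
big, small = 1e16, 1.0
left  = (big + small) - big     # small is absorbed by big: 0.0
right = (big - big) + small     # 1.0
print(left, right)
```

Both expressions are algebraically identical, yet only one recovers the small term.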

I'm not aware of any problem with Basic or VB with regard to variable
precision or other issues with variables. I've programmed in HP, DEC,
GW, Quick, and other flavors of Basic since the mid '60s, and VB since
v. 4. Every language has its strong and weak points, but for many years
now mathematical calculation quality has been determined by the
hardware, not the language. I suppose the language could have made a
difference before the days of the coprocessor. However, my first
commercial program, ELNEC, was introduced in early 1990 in coprocessor
and non-coprocessor versions, and I never saw a significant difference
in results between the two -- and it did some extremely intensive
floating point calculations. So if there was some problem, it must have
occurred before that.

As a side note, I once fell for the alleged superiority of C with regard
to speed compared to Basic, and reprogrammed the calculation portion of
ELNEC with Quick C. The result was that the compiler generated about 30%
more code than with the Basic PDS I was using, and it ran about 30%
slower. Some genuine C gurus where I was working looked over the code
and couldn't find anything I'd done which would cause it to run slower
than optimum. So there are good and poor compilers in all languages.

This has strayed way off topic, and the OP has contacted me directly, so
I'll exit this thread now.

Roy Lewallen, W7EL

John Smith wrote:
snip

  #4   April 22nd 05, 02:36 PM
Joe User

Roy Lewallen wrote:

-- I forget what
they call the double precision real variable in C.


double

-j
  #5   April 22nd 05, 03:50 PM
Bill Ogden

Back in the dark ages, when I was in school, we were "encouraged" to take a
numerical analysis course if we were interested in computers. (I was an EE
major.) It was not an easy topic, but it made us well aware of the
difference between correct results and computational precision. I was
recently astonished to find that most computer science students have no
concept of this area and even less interest in it.

These current thoughts extend to other areas:

- C is more accurate than Fortran (or Basic, or whatever)
- Obtaining "stable" numeric results means you get the same answer if
you run the program twice
- C produces the fastest programs
- if C is good then C++ is better
- Using all the obscure C operators produces a better program

(Anyone remember the IBM 7030 system? The user could control the rounding
direction of the floating point LSB. In this case running a program twice
(with different rounding options) really was relevant.)

Bill
W2WO





  #6   April 24th 05, 04:43 PM
J. Mc Laughlin

Dear Bill:
I too am appalled at the abandonment of a solid numerical analysis
course in engineering education. Consider the common problem of solving a
set of linear, independent algebraic equations. Students have to be shown
that Cramer's rule will not work when using the (inevitable) finite
resolution of a computer or calculator. Of course, some of the time
Cramer's rule does work, so it is important to teach students why it does
not work in general.
This is relevant to antennas where we routinely need to solve large sets
of equations. When using a computer to perform calculations, one needs to
think differently about methods than in the day when one needed to use large
sheets of paper and a pen.
If one is to use numbers, one needs to know the limitations of methods
of use.
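A tiny Python illustration (mine, not Mac's) of finite resolution defeating Cramer's rule: a nearly singular 2x2 system whose exact solution is x = (1, 1), solved in ordinary double precision:

```python
# Nearly singular 2x2 system A x = b with exact solution x = (1, 1).
a11, a12 = 1.0, 1.0
a21, a22 = 1.0, 1.0 + 1e-15
b1,  b2  = 2.0, 2.0 + 1e-15

# Cramer's rule: each unknown is a ratio of determinants.  With only
# ~16 digits of resolution, rounding and cancellation in the tiny
# determinant push the computed answer far away from (1, 1).
det = a11 * a22 - a12 * a21
x1 = (b1 * a22 - a12 * b2) / det
x2 = (a11 * b2 - b1 * a21) / det
print(x1, x2)   # roughly (1.2, 0.8) instead of (1.0, 1.0)
```

With pencil and paper the rule is exact; on a machine, the answers here are off by about 20%, which is the kind of behavior a numerical analysis course teaches students to anticipate.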
73 Mac N8TT

--
J. Mc Laughlin; Michigan U.S.A.
Home:
"Bill Ogden" wrote in message
...
Back in the dark ages, when I was in school, we were "encouraged" to take

a
numerical analysis course if we were interested in computers. (I was an

EE
major.) It was not an easy topic, but it made us well aware of the
difference between correct results and computational precision. I was
recently astonished to find that most computer science students have no
concept of this area and even less interest in it.


snip

Bill
W2WO





  #7   April 24th 05, 05:22 PM

This doesn't really matter anymore in the U.S., but it is important that
other countries do not abandon it. We do not rely on home-grown engineers
as we have in the past, since a simple telephone call offshore meets our
economic needs.
Art
"J. Mc Laughlin" wrote in message
...
snip







  #8   April 26th 05, 01:52 AM
Joel Kolstad

"J. Mc Laughlin" wrote in message
...
I too am appalled at the abandonment of a solid numerical analysis
course in engineering education. Consider the common problem of solving a
set of linear, independent algebraic equations. Students have to be shown
that Cramer's rule will not work when using the (inevitable) finite
resolution of a computer or calculator.


I was never shown that, but I do remember it being drilled into our heads that
Cramer's rule was the bogosort of linear system solving -- just about the
least efficient means you could possibly choose, and that it existed primarily
because it can be useful to have a closed form solution to a system of
equations.

Numeric analysis of linear systems is an incredibly in-depth topic, as far as
I can tell. Books such as SIAM's "Numerical Linear Algebra" spend hundreds
of pages going over it all.

---Joel


  #9   April 26th 05, 01:47 AM
Joel Kolstad

"Bill Ogden" wrote in message
...
Back in the dark ages, when I was in school, we were "encouraged" to take a
numerical analysis course if we were interested in computers. (I was an EE
major.) It was not an easy topic, but it made us well aware of the
difference between correct results and computational precision. I was
recently astonished to find that most computer science students have no
concept of this area and even less interest in it.


Realistically, 90+% of CS students are going to end up in jobs programming web
pages, databases, and other applications where it just isn't going to matter.
There just isn't time in the curriculum these days to cover everything... For
that matter, these days something like understanding the effects of finite
precision in integer arithmetic, and how it relates to fixed-point DSP
calculations, is probably applicable to a larger number of students!
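For instance (a sketch of mine, not from the post), the fixed-point arithmetic Joel mentions -- here Q15, a common 16-bit DSP format -- makes the effects of finite integer precision explicit:

```python
# Q15 fixed point: a 16-bit signed integer n represents the value
# n / 32768, giving a range of [-1.0, 0.99997).
def to_q15(x):
    # Quantize a real value, clipping to the representable Q15 range.
    return max(-32768, min(32767, int(round(x * 32768))))

def q15_mul(a, b):
    # The product of two Q15 values is Q30; shifting right by 15
    # rescales it to Q15, discarding the low bits (precision loss).
    return (a * b) >> 15

a, b = to_q15(0.5), to_q15(0.25)
print(q15_mul(a, b) / 32768)    # 0.125
```

Every multiply throws away low-order bits and every result can saturate, so the programmer has to track scaling by hand -- the finite-precision reasoning Joel argues is more broadly useful than classical numerical analysis.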

(OK, ok, I'd be the first to admit that college courses have been dumbed down
over the years as well, but this is a direct reflection of the fact that
industry just doesn't _think_ they need that many engineers who DO know the
'hard core' bits...)

These current thoughts extend to other areas:

- C is more accurate than Fortran (or Basic, or whatever)


Arbitrary statement (on the student's part).

- Obtaining "stable" numeric results means you get the same answer if
you run the program twice


Ditto.

- C produces the fastest programs


There is some truth to this, perhaps if only because so much more work (as far
as I can tell) has been done on C optimizers than for other languages.
Perhaps a better statement would be, "With novice programmers, C tends to
produce the fastest programs."

- if C is good then C++ is better


C++ does have a lot of nice benefits over regular old C. The last time I
programmed in FORTRAN it was FORTRAN 77, but I can only imagine that FORTRAN
90 has some nice improvements over FORTRAN 77 as well. (And Delphi is
purportedly a nice improvement to Pascal, etc...)

- Using all the obscure C operators produces a better program


Uggh.

---Joel


  #10   April 26th 05, 08:17 AM
Richard Clark

On Mon, 25 Apr 2005 17:47:55 -0700, "Joel Kolstad"
wrote:

- C produces the fastest programs


There is some truth to this, perhaps if only because so much more work (as far
as I can tell) has been done on C optimizers than for other languages.
Perhaps a better statement would be, "With novice programmers, C tends to
produce the fastest programs."


Hi Joel,

I skipped this groaner the first time through. You could program in
almost any language to the same speed of performance if you simply
focused on the 5% bottleneck and coded it in assembler. Nearly every
"optimizer" consists of saving a lazy programmer's bacon when they
sloppily write poor control structures and assignment statements. It
should be called a de-babelizer.

73's
Richard Clark, KB7QHC

