RadioBanter (https://www.radiobanter.com/)
Antenna forum: Help with Eznec on WinXP (https://www.radiobanter.com/antenna/69418-help-eznec-winxp.html)

SignalFerret April 22nd 05 12:09 AM

Help with Eznec on WinXP
 
I'm trying to get EZNEC ver. 2 to run on a WinXP machine, but I keep getting an Error 137, insufficient memory. Does anyone have a suggestion on what I need to set or fix?

Any help would be appreciated. Please post to the newsgroup, as this address doesn't have a mailbox associated with it (to avoid spam).

Robert
N3LGC



Cecil Moore April 22nd 05 12:17 AM

SignalFerret wrote:
I'm trying to get EZNEC ver. 2 to run on a WinXP machine, but I keep getting an Error 137, insufficient memory. Does anyone have a suggestion on what I need to set or fix?


I vaguely remember that the firewall/virus protection needs to
be disabled during installation. I'm sure Roy will respond.
--
73, Cecil, W5DXP


John Smith April 22nd 05 12:27 AM

I could be wrong, and Roy will correct me if I am, but EZNEC seems to be
written in Visual Basic, or similar... might you need the run-time libraries
for an older edition?
And, he (Roy) mentions "double percision"--a nasty reality of basic (and
some Fortran compilers also), which seems to confirm my suspicions...

Regards,
John

"SignalFerret" wrote in message
news:JoW9e.26259$jd6.8685@trnddc07...
I'm trying to get EZNEC ver. 2 to run on a WinXP machine, but I keep getting an Error 137, insufficient memory. Does anyone have a suggestion on what I need to set or fix?

Any help would be appreciated. Please post to the newsgroup, as this address doesn't have a mailbox associated with it (to avoid spam).

Robert
N3LGC





John Smith April 22nd 05 12:30 AM

precision even!!! before all the spelling freaks fall on me!

John

"John Smith" wrote in message
...
I could be wrong, and Roy will correct me if I am, but EZNEC seems to be
written in Visual Basic, or similar... might you need the run-time
libraries for an older edition?
And, he (Roy) mentions "double percision"--a nasty reality of basic (and
some Fortran compilers also), which seems to confirm my suspicions...

Regards,
John

"SignalFerret" wrote in message
news:JoW9e.26259$jd6.8685@trnddc07...
I'm trying to get EZNEC ver. 2 to run on a WinXP machine, but I keep getting an Error 137, insufficient memory. Does anyone have a suggestion on what I need to set or fix?

Any help would be appreciated. Please post to the newsgroup, as this address doesn't have a mailbox associated with it (to avoid spam).

Robert
N3LGC







Tam/WB2TT April 22nd 05 12:42 AM


"SignalFerret" wrote in message
news:JoW9e.26259$jd6.8685@trnddc07...
I'm trying to get EZNEC ver. 2 to run on a WinXP machine, but I keep getting an Error 137, insufficient memory. Does anyone have a suggestion on what I need to set or fix?

Any help would be appreciated. Please post to the newsgroup, as this address doesn't have a mailbox associated with it (to avoid spam).

Robert
N3LGC


Is that a DOS program? I recall upgrading to a Windows version, and think
it was 3.x.

Tam



SignalFerret April 22nd 05 01:00 AM

Yes, it's the DOS version, and upgrading is not in the budget this month.
Hence the need to get this version working.

Robert N3LGC

"Tam/WB2TT" wrote in message
...

"SignalFerret" wrote in message
news:JoW9e.26259$jd6.8685@trnddc07...
I'm trying to get EZNEC ver. 2 to run on a WinXP machine, but I keep getting an Error 137, insufficient memory. Does anyone have a suggestion on what I need to set or fix?

Any help would be appreciated. Please post to the newsgroup, as this address doesn't have a mailbox associated with it (to avoid spam).

Robert
N3LGC


Is that a DOS program? I recall upgrading to a Windows version, and think
it was 3.x.

Tam




John Smith April 22nd 05 01:05 AM

No, I am wrong about that being a problem...
I should have looked first...
Here is another guy with a problem with EZNEC 2.0 on Windows XP--however,
not the same problem...
http://lists.contesting.com/pipermai...ry/043902.html
So, it does run--with more or less success, on XP...
However, does setting the compatibility option in the DOS emulator of
Windows XP help with this problem? Simply drop a cmd icon on your
desktop, right-click the icon, pick Properties, set the Compatibility
tab to Windows 95, and run the program through this icon... might help....

Regards,
John
"SignalFerret" wrote in message
news:JoW9e.26259$jd6.8685@trnddc07...
I'm trying to get EZNEC ver. 2 to run on a WinXP machine, but I keep getting an Error 137, insufficient memory. Does anyone have a suggestion on what I need to set or fix?

Any help would be appreciated. Please post to the newsgroup, as this address doesn't have a mailbox associated with it (to avoid spam).

Robert
N3LGC





Roy Lewallen April 22nd 05 01:06 AM

That was a common problem with DOS EZNEC programs running on modern
machines and the fix is simple. Drop me an email and I'll be glad to
send you instructions.

Although I'm glad to answer questions here which are of general interest
about EZNEC and its use, it's more appropriate to answer specific
support questions directly.

Roy Lewallen, W7EL

SignalFerret wrote:
I'm trying to get EZNEC ver. 2 to run on a WinXP machine, but I keep getting an Error 137, insufficient memory. Does anyone have a suggestion on what I need to set or fix?

Any help would be appreciated. Please post to the newsgroup, as this address doesn't have a mailbox associated with it (to avoid spam).

Robert
N3LGC



Roy Lewallen April 22nd 05 01:25 AM

John Smith wrote:
I could be wrong, and Roy will correct me if I am, but EZNEC seems to be
written in Visual Basic, or similar... might you need the run-time libraries
for an older edition?


No, that's not the problem -- it's due to DOS not being able to properly
determine the size of a large amount of RAM. And the DOS versions were
written with the MS BASIC Professional Development System, not Visual
Basic. Windows versions of EZNEC (v. 3.0 and 4.0) are written in Visual
Basic, except the calculating engines and a few speed-critical main
program routines which are written in Fortran.

And, he (Roy) mentions "double percision"--a nasty reality of basic (and
some Fortran compilers also), which seems to confirm my suspicions...


Double precision isn't a "nasty reality" -- it's simply a way of storing
floating point variables. Normal precision floating point variables are
stored in four byte words, and consequently have a resolution of about
seven significant decimal digits. Double precision variables require 8
bytes and have about 15 significant digits of resolution. Fortran
additionally has a complex data type which requires twice as much
storage space, since each variable of that type has two parts. Some
compilers have additional, higher precisions available. The program
author can choose which data type to use for each individual variable.

Roy Lewallen, W7EL
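
As a concrete illustration of the storage sizes and digit counts described above, here is a minimal C sketch. It assumes a C99 compiler; the complex type shown is C's analogue of the Fortran COMPLEX mentioned above, and the exact output will vary a little by platform.

    #include <stdio.h>
    #include <float.h>
    #include <complex.h>   /* C99 complex type, analogous to Fortran's COMPLEX */

    int main(void)
    {
        /* Storage size and guaranteed decimal digits for each floating type. */
        printf("float : %zu bytes, %d significant decimal digits\n",
               sizeof(float), FLT_DIG);
        printf("double: %zu bytes, %d significant decimal digits\n",
               sizeof(double), DBL_DIG);
        printf("double complex: %zu bytes (two doubles: real + imaginary)\n",
               sizeof(double complex));

        /* Printing 1/3 past each type's precision shows where the digits stop. */
        printf("1/3 as float : %.20f\n", (double)(1.0f / 3.0f));
        printf("1/3 as double: %.20f\n", 1.0 / 3.0);
        return 0;
    }

On IEEE hardware FLT_DIG and DBL_DIG come out as 6 and 15, the guaranteed round-trip digit counts behind the "about seven" and "about 15" figures above.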

John Smith April 22nd 05 02:00 AM

Yes, you are correct. In "C"/C++, conversion is automatic (or generates a
compiler error prompting you to "cast" to another type) if there is the
slightest chance you will unintentionally lose precision...
If I go to VB or Fortran I tend to get a lot of math errors (which are not
caught by the compiler, but in real-world use!) until I remember to
compensate and control my code better... double precision is used by
"C"/C++ (the "double" and "float" floating-point variables) also
(you are right, it is related to the size, in bytes (bits), of the math
variable(s) in question), no problem--it is just more transparent in C.
And, you are correct again, "precision" is only a matter of where you wish
to "quit", and "double-double-precision" and greater can be done,
either as a function of the compiler, by hand-coding a routine directly in
assembly language yourself, or by implementing them in the high-level
code...
Visual Basic, Fortran, COBOL (yuck!), Pascal, "C", etc., etc. are usually only
a matter of syntax, style, speed and preference... "C" is just my personal
preference...

Years ago, Basic/VB constantly had issues with math variables (actually,
changes to the functions in the OS) with each new release of Windows;
I live in the past... frown

Warmest regards,
John

"Roy Lewallen" wrote in message
...
John Smith wrote:
I could be wrong, and Roy will correct me if I am, but EZNEC seems to be
written in Visual Basic, or similar... might you need the run-time
libraries for an older edition?


No, that's not the problem -- it's due to DOS not being able to properly
determine the size of a large amount of RAM. And the DOS versions were
written with the MS BASIC Professional Development System, not Visual
Basic. Windows versions of EZNEC (v. 3.0 and 4.0) are written in Visual
Basic, except the calculating engines and a few speed-critical main
program routines which are written in Fortran.

And, he (Roy) mentions "double percision"--a nasty reality of basic (and
some Fortran compilers also), which seems to confirm my suspicions...


Double precision isn't a "nasty reality" -- it's simply a way of storing
floating point variables. Normal precision floating point variables are
stored in four byte words, and consequently have a resolution of about
seven significant decimal digits. Double precision variables require 8
bytes and have about 15 significant digits of resolution. Fortran
additionally has a complex data type which requires twice as much storage
space, since each variable of that type has two parts. Some compilers have
additional, higher precisions available. The program author can choose
which data type to use for each individual variable.

Roy Lewallen, W7EL




Roy Lewallen April 22nd 05 03:03 AM

John Smith wrote:
No, I am wrong about that being a problem...
I should have looked first...
Here is another guy with a problem with EZNEC 2.0 on Windows XP--however,
not the same problem...
http://lists.contesting.com/pipermai...ry/043902.html
. . .


That posting is over three years old, and the poster has since upgraded
to a Windows version. There are numerous possible problems in running
DOS EZNEC under WinXP, and I've seen most of them more than once, so I
can usually recommend a cure. (There are, however, a few systems that
won't run it at all, most commonly due to not liking the old graphics
modes.)

I'd like to think I'm in a bit better position to address problems than
someone who's never used the program, and I hate to see obscure problems
with EZNEC taking up space on public newsgroups. I encourage anyone who
has any kind of trouble with any version of EZNEC to contact me
directly. I do support the program. And I do gladly honor my
unconditional satisfaction guarantee, with no time limit.

Roy Lewallen, W7EL

Roy Lewallen April 22nd 05 03:36 AM

The C "float" type is equivalent to the VB Single and Fortran Real
variable types, which are single precision (4 bytes) -- I forget what
they call the double precision real variable in C. Integers are another
matter -- all have the same precision, and the only difference between
different sizes is the size of number they can contain. It's hard to
believe a compiler can tell when you'll lose precision, since it depends
on many factors in the course of the calculation including the order of
calculation, as well as the actual variable values. I don't know what
kind of "math errors" you got when you tried to use some other language,
but it's not because the numerical precision is any less with one
language than another. Careless programming in any language can cause
errors and loss of accuracy.
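
A small C sketch of the order-of-calculation point above; the values are chosen only to make the effect obvious, and any compiler that does its float arithmetic in true single precision should reproduce it.

    #include <stdio.h>

    int main(void)
    {
        /* The same mathematical sum, evaluated in two different orders. */
        float big = 1.0e8f;      /* spacing between floats near 1e8 is 8.0 */
        float small = 1.0f;
        float s1, s2;
        int i;

        /* Adding the ones to the big number one at a time loses every one
           of them: each partial sum rounds straight back to 1.0e8. */
        s1 = big;
        for (i = 0; i < 1000; i++)
            s1 += small;

        /* Summing the small values first keeps them all. */
        s2 = 0.0f;
        for (i = 0; i < 1000; i++)
            s2 += small;
        s2 += big;

        printf("big first  : %.1f\n", s1);   /* 100000000.0                   */
        printf("small first: %.1f\n", s2);   /* 100001000.0, the exact answer */
        return 0;
    }

Same language, same variables, same precision; only the order of the additions changed.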

I'm not aware of any problem with Basic or VB with regard to variable
precision or other issues with variables. I've programmed in HP, DEC,
GW, Quick, and other flavors of Basic since the mid '60s, and VB since
v. 4. Every language has its strong and weak points, but for many years
now mathematical calculation quality has been determined by the
hardware, not the language. I suppose the language could have made a
difference before the days of the coprocessor. However, my first
commercial program, ELNEC, was introduced in early 1990 in coprocessor
and non-coprocessor versions, and I never saw a significant difference
in results between the two -- and it did some extremely intensive
floating point calculations. So if there was some problem, it must have
occurred before that.

As a side note, I once fell for the alleged superiority of C with regard
to speed compared to Basic, and reprogrammed the calculation portion of
ELNEC with Quick C. The result was that the compiler generated about 30%
more code than with the Basic PDS I was using, and it ran about 30%
slower. Some genuine C gurus where I was working looked over the code
and couldn't find anything I'd done which would cause it to run slower
than optimum. So there are good and poor compilers in all languages.

This has strayed way off topic, and the OP has contacted me directly, so
I'll exit this thread now.

Roy Lewallen, W7EL

John Smith wrote:
Yes, you are correct. In "C"/C++, conversion is automatic (or generates a
compiler error prompting you to "cast" to another type) if there is the
slightest chance you will unintentionally lose precision...
If I go to VB or Fortran I tend to get a lot of math errors (which are not
caught by the compiler, but in real-world use!) until I remember to
compensate and control my code better... double precision is used by
"C"/C++ (the "double" and "float" floating-point variables) also
(you are right, it is related to the size, in bytes (bits), of the math
variable(s) in question), no problem--it is just more transparent in C.
And, you are correct again, "precision" is only a matter of where you wish
to "quit", and "double-double-precision" and greater can be done,
either as a function of the compiler, by hand-coding a routine directly in
assembly language yourself, or by implementing them in the high-level
code...
Visual Basic, Fortran, COBOL (yuck!), Pascal, "C", etc., etc. are usually only
a matter of syntax, style, speed and preference... "C" is just my personal
preference...

Years ago, Basic/VB constantly had issues with math variables (actually,
changes to the functions in the OS) with each new release of Windows;
I live in the past... frown


John Smith April 22nd 05 04:44 AM

Yes, I am sure. You are surely a gentleman who would not use a newsgroup to
market products...
And what you suggest was not going on either.
So let's skip the petty stuff; I am sure the fellow who posted the question
enjoyed the support and friendly exchange.
No one would question the importance of an authority on the matter, and no
one would question the use of civilized behavior here either--I suspect...

Regards,
John

"Roy Lewallen" wrote in message
...
John Smith wrote:
No, I am wrong about that being a problem...
I should have looked first...
Here is another guy with a problem with EZNEC 2.0 on Windows XP--however,
not the same problem...
http://lists.contesting.com/pipermai...ry/043902.html
. . .


That posting is over three years old, and the poster has since upgraded to
a Windows version. There are numerous possible problems in running DOS
EZNEC under WinXP, and I've seen most of them more than once, so I can
usually recommend a cure. (There are, however, a few systems that won't
run it at all, most commonly due to not liking the old graphics modes.)

I'd like to think I'm in a bit better position to address problems than
someone who's never used the program, and I hate to see obscure problems
with EZNEC taking up space on public newsgroups. I encourage anyone who
has any kind of trouble with any version of EZNEC to contact me directly.
I do support the program. And I do gladly honor my unconditional
satisfaction guarantee, with no time limit.

Roy Lewallen, W7EL




Joe User April 22nd 05 02:36 PM

Roy Lewallen wrote:

-- I forget what
they call the double precision real variable in C.


double

-j

Bill Ogden April 22nd 05 03:50 PM

Back in the dark ages, when I was in school, we were "encouraged" to take a
numerical analysis course if we were interested in computers. (I was an EE
major.) It was not an easy topic, but it made us well aware of the
difference between correct results and computational precision. I was
recently astonished to find that most computer science students have no
concept of this area and even less interest in it.

This current (student) thinking extends to other areas:

- C is more accurate than Fortran (or Basic, or whatever)
- Obtaining "stable" numeric results means you get the same answer if
you run the program twice
- C produces the fastest programs
- if C is good then C++ is better
- Using all the obscure C operators produces a better program

(Anyone remember the IBM 7030 system? The user could control the rounding
direction of the floating point LSB. In this case running a program twice
(with different rounding options) really was relevant.)

Bill
W2WO
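
The same knob still exists on IEEE hardware today: C99 exposes the rounding direction through fesetround() in <fenv.h>. A hedged sketch (compile without aggressive optimization, link with -lm on some systems, and note that not every compiler fully honors runtime rounding-mode changes):

    #include <stdio.h>
    #include <fenv.h>

    /* Sum 0.1 a million times under three rounding directions.  0.1 is not
       exactly representable in binary, so the direction of each rounding
       shows up in the final sum. */
    static double sum_tenths(int n)
    {
        double s = 0.0;
        int i;
        for (i = 0; i < n; i++)
            s += 0.1;
        return s;
    }

    int main(void)
    {
        fesetround(FE_DOWNWARD);
        printf("round toward -inf: %.10f\n", sum_tenths(1000000));

        fesetround(FE_UPWARD);
        printf("round toward +inf: %.10f\n", sum_tenths(1000000));

        fesetround(FE_TONEAREST);
        printf("round to nearest : %.10f\n", sum_tenths(1000000));
        return 0;
    }

Running the "same" program under two different rounding directions and comparing the answers is still a cheap sanity check on how much rounding is influencing a result.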




J. Mc Laughlin April 24th 05 04:43 PM

Dear Bill:
I too am appalled at the abandonment of a solid numerical analysis
course in engineering education. Consider the common problem of solving a
set of linear, independent algebraic equations. Students have to be shown
that Cramer's rule will not work when using the (inevitable) finite
resolution of a computer or calculator. Of course, some of the time
Cramer's rule does work so it is important to teach students why it does not
work in general.
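
A minimal C sketch of this kind of failure, using a deliberately ill-conditioned 2x2 system (the function names and numbers are purely illustrative): the determinant is a difference of two nearly equal products, so single precision gives up most of its digits.

    #include <stdio.h>

    /* Solve  x +        y = 2
              x + 1.0001*y = 2.0001   (exact solution: x = 1, y = 1)
       by Cramer's rule, once in single and once in double precision. */

    static void cramer_float(void)
    {
        float a11 = 1.0f, a12 = 1.0f,    b1 = 2.0f;
        float a21 = 1.0f, a22 = 1.0001f, b2 = 2.0001f;
        float det = a11 * a22 - a12 * a21;        /* tiny, born of cancellation */
        float x = (b1 * a22 - b2 * a12) / det;
        float y = (a11 * b2 - a21 * b1) / det;
        printf("single: x = %.7f  y = %.7f\n", x, y);
    }

    static void cramer_double(void)
    {
        double a11 = 1.0, a12 = 1.0,    b1 = 2.0;
        double a21 = 1.0, a22 = 1.0001, b2 = 2.0001;
        double det = a11 * a22 - a12 * a21;
        double x = (b1 * a22 - b2 * a12) / det;
        double y = (a11 * b2 - a21 * b1) / det;
        printf("double: x = %.15f  y = %.15f\n", x, y);
    }

    int main(void)
    {
        cramer_float();    /* roughly x = 1.0012, y = 0.9988 on IEEE hardware */
        cramer_double();   /* correct to a dozen or so digits                 */
        return 0;
    }

Here the damage comes just from representing 1.0001 and 2.0001 in single precision; in larger and worse-conditioned systems the choice of method itself starts to matter, which is where elimination with pivoting (and a numerical analysis course) earns its keep.
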
This is relevant to antennas where we routinely need to solve large sets
of equations. When using a computer to perform calculations, one needs to
think differently about methods than in the day when one needed to use large
sheets of paper and a pen.
If one is to use numbers, one needs to know the limitations of the methods
in use.
73 Mac N8TT

--
J. Mc Laughlin; Michigan U.S.A.
Home:
"Bill Ogden" wrote in message
...
Back in the dark ages, when I was in school, we were "encouraged" to take a
numerical analysis course if we were interested in computers. (I was an EE
major.) It was not an easy topic, but it made us well aware of the
difference between correct results and computational precision. I was
recently astonished to find that most computer science students have no
concept of this area and even less interest in it.


snip

Bill
W2WO






[email protected] April 24th 05 05:22 PM

This doesn't really matter anymore in the U.S., but it is important that
other countries do not abandon it. We do not rely on home-grown engineers
as we have in the past, since a simple telephone call offshore meets our
economic needs.
Art
"J. Mc Laughlin" wrote in message
...
Dear Bill:
I too am appalled at the abandonment of a solid numerical analysis
course in engineering education. Consider the common problem of solving a
set of linear, independent algebraic equations. Students have to be shown
that Cramer's rule will not work when using the (inevitable) finite
resolution of a computer or calculator. Of course, some of the time
Cramer's rule does work so it is important to teach students why it does
not work in general.
This is relevant to antennas where we routinely need to solve large sets
of equations. When using a computer to perform calculations, one needs to
think differently about methods than in the day when one needed to use
large sheets of paper and a pen.
If one is to use numbers, one needs to know the limitations of the methods
in use.
73 Mac N8TT

--
J. Mc Laughlin; Michigan U.S.A.
Home:
"Bill Ogden" wrote in message
...
Back in the dark ages, when I was in school, we were "encouraged" to take a
numerical analysis course if we were interested in computers. (I was an EE
major.) It was not an easy topic, but it made us well aware of the
difference between correct results and computational precision. I was
recently astonished to find that most computer science students have no
concept of this area and even less interest in it.


snip

Bill
W2WO








Joel Kolstad April 26th 05 01:47 AM

"Bill Ogden" wrote in message
...
Back in the dark ages, when I was in school, we were "encouraged" to take a
numerical analysis course if we were interested in computers. (I was an EE
major.) It was not an easy topic, but it made us well aware of the
difference between correct results and computational precision. I was
recently astonished to find that most computer science students have no
concept of this area and even less interest in it.


Realistically 90+% of CS students are going to end up in jobs programming web
pages, databases, and other applications where it just isn't going to matter.
There just isn't time in the curriculum these days to cover everything... For
that matter, these days something like understanding the effects of finite
precision in integer arithmetic and how it relates to fixed-point DSP
calculations is probably applicable to a larger number of students!
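
For what it's worth, that fixed-point issue fits in a few lines of C. A minimal Q15 sketch; the q15_mul helper, the 16/32-bit widths, and the constants are just illustrative, though they match what small integer DSPs typically do.

    #include <stdio.h>
    #include <stdint.h>

    /* Q15 fixed point: a 16-bit integer n stands for the value n / 32768.0. */

    static int16_t q15_mul(int16_t a, int16_t b)
    {
        /* Widen to 32 bits so the full product fits, then shift back down.
           The 15 low-order bits are simply discarded -- that is where the
           precision goes. */
        return (int16_t)(((int32_t)a * (int32_t)b) >> 15);
    }

    int main(void)
    {
        int16_t x = 16384;                 /* 0.5  in Q15 */
        int16_t y = 26214;                 /* ~0.8 in Q15 */
        int16_t p = q15_mul(x, y);
        int16_t bad;

        printf("0.5 * 0.8 ~= %f (raw Q15 value %d)\n", p / 32768.0, p);

        /* Narrowing the raw product straight back to 16 bits throws the high
           bits away (on a 16-bit DSP the multiply itself would overflow). */
        bad = (int16_t)(x * y);
        printf("naive 16-bit result: %d\n", bad);
        return 0;
    }

Rounding of the discarded bits, saturation, and guard bits in the accumulator are the next layers of the same finite-precision story.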

(OK, ok, I'd be the first to admit that college courses have been dumbed down
over the years as well, but this is a direct reflection of the fact that
industry just doesn't _think_ they need that many engineers who DO know the
'hard core' bits...)

These current thoughts extend to other areas:

- C is more accurate than Fortran (or Basic, or whatever)


Arbitrary statement (on the student's part).

- Obtaining "stable" numeric results means you get the same answer if
you run the program twice


Ditto.

- C produces the fastest programs


There is some truth to this, perhaps if only because so much more work (as far
as I can tell) has been done on C optimizers than for other languages.
Perhaps a better statement would be, "With novice programmers, C tends to
produce the fastest programs."

- if C is good then C++ is better


C++ does have a lot of nice benefits over regular old C. The last time I
programmed in FORTRAN it was FORTRAN 77, but I can only imagine that FORTRAN
90 has some nice improvements over FORTRAN 77 as well. (And Delphi is
purportedly a nice improvement to Pascal, etc...)

- Using all the obscure C operators produces a better program


Uggh.

---Joel



Joel Kolstad April 26th 05 01:52 AM

"J. Mc Laughlin" wrote in message
...
I too am appalled at the abandonment of a solid numerical analysis
course in engineering education. Consider the common problem of solving a
set of linear, independent algebraic equations. Students have to be shown
that Cramer's rule will not work when using the (inevitable) finite
resolution of a computer or calculator.


I was never shown that, but I do remember it being drilled into our heads that
Cramer's rule was the bogosort of linear system solving -- just about the
least efficient means you could possibly choose, and that it existed primarily
because it can be useful to have a closed form solution to a system of
equations.

Numeric analysis of linear systems is an incredibly in-depth topic, as far as
I can tell. Books such as SIAM's "Numerical Linear Algebra" spend hundreds
of pages going over it all.

---Joel



Richard Clark April 26th 05 08:17 AM

On Mon, 25 Apr 2005 17:47:55 -0700, "Joel Kolstad"
wrote:

- C produces the fastest programs


There is some truth to this, perhaps if only because so much more work (as far
as I can tell) has been done on C optimizers than for other languages.
Perhaps a better statement would be, "With novice programmers, C tends to
produce the fastest programs."


Hi Joel,

I skipped this groaner the first time through. You could program in
almost any language to the same speed of performance if you simply
focused on the 5% bottleneck and coded it in assembler. Nearly every
"optimizer" consists of saving a lazy programmer's bacon when they
sloppily write poor control structures and assignment statements. It
should be called a de-babelizer.

73's
Richard Clark, KB7QHC

Joel Kolstad April 26th 05 04:35 PM

Hi Rich,

"Richard Clark" wrote in message
...
I skipped this groaner the first time through. You could program in
almost any language to the same speed of performance if you simply
focused on the 5% bottleneck and coded it in assembler.


Good point, although in the case of highly sophisticated CPUs (superscalar,
VLIW, etc.), getting all the cache and register access scheduling optimal
is difficult enough that there are typically very few people who can
consistently do better in assembly than a high-level language with an
optimizer.

In many cases selecting a better algorithm might buy one a lot more!




John Smith April 26th 05 04:45 PM

I love "religious wars" over languages and algorithms!!!

grin

Warmest regards,
John

"Richard Clark" wrote in message
...
On Mon, 25 Apr 2005 17:47:55 -0700, "Joel Kolstad"
wrote:

- C produces the fastest programs


There is some truth to this, perhaps if only because so much more work (as
far as I can tell) has been done on C optimizers than for other languages.
Perhaps a better statement would be, "With novice programmers, C tends to
produce the fastest programs."


Hi Joel,

I skipped this groaner the first time through. You could program in
almost any language to the same speed of performance if you simply
focused on the 5% bottleneck and coded it in assembler. Nearly every
"optimizer" consists of saving a lazy programmer's bacon when they
sloppily write poor control structures and assignment statements. It
should be called a de-babelizer.

73's
Richard Clark, KB7QHC




Richard Clark April 26th 05 11:07 PM

On Tue, 26 Apr 2005 08:35:35 -0700, "Joel Kolstad"
wrote:

Good point, although in the case of highly sophisticated CPUs (superscalar,
VLIW, etc.), getting all the cache and register access scheduling optimal
is difficult enough that there are typically very few people who can
consistently do better in assembly than a high-level language with an
optimizer.

In many cases selecting a better algorithm might buy one a lot more!


Hi Joel,

Almost every performance gain you describe is hardware limited - not
software limited. If you can conspire to make every reference call a
cache hit, you win, but 99.999% of the applications used by everyone
here (including antenna modeling) fail in that one regard and stumble
over the rest of the "optimizations." When I look at my performance
monitor, it is idling along at 0 to 2% usage as I type (no surprise).

When I pulled up a page from the New York Times (before I set my
firewall filters to turn off advertising), it would peg at 100% ad
infinitum (I guess there's an ironic pun in that). I dare say that no
one is using optimized code for running Nike ads - or, if they are,
that it makes no appreciable difference at 2 GHz (with memory access
running at, what, 10% of that?). What HAS been optimized is the data
compression schemes that make up for sloppy code (the problem is
undoubtedly a memory leak or a failed garbage-collection routine).

In a sense, I used client-side optimization to kill the advertising
stream.

73's
Richard Clark, KB7QHC

Richard Clark April 26th 05 11:09 PM

On Tue, 26 Apr 2005 08:45:36 -0700, "John Smith"
wrote:
I love "religious wars" over languages and algorithms!!!


Hi Brett,

Well, I've programmed in them all from binary to AI - so that makes me
an agnostic.

73's
Richard Clark, KB7QHC

John Smith April 26th 05 11:19 PM

Richard:

Hmmm, an agnostic huh? EXCELLENT!!!
I will expect no "religious wars" from you on languages! grin

Warmest regards,
John

"Richard Clark" wrote in message
...
On Tue, 26 Apr 2005 08:45:36 -0700, "John Smith"
wrote:
I love "religious wars" over languages and algorithms!!!


Hi Brett,

Well, I've programmed in them all from binary to AI - so that makes me
an agnostic.

73's
Richard Clark, KB7QHC




