Computer model experiment

#1 - May 11th 10, 09:17 PM - posted to rec.radio.amateur.antenna

On May 11, 1:38 pm, Jim Lux wrote:
Ralph Mowery wrote:
"tom" wrote in message
et...
On 5/10/2010 3:12 PM, wrote:
As Clint said in the wonderful old movie, "A man's gotta know his limits".
For antenna modelers it should read, "A man's gotta know the program's
limits".


Of course, Art thinks things have changed and the computer modelers have a
better grasp upon reality than the ones even he calls "the masters". He is
an example of the blind man leading himself.


tom
K0TAR


The computer program should know its limits.


Yes and no. For EM modeling codes originally intended for use by
sophisticated users with a knowledge of the limitations of numerical
analysis, they might assume the user knows enough to formulate models
that are "well conditioned", or how to experiment to determine this.
NEC is the leading example here. It doesn't do much checking of the
inputs, and assumes you know what you are doing.

There were modeling articles in ARRL pubs 20 years ago that described
one way to do this at a simple level: changing the number of segments in
the model and seeing if the results change. The "average gain test" is
another way.
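To make that concrete, here is a minimal sketch of such a convergence
check in Python; run_model is a hypothetical stand-in for however you
drive your modeling engine, and the tolerances are arbitrary:

    # Convergence test: double the segment count until the quantities of
    # interest stop changing.  run_model is a user-supplied callable that
    # returns (gain_dbi, feed_impedance) for a given segment count.
    def converged(run_model, tol_db=0.05, tol_ohms=1.0, start=5, max_segs=200):
        segs = start
        gain, z = run_model(segs)
        while segs * 2 <= max_segs:
            segs *= 2
            new_gain, new_z = run_model(segs)
            if abs(new_gain - gain) < tol_db and abs(new_z - z) < tol_ohms:
                return segs, new_gain, new_z   # results stable: trust them
            gain, z = new_gain, new_z
        raise RuntimeError("never converged; the model may be ill-conditioned")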

In many cases, the constraints on the model are not simply representable
(a lot of "it depends"), which raises an issue for building a "design rule
checker" that is reasonably robust. Some products that use NEC as the
backend put a checker on the front end (4nec2, for instance, warns you
about length/diameter ratios, near intersections, and the like).
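A front-end checker of that sort need not be elaborate. A sketch of the
idea in Python, using guideline numbers commonly quoted for NEC-2's
thin-wire kernel (treat the exact thresholds as assumptions, not gospel):

    # Rough design-rule checks of the kind 4nec2 performs.  Thresholds are
    # commonly quoted NEC-2 guidelines, not hard limits: the standard
    # thin-wire kernel wants segment length > ~8x the wire radius, and
    # segments shorter than roughly a tenth of a wavelength.
    def check_segment(seg_len_m, radius_m, wavelength_m):
        warnings = []
        if seg_len_m < 8 * radius_m:
            warnings.append("segment too short for its radius (thin-wire limit)")
        if seg_len_m > wavelength_m / 10:
            warnings.append("segment longer than lambda/10; refine the model")
        return warnings

    print(check_segment(seg_len_m=0.05, radius_m=0.01, wavelength_m=10.0))
    # -> ['segment too short for its radius (thin-wire limit)']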

It's sort of like power tools vs hand tools. The assumption is that the
user of the power tool knows how to use it.

Anytime a program allows the data entered to be too large or small for
the calculations, it should be flagged as being out of range. Also, many
computer programs will use simplified formulas that can mask the true
outcome. Usually it is not very much, but as all the errors start to add
up, the end results may be way off.


There are whole books written on this for NEC. Part I of the NEC
documents, in particular, discusses this. There's also a huge
professional literature on various FEM computational techniques and
their limitations. NEC, like most numerical codes (for mechanics,
thermal, as well as EM), is very much a chainsaw without safety guards.
It's up to the user to wear gloves and goggles and not cut their leg off.


Jim Lux of NASA, no less!
All of the programs clearly state that they are based on Maxwell's
equations. The bottom line of those equations is that all forces
involved must be accounted for, with the summation of all equal to
zero. This is nothing new and has been followed through for centuries.
The equations require, first and foremost, equilibrium, and what the
program supplies is easily checked to see that it meets these
requirements. It is very simple: showing that, inside an arbitrary
boundary, everything within, as with the whole, must be resonant and
in equilibrium. It requires no more than that to show whether the
program has achieved its object. I understand your preachings, but
you presented no point that can be discussed.
Now you will respond that I must do such and such to back the
statement above, despite those requirements being the basis of
physics. So to you I will supply the same that I have supplied to
others, which they reject; no one has stated why.
An arbitrary Gaussian border containing static particles
(not waves, as many surmise; Gauss was very clear about the presence
of static particles) in equilibrium may be made dynamic by the
addition of a time-varying field, such that Maxwell's equations can be
applied to solve. I have stated the over-checks that can be applied to
verify the correctness of this procedure. You may, of course, join the
poll that swells on behalf of NASA in opposition to the above, but it
would provide me a great deal of delight if you provided more than to
just say "I am wrong". Nobody has yet provided one mathematical reason
that disputes the above, so in the absence of such you will not be
alone; only your credibility suffers, but you will remain in the
majority of the poll in the eyes of the ham radio world.
Regards
Art Unwin
#2 - May 11th 10, 10:02 PM - posted to rec.radio.amateur.antenna

Art Unwin wrote:
On May 11, 1:38 pm, Jim Lux wrote:

The computer program should know its limits.

Yes and no. For EM modeling codes originally intended for use by
sophisticated users with a knowledge of the limitations of numerical
analysis, they might assume the user knows enough to formulate models
that are "well conditioned", or how to experiment to determine this.
NEC is the leading example here. It doesn't do much checking of the
inputs, and assumes you know what you are doing.

Jim Lux of NASA no less!

Speaking, however, as Jim Lux, engineer, not necessarily on NASA's behalf.

All of the programs clearly state that they are based on Maxwell's
equations.

snip
I understand your preachings but
you presented no point that can be discussed.



While NEC and its ilk are clearly based on Maxwell's equations, one
should realize that they do not provide an analytical closed form
solution, but, rather, are numerical approximations, and are subject to
all the limitations inherent in that. They solve for the currents by
the method of moments, which is but one way to find a solution, and one
that happens to work quite well with things made of wires.
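For anyone who has never seen the method of moments in the flesh, here
is its simplest textbook instance, the classic electrostatic problem
(after Harrington) of finding the charge on a thin wire held at 1 volt
rather than NEC's electrodynamic kernel. The structure is the same:
expand the unknown in simple basis functions, enforce the equation at
match points, and solve a matrix equation. A sketch in Python:

    # Method of moments, electrostatic version: charge on a 1 m wire of
    # 1 mm radius held at 1 V.  Pulse basis, point matching.  Illustrative
    # only; NEC solves the harder electrodynamic problem the same way.
    import numpy as np

    eps0 = 8.854e-12
    L, a, N = 1.0, 1e-3, 100           # wire length, radius, segment count
    dz = L / N
    z = (np.arange(N) + 0.5) * dz      # segment midpoints (match points)

    A = np.empty((N, N))
    for m in range(N):
        R = np.abs(z[m] - z)
        A[m] = dz / (4 * np.pi * eps0 * np.sqrt(R**2 + a**2))
    # Self term: exact integral of 1/R over the segment's own length.
    np.fill_diagonal(A, 2 * np.arcsinh(dz / (2 * a)) / (4 * np.pi * eps0))

    sigma = np.linalg.solve(A, np.ones(N))   # charge per metre, per segment
    C = sigma.sum() * dz                     # total charge = capacitance at 1 V
    print(f"capacitance ~ {C*1e12:.2f} pF")  # close to 2*pi*eps0*L/ln(L/a), ~8 pF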

Within the limits of computational precision, for simple cases, where
analytical solutions are known to exist, the results of NEC and the
analytical solution are identical. That's what validation of the code
is all about.

Further, where there is no analytical solution available, measured data
on an actual antenna matches that predicted by the model, within
experimental uncertainty.

In both of the above situations, the validation has been done many
times, by many people, other than the original authors of the software,
so NEC fits in the category of "high quality validated modeling tools".

This does not mean, however, that just because NEC is based on Maxwell's
equations, anything that is solvable with Maxwell will be equally
solvable in NEC.

I suspect that one could take the NEC algorithms, and implement a
modeling code for, say, a dipole, using an arbitrary precision math
package and get results that are accurate to any desired degree. This
would be a lot of work.

It's unclear that this would be useful, except perhaps as an
extraordinary proof for an extraordinary claim (e.g. a magic antenna
that "can't be modeled in NEC"). However, once you've done all that
software development, you'd need independent verification that you
correctly implemented it.

This is where a lot of the newer modeling codes come from (e.g. FDTD):
they are designed to model things that a method of moments code can't do
effectively.
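For contrast with the method of moments, here is a bare-bones
illustration of the FDTD idea in one dimension: march the curl equations
forward in time on a staggered grid. A toy in normalized units, not a
usable field solver:

    # Toy 1-D FDTD: E and H leapfrog each other in time on a staggered
    # (Yee) grid.  Normalized units with Courant number 1; the grid ends
    # act as perfect reflectors, which a real code would replace with
    # absorbing boundaries.
    import numpy as np

    nz, nt = 400, 350
    ez = np.zeros(nz)        # electric field at integer grid points
    hy = np.zeros(nz - 1)    # magnetic field at half grid points

    for t in range(nt):
        hy += np.diff(ez)                        # update H from the curl of E
        ez[1:-1] += np.diff(hy)                  # update E from the curl of H
        ez[50] += np.exp(-((t - 40) / 12) ** 2)  # soft Gaussian source

    print("peak |Ez| on the grid:", np.abs(ez).max())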


#3 - May 12th 10, 01:30 AM - posted to rec.radio.amateur.antenna

On May 11, 4:02 pm, Jim Lux wrote:
snip
While NEC and its ilk are clearly based on Maxwell's equations, one
should realize that they do not provide an analytical closed form
solution, but, rather, are numerical approximations, and are subject to
all the limitations inherent in that.
snip

Again you preach, but obviously you are not qualified to address the
issue. Maxwell's equations are such that all forces are accounted for
when the array is in a state of equilibrium. To use such equations for
an array that is not in equilibrium requires additional input
(proximity equations), which is where errors creep in. When an array
is in equilibrium, Maxwell's equations are exact. The proof of the
pudding is that the resulting array is in equilibrium, as are its
parts. AO Pro by Beezley consistently produces an array in equilibrium
when the optimizer is used, as well as including the presence of
particles dictated by Gauss. The program is of MININEC foundation,
which obviously does not require the patchwork approach that NEC has.
On top of all that, it sees an element as one in encapsulation, as
foreseen by Gauss, by removing the resistance of the element, which
produces a loss, and thus allows dealing only with the vectors as they
relate to propagation. It is only because hams use Maxwell's equations
on occasions where equilibrium does not exist, such as the Yagi, that
errors start to creep in. Any array produced solely by the use of
Maxwell's equations provides proof of association by producing an
array in equilibrium, which can be seen as an over-check. Like you, I
speak only as an engineer on behalf of myself. Clearly, Maxwell took
advantage of the presence of particles when he added displacement
current so that the principle of equilibrium would be adhered to. This
is exactly the same as what Faraday did when explaining the
transference from a particle to a time-varying current when describing
the workings of the cage.
Regards
Art
#4 - May 12th 10, 06:10 PM - posted to rec.radio.amateur.antenna

On May 11, 8:30 pm, Art Unwin wrote:
When an array is
in equilibrium then Maxwell's equations are exact.


Maxwell's equations are ALWAYS exact; it is digital models that are
inexact and have limitations due to the approximations made and the
numeric representations used.
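The numeric-representation part is easy to demonstrate with a Python
snippet, nothing NEC-specific: in single precision, 0.1 cannot be
represented exactly, and the rounding on every addition accumulates:

    # Quantization in action: 0.1 has no exact binary representation,
    # and single-precision rounding error accumulates over a long sum.
    import numpy as np

    print(f"{np.float32(0.1):.10f}")   # 0.1000000015 -- already inexact

    acc = np.float32(0.0)
    for _ in range(1_000_000):
        acc += np.float32(0.1)
    print(acc)   # roughly 100958, nearly 1% above the exact 100000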
#5 - May 12th 10, 08:29 PM - posted to rec.radio.amateur.antenna

On May 12, 12:10 pm, K1TTT wrote:
On May 11, 8:30 pm, Art Unwin wrote:

When an array is
in equilibrium then Maxwell's equations are exact.


Maxwell's equations are ALWAYS exact; it is digital models that are
inexact and have limitations due to the approximations made and the
numeric representations used.


On this I have total agreement. The moment one strays from the concept
of equilibrium is when we expose ourselves to errors.
Regards
Art



#6 - May 12th 10, 09:36 PM - posted to rec.radio.amateur.antenna

On May 12, 3:29 pm, Art Unwin wrote:
On May 12, 12:10 pm, K1TTT wrote:
snip
Maxwell's equations are ALWAYS exact; it is digital models that are
inexact and have limitations due to the approximations made and the
numeric representations used.

On this I have total agreement. The moment one strays from the concept
of equilibrium is when we expose ourselves to errors.
Regards
Art


OK, so you DO agree that Maxwell's equations, which make no mention of
particles like neutrinos, or of gravity, Coriolis forces, or levitation,
ARE correct! And therefore you must agree that the representation of
Gauss's law encapsulated in Maxwell's equations, WITHOUT an explicit t
in it, must be correct! You must also be admitting that your
optimization experiments are full of errors. Wow, now it's time to go
and rejoice: Art has finally come around to the real world!
#7 - May 13th 10, 01:56 PM - posted to rec.radio.amateur.antenna

K1TTT wrote:
On May 11, 8:30 pm, Art Unwin wrote:
When an array is
in equilibrium then Maxwell's equations are exact.


maxwell's equations are ALWAYS exact, it is digital models that are
inexact and have limitations due to the approximations made and the
numeric representations used.


Inexactness of the solution isn't because the method is digital. The
field equations solved by the digital methods simply can't be solved by
other methods, except for a relatively few very simple cases. Many
non-digital methods were developed over the years before high speed
computers to arrive at various approximate solutions, but all have
shortcomings. For example, I have a thick file of papers devoted to the
apparently simple problem of finding the input impedance of a dipole of
arbitrary length and diameter. Even that can't be solved in closed form.
Solution by digital methods is vastly superior, capable of giving much
more accurate results than solution by any other known method.

Roy Lewallen, W7EL
#8 - May 14th 10, 12:19 PM - posted to rec.radio.amateur.antenna

On May 13, 8:56 am, Roy Lewallen wrote:
snip
Solution by digital methods is vastly superior, capable of giving much
more accurate results than solution by any other known method.

Roy Lewallen, W7EL

Quantization of every number in a numeric simulation is but one of the
contributions to inaccuracy. The limitations of the physical model are
another: every modeling program I know of breaks the physical thing
being modeled into small pieces, some with fixed sizes, some using
adaptive methods, but then they all calculate using those small pieces
as if each were a single homogeneous piece with step changes at the
edges... that also adds to inaccuracies. The robustness of the
algorithm and the residual errors created are a big part of getting
more accurate results. There is no doubt that numerical methods have
allowed 'solutions' of many problems that would be extremely difficult
to find closed form solutions for, but they must always be examined
for the acceptability of the unavoidable errors in the method used.

Other non-digital methods also have their limitations. Unless you are
using the original differential or integral equations and satisfying
all the boundary conditions, your method will introduce errors.
Whether that means you represent an odd-shaped solid object by a
sphere, or make other geometric replacements that give you simpler
field configurations, you have introduced errors at some level. You
must of course judge these methods in the same way, to determine if
the errors introduced by the simplified geometry or other methods used
are acceptable for the problem at hand.
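The geometry point can be seen in miniature with no electromagnetics at
all: approximate a circle by an inscribed polygon, and the error from
the simplified shape shrinks as the pieces get smaller:

    # Discretization error in miniature: area of an inscribed regular
    # n-gon vs. the true circle (radius 1).  Error falls roughly as 1/n^2.
    import math

    for n in (8, 32, 128, 512):
        area = 0.5 * n * math.sin(2 * math.pi / n)
        print(f"n={n:4d}  area={area:.6f}  error={math.pi - area:.2e}")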
#9 - May 14th 10, 04:50 PM - posted to rec.radio.amateur.antenna

K1TTT wrote:
snip
Quantization of every number in a numeric simulation is but one of the
contributions to inaccuracy. The limitations of the physical model are
another: every modeling program I know of breaks the physical thing
being modeled into small pieces, some with fixed sizes, some using
adaptive methods, but then they all calculate using those small pieces
as if each were a single homogeneous piece with step changes at the
edges...


Not all modeling uses step changes. Some modeling approaches use a model
description that is continuous at element boundaries (at least for some
number of derivatives). For example, a cubic spline has smoothly
varying values and first and second derivatives.

The tradeoff in the code is whether you use fewer, better (higher-order)
chunks or more, simpler chunks. For instance, NEC uses a basis function
that represents the current in a segment (the chunk) as the combination
of a constant and two sinusoidal terms. Other codes assume the current
is uniform over the segment; yet others assume a sinusoidal distribution
or a triangle.

This leads to a tradeoff in computational resources required: numerical
precision, computational complexity, etc. (lots of simple elements tend
to require higher precision).
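That fewer-better versus more-simpler tradeoff is easy to demonstrate
outside of any solver: take a dipole-like cosine current distribution
and represent it first with pulse (piecewise-constant) basis functions,
then with triangles (piecewise-linear). The higher-order basis reaches
a given error with far fewer chunks. A small Python sketch:

    # Pulse vs. triangle basis functions approximating a cosine current
    # distribution.  Higher-order basis -> fewer chunks for the same
    # worst-case error.
    import numpy as np

    z = np.linspace(-0.5, 0.5, 2001)
    exact = np.cos(np.pi * z)

    for n in (8, 32, 128):
        edges = np.linspace(-0.5, 0.5, n + 1)
        mids = 0.5 * (edges[:-1] + edges[1:])
        idx = np.clip(np.searchsorted(edges, z) - 1, 0, n - 1)
        pulse = np.cos(np.pi * mids)[idx]                  # piecewise constant
        tri = np.interp(z, edges, np.cos(np.pi * edges))   # piecewise linear
        print(f"n={n:3d}  pulse err={np.abs(pulse - exact).max():.4f}"
              f"  triangle err={np.abs(tri - exact).max():.5f}")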

I think that for codes hams are likely to encounter, these are pretty
subtle differences and irrelevant. A lot of the "computational
efficiency" issues are getting smaller, as cheap processor horsepower is
easy to come by.


that also adds to inaccuracies. The robustness of the
algorithm and the residual errors created are a big part of getting
more accurate results. There is no doubt that numerical methods have
allowed 'solutions' of many problems that would be extremely difficult
to find closed form solutions for, but they must always be examined
for the acceptability of the unavoidable errors in the method used.


That's why there's all those "validation of modeling code X" papers out
there.



#10 - May 15th 10, 03:07 AM - posted to rec.radio.amateur.antenna

On 5/14/2010 6:19 AM, K1TTT wrote:
snip
There is no doubt that numerical methods have
allowed 'solutions' of many problems that would be extremely difficult
to find closed form solutions for, but they must always be examined
for the acceptability of the unavoidable errors in the method used.


I will assume that most here are familiar with Simpson's Rule
integration. It allows one to compute the "area under the curve" of a
function with a fairly simple algorithm; it can be as little as 7
statements in Fortran. And it is quite amazing how accurate the answer
can be with even just a few slices of the curve from start to finish,
if used properly.

Don't think that seemingly large chunks mean poor accuracy. When the
algorithm is good, and the program selects the chunk size well, the
results can be very close to the true answer.
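For anyone who wants to try it, a minimal composite Simpson's rule, in
Python here rather than Fortran but just as short:

    # Composite Simpson's rule: exact for cubics, and startlingly good on
    # smooth functions even with coarse slices.
    import math

    def simpson(f, a, b, n):
        if n % 2:
            raise ValueError("n must be even")
        h = (b - a) / n
        s = f(a) + f(b)
        s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
        s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
        return s * h / 3

    # Integral of sin(x) from 0 to pi is exactly 2.
    print(simpson(math.sin, 0.0, math.pi, 8))   # ~2.00027 with only 8 slices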

tom
K0TAR

