Computer model experiment
K1TTT wrote:
On May 13, 8:56 am, Roy Lewallen wrote:
K1TTT wrote:
On May 11, 8:30 pm, Art Unwin wrote:
When an array is
in equilibrium then Maxwell's equations are exact.
Maxwell's equations are ALWAYS exact; it is digital models that are
inexact and that have limitations due to the approximations made and
the numeric representations used.
Inexactness of the solution isn't because the method is digital. The
field equations solved by the digital methods simply can't be solved by
other methods, except for a relatively few very simple cases. Many
non-digital methods were developed over the years before high-speed
computers to arrive at various approximate solutions, but all have
shortcomings. For example, I have a thick file of papers devoted to the
apparently simple problem of finding the input impedance of a dipole of
arbitrary length and diameter. Even that can't be solved in closed form.
Solution by digital methods is vastly superior to solution by any other
known method, and is capable of giving much more accurate results.
Roy Lewallen, W7EL
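
To make that concrete: one of those pre-computer approximations, the
induced-EMF method with an assumed sinusoidal current, does give a
closed-form answer, but only for the radiation resistance of a thin
lossless dipole, not the full impedance for arbitrary length and
diameter. A minimal sketch, assuming the standard textbook formula
(e.g. Balanis eq. 4-70) and scipy:

# Hedged sketch: radiation resistance of a thin center-fed dipole from
# the induced-EMF method with an assumed sinusoidal current. Ignores
# wire diameter and loss, so it cannot answer the "arbitrary length and
# diameter" problem above.
import numpy as np
from scipy.special import sici  # sici(x) returns (Si(x), Ci(x))

EULER = 0.5772156649015329  # Euler-Mascheroni constant
ETA = 119.9169832 * np.pi   # free-space impedance, ~376.73 ohms

def radiation_resistance(length_wavelengths):
    """Radiation resistance (ohms, referred to the current maximum)."""
    kl = 2.0 * np.pi * length_wavelengths  # electrical length k*l
    si_kl, ci_kl = sici(kl)
    si_2kl, ci_2kl = sici(2.0 * kl)
    c = (EULER + np.log(kl) - ci_kl
         + 0.5 * np.sin(kl) * (si_2kl - 2.0 * si_kl)
         + 0.5 * np.cos(kl) * (EULER + np.log(kl / 2.0)
                               + ci_2kl - 2.0 * ci_kl))
    return ETA / (2.0 * np.pi) * c

print(radiation_resistance(0.5))  # half-wave dipole: ~73 ohms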
Quantization of every number in a numeric simulation is but one of the
contributions to inaccuracy. The limitations of the physical model are
another: every modeling program I know of breaks the physical thing
being modeled into small pieces, some with fixed sizes, some using
adaptive methods, but they all then calculate using those small pieces
as if each were a single homogeneous piece with step changes at the
edges...
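
A minimal sketch of that stair-step picture, assuming a sinusoidal
current as the "true" distribution being chopped into pieces:

# Hedged sketch: what "small homogeneous pieces with step changes at
# the edges" looks like. A smooth current distribution is replaced by
# N segments, each carrying a single constant value.
import numpy as np

def piecewise_constant(f, a, b, n_segments):
    """Sample f at each segment midpoint; the model treats that one
    value as holding over the whole segment, stepping at the edges."""
    edges = np.linspace(a, b, n_segments + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    return edges, f(mids)

# Current on a half-wave dipole of length L, assumed sinusoidal.
L = 0.5
current = lambda z: np.cos(np.pi * z / L)  # peak at center, zero at ends

for n in (4, 16, 64):
    edges, vals = piecewise_constant(current, -L / 2, L / 2, n)
    # The worst-case step at a segment edge shrinks as segments shrink.
    step = np.max(np.abs(np.diff(vals)))
    print(f"{n:3d} segments: largest step between segments = {step:.4f}")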
Not all modeling uses step changes. Some modeling approaches use a model
description that is continuous at element boundaries (at least through
some number of derivatives). For example, a cubic spline has a smoothly
varying value and smoothly varying first and second derivatives.
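
A minimal sketch of that continuity, using scipy's CubicSpline to show
the value and the first two derivatives matching across a knot:

# Hedged sketch: a cubic spline is continuous in value, first, and
# second derivative at every knot, so there are no step changes there.
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0.0, 1.0, 6)              # knot locations
spline = CubicSpline(x, np.sin(2 * np.pi * x))

knot = x[3]
eps = 1e-9
for order in (0, 1, 2):
    left = spline(knot - eps, nu=order)   # nu = derivative order
    right = spline(knot + eps, nu=order)
    jump = abs(float(left) - float(right))
    print(f"derivative {order}: jump across knot = {jump:.2e}")
# All three jumps are ~0; only the third derivative may step at a knot.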
The tradeoff in the code is whether you use fewer, better (higher-order)
chunks or more, simpler chunks. For instance, NEC uses a basis function
that represents the current in a segment (the chunk) as the combination
of a constant term and two sinusoidal terms. Other codes assume the
current is uniform over the segment; yet others assume a sinusoidal or
triangular distribution.
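
A minimal sketch of that NEC-style three-term expansion on one segment
(the coefficients here are made up; in a real solution the method of
moments determines them):

# Hedged sketch of NEC-2's current expansion on one segment:
# I(s) = A + B*sin(k*(s - s0)) + C*cos(k*(s - s0)), where s0 is the
# segment center and k is the free-space wavenumber. A, B, C are the
# unknowns the solver determines; the values below are illustrative.
import numpy as np

def segment_current(s, s0, k, A, B, C):
    """Evaluate the NEC-style basis on a segment centered at s0."""
    return A + B * np.sin(k * (s - s0)) + C * np.cos(k * (s - s0))

k = 2 * np.pi             # wavenumber for wavelength = 1
s0, half_len = 0.0, 0.05  # a segment 0.1 wavelengths long
s = np.linspace(s0 - half_len, s0 + half_len, 5)
print(segment_current(s, s0, k, A=0.2, B=0.1, C=0.7))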
This leads to a tradeoff in the computational resources required:
numerical precision, computational complexity, etc. (lots of simple
elements tend to require higher precision).
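
A minimal sketch of that effect, assuming naive accumulation of equal
per-element contributions:

# Hedged sketch of "lots of simple elements tend to require higher
# precision": accumulating many small per-element contributions in
# float32 loses digits that float64 keeps.
import numpy as np

for n in (10**3, 10**6):
    contributions = np.full(n, 1.0 / n)         # n equal contributions
    total32 = np.float32(0.0)
    for c in contributions.astype(np.float32):  # naive float32 running sum
        total32 += c
    total64 = np.sum(contributions)             # float64 reference
    print(f"n={n}: float32 error = {abs(float(total32) - 1.0):.2e}, "
          f"float64 error = {abs(total64 - 1.0):.2e}")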
I think that for codes hams are likely to encounter, these differences
are pretty subtle and largely irrelevant. A lot of the "computational
efficiency" issues are getting smaller, as cheap processor horsepower is
easy to come by.
That also adds to inaccuracies. The robustness of the algorithm and the
residual errors created are a big part of getting more accurate results.
There is no doubt that numerical methods have allowed 'solutions' of
many problems that would be extremely difficult to find closed-form
solutions for, but they must always be examined for the acceptability
of the unavoidable errors in the method used. That's why there are all
those "validation of modeling code X" papers out there.
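
A minimal sketch of the simplest such check, a convergence test; the
solve() below is a toy stand-in (midpoint integration of a known
function), not a real modeling code:

# Hedged sketch of the basic validation habit: rerun the model at finer
# discretizations and check that the answer settles.
import numpy as np

def solve(n_segments):
    """Toy 'model': integrate sin(pi*x) on [0,1] with n uniform pieces."""
    edges = np.linspace(0.0, 1.0, n_segments + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    return np.sum(np.sin(np.pi * mids)) * (1.0 / n_segments)

exact = 2.0 / np.pi
prev = None
for n in (8, 16, 32, 64):
    val = solve(n)
    change = abs(val - prev) / abs(val) if prev is not None else float("nan")
    print(f"n={n:3d}: value={val:.6f}, rel. change={change:.1e}, "
          f"true error={abs(val - exact):.1e}")
    prev = val
# When successive refinements barely move the answer, the discretization
# error is probably acceptable; the validation papers do this against
# measurements and known solutions rather than a known integral.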