Computer model experiment
May 15th 2010, 03:07 AM, posted to rec.radio.amateur.antenna

On 5/14/2010 6:19 AM, K1TTT wrote:

Quantization of every number in a numeric simulation is only one of the
contributions to inaccuracy. The limitations of the physical model are
another: every modeling program I know of breaks the physical thing
being modeled into small pieces, some with fixed sizes, some with
adaptive methods, but they all then calculate using those small pieces
as if each were a single homogeneous piece with step changes at the
edges, and that also adds to the inaccuracy. The robustness of the
algorithm and the residual errors it leaves behind are a big part of
getting more accurate results. There is no doubt that numerical methods
have allowed 'solutions' of many problems for which closed form
solutions would be extremely difficult to find, but they must always be
examined for the acceptability of the unavoidable errors in the method
used.


I will assume that most here are familiar with Simpson's Rule
integration. It lets you compute the "area under the curve" of a
function with a fairly simple algorithm, as few as seven statements of
Fortran. Used properly, it is quite amazing how accurate the answer can
be with even just a few slices of the curve from start to finish.
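
For anyone who wants to play with it, here is a minimal sketch of the
idea in free-form Fortran (not tom's seven-statement version; the
integrand sin(x) and the slice count of 10 are just choices for
illustration):

   program simpson_demo
     ! Composite Simpson's rule applied to sin(x) on [0, pi];
     ! the exact integral is 2.  The slice count n must be even.
     implicit none
     integer, parameter :: n = 10
     double precision :: a, b, h, s
     integer :: i
     a = 0.0d0
     b = 4.0d0 * atan(1.0d0)            ! pi
     h = (b - a) / n
     s = sin(a) + sin(b)                ! endpoints get weight 1
     do i = 1, n - 1
        if (mod(i, 2) == 1) then
           s = s + 4.0d0 * sin(a + i*h) ! odd interior points, weight 4
        else
           s = s + 2.0d0 * sin(a + i*h) ! even interior points, weight 2
        end if
     end do
     s = s * h / 3.0d0
     print *, 'Simpson estimate with', n, 'slices:', s
   end program simpson_demo

With only ten slices the estimate comes out around 2.000007, within a
few parts per million of the exact value.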

Don't think that seemingly large chunks mean poor accuracy. When the
algorithm is good and the program selects the chunk size well, the
results can be very close to the true answer.

tom
K0TAR