#1
When it comes to measurements, most amateurs and very many professionals suffer from delusions of accuracy.

Making measurements is an ART rather than an engineering discipline. ALL measurements are subject to error. Errors are distributed in magnitude between trivial and catastrophic. Much of the art lies in assessing the magnitude of the error, and that depends on the measurement-maker's judgement and experience. Indeed, honesty is a factor. Trying to support the validity of a measurement result by stating the manufacturer's name and the serial number of the instrument used doesn't carry much weight, since accuracy depends on the person who made the measurement and on many other equally important factors. People can't be avoided.

Something similar applies to numerical computer programs. The reliability of a computer program depends on the programmer's knowledge of the matter in hand and has nothing to do with the machine it is running on. Far too much faith is placed in computed results merely because they are computed. Very little extra knowledge is gained by comparing a pair of computed and measured results, because there is no means of knowing how and from where the inevitable difference arises.

Reliability and confidence, for both programs and measurements, require time in which to accumulate. Mean time between estimated errors?

----
Reg, G4FGQ
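Reg's point about assessing the magnitude of error can be made concrete with a simple uncertainty budget. The sketch below is purely illustrative; the instrument figures and the breakdown of error sources are assumptions for the example, not taken from any real meter or from the post.

```python
# Illustrative uncertainty budget for a single voltage reading.
# Every figure below is assumed for the example, not taken from a real meter.
import math

reading = 12.00  # volts, the displayed value

# Independent error contributions, each expressed in volts (assumed values)
gain_error     = reading * 0.005   # 0.5 % of reading, a typical spec-sheet style figure
offset_error   = 0.02              # fixed offset / last-digit resolution
lead_thermal   = 0.005             # thermal EMF in the test leads, estimated
operator_error = 0.01              # connection and read-off judgement, estimated

errors = (gain_error, offset_error, lead_thermal, operator_error)

# Independent contributions combine root-sum-square; the worst case is their sum.
rss = math.sqrt(sum(e ** 2 for e in errors))
worst = sum(errors)

print(f"{reading:.2f} V  +/- {rss:.3f} V (RSS)  or  +/- {worst:.3f} V (worst case)")
```

The root-sum-square and the worst-case sum bracket the plausible error; deciding which to quote, and whether the individual estimates are honest, is exactly the kind of judgement Reg is talking about.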
#2
As we are fond of saying, "measure with a micrometer, mark with a crayola, and cut with an axe."

"Reg Edwards" wrote in message ...
When it comes to measurements, most amateurs and very many professionals suffer from delusions of accuracy.
#3
Reg Edwards wrote:

-snip-
To gain support for the validity of a measurement result by stating the manufacturer's name and serial number of the instrument used doesn't carry much weight since accuracy depends on the person who made the measurement and many other just as important factors. People can't be avoided.
-snip-

I was taught to record the ID information of all the tools and instruments used, but I always took that to mean keeping track of which things to check if anomalies cropped up.

Having said that, if someone gives me a 5-digit voltage reading taken by a 3-1/2 digit Rat Shack meter, I'll believe it much less than if the reading were taken by a recently calibrated name-brand 5-digit meter (and I won't believe it much in any case -- 5-digit voltage measurements have _lots_ of embedded voodoo).

--
-------------------------------------------
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
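Tim's digit comparison is easy to put in numbers. The sketch below is only illustrative: the counts-per-range figures (a 3-1/2 digit display taken as 2000 counts, a 5-digit display as 100000 counts), the 20 V range, and the claimed reading are all assumptions.

```python
# How much a display-digit rating actually resolves.
# Counts and range are assumptions for illustration; the claimed reading is made up.

def display_step(full_scale_volts, counts):
    """Smallest change the display can show on a given range."""
    return full_scale_volts / counts

step_3p5_digit = display_step(20.0, 2000)     # 3-1/2 digits ~ 2000 counts -> 10 mV
step_5_digit   = display_step(20.0, 100000)   # 5 digits ~ 100000 counts -> 0.2 mV

claimed = 12.345  # volts; five significant digits imply meaning at the ~1 mV level

print(f"3-1/2 digit step on a 20 V range: {step_3p5_digit * 1000:.1f} mV")
print(f"5 digit step on a 20 V range:     {step_5_digit * 1000:.2f} mV")
print(f"{claimed} V claims finer detail than the 3-1/2 digit display can even show.")
```

Reporting 12.345 V implies meaning down to about a millivolt, an order of magnitude finer than the 3-1/2 digit display step, before accuracy, noise, and calibration even enter the picture.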
#4
Reg:

Whenever you see charts being dragged out and used as the basis for what one is doing, you know you are in trouble. Either the underlying laws and mathematical functions of the matter in question are severely lacking, or the people you are dealing with do not have the necessary skill -- usually the former... here in radio you find a lot of the latter... From the abundance of charts lying around, I'd say there is still about half not known!

Warmest regards,
John

"Reg Edwards" wrote in message ...
When it comes to measurements, most amateurs and very many professionals suffer from delusions of accuracy.
-snip-
#5
Reg Edwards wrote:

. . . Making measurements is an ART rather than an engineering discipline. . . .

I maintain that it's both an art AND an engineering discipline.

. . . Something similar applies to numerical computer programs. The reliability of a computer program depends on the programmer's knowledge of the matter in hand and has nothing to do with the machine it is running on. Far too much faith is placed on computed results merely because they are computed. . . .

Agreed. This makes it essential that computed results be compared with measured results, or with results calculated by other established methods, to gain confidence that the computer calculations are valid.

. . . Very little extra knowledge is gained by comparing a pair of computed and measured results because there is no means of knowing how and from where the inevitable difference arises. . . .

I've found that to be completely untrue. I've done a good deal of design work in the areas of very high speed time-domain circuitry and microwave RF circuitry, where direct measurement of internal functioning is impossible. My colleagues and I used modeling extensively in the course of our design work. An essential part of the process is to periodically look at the overall performance of a real circuit and compare it to the model. The model was adjusted (by varying unknown values, by adding components previously thought insignificant, and so forth) until it adequately matched the measured results.

This does, of course, require confidence in the measured results, which is subject to a host of potential problems. But the important thing is that agreement between the model and the measurement must be reached in order to have any confidence in the usefulness of the model. It's not only possible but critical to know how and from where the differences arise.

It sounds like you're saying that when presented with computed results we have no recourse but to put faith in the programmer. That's not at all so. We have to test the results against measurements or other calculations and resolve the differences before we can put much faith in any model or program.

With the rest, I agree.

Roy Lewallen, W7EL
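The adjust-until-it-matches loop Roy describes can be sketched in a few lines. Everything in the example below is invented for illustration: the "measured" points, the first-order RC low-pass model, and the unknown capacitance stand in for whatever the real circuit and its unknowns would be.

```python
# A bare-bones version of "adjust the model until it matches the measurement".
# The measured points, the RC low-pass model, and the unknown capacitance are
# all invented for illustration; a real job uses the real circuit and real data.
import math

# Hypothetical measured gain magnitude at a few spot frequencies (Hz -> |gain|)
measured = {1e3: 0.995, 1e4: 0.85, 1e5: 0.16, 1e6: 0.016}

def model_gain(freq_hz, c_farads, r_ohms=1e3):
    """First-order RC low-pass magnitude response."""
    return 1.0 / math.sqrt(1.0 + (2 * math.pi * freq_hz * r_ohms * c_farads) ** 2)

def misfit(c_farads):
    """Sum of squared differences between model and measurement."""
    return sum((model_gain(f, c_farads) - g) ** 2 for f, g in measured.items())

# Sweep the unknown component value and keep whatever agrees best with measurement.
candidates = [n * 1e-12 for n in range(100, 20000, 10)]   # 100 pF .. 20 nF
best_c = min(candidates, key=misfit)

print(f"best-fit C = {best_c * 1e9:.2f} nF, residual = {misfit(best_c):.5f}")
```

A real fit would use the actual circuit model, more data, and a proper optimizer, but the loop is the same: vary the unknowns, compare against measurement, and keep what agrees.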
#6
Roy,

It sounds like you're saying that when presented with computed results we have no recourse but to put faith in the programmer. That's not at all so. We have to test the results against measurements or other calculations and resolve the differences before we can put much faith in any model or program. With the rest, I agree.

==================================

Roy, put a new battery in your hearing aid. ;o)

The danger of putting much faith in the programmer is that he may belong to the same set of old wives as the measurer and therefore make the same mistakes. Misconceptions are known to be popular. The two sets of incorrect results, measured and computed, then fatally agree with each other because of the correlation. It's impossible for a program to be more reliable than the programmer. It can only be worse.

Your use of the word "before" implies a time span. The reliability of anything accumulates with experience, use and TIME. (Don't anybody mention bath tubs.) Reliability is quality versus time. And quality, in engineering terms, is the degree of conformance to specified requirements -- or, in more worldly terms as may be applied to computer programs, fitness for the intended purpose. Thus we can say that the accuracy of EZNEC, etc., as determined over a number of years, is highly fit for its intended purposes.

----
Reg, G4FGQ