[e2e] was double blind, now reproducible results
David P. Reed
dpreed at reed.com
Wed May 19 16:41:20 PDT 2004
>David,
>
>I believe your argument is just plain wrong.
I hope I just expressed it wrong. Since you clearly didn't get what I was
trying to say from what I wrote, I'll try again below.
>It is true (but not relevant) that humans have a non-determinism that
I said nothing about non-determinism. I was referring to forcing science
into a situation where the only things you can study are those that are
perfectly reproducible (i.e. abstracted completely from their
environment). I was referring to phenomena that depend on large-scale
environmental characteristics.
>makes good science in psychology hard. Nevertheless, I was married to a
>social psychologist who got her degree under Festinger at Stanford; she
>learned how to do solid experimental, reproducible, science even with
>human beings.
Here I worry that you laud results merely because they are
reproducible. I have no objection to reproducibility when it is possible
to achieve. Jon's point, like Mark Weiser's many years ago, was to
exclude (with prejudice) reports of experiments whose exact reproduction
might be difficult. (Read carefully, that rule would mean that
measurements of the "real" Internet could never be valid science, because
they are inherently not reproducible.) My point was essentially that
Behaviorism was largely the result of the drunk looking under the
lamppost: laboratory procedures that worshipped reproducibility rather
than searching for effective means to test theories.
>Your argument would suggest that because of quantum theory and in
>particular the Heisenberg Principle, physicists cannot do reproducible
>research. Nonsense.
I never said that. My point was that there is a fantasy that certain
sciences do "reproducible" experiments. It's worth pointing out that the
first experiment confirming General Relativity was a measurement during
an eclipse. That experiment could not be reproduced, nor were its data
beyond challenge.
>I agree with Jon. When our published results are not reproducible, we
>are not doing good computer science.
The results may not be reproducible, but one can still do good science if
the hypotheses can be tested in many ways, none of which duplicates the
prior experiment.
My point was that making rules about the FORM of the papers that are
written, or the class of experiments that are allowed to be viewed as
scientific, is far too limiting.
In addition, though I did not say it, mere slavish reproduction of a
particular experiment may well be bad science, or at least
misleading. If a scientific theory is USEFUL, it is best to test it with
many DIFFERENT experiments.
The model of planetary motions based on epicycles was indeed more accurate
than one based on Kepler's laws. The single experiment of comparing
epicyclic predictions to the world would never have falsified that model,
no matter how many times it was carried out. The same is true of
demonstrations of rat maze learning. Good science is the art of figuring
out how to form hypotheses and test them, not following a cookbook full of
rules like "only reproducible experiments count".
More information about the end2end-interest mailing list