[e2e] patents on routing algorithms
David P. Reed
dpreed at reed.com
Fri Jan 4 05:42:39 PST 2008
Jon Crowcroft wrote:
> it is a goal of much recent work (see Sewell et al in sigcomm 05
> "Rigorous specification and conformance testing techniques for network protocols,
> as applied to TCP, UDP, and sockets"
> and various papers
> by Griffin and Sobrinho on Metarouting)
> to render protocols merely
> algorithmic specifications that are fed into engines that run them
>
> shame on us as computer scientists that
> we don't use such techniques on a daily basis for
> well-founded engineering instead of the handwaving that passes
> for communications work still in the 21st century
>
> it is a technical AND ethical goal to make it so
> and should be a duty on all of us to get the law to recognize it
>
>
That's a plausible point of view. I heartily disagree, however. In
1974 or so, our research group (Saltzer, Clark, Reed, Liskov, Svobodova,
as I recall) decided that a *crucial* aspect of distributed systems was
that they exhibited "autonomy", which implies a serious notion of loose
coupling, flexibility, revisability, etc. That set of attributes is
crucial; leaving them out for the sake of formal methods is just another
Procrustean bed, where they are the Feet.
*Protocols* are techniques for achieving communication in the face of
uncertainty about who is on the other side of the network. The
uncertainty is not just an unreliable network in the middle, but
uncertainty, in a very fundamental sense, about what is on the other side.
In "distributed systems" that must function in the real world, a core
and *essential* concept is that one must specify parts of the system to
work "right" EVEN IF THE DEFINITION OF RIGHT CANNOT BE WELL-DEFINED
MATHEMATICALLY.
To someone who speaks English as a protocol, this is obvious. I can
try to convince you, for example, by the words above, that I am right.
I am using English correctly, and this can be verified. But that has
nothing whatsoever to do with being able to prove that you *will* agree
with me at the end of the conversation. Maybe it will take more
conversations, maybe not.
But a protocol is not an algorithm executed by a complete set of formal
machines, though some protocols (a small subset) might be in that
category. That is a sad, boring, and ultimately trivial subset of the
"protocols" of the world. Maybe it makes small-minded mathematicians
happy because they can close off a "formal system" and prove theorems,
as if proving theorems were the desired endpoint of system design. But
the ability to prove theorems is not the test of a *useful* protocol
set - neither of engineering value nor of human value. The ability to
communicate (which cannot be formalized in any way I know) is the
correct test. The Internet is one example of a system that succeeds in
communicating, and there really was NOT a need to define a formal
specification of a collection of machines to achieve that result.