[e2e] Achieving Scalability in Digital Preservation (yes, this is an e2e topic)
l.wood@surrey.ac.uk
Tue Jul 17 01:21:20 PDT 2012
> One interpretation of end-to-end tells us that in order to improve
> the scalability of our solution, we should do less in the channel,
> let corruption go uncorrected, and move the work of overcoming faults
> closer to the endpoint.
No, the end-to-end argument says that the work of *rejecting* faults must take place at the endpoint. (The endpoint may not be the only place where errors are detected and rejected - intermediate checks can be added for performance - but it is the place of last resort, catching the errors that those intermediate checks cannot.)
The end-to-end argument is basically an argument about where and how best to implement ARQ (automatic repeat request). In a tight control loop, intermediate checks along the path do not increase performance and cannot guarantee correctness - the check at the end is always needed - so they are redundant. In a longer control loop, intermediate checks can boost performance by shortening local resend times, reducing overall delay.
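The closed-loop case can be sketched in a few lines: the endpoint verifies a digest, and a failed check drives a retransmission. Everything here (function names, the simulated channel, the corruption probability) is illustrative, not from the thread, and the digest is assumed to arrive intact out of band.

```python
import hashlib
import random

random.seed(2)

def noisy_channel(data: bytes, p_corrupt: float = 0.3) -> bytes:
    """Flip one byte with probability p_corrupt (a simulated link fault)."""
    if data and random.random() < p_corrupt:
        i = random.randrange(len(data))
        data = data[:i] + bytes([data[i] ^ 0xFF]) + data[i + 1:]
    return data

def send_with_arq(payload: bytes, max_tries: int = 50) -> bytes:
    """End-to-end ARQ: the receiver checks a digest computed by the
    sender and asks for a resend until the check passes."""
    digest = hashlib.sha256(payload).digest()  # end-to-end check value
    for _ in range(max_tries):
        received = noisy_channel(payload)
        if hashlib.sha256(received).digest() == digest:
            return received  # accepted at the endpoint
        # corrupted: reject, and signal a retransmission (the control loop)
    raise RuntimeError("gave up after max_tries")

print(send_with_arq(b"archive block") == b"archive block")
```

Intermediate checks could be dropped into `noisy_channel` without changing correctness; only the final digest comparison at the receiver decides acceptance, which is the point of the argument.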
But in open-loop digital preservation using FEC (forward error correction), you can't use ARQ to request a retransmission from a sender centuries in the past. If you reject your data as corrupted, what then? You have no control loop, you have no recourse.
You need a control loop, but you can't introduce one. The end-to-end argument isn't applicable to your scenario.
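The open-loop case must correct rather than reject, since no resend is possible. A toy illustration using a repetition code with per-byte majority vote (all names and parameters are hypothetical, and real preservation systems would use far stronger FEC than simple replication):

```python
import random
from collections import Counter

random.seed(7)

def encode_fec(payload: bytes, copies: int = 5) -> list[bytes]:
    """Open-loop FEC via a trivial repetition code: store several
    replicas up front; no feedback channel will ever exist."""
    return [payload for _ in range(copies)]

def corrupt(replica: bytes, p: float = 0.1) -> bytes:
    """Independent byte corruption of each stored replica over time."""
    out = bytearray(replica)
    for i in range(len(out)):
        if random.random() < p:
            out[i] = random.randrange(256)
    return bytes(out)

def decode_fec(replicas: list[bytes]) -> bytes:
    """Per-byte majority vote: correction, not rejection - there is
    no sender left to ask for a resend."""
    out = bytearray(len(replicas[0]))
    for i in range(len(out)):
        votes = Counter(r[i] for r in replicas)
        out[i] = votes.most_common(1)[0][0]
    return bytes(out)

stored = [corrupt(r) for r in encode_fec(b"preserve me")]
print(decode_fec(stored) == b"preserve me")
```

The decoder never rejects: it always emits its best estimate, because in this scenario a rejection would leave the data unrecoverable with no recourse.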
Lloyd Wood
http://sat-net.com/L.Wood/dtn/
More information about the end2end-interest mailing list