From: Hans Aberg
Subject: Re: timestamp resolution
Date: Thu, 10 May 2007 15:09:59 +0200
On 9 May 2007, at 16:30, Joseph M Gwinn wrote:
In fact, POSIX "Seconds Since the Epoch" is effectively TAI minus an unspecified offset because POSIX counts ~SI seconds regardless of astronomy and thus leap anything.
I think the specs ignore the issue, so it is only accurate to within a couple of tens of seconds. I figure typical systems just ignore the leap seconds from the epoch, and adjust the internal clock on the first lookup after the time server has changed. It is these jumps in the internal clock that may pose a problem: it is hard to tell which computers have adjusted and which have not.
If one followed the suggestion of using a TAI-JD the way I did, one would end up with a system that ignores leap seconds in the internal second count from the epoch. It means that one must have time servers that do not introduce a jump when a leap second occurs. Instead, when a human-readable time stamp is needed, one makes a lookup in a file that adjusts the time accordingly.
(The fact that ordinary computer clock hardware isn't nearly as accurate as that collection of caesium beam clocks is neither here nor there - it's the semantics of the timescale, not its accuracy, that counts here.)
Right. The idea is not to impose a system on the current hardware, but to admit future hardware with more accurate clocks to adjust as needed. This applies not only to file time stamps, but also to, say, distributed routing, or something similar.
POSIX time cannot actually be TAI because not all POSIX systems have access to (or need for) time that's accurate with respect to any external timescale. Think isolated networks with no access to the sky.
A completely isolated system only needs adjustment of its one and only clock. But systems in a distributed setting, be it over a network or by radio broadcasts, need access to time servers which do not introduce a leap second in the count.
As for the choice of the one true clock, the original and still a core reason for POSIX to care about time is to support causal ordering of file updates by comparison of timestamps. The granularity issue has always been with us. While it is known that no finite-resolution timestamp scheme can ensure causal order, the alternative (a central guaranteed-sequence hardware utility) is usually impractical, so people have always used timestamps. (IBM sells such a utility box for use in their transaction systems.) What one can most easily do is to require much better timestamp resolution as technology progresses, thus reducing the window of non-causality in such things as make.
Different systems will require different granularity. The interesting thing is that a high-performance system distributed around planet Earth might in principle have an accuracy of 10^-7 seconds.
Hans Aberg