What's the worst resolution I can reasonably expect from System.nanoTime?

Michael:

I'm writing software that requires timestamps in microsecond resolution or better.

I'm planning on using System.currentTimeMillis in combination with System.nanoTime sort of like this, though it's just a rough code sketch:

// Wall-clock time at class load, converted from milliseconds to nanoseconds
private static final long absoluteTime = System.currentTimeMillis() * 1000 * 1000;
// Monotonic reference point taken at (roughly) the same moment
private static final long relativeTime = System.nanoTime();

public long getTime()
{
    // Elapsed nanoseconds since the reference point
    final long delta = System.nanoTime() - relativeTime;
    if (delta < 0) throw new IllegalStateException("time delta is negative");
    return absoluteTime + delta;
}

The documentation for nanoTime says:

This method provides nanosecond precision, but not necessarily nanosecond resolution (that is, how frequently the value changes) - no guarantees are made except that the resolution is at least as good as that of currentTimeMillis().

so the documentation gives no guarantee of a resolution any better than milliseconds.
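To see what a specific platform actually delivers, one option (just a rough probe I sketched for illustration, not anything authoritative) is to spin until the reported value changes and record the smallest step observed:

// Rough probe: measure the smallest step System.nanoTime() actually reports
// on this platform. The result also includes the latency of the call itself,
// so it is only an upper bound on the true resolution.
public final class NanoTimeGranularityProbe
{
    public static long smallestObservedStep(final int samples)
    {
        long smallest = Long.MAX_VALUE;
        for (int i = 0; i < samples; i++)
        {
            final long start = System.nanoTime();
            long next;
            do
            {
                next = System.nanoTime();
            }
            while (next == start); // spin until the reported value changes
            smallest = Math.min(smallest, next - start);
        }
        return smallest;
    }

    public static void main(final String[] args)
    {
        System.out.println("smallest observed step: "
                + smallestObservedStep(1000) + " ns");
    }
}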

Going a little deeper, under the hood of nanoTime (which is predictably a native method):

  • Windows uses the QueryPerformanceCounter API, which promises a resolution of less than one microsecond, which is great.

  • Linux uses clock_gettime with a flag to ensure the value is monotonic, but makes no promises about resolution.

  • Solaris is similar to Linux.

  • The source doesn't mention how OS X or other Unix-based OSes deal with this.

(source)

I've seen a couple of vague allusions to the fact it will "usually" have microsecond resolution, such as this answer on another question:

On most systems the three least-significant digits will always be zero. This in effect gives microsecond accuracy, but reports it at the fixed precision level of a nanosecond.

but there's no source and the word "usually" is very subjective.
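That claim is at least easy to test on a given machine. Here's a throwaway sketch (mine, for illustration only) that samples nanoTime and counts how many values are not multiples of 1,000 ns:

// Throwaway check: are nanoTime values on this platform always multiples
// of 1,000 ns (i.e. effectively microsecond resolution)?
public final class NanoTimeTrailingZeroCheck
{
    public static void main(final String[] args)
    {
        final int samples = 100000;
        int notMicrosecondAligned = 0;
        for (int i = 0; i < samples; i++)
        {
            if (System.nanoTime() % 1000 != 0)
            {
                notMicrosecondAligned++;
            }
        }
        // 0 is consistent with the "three least-significant digits are
        // always zero" claim; any nonzero count disproves it locally.
        System.out.println(notMicrosecondAligned + " of " + samples
                + " samples were not multiples of 1,000 ns");
    }
}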

Question: Under what circumstances might nanoTime return a value whose resolution is worse than microseconds? For example, perhaps a major OS release doesn't support it, or a particular hardware feature is required which may be absent. Please try to provide sources if you can.


I'm using Java 1.6, but there's a small chance I could upgrade if there were substantial benefits with regard to this problem.

the8472:

Question: Under what circumstances might nanoTime return a value whose resolution is worse than microseconds? What operating systems, hardware, JVMs etc. that are somewhat commonly used might this affect? Please try to provide sources if you can.

Asking for an exhaustive list of all possible circumstances under which that constraint will be violated is a bit much; nobody knows in which environments your software will run. But to prove that it can happen, see this blog post by Aleksey Shipilev, where he describes a case where nanoTime becomes less accurate (in terms of its own latency) than a microsecond on Windows machines, due to contention.
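If you want a ballpark feel for that latency effect on your own hardware, something like the sketch below (my own illustration, not Shipilev's harness; for serious numbers use a proper benchmark tool such as JMH) times back-to-back nanoTime calls from several threads at once:

import java.util.concurrent.CountDownLatch;

// Rough sketch: estimate the average cost of a System.nanoTime() call when
// several threads hammer it concurrently. This only gives a ballpark figure.
public final class NanoTimeContentionSketch
{
    private static final int THREADS = 4;       // arbitrary choice
    private static final int CALLS = 10000000;  // calls per thread

    public static void main(final String[] args) throws InterruptedException
    {
        final CountDownLatch start = new CountDownLatch(1);
        final Thread[] workers = new Thread[THREADS];
        for (int t = 0; t < THREADS; t++)
        {
            workers[t] = new Thread(new Runnable()
            {
                public void run()
                {
                    try
                    {
                        start.await(); // release all threads together
                    }
                    catch (InterruptedException e)
                    {
                        Thread.currentThread().interrupt();
                        return;
                    }
                    final long begin = System.nanoTime();
                    long sink = 0;
                    for (int i = 0; i < CALLS; i++)
                    {
                        sink ^= System.nanoTime(); // the call under test
                    }
                    final long elapsed = System.nanoTime() - begin;
                    // sink is printed only so the JIT cannot eliminate the loop
                    System.out.println(Thread.currentThread().getName()
                            + ": ~" + (elapsed / CALLS) + " ns per call (ignore: " + sink + ")");
                }
            });
            workers[t].start();
        }
        start.countDown();
        for (Thread worker : workers)
        {
            worker.join();
        }
    }
}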

Another case would be the software running under a VM that emulates hardware clocks in a very coarse manner.

The specification has been left intentionally vague exactly due to platform and hardware-specific behaviors.

You can "reasonably expect" microsecond precision once you have verified that the hardware and operating system you're using do provide what you need and that VMs pass through the necessary features.
