SQL Server GETUTCDATE - Why is GETUTCDATE earlier than SYSDATETIMEOFFSET?

Asked by AndreaBailey in SQL Server on Jul 19, 2021

Let’s consider this code:
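(The snippet itself is not reproduced on this page. A minimal sketch consistent with the variables discussed below — @Offset, @UTC, and a "UTC From Offset" value — would look something like this; it is a reconstruction, not the poster's exact code.)

    -- Sketch only: reconstructed from the variable names described below.
    DECLARE @Offset datetimeoffset = SYSDATETIMEOFFSET();  -- captured first
    DECLARE @UTC    datetime       = GETUTCDATE();         -- captured second

    SELECT @Offset                                     AS [Offset],
           @UTC                                        AS [UTC],
           CONVERT(datetime, SWITCHOFFSET(@Offset, 0)) AS [UTC From Offset];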

‘@Offset’ is set before ‘@UTC’, yet it sometimes has a later value. (I've tried this on SQL Server 2008 R2 and SQL Server 2016. One has to run the code a few times to catch the suspect occurrences.)

This does not appear to be simply a matter of rounding or lack of precision. (In fact, I think the rounding is what "fixes" the issue sometimes.) The values for an example run are as follows:

  Offset:          2017-06-07 12:01:58.8801139 -05:00
  UTC:             2017-06-07 17:01:58.877
  UTC From Offset: 2017-06-07 17:01:58.880

So DATETIME precision allows .880 as a valid value.

Even Microsoft's GETUTCDATE examples show the SYS* values being later than the older functions, despite being SELECTed earlier.

I presume this is because they come from separate underlying system information. Can anyone confirm and provide details?

Microsoft's SYSDATETIMEOFFSET documentation states "SQL Server obtains the date and time values by using the GetSystemTimeAsFileTime() Windows API" (thanks srutzky), but their GETUTCDATE documentation is much less specific, saying only that the "value is derived from the operating system of the computer on which the instance of SQL Server is running".

 

Answered by Ankur vaish

This issue seems to be a mix of datatype granularity/accuracy and the source of the values.

First of all, DATETIME is only accurate/granular to roughly 3 milliseconds: values are stored in 1/300-second ticks, so the milliseconds always land on increments of .000, .003, or .007. Hence, converting from a more precise datatype such as DATETIMEOFFSET or DATETIME2 doesn't simply round to the nearest millisecond; the result can differ from the original by up to 2 milliseconds.
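A quick way to see that rounding, as a sketch (using the precise value from the question above):

    -- DATETIME stores time in 1/300-second ticks, which display as millisecond
    -- values ending in 0, 3, or 7, so a more precise value shifts when converted.
    DECLARE @Precise datetime2(7) = '2017-06-07 12:01:58.8801139';
    DECLARE @Coarse  datetime     = CONVERT(datetime, @Precise);

    SELECT @Precise AS [datetime2(7) value],  -- 2017-06-07 12:01:58.8801139
           @Coarse  AS [datetime value];      -- 2017-06-07 12:01:58.880
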
Secondly, the documentation seems to imply a difference in where the values come from. The SYS* functions use the high-precision FileTime functions.
SYSDATETIMEOFFSET documentation states:
SQL Server obtains the date and time values by using the GetSystemTimeAsFileTime() Windows API.
while the GETUTCDATE documentation states:
This value is derived from the operating system of the computer on which the instance of SQL Server is running.
Then, in the About Time documentation, a chart shows the following two (of several) types:

  • System Time = "Year, month, day, hour, second, and millisecond, taken from the internal hardware clock."
  • File Time = "The number of 100-nanosecond intervals since January 1, 1601."

Additional clues are in the .NET documentation for the Stopwatch class:

  • Stopwatch Class: "The Stopwatch measures elapsed time by counting timer ticks in the underlying timer mechanism. If the installed hardware and operating system support a high-resolution performance counter, then the Stopwatch class uses that counter to measure elapsed time. Otherwise, the Stopwatch class uses the system timer to measure elapsed time."
  • Stopwatch.IsHighResolution Field: "The timer used by the Stopwatch class depends on the system hardware and operating system. IsHighResolution is true if the Stopwatch timer is based on a high-resolution performance counter. Otherwise, IsHighResolution is false, which indicates that the Stopwatch timer is based on the system timer."

Hence, there are different "types" of times that have both different precisions and different sources. But even if that is fairly loose reasoning, testing both types of functions as sources for a DATETIME value proves it. The following adaptation of the query from the question shows this behavior:
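(The adapted query is not reproduced on this page; the sketch below is reconstructed from the description that follows, using the variable names @Offset, @UTC2, and @UTC mentioned there.)

    -- Sketch: both @UTC2 and @UTC are plain DATETIME, so any difference between
    -- them comes from the source function, not from datatype precision.
    DECLARE @Offset datetimeoffset = SYSDATETIMEOFFSET();  -- SYS* source, captured first
    DECLARE @UTC2   datetime       = SYSUTCDATETIME();     -- SYS* source, captured second
    DECLARE @UTC    datetime       = GETUTCDATE();         -- older source, captured last

    SELECT @Offset AS [Offset],
           @UTC2   AS [UTC2],
           @UTC    AS [UTC];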

In the results of that query, UTC and UTC2 are both DATETIME datatypes. @UTC2 is set via SYSUTCDATETIME() and is set after @Offset (also taken from a SYS* function) but prior to @UTC, which is set via GETUTCDATE(). Yet @UTC2 sometimes shows a later value than @UTC, even though it was captured first. The OFFSET part of this is completely unrelated to the behavior.

HOWEVER, to be fair, this still isn't proof in a strict sense. @MartinSmith traced the GETUTCDATE() call.

Three interesting things show up in that call stack:

  • It starts with a call to GetSystemTime(), which returns a value that is only precise down to the millisecond.
  • It uses DateFromParts (i.e. no conversion formula, just the individual pieces).
  • There is a call to QueryPerformanceCounter towards the bottom. This is a high-resolution difference counter that could be being used as a means of "correction"; I have seen suggestions of using one timer to adjust the other over time as long intervals slowly drift out of sync (see "How to get timestamp of tick precision in .NET / C#?" on Stack Overflow). This call does not show up when tracing SYSDATETIMEOFFSET.

There is much good info in "Acquiring high-resolution time stamps" regarding the different types of time, their different sources, drift, and so on.

Really, it is not appropriate to compare values of different precisions. For example, if you have 2017-06-07 12:01:58.8770011 and 2017-06-07 12:01:58.877, how do you know whether the less precise one is greater than, less than, or equal to the more precise value? Comparing them assumes that the less precise one is actually 2017-06-07 12:01:58.8770000, but who knows whether that is true? The real time could have been 2017-06-07 12:01:58.8770005 or 2017-06-07 12:01:58.8770111.
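To make that comparison problem concrete, here is a small sketch (with the literals from the example above) showing what actually happens when a DATETIME is compared against a more precise DATETIME2(7):

    -- When the two are compared, the DATETIME is implicitly widened to
    -- DATETIME2(7); the digits it never stored have to be filled in (exactly how
    -- depends on the database compatibility level), so the comparison reflects
    -- that assumption rather than the real instants that were measured.
    DECLARE @Precise datetime2(7) = '2017-06-07 12:01:58.8770011';
    DECLARE @Coarse  datetime     = '2017-06-07 12:01:58.877';

    SELECT CAST(@Coarse AS datetime2(7)) AS [Coarse widened to datetime2(7)],
           CASE WHEN @Precise > @Coarse THEN '@Precise compares as later'
                ELSE '@Precise does not compare as later'
           END AS [Comparison];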

However, if you have DATETIME datatypes, then you should still use the SYS* functions as the source because they are more accurate, even if you lose some of that precision when the value is forced into the less precise type. And along those lines, it makes more sense to use SYSUTCDATETIME() than to call SYSDATETIMEOFFSET() only to adjust it via SWITCHOFFSET(@Offset, 0).
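A sketch of that recommendation (the variable names here are mine):

    -- Prefer sourcing a DATETIME directly from SYSUTCDATETIME(); the extra
    -- SYSDATETIMEOFFSET() + SWITCHOFFSET() round trip ends up in the same place.
    DECLARE @UtcNow       datetime = SYSUTCDATETIME();
    DECLARE @UtcViaOffset datetime = CONVERT(datetime,
                                             SWITCHOFFSET(SYSDATETIMEOFFSET(), 0));

    SELECT @UtcNow       AS [UTC via SYSUTCDATETIME],
           @UtcViaOffset AS [UTC via SWITCHOFFSET];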

What is SQL Server GETUTCDATE? This function returns the current UTC date and time of the database system in the ‘YYYY-MM-DD hh:mm:ss.mmm’ pattern (a basic usage example follows the list below). Features:

  • used to find the UTC date and time of the current database system.
  • comes under Date Functions.
  • doesn’t accept any parameter.
  • returns the output in ‘YYYY-MM-DD hh:mm:ss.mmm’ format.
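
A minimal usage example (the column alias is just for illustration):

    -- GETUTCDATE() takes no arguments and returns the current UTC time as DATETIME.
    SELECT GETUTCDATE() AS [CurrentUtcTime];
    -- e.g. 2021-07-19 10:15:32.123  (YYYY-MM-DD hh:mm:ss.mmm)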








