  • GetSystemTimeAdjustment

    I wonder if some of you would be kind enough to run the following and let me know what you get.

    Minimum supported client: Windows 2000 Professional
    Minimum supported server: Windows 2000 Server

    #Compile  Exe
    #Dim      All
    #Include "WIN32API.INC"
    Function PBMain( ) As Long
      Local lpTimeAdjustment, lpTimeIncrement, lpTimeAdjustmentDisabled, result As Long
      ' Ask Windows for the current clock adjustment settings
      result = GetSystemTimeAdjustment( lpTimeAdjustment, lpTimeIncrement, lpTimeAdjustmentDisabled )
      If IsFalse(result) Then
        MsgBox "Function failed"
      Else
        MsgBox Str$(lpTimeAdjustment) + Str$(lpTimeIncrement) + Str$(lpTimeAdjustmentDisabled)
      End If
    End Function
    I have XP Pro SP3 and get

    156250 156250 1


  • #2
    XP Home SP2, same result as you.



    • #3
      Thanks Paul.

      My reading so far has given me 18.2 interrupts per second. With 156250 x 100 nanoseconds we get 15.625ms, that is 64 interrupts per second - a figure I have yet to see mentioned. If memory serves, your knowledge in this area far exceeds mine.
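      The arithmetic is easy to check. A quick sketch, in Python since it is plain arithmetic, using the 156250 figure reported above:

```python
# Each unit reported by GetSystemTimeAdjustment is 100 nanoseconds.
units = 156250                   # the value reported on XP above
interval_ms = units * 100 / 1e6  # convert 100 ns units to milliseconds
print(interval_ms)               # -> 15.625 (ms between clock interrupts)
print(1000 / interval_ms)        # -> 64.0 (interrupts per second)
```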


      • #4
        After Windows 98 the timer interrupts changed; they aren't 18.2Hz anymore.
        The hardware on WinXP interrupts at 1ms intervals, which gives Windows its minimum timeslice resolution of 1ms.

        For the TIMER the OS counts those milliseconds and averages them out to give a long-term average of 64Hz.
        64Hz corresponds to a period of 15.625ms.

        Run the program below and see how the timer varies. Press a key to stop it scrolling so you can look more closely at the figures.
        The first column is the time between TIMER interrupts and is either 15 or 16 milliseconds.
        15 and 16ms are the closest intervals to 15.625ms that can be achieved, so the OS varies the mix of 15ms and 16ms interrupts to give the required long-term average of 15.625ms, or 64Hz. Look closely and you'll see that out of every 8 TIMER interrupts, 5 are 16ms and 3 are 15ms; 5/8 = 0.625 gives the fractional part of the 15.625ms period.
        The TIMER is not an exact frequency but will have some jitter on it because of this.

        The 15 or 16ms intervals might change if you're using high resolution timers but 15.625 is the default.

        'PBCC5.0 program - loop reconstructed, as the original listing was truncated
        FUNCTION PBMAIN() AS LONG
          DO
            t1## = TIMER
            DO
              t2## = TIMER
            LOOP UNTIL t2##<>t1##
            INCR cnt& : sum## = sum## + t2## - t1##
            PRINT FORMAT$(t2##-t1##,"#.#########") ,FORMAT$(sum##/cnt&,"#.#########") ',format$(sum##/cnt&-0.015625,"#.##########")
            IF INSTAT THEN d$ = WAITKEY$ : d$ = WAITKEY$  'a key pauses, another resumes
          LOOP
        END FUNCTION
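        Paul's five-of-eight observation can be checked with plain arithmetic; a quick Python sketch, assuming the mix of 16ms and 15ms intervals described above:

```python
# Five 16 ms intervals and three 15 ms intervals in every group of eight,
# as described for the TIMER output above.
pattern_ms = [16] * 5 + [15] * 3
average = sum(pattern_ms) / len(pattern_ms)
print(average)   # -> 15.625 (ms, the long-term 64 Hz TIMER average)
```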


        • #5
          Thanks very much Paul.

          In view of "The TIMER is not an exact frequency but will have some jitter on it because of this." what I have done may not pass muster so I'll need to give it more thought.

          I added a switch to NIST-Time and called it NoteCorrection. This simply dumps the correction and time stamp to Correction.dat. This version has not been uploaded.

          Anyway, my time was persistently fast by about 1300ms per six hours. I decided not to mess with lpTimeIncrement which, perhaps, is just as well considering your last post. However, I did mess with lpTimeAdjustment and reduced it from 156250 to 156241, ie by 9 x 100 nanoseconds, in the hope of getting about 200ms per day accuracy. 1 x 100 nanoseconds per interrupt gives 552.96ms per day, but my drift came in at about the middle of that resolution.
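          These figures cross-check; a quick Python sketch, taking the 64 interrupts per second and the 1300ms-per-six-hours drift from the posts above:

```python
# One 100 ns adjustment unit is applied at each of the 64 clock interrupts
# per second; over a day that is the 552.96 ms figure quoted above.
interrupts_per_day = 64 * 86400                        # 5,529,600
ms_per_unit_per_day = interrupts_per_day * 100 / 1e6   # effect of one unit
drift_ms_per_day = 1300 * (24 // 6)                    # fast by 1300 ms per 6 h
units_needed = round(drift_ms_per_day / ms_per_unit_per_day)
print(round(ms_per_unit_per_day, 2), units_needed)     # -> 552.96 9
```

          A nine-unit reduction undoes about 4977ms of the roughly 5200ms daily drift, which is where the hope of "about 200ms per day" comes from.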

          This is what I have at the moment.

          -39ms 01 Jan 09 06:00:01
          -76ms 01 Jan 09 12:00:01
          -76ms 01 Jan 09 18:00:00

          which is a heck of a lot better than -1300ms.

          I have noticed that the 'extended' privilege to mess around like this does not persist between windows sessions so I may add a switch to NIST-Time to adjust the clock when run from the StartUp folder.

          I tend not to have my machine on for days at a time so getting the time from a time server at boot up may be all I need, provided the above newly created accuracy holds.

          I'll introduce the QPC to your code and see if tinkering with lpTimeAdjustment makes sense in theory.
          Last edited by David Roberts; 1 Jan 2009, 01:52 PM.


          • #6
            I'll introduce the QPC to your code and see if tinkering with lpTimeAdjustment makes sense in theory.
            Didn't need to.

            Even with an inexact frequency, algebraically adding a constant will not muddy the waters, since we are a long way from the 'noise' floor.

            I've scheduled an hourly correction and will leave the machine on for a while. If I see a linear shift, as opposed to a sinusoidal one, I'll upload a new version of NIST-Time so folk can test their machines and allow a clock adjustment based upon the results.

            One thing is for sure: if anyone's machine is already better than 552.96/2 ms per day, then we cannot better it by tinkering with lpTimeAdjustment.
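            The 552.96/2 bound follows because lpTimeAdjustment can only move in whole 100 nanosecond units; a quick Python sketch of the arithmetic:

```python
# lpTimeAdjustment moves in whole 100 ns units, so the residual error after
# rounding to the nearest unit is at most half a unit's daily effect.
ms_per_unit_per_day = 64 * 86400 * 100 / 1e6   # ~552.96 ms/day per unit
print(round(ms_per_unit_per_day / 2, 2))       # -> 276.48 (ms/day at best)
```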


            • #7
              After I wrote the last post it occurred to me that if the new clock accuracy were maintained on an hourly basis then it may be better than the accuracy of the time obtained from a time server, which tends to be in the range 0 to 30ms. It may be borderline at a three-hour interval, so a six-hour interval is probably best for testing, as done above.

              Anyway, this is what I got with the hourly interval.

              -6ms 02 Jan 09 00:00:01
              6ms 02 Jan 09 01:00:01
              -6ms 02 Jan 09 02:00:00
              5ms 02 Jan 09 03:00:00
              -11ms 02 Jan 09 04:00:01
              -20ms 02 Jan 09 05:00:01
              -57ms 02 Jan 09 06:00:00
              -13ms 02 Jan 09 07:00:01
              42ms 02 Jan 09 08:00:00
              0ms 02 Jan 09 09:00:01

              I don't think that tells us much. That 42ms was unexpected, but I will get the odd value from a time server that is out by about 50ms, since I am in the UK and all the servers are in the US.
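              As a rough sanity check on the table above, the mean of the ten corrections is small, which is consistent with jitter around zero rather than a residual linear drift; a quick Python sketch:

```python
# The ten hourly corrections listed above, in milliseconds.
corrections_ms = [-6, 6, -6, 5, -11, -20, -57, -13, 42, 0]
print(sum(corrections_ms) / len(corrections_ms))   # -> -6.0 (ms mean)
```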

              I'll go ahead and include a clock adjustment in NIST-Time.