How do I get millisecond- and microsecond-resolution timestamps in Python?

Problem description

    UPDATE: I finally figured this out and would like to share the knowledge and save someone a bunch of time, so see my answer below. I've finally figured it out for Linux too, including for pre-Python 3.3 (ex: for the Raspberry Pi), and I've posted my new module/code in my answer below.


    Original question:

    How do I get millisecond and microsecond-resolution timestamps in Python? I'd also like the Arduino-like delay() (which delays in milliseconds) and delayMicroseconds() functions.


    Note to the community:

    Please don't mark this question as a duplicate and say it has an answer elsewhere when it definitely does not.

    This question was incorrectly closed and marked as a duplicate of this one in approx. Sept. 2018 (a screenshot of this was included just below in the original post). It was then finally re-opened two years later on 23 Aug. 2020. Thank you for re-opening it! It's not a duplicate. That was a mistake. See more information on why just below.

    It says, "This question already has an answer here." Unfortunately, that's just not true. I read those answers before asking this question, years ago, and they don't answer my question nor meet my needs. They are just as inapplicable to my question as is the most downvoted answer here, which is greyed out because it is unfortunately wrong since it relies on the time module, which prior to Python 3.3 did NOT have any type of guaranteed resolution whatsoever:

    Please re-open my question. It is not a duplicate. It does not have a prior answer from another question. The question linked as already containing an answer relies on the time module, and even states that its resolution is all over the place. The most upvoted answer there quotes a Windows resolution of 16 ms using their answer, which is 32000 times worse than the answer I provided here (0.5 us resolution). Again, I needed 1 ms and 1 us (or similar) resolutions, NOT 16000 us resolution. Therefore, it is not a duplicate.
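    As an aside, here is a quick way to check resolution claims like these on your own machine: spin until a timestamp function's value changes and record the smallest increment it ever takes. A minimal sketch (my addition, using only the standard library; the helper name estimate_resolution_us is illustrative):

    import time

    def estimate_resolution_us(timestamp_us, samples=5):
        """Estimate the practical resolution (smallest observed tick), in us,
        of a function that returns a timestamp in microseconds."""
        deltas = []
        for _ in range(samples):
            t0 = timestamp_us()
            t1 = timestamp_us()
            while t1 == t0:  # spin until the clock ticks over
                t1 = timestamp_us()
            deltas.append(t1 - t0)
        return min(deltas)

    # ex: measure the tick size of time.time(), which is what coarse answers rely on:
    print(estimate_resolution_us(lambda: time.time()*1e6))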

    Related:

  • [my own answer on how to do the same thing (get ms and us-resolution timestamps) in C++] Getting an accurate execution time in C++ (micro seconds)
    Solution

    Here's a fully-functional module for both Linux (it works with pre-Python 3.3 too) and Windows:

    Functions and code samples. Functions include:

    • micros()
    • millis()
    • delay()
    • delayMicroseconds()

    Python code module:

    """ GS_timing.py -create some low-level Arduino-like millis() (milliseconds) and micros() (microseconds) timing functions for Python By Gabriel Staples www.ElectricRCAircraftGuy -click "Contact me" at the top of my website to find my email address Started: 11 July 2016 Updated: 13 Aug 2016 History (newest on top): 20160813 - v0.2.0 created - added Linux compatibility, using ctypes, so that it's compatible with pre-Python 3.3 (for Python 3.3 or later just use the built-in time functions for Linux, shown here: docs.python/3/library/time.html) -ex: time.clock_gettime(time.CLOCK_MONOTONIC_RAW) 20160711 - v0.1.0 created - functions work for Windows *only* (via the QPC timer) References: WINDOWS: -personal (C++ code): GS_PCArduino.h 1) Acquiring high-resolution time stamps (Windows) -msdn.microsoft/en-us/library/windows/desktop/dn553408(v=vs.85).aspx 2) QueryPerformanceCounter function (Windows) -msdn.microsoft/en-us/library/windows/desktop/ms644904(v=vs.85).aspx 3) QueryPerformanceFrequency function (Windows) -msdn.microsoft/en-us/library/windows/desktop/ms644905(v=vs.85).aspx 4) LARGE_INTEGER union (Windows) -msdn.microsoft/en-us/library/windows/desktop/aa383713(v=vs.85).aspx -*****stackoverflow/questions/4430227/python-on-win32-how-to-get- absolute-timing-cpu-cycle-count LINUX: -stackoverflow/questions/1205722/how-do-i-get-monotonic-time-durations-in-python """ import ctypes, os #Constants: VERSION = '0.2.0' #------------------------------------------------------------------- #FUNCTIONS: #------------------------------------------------------------------- #OS-specific low-level timing functions: if (os.name=='nt'): #for Windows: def micros(): "return a timestamp in microseconds (us)" tics = ctypes.c_int64() freq = ctypes.c_int64() #get ticks on the internal ~2MHz QPC clock ctypes.windll.Kernel32.QueryPerformanceCounter(ctypes.byref(tics)) #get the actual freq. of the internal ~2MHz QPC clock ctypes.windll.Kernel32.QueryPerformanceFrequency(ctypes.byref(freq)) t_us = tics.value*1e6/freq.value return t_us def millis(): "return a timestamp in milliseconds (ms)" tics = ctypes.c_int64() freq = ctypes.c_int64() #get ticks on the internal ~2MHz QPC clock ctypes.windll.Kernel32.QueryPerformanceCounter(ctypes.byref(tics)) #get the actual freq. 
of the internal ~2MHz QPC clock ctypes.windll.Kernel32.QueryPerformanceFrequency(ctypes.byref(freq)) t_ms = tics.value*1e3/freq.value return t_ms elif (os.name=='posix'): #for Linux: #Constants: CLOCK_MONOTONIC_RAW = 4 # see <linux/time.h> here: github/torvalds/linux/blob/master/include/uapi/linux/time.h #prepare ctype timespec structure of {long, long} class timespec(ctypes.Structure): _fields_ =\ [ ('tv_sec', ctypes.c_long), ('tv_nsec', ctypes.c_long) ] #Configure Python access to the clock_gettime C library, via ctypes: #Documentation: #-ctypes.CDLL: docs.python/3.2/library/ctypes.html #-librt.so.1 with clock_gettime: docs.oracle/cd/E36784_01/html/E36873/librt-3lib.html #- #-Linux clock_gettime(): linux.die/man/3/clock_gettime librt = ctypes.CDLL('librt.so.1', use_errno=True) clock_gettime = librt.clock_gettime #specify input arguments and types to the C clock_gettime() function # (int clock_ID, timespec* t) clock_gettime.argtypes = [ctypes.c_int, ctypes.POINTER(timespec)] def monotonic_time(): "return a timestamp in seconds (sec)" t = timespec() #(Note that clock_gettime() returns 0 for success, or -1 for failure, in # which case errno is set appropriately) #-see here: linux.die/man/3/clock_gettime if clock_gettime(CLOCK_MONOTONIC_RAW , ctypes.pointer(t)) != 0: #if clock_gettime() returns an error errno_ = ctypes.get_errno() raise OSError(errno_, os.strerror(errno_)) return t.tv_sec + t.tv_nsec*1e-9 #sec def micros(): "return a timestamp in microseconds (us)" return monotonic_time()*1e6 #us def millis(): "return a timestamp in milliseconds (ms)" return monotonic_time()*1e3 #ms #Other timing functions: def delay(delay_ms): "delay for delay_ms milliseconds (ms)" t_start = millis() while (millis() - t_start < delay_ms): pass #do nothing return def delayMicroseconds(delay_us): "delay for delay_us microseconds (us)" t_start = micros() while (micros() - t_start < delay_us): pass #do nothing return #------------------------------------------------------------------- #EXAMPLES: #------------------------------------------------------------------- #Only executute this block of code if running this module directly, #*not* if importing it #-see here: effbot/pyfaq/tutor-what-is-if-name-main-for.htm if __name__ == "__main__": #if running this module as a stand-alone program #print loop execution time 100 times, using micros() tStart = micros() #us for x in range(0, 100): tNow = micros() #us dt = tNow - tStart #us; delta time tStart = tNow #us; update print("dt(us) = " + str(dt)) #print loop execution time 100 times, using millis() print("\n") tStart = millis() #ms for x in range(0, 100): tNow = millis() #ms dt = tNow - tStart #ms; delta time tStart = tNow #ms; update print("dt(ms) = " + str(dt)) #print a counter once per second, for 5 seconds, using delay print("\nstart") for i in range(1,6): delay(1000) print(i) #print a counter once per second, for 5 seconds, using delayMicroseconds print("\nstart") for i in range(1,6): delayMicroseconds(1000000) print(i)
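    A minimal usage sketch (my addition, assuming the module above is saved as GS_timing.py next to your script):

    import GS_timing as timing

    t_start = timing.micros()  #us
    timing.delay(100)          #busy-wait for ~100 ms using the module's delay()
    t_end = timing.micros()    #us
    print('dt (us) = ' + str(t_end - t_start))  #expect a value near 100000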

    (My original request here was: if you know how to get the above millisecond- and microsecond-resolution timestamps in Linux, please post, as that would be very helpful too. That request is now resolved, as follows.)

    This works for Linux too, including in pre-Python 3.3, since I'm using C functions via the ctypes module in order to read the time stamps.

    (Note: code above originally posted here: www.electricrcaircraftguy/2016/07/arduino-like-millisecond-and-microsecond-timestamps-in-python.html)

    Special thanks to @ArminRonacher for his brilliant pre-Python 3.3 Linux answer here: stackoverflow/a/1205762/4561887


    Update: prior to Python 3.3, the built-in Python time library (docs.python/3.5/library/time.html) didn't have any explicitly high-resolution functions. Now, however, it does provide other options, including some high-resolution functions.
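    For instance, on Python 3.3+ you can get high-resolution timestamps straight from the standard library. A sketch (note: time.clock_gettime() is Unix-only, and time.perf_counter_ns() requires Python 3.7+):

    import time

    t_sec = time.perf_counter()  #highest-resolution clock available, in float seconds
    t_us = t_sec*1e6             #us

    t_mono = time.monotonic()    #monotonic clock (unaffected by system clock changes), in seconds

    #Unix-only: read the raw hardware-based monotonic clock directly:
    #t_raw = time.clock_gettime(time.CLOCK_MONOTONIC_RAW)  #sec

    #Python 3.7+: integer nanoseconds, avoiding float rounding:
    #t_ns = time.perf_counter_ns()  #ns

    print('t (us) = ' + str(t_us))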

    My module above, however, provides high-resolution timestamps for Python code before Python 3.3, as well as after, and it does so on both Linux and Windows.

    Here's an example of what I mean, showing that the time.sleep() function is NOT necessarily a high-resolution function. On my Windows machine, its resolution is perhaps 8 ms at best, whereas my module above has 0.5 us resolution (16000 times better!) on the same machine.

    Code demonstration:

    import time
    import GS_timing as timing

    def delayMicroseconds(n):
        time.sleep(n / 1000000.)

    def delayMillisecond(n):
        time.sleep(n / 1000.)

    t_start = 0
    t_end = 0

    #using time.sleep
    print('using time.sleep')
    print('delayMicroseconds(1)')
    for x in range(10):
        t_start = timing.micros() #us
        delayMicroseconds(1)
        t_end = timing.micros() #us
        print('dt (us) = ' + str(t_end - t_start))
    print('delayMicroseconds(2000)')
    for x in range(10):
        t_start = timing.micros() #us
        delayMicroseconds(2000)
        t_end = timing.micros() #us
        print('dt (us) = ' + str(t_end - t_start))

    #using GS_timing
    print('\nusing GS_timing')
    print('timing.delayMicroseconds(1)')
    for x in range(10):
        t_start = timing.micros() #us
        timing.delayMicroseconds(1)
        t_end = timing.micros() #us
        print('dt (us) = ' + str(t_end - t_start))
    print('timing.delayMicroseconds(2000)')
    for x in range(10):
        t_start = timing.micros() #us
        timing.delayMicroseconds(2000)
        t_end = timing.micros() #us
        print('dt (us) = ' + str(t_end - t_start))


    SAMPLE RESULTS ON MY WINDOWS 8.1 MACHINE (notice how much worse time.sleep does):

    using time.sleep
    delayMicroseconds(1)
    dt (us) = 2872.059814453125
    dt (us) = 886.3939208984375
    dt (us) = 770.4649658203125
    dt (us) = 1138.7698974609375
    dt (us) = 1426.027099609375
    dt (us) = 734.557861328125
    dt (us) = 10617.233642578125
    dt (us) = 9594.90576171875
    dt (us) = 9155.299560546875
    dt (us) = 9520.526611328125
    delayMicroseconds(2000)
    dt (us) = 8799.3056640625
    dt (us) = 9609.2685546875
    dt (us) = 9679.5439453125
    dt (us) = 9248.145263671875
    dt (us) = 9389.721923828125
    dt (us) = 9637.994262695312
    dt (us) = 9616.450073242188
    dt (us) = 9592.853881835938
    dt (us) = 9465.639892578125
    dt (us) = 7650.276611328125
    using GS_timing
    timing.delayMicroseconds(1)
    dt (us) = 53.3477783203125
    dt (us) = 36.93310546875
    dt (us) = 36.9329833984375
    dt (us) = 34.8812255859375
    dt (us) = 35.3941650390625
    dt (us) = 40.010986328125
    dt (us) = 38.4720458984375
    dt (us) = 56.425537109375
    dt (us) = 35.9072265625
    dt (us) = 36.420166015625
    timing.delayMicroseconds(2000)
    dt (us) = 2039.526611328125
    dt (us) = 2046.195068359375
    dt (us) = 2033.8841552734375
    dt (us) = 2037.4747314453125
    dt (us) = 2032.34521484375
    dt (us) = 2086.2059326171875
    dt (us) = 2035.4229736328125
    dt (us) = 2051.32470703125
    dt (us) = 2040.03955078125
    dt (us) = 2027.215576171875


    SAMPLE RESULTS ON MY RASPBERRY PI VERSION 1 B+ (notice that the results between using time.sleep and my module are basically identical...apparently the low-level functions in time are already accessing better-resolution timers here, since it's a Linux machine (running Raspbian)...BUT in my GS_timing module I am explicitly calling the CLOCK_MONOTONIC_RAW timer. Who knows what's being used otherwise):

    using time.sleep
    delayMicroseconds(1)
    dt (us) = 1022.0
    dt (us) = 417.0
    dt (us) = 407.0
    dt (us) = 450.0
    dt (us) = 2078.0
    dt (us) = 393.0
    dt (us) = 1297.0
    dt (us) = 878.0
    dt (us) = 1135.0
    dt (us) = 2896.0
    delayMicroseconds(2000)
    dt (us) = 2746.0
    dt (us) = 2568.0
    dt (us) = 2512.0
    dt (us) = 2423.0
    dt (us) = 2454.0
    dt (us) = 2608.0
    dt (us) = 2518.0
    dt (us) = 2569.0
    dt (us) = 2548.0
    dt (us) = 2496.0
    using GS_timing
    timing.delayMicroseconds(1)
    dt (us) = 572.0
    dt (us) = 673.0
    dt (us) = 1084.0
    dt (us) = 561.0
    dt (us) = 728.0
    dt (us) = 576.0
    dt (us) = 556.0
    dt (us) = 584.0
    dt (us) = 576.0
    dt (us) = 578.0
    timing.delayMicroseconds(2000)
    dt (us) = 2741.0
    dt (us) = 2466.0
    dt (us) = 2522.0
    dt (us) = 2810.0
    dt (us) = 2589.0
    dt (us) = 2681.0
    dt (us) = 2546.0
    dt (us) = 3090.0
    dt (us) = 2600.0
    dt (us) = 2400.0
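    As a follow-up on the point above about which timer the built-in time functions are actually using: on Python 3.3+ you can query this directly with time.get_clock_info(). A minimal sketch (my addition; the implementation strings it reports vary by platform):

    import time

    #report the backing C implementation and resolution of each built-in clock
    for name in ('time', 'monotonic', 'perf_counter'):
        info = time.get_clock_info(name)
        print(name, '-> implementation:', info.implementation,
              ', resolution (s):', info.resolution)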

    Related:

  • [my own answer on how to do the same thing (get ms and us-resolution timestamps) in C++] Getting an accurate execution time in C++ (micro seconds)