First, the pitfall I hit in production. I had a scheduled Python script and monitored the process, and the running time I observed was far longer than the execution time the script reported for itself: the monitoring showed the script running for about 36 hours, while the script's own timing reported only about 4 hours.
While troubleshooting, I first suspected a Linux problem, but searching through various logs turned up nothing abnormal. Then I suspected that py2neo, which the script used to write data, was writing asynchronously and blocking the process.
Finally, I found where the problem really was: the script measured its time with time.clock(), and what that measures is CPU time, not the program's actual (wall-clock) execution time.
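The discrepancy is easy to reproduce. Below is a minimal sketch, using a sleep as a stand-in for I/O-bound work (such as waiting on database writes); since time.clock() was removed in Python 3.8, it uses time.process_time() for CPU time and time.perf_counter() for wall-clock time:

```python
import time

wall_start = time.perf_counter()   # wall-clock timer
cpu_start = time.process_time()    # CPU-time timer

time.sleep(2)  # I/O-bound work consumes almost no CPU time

wall_elapsed = time.perf_counter() - wall_start
cpu_elapsed = time.process_time() - cpu_start

print(f"wall clock: {wall_elapsed:.2f}s")  # roughly 2 seconds
print(f"CPU time:   {cpu_elapsed:.2f}s")   # close to zero
```

For a mostly-sleeping (or mostly-waiting) process, the CPU timer barely moves while the wall clock keeps running, which is exactly the 4-hours-versus-36-hours gap described above.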
Next, let's compare several ways to measure time in Python:
Method 1:
import datetime
starttime = datetime.datetime.now()
# long running
# do something other
endtime = datetime.datetime.now()
print((endtime - starttime).seconds)
datetime.datetime.now() returns the current date and time; subtracting the start value from the end value after the program finishes gives the program's elapsed (wall-clock) execution time.
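One caveat with this method: the subtraction yields a timedelta, and its .seconds attribute only holds the seconds component within the day, so it wraps around for runs longer than 24 hours. A small sketch, using a hypothetical 25-hour interval instead of a real workload:

```python
import datetime

start = datetime.datetime.now()
# stand in for a long-running job by constructing the end time directly
end = start + datetime.timedelta(hours=25, seconds=30)

delta = end - start
print(delta.seconds)          # 3630  -- only the within-day remainder
print(delta.total_seconds())  # 90030.0 -- the true elapsed seconds
```

Use total_seconds() rather than .seconds whenever the run might exceed a day.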
Method 2:
import time
start = time.time()
# long running
# do something other
end = time.time()
print(end - start)
time.time() returns the current time as the number of seconds since the epoch. Since most system clocks provide sub-second resolution, the return value is a float. What you get here is the program's elapsed (wall-clock) execution time.
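One thing worth knowing: time.time() tracks the system clock, so it can jump if the clock is adjusted (e.g. by NTP) mid-run. For timing intervals, time.perf_counter() is monotonic and is the usual choice on modern Python. A minimal sketch, with a simple loop standing in for the real workload:

```python
import time

start = time.perf_counter()
total = sum(i * i for i in range(1_000_000))  # stand-in CPU workload
elapsed = time.perf_counter() - start

print(f"elapsed: {elapsed:.4f}s")  # wall-clock time, immune to clock adjustments
```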
Method 3:
import time
start = time.clock()
# long running
# do something other
end = time.clock()
print(end - start)
time.clock() returns the CPU time elapsed since the program started or since clock() was first called, with as much precision as the system can record, again as a float. What you get here is the CPU execution time, not the wall-clock time. (Note: time.clock() was deprecated in Python 3.3 and removed in Python 3.8; use time.process_time() for CPU time or time.perf_counter() for wall-clock intervals.)
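For purely CPU-bound work, CPU time and wall-clock time come out close to each other, which is why the pitfall only shows up for I/O-heavy scripts. A sketch on Python 3.8+, where time.process_time() plays the role of time.clock():

```python
import time

cpu_start = time.process_time()
wall_start = time.perf_counter()

_ = sum(i * i for i in range(2_000_000))  # CPU-bound stand-in workload

cpu_elapsed = time.process_time() - cpu_start
wall_elapsed = time.perf_counter() - wall_start

print(f"CPU:  {cpu_elapsed:.3f}s")
print(f"wall: {wall_elapsed:.3f}s")  # similar to the CPU time for this workload
```

The moment the workload starts waiting on disk, network, or sleep, the two numbers diverge, as in the production incident described at the top.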
Original: https://blog.csdn.net/wangshuang1631/article/details/54286551/