Delaying execution in Linux

Device drivers often need to delay the execution of a particular piece of code for a period of time, usually to allow the hardware to complete some task. In this section we cover a number of different techniques for achieving delays. The circumstances of each situation determine which technique is best; we go over them all and point out the advantages and disadvantages of each.

 

One important question to consider is how the delay you need compares with the clock tick, keeping in mind the range of HZ across the various platforms. Delays that are reliably longer than the clock tick, and that are not hurt by its coarse granularity, can make use of the system clock. Very short delays typically must be implemented with software loops. In between these two cases lies a gray area. In this chapter, we use the phrase "long" delay to refer to a multiple-jiffy delay, which can be as little as a few milliseconds on some platforms, but is still long as seen by the CPU and the kernel.

 

The following sections discuss long delays by taking a somewhat long path: they start with intuitive but inappropriate techniques and work their way toward the correct ones. We chose this approach because it allows a more in-depth discussion of kernel issues related to timing. If you are eager to find the correct code, just skim through the section.

 

Long delays

 

Occasionally a driver needs to delay execution for a relatively long period - more than one clock tick. There are a few ways of accomplishing this sort of delay; we start with the simplest technique, then proceed to the more advanced ones.

 

Busy waiting

 

If you want to delay execution by a multiple of the clock tick, allowing some slack in the value, the easiest (though not recommended) implementation is a loop that monitors the jiffy counter. Such a busy-waiting implementation usually looks like the following code, where j1 is the value of jiffies at the expiration of the delay:

 

while (time_before(jiffies, j1))
    cpu_relax();

 

The call to cpu_relax invokes an architecture-specific way of saying that you are not doing anything important with the processor at the moment. On many systems it does nothing at all; on symmetric multithreaded ("hyperthreaded") systems, it may yield the core to the other thread. In any case, this approach should definitely be avoided whenever possible. We show it here because on occasion you might want to run this code to better understand the internals of other code.

 

Let's look at how this code works. The loop is guaranteed to work because jiffies is declared volatile by the kernel headers and, therefore, is fetched from memory every time some C code accesses it. Although technically correct (in that it works as designed), this busy loop severely degrades system performance. If you didn't configure your kernel for preemptive operation, the loop completely locks the processor for the duration of the delay; the scheduler never preempts a process that is running in kernel space, and the computer looks completely dead until time j1 is reached. The problem is a bit less serious if you are running a preemptive kernel because, unless the code is holding a lock, some of the processor's time can be recovered for other uses. Busy waits are still expensive on preemptive systems, however.

 

Worse yet, if interrupts happen to be disabled when you enter the loop, jiffies won't be updated, and the while condition remains true forever. Running a preemptive kernel won't help either, and you'll be forced to hit the big red button.

 

 

This implementation of delaying code is available, like the following ones, in the jit module. The /proc/jit* files created by the module delay a whole second each time you read a line of text, and each line is guaranteed to be 20 bytes long. If you want to test the busy-wait code, you can read /proc/jitbusy, which busy-loops for one second for each line it returns.

 

Be sure to read, at most, one line (or a few lines) at a time from /proc/jitbusy. The simplified kernel mechanism for registering /proc files repeatedly invokes the read method to fill the data buffer requested by the user. Thus, a command such as cat /proc/jitbusy, if it reads in 4-KB chunks, freezes the computer for 205 seconds.

 

The suggested command to read /proc/jitbusy is dd bs=20 < /proc/jitbusy, optionally specifying the number of blocks as well. Each 20-byte line returned by the file contains two values of the jiffy counter: the value before the delay and the value after the delay. The following is a sample run on an otherwise unloaded computer:

 

phon% dd bs=20 count=5 < /proc/jitbusy
1686518 1687518
1687519 1688519
1688520 1689520
1689520 1690520
1690521 1691521

 

All looks good: the delays are exactly one second (1,000 jiffies), and each read system call starts immediately after the previous one ends. But let's see what happens on a system (running a non-preemptive kernel) with a large number of CPU-intensive processes running:

 

phon% dd bs=20 count=5 < /proc/jitbusy
1911226 1912226
1913323 1914323
1919529 1920529
1925632 1926632
1931835 1932835

 

Here, each read system call still delays exactly one second, but the kernel can take more than 5 seconds before scheduling the dd process so that it can issue the next system call. That's expected in a multitasking system: CPU time is shared between all running processes, and a CPU-intensive process has its dynamic priority reduced. (A discussion of scheduling policies is outside the scope of this book.)

 

The test under load shown above was performed while running the load50 sample program. This program forks a number of processes that do nothing, but do it in a CPU-intensive way. The program is part of the sample files accompanying this book, and forks 50 processes by default, although the number can be specified on the command line. In this chapter, and elsewhere in the book, the tests with a loaded system have been performed with load50 running on an otherwise idle computer.

 

If you repeat the command while running a preemptive kernel, you'll find no noticeable difference on an otherwise idle CPU, but the following behavior under load:

 

phon% dd bs=20 count=5 < /proc/jitbusy
14940680 14942777
14942778 14945430
14945431 14948491
14948492 14951960
14951961 14955840

 

Here, there is no significant delay between the end of one system call and the beginning of the next one, but the individual delays are far longer than one second: up to 3.8 seconds in the example shown, and increasing over time. These values demonstrate that the process has been interrupted during its delay, allowing other processes to be scheduled. The gap between system calls is no longer the only scheduling opportunity for this process, so no special delay is seen there.

Origin www.cnblogs.com/fanweisheng/p/11141993.html