"Linux Kernel Design and Implementation" reading notes: bottom halves and deferring work

Limitations of interrupt handlers

The interrupt handler runs asynchronously and may interrupt other important code;

  • So the time for which the interrupted code is stopped must be kept short;

While the current interrupt handler is running, other interrupts are masked;

  • So the faster the interrupt handler runs, the better;
  • A masked interrupt is delivered later rather than dropped;

Interrupt handlers often need to operate on hardware;

  • So they usually have very tight timing constraints;

Interrupt handlers do not run in process context;

  • So they cannot block;

 

Top halves and bottom halves

Interrupt handler (also called the top half): a fast, asynchronous, simple mechanism responsible for responding to the hardware quickly and performing the most time-critical operations;

  • If a task is very time-sensitive, run it in the interrupt handler;
  • If a task is related to the hardware, run it in the interrupt handler;
  • If a task must be guaranteed not to be interrupted by other interrupts (especially the same interrupt), run it in the interrupt handler;

Bottom half: work that is closely related to interrupt processing but that the interrupt handler itself does not perform;

  • All other tasks are candidates for the bottom half.
  • Usually the bottom half runs as soon as the interrupt handler returns;
  • The key point of the bottom half is that all interrupts can be serviced while it runs;

 

Bottom-half mechanisms

BH : the original "bottom half" mechanism; obsolete, not covered here.

Task queues : another old bottom-half mechanism; not covered here.

Softirqs and tasklets.

  • Tasklets are implemented on top of softirqs;
  • Tasklets suffice for most bottom halves; softirqs are reserved for situations with very high performance requirements (such as networking);

Work queues.

 

Softirqs

A softirq is represented by the softirq_action structure (include/linux/interrupt.h):

struct softirq_action
{
    void    (*action)(struct softirq_action *);
};

An array of these structures, one slot per softirq, is defined in kernel/softirq.c (the book describes 32 entries; the code sizes it with NR_SOFTIRQS):

static struct softirq_action softirq_vec[NR_SOFTIRQS] __cacheline_aligned_in_smp;

The book says only 9 are currently used (as of Linux 2.6.34), but Linux 2.6.34.1 actually defines 10 (include/linux/interrupt.h):

/* PLEASE, avoid to allocate new softirqs, if you need not _really_ high
   frequency threaded job scheduling. For almost all the purposes
   tasklets are more than enough. F.e. all serial device BHs et
   al. should be converted to tasklets, not to softirqs.
 */
enum
{
    HI_SOFTIRQ=0,
    TIMER_SOFTIRQ,
    NET_TX_SOFTIRQ,
    NET_RX_SOFTIRQ,
    BLOCK_SOFTIRQ,
    BLOCK_IOPOLL_SOFTIRQ,
    TASKLET_SOFTIRQ,
    SCHED_SOFTIRQ,
    HRTIMER_SOFTIRQ,
    RCU_SOFTIRQ,    /* Preferable RCU should always be the last softirq */
    NR_SOFTIRQS
};

Compared with the table in the book, there is one extra entry, BLOCK_IOPOLL_SOFTIRQ. This matches the book's statement that new softirqs can be inserted between BLOCK_SOFTIRQ and TASKLET_SOFTIRQ.

Softirqs are statically allocated at compile time.

A registered softirq must be marked before it will execute; this is called raising (triggering) the softirq.

An interrupt handler usually marks its softirq before returning, so that the softirq runs later.

Then, at a suitable moment, the pending softirq is executed.

Suitable moments include:

  • On return from a hardware interrupt;
  • In the ksoftirqd kernel thread;
  • In code that explicitly checks for and executes pending softirqs, such as the networking subsystem;

No matter how it is invoked, softirq execution goes through do_softirq():

asmlinkage void do_softirq(void)
{
    __u32 pending;
    unsigned long flags;
    if (in_interrupt())
        return;
    local_irq_save(flags);
    pending = local_softirq_pending();
    if (pending)
        __do_softirq();
    local_irq_restore(flags);
}

Raising a softirq from an interrupt handler is the most common form. After the kernel finishes running the interrupt handler, it immediately calls do_softirq() to carry out the remaining work the interrupt handler has deferred to it.

raise_softirq() marks a softirq as pending so that it runs the next time do_softirq() is called. An example:

void run_local_timers(void)
{
    hrtimer_run_queues();
    raise_softirq(TIMER_SOFTIRQ);
    softlockup_tick();
}
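To make the raise/dispatch split concrete, here is a user-space sketch (not kernel code) of the mechanism: a pending bitmask stands in for local_softirq_pending(), and handlers registered in a vector are run lowest bit first, which is also why HI_SOFTIRQ = 0 has the highest priority. All `_sim` names are hypothetical and simplified (no interrupt disabling, single CPU).

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define NR_SOFTIRQS_SIM 10

struct softirq_action_sim {
    void (*action)(struct softirq_action_sim *);
};

static struct softirq_action_sim softirq_vec_sim[NR_SOFTIRQS_SIM];
static uint32_t pending_sim;        /* stands in for local_softirq_pending() */
int run_count[NR_SOFTIRQS_SIM];     /* how often each handler ran */

/* analogue of open_softirq(): static registration into the vector */
void open_softirq_sim(int nr, void (*action)(struct softirq_action_sim *))
{
    softirq_vec_sim[nr].action = action;
}

/* analogue of raise_softirq(): mark the softirq pending */
void raise_softirq_sim(int nr)
{
    pending_sim |= 1u << nr;
}

/* analogue of __do_softirq(): snapshot the mask, clear it,
 * then run the action for each set bit in ascending order */
void do_softirq_sim(void)
{
    uint32_t pending = pending_sim;
    pending_sim = 0;
    for (int nr = 0; nr < NR_SOFTIRQS_SIM; nr++)
        if ((pending & (1u << nr)) && softirq_vec_sim[nr].action)
            softirq_vec_sim[nr].action(&softirq_vec_sim[nr]);
}

static void timer_action(struct softirq_action_sim *a)
{
    (void)a;
    run_count[1]++;   /* slot 1 plays the role of TIMER_SOFTIRQ */
}
```

Note how raising the same softirq twice before dispatch only sets one bit, so the handler runs once: raising is marking, not queueing.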

The kernel does not immediately process a softirq (including a tasklet) that re-triggers itself. Instead, when softirqs arrive in large numbers, the kernel wakes the ksoftirqd kernel thread to handle the load.

 

Using softirqs

A handler (called the softirq handler) is registered for one of the softirqs above via the open_softirq() function. An example (block/blk-iopoll.c):

static __init int blk_iopoll_setup(void)
{
    int i;
    for_each_possible_cpu(i)
        INIT_LIST_HEAD(&per_cpu(blk_cpu_iopoll, i));
    open_softirq(BLOCK_IOPOLL_SOFTIRQ, blk_iopoll_softirq);
    register_hotcpu_notifier(&blk_iopoll_cpu_notifier);
    return 0;
}

While a softirq handler runs, interrupts are enabled, but the handler itself cannot sleep.

While a softirq handler is running, softirqs on the current processor are disabled, but other processors can still run softirq handlers.

In fact, if the same softirq is raised again while its handler is executing, another processor can run the same handler concurrently. Any shared data therefore needs lock protection. In practice, most softirq handlers use per-CPU data or other techniques to avoid explicit locking.

If you do not need to scale to multiple processors, use a tasklet : tasklets of the same type never run on multiple processors at the same time.

 

tasklet

A tasklet is a bottom-half mechanism implemented on top of softirqs.

Tasklets should normally be preferred over raw softirqs.

Tasklets are represented by two softirqs: HI_SOFTIRQ and TASKLET_SOFTIRQ (the former has higher priority than the latter).

tasklet structure:

struct tasklet_struct
{
    struct tasklet_struct *next;
    unsigned long state;
    atomic_t count;
    void (*func)(unsigned long);
    unsigned long data;
};

The func member is the tasklet handler.

When count is zero, the tasklet is enabled and may run once it is marked pending.

state takes the value TASKLET_STATE_SCHED when the tasklet has been scheduled; TASKLET_STATE_RUN is used (on multiprocessor machines) to determine whether the tasklet is currently running on another processor.

Each structure represents a single tasklet. Scheduled tasklets (the equivalent of raised softirqs) are stored in two per-CPU data structures (used, as noted earlier, to avoid explicit locking), tasklet_vec and tasklet_hi_vec (corresponding to the two softirqs). Both are linked lists of tasklet_struct structures.

The tasklet_schedule() and tasklet_hi_schedule() functions are used to schedule tasklets.
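A user-space sketch (not kernel code) of the tasklet_schedule() logic may help: the TASKLET_STATE_SCHED bit makes scheduling idempotent, so a tasklet that is already on the pending list is not queued a second time. The `_sim` names are hypothetical; real scheduling uses atomic test-and-set and per-CPU lists.

```c
#include <assert.h>
#include <stdio.h>

enum { SCHED_BIT = 1 };   /* stands in for TASKLET_STATE_SCHED */

struct tasklet_sim {
    struct tasklet_sim *next;
    unsigned long state;
    void (*func)(unsigned long);
    unsigned long data;
};

static struct tasklet_sim *tasklet_vec_sim;   /* pending list */

/* analogue of tasklet_schedule(): check the SCHED bit before queueing */
void tasklet_schedule_sim(struct tasklet_sim *t)
{
    if (t->state & SCHED_BIT)
        return;                    /* already scheduled: do nothing */
    t->state |= SCHED_BIT;
    t->next = tasklet_vec_sim;
    tasklet_vec_sim = t;
}

/* analogue of the TASKLET_SOFTIRQ action: drain the list, run each func.
 * Returns how many tasklets were run. */
int tasklet_run_all_sim(void)
{
    int ran = 0;
    struct tasklet_sim *list = tasklet_vec_sim;
    tasklet_vec_sim = NULL;
    while (list) {
        struct tasklet_sim *t = list;
        list = list->next;
        t->state &= ~SCHED_BIT;    /* cleared before running, as in the kernel */
        t->func(t->data);
        ran++;
    }
    return ran;
}

static unsigned long seen;
static void count_func(unsigned long data) { seen += data; }
```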

 

Tasklet operations

Statically create a tasklet:

#define DECLARE_TASKLET(name, func, data) \
struct tasklet_struct name = { NULL, 0, ATOMIC_INIT(0), func, data }
#define DECLARE_TASKLET_DISABLED(name, func, data) \
struct tasklet_struct name = { NULL, 0, ATOMIC_INIT(1), func, data }

Dynamically create a tasklet:

extern void tasklet_init(struct tasklet_struct *t,
void (*func)(unsigned long), unsigned long data);

Schedule tasklet:

static inline void tasklet_schedule(struct tasklet_struct *t)

Disable a tasklet (manipulates the count member):

static inline void tasklet_disable(struct tasklet_struct *t)

Enable a tasklet (manipulates the count member):

static inline void tasklet_enable(struct tasklet_struct *t)
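The count semantics can be sketched in user space (not kernel code): disable and enable nest by incrementing and decrementing count, and the tasklet may only run while count is zero. The `_sim` names are hypothetical; the real count is an atomic_t.

```c
#include <assert.h>

struct tasklet_count_sim {
    int count;   /* stands in for the atomic_t count member */
    int runs;    /* how many times the handler actually ran */
};

/* analogues of tasklet_disable()/tasklet_enable(): nested counting */
void tasklet_disable_sim(struct tasklet_count_sim *t) { t->count++; }
void tasklet_enable_sim(struct tasklet_count_sim *t)  { t->count--; }

/* run the handler only if the tasklet is enabled (count == 0) */
int tasklet_try_run_sim(struct tasklet_count_sim *t)
{
    if (t->count != 0)
        return 0;   /* disabled: skipped */
    t->runs++;
    return 1;
}
```

Because the calls nest, a tasklet disabled twice must be enabled twice before it can run again.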

 

Work queue

Work queues defer work and hand it to a kernel thread for execution.

This bottom half always runs in process context.

If the deferred task needs to sleep, choose a work queue. This bottom half can:

  1. Allocate large amounts of memory;
  2. Acquire semaphores;
  3. Perform blocking I/O operations;

Worker threads are used to process the work queues.

The kernel creates default worker threads events/n, where n is the processor number.

Many kernel drivers hand their bottom halves to the default worker threads.

Processor-intensive and performance-critical tasks get their own worker threads.

The worker threads of a given type are represented by the workqueue_struct structure:

struct workqueue_struct {
    struct cpu_workqueue_struct *cpu_wq;
    struct list_head list;
    const char *name;
    int singlethread;
    int freezeable;     /* Freeze threads during suspend */
    int rt;
#ifdef CONFIG_LOCKDEP
    struct lockdep_map lockdep_map;
#endif
};

Since each processor has its own worker thread, there is also a per-processor cpu_workqueue_struct:

struct cpu_workqueue_struct {
    spinlock_t lock;
    struct list_head worklist;
    wait_queue_head_t more_work;
    struct work_struct *current_work;
    struct workqueue_struct *wq;
    struct task_struct *thread;
} ____cacheline_aligned;

worklist is the list of pending work for that processor; each piece of work is represented by work_struct:

struct work_struct {
    atomic_long_t data;
#define WORK_STRUCT_PENDING 0       /* T if work item pending execution */
#define WORK_STRUCT_STATIC  1       /* static initializer (debugobjects) */
#define WORK_STRUCT_FLAG_MASK (3UL)
#define WORK_STRUCT_WQ_DATA_MASK (~WORK_STRUCT_FLAG_MASK)
    struct list_head entry;
    work_func_t func;
#ifdef CONFIG_LOCKDEP
    struct lockdep_map lockdep_map;
#endif
};

The work-handling function func has this type:

typedef void (*work_func_t)(struct work_struct *work);

A worker thread performs the pending work by executing the worker_thread() function.
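The worker_thread() loop can be sketched in user space with pthreads (this is not the kernel implementation): work items sit on a list protected by a lock, the worker sleeps on a condition variable (playing the role of the more_work wait queue), and each item's func runs in thread context, where sleeping is allowed. All `_sim` names are hypothetical.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

struct work_sim {
    struct work_sim *next;
    void (*func)(struct work_sim *);
};

static struct work_sim *worklist;   /* analogue of cpu_workqueue_struct.worklist */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t more_work = PTHREAD_COND_INITIALIZER;
static int stop;

/* analogue of schedule_work(): queue the item and wake the worker */
void schedule_work_sim(struct work_sim *w)
{
    pthread_mutex_lock(&lock);
    w->next = worklist;
    worklist = w;
    pthread_cond_signal(&more_work);
    pthread_mutex_unlock(&lock);
}

/* analogue of worker_thread(): sleep until work arrives, then run it */
void *worker_thread_sim(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    for (;;) {
        while (!worklist && !stop)
            pthread_cond_wait(&more_work, &lock);
        if (!worklist && stop)
            break;                    /* drained and told to exit */
        struct work_sim *w = worklist;
        worklist = w->next;
        pthread_mutex_unlock(&lock);
        w->func(w);                   /* runs in thread (process) context */
        pthread_mutex_lock(&lock);
    }
    pthread_mutex_unlock(&lock);
    return NULL;
}

static int done;
static void my_work(struct work_sim *w) { (void)w; done = 1; }
```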

 

Using work queues

Statically create work:

#define DECLARE_WORK(n, f)                  \
    struct work_struct n = __WORK_INITIALIZER(n, f)

Dynamically initialize work:

#define INIT_WORK(_work, _func)                 \
    do {                            \
        __INIT_WORK((_work), (_func), 0);       \
    } while (0)

Schedule work:

extern int schedule_work(struct work_struct *work);

Flush the work queue:

extern void flush_scheduled_work(void);

 

Comparison of bottom-half mechanisms

 

Disabling and enabling bottom halves

void local_bh_enable(void)
void local_bh_disable(void)

However, these functions have no effect on work queues.

There is no need to disable work queues: they run in process context and do not involve asynchronous execution.
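These two calls nest: disables are counted, and pending bottom halves run only when the final enable brings the count back to zero. A user-space sketch (not kernel code, `_sim` names hypothetical) of that counting behavior:

```c
#include <assert.h>

static int bh_disable_count;   /* analogue of the softirq count in preempt_count */
static int pending;            /* stands in for locally pending softirqs */
static int executed;           /* what has actually been run */

/* analogue of local_bh_disable(): just bump the nesting count */
void local_bh_disable_sim(void) { bh_disable_count++; }

/* analogue of local_bh_enable(): on the outermost enable,
 * run whatever bottom halves were deferred in the meantime */
void local_bh_enable_sim(void)
{
    bh_disable_count--;
    if (bh_disable_count == 0 && pending) {
        executed += pending;
        pending = 0;
    }
}
```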

 

Origin blog.csdn.net/jiangwei0512/article/details/106145550