Linux Schedule (2): Scheduling Algorithms

2. Scheduling Algorithms

Linux processes fall broadly into real-time (RT) processes and normal processes. The kernel uses sched_class structures to manage the scheduling algorithms of the different classes: rt_sched_class handles real-time processes (SCHED_FIFO/SCHED_RR), fair_sched_class handles normal processes (SCHED_NORMAL), and there are also idle_sched_class (SCHED_IDLE) and dl_sched_class (SCHED_DEADLINE), which are comparatively simple and rarely encountered.

The scheduling algorithms for real-time processes have barely changed over the years: under SCHED_FIFO the highest-priority task simply preempts and keeps running, while SCHED_RR round-robins time slices among tasks of the same priority.
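As a quick illustration of these policies, a thread can switch itself to SCHED_FIFO through the standard sched_setscheduler() API (a minimal userspace sketch; requires root/CAP_SYS_NICE, and SCHED_RR works the same way):

#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 50 };

    /* pid 0 = the calling process */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
        perror("sched_setscheduler");
        return 1;
    }
    printf("now SCHED_FIFO, rt priority %d\n", sp.sched_priority);
    return 0;
}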

So when we talk about "the scheduling algorithm" we usually mean the algorithm for normal (SCHED_NORMAL) processes, which also make up the majority of the system. Since kernel 2.6.24 that algorithm has been CFS, which is still the mainstream today; before that, the 2.6 kernels used an O(1) algorithm.

2.1 The O(1) Scheduling Algorithm of Linux 2.6


Linux has 140 priority levels: priorities 0-99 map to real-time processes and priorities 100-139 to normal processes. nice(0) corresponds to priority 120, nice(-20) to priority 100, and nice(19) to priority 139.

#define MAX_USER_RT_PRIO    100
#define MAX_RT_PRIO     MAX_USER_RT_PRIO        // priorities (0-99) are for real-time processes

#define MAX_PRIO        (MAX_RT_PRIO + 40)      // priorities (100-139) are for normal processes

/*
 * Convert user-nice values [ -20 ... 0 ... 19 ]
 * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
 * and back.
 */
#define NICE_TO_PRIO(nice)  (MAX_RT_PRIO + (nice) + 20) // nice(0) -> priority 120, nice(-20) -> priority 100, nice(19) -> priority 139
#define PRIO_TO_NICE(prio)  ((prio) - MAX_RT_PRIO - 20)
#define TASK_NICE(p)        PRIO_TO_NICE((p)->static_prio)

/*
 * 'User priority' is the nice value converted to something we
 * can work with better when scaling various scheduler parameters,
 * it's a [ 0 ... 39 ] range.
 */
#define USER_PRIO(p)        ((p)-MAX_RT_PRIO)
#define TASK_USER_PRIO(p)   USER_PRIO((p)->static_prio)
#define MAX_USER_PRIO       (USER_PRIO(MAX_PRIO))

The O(1) scheduling algorithm consists mainly of the following:

  • (1) Each cpu's rq contains two arrays, rq->active and rq->expired, each holding 140 list heads;

Tasks are queued on the list matching their priority: a task whose time slice is unexpired stays on rq->active and moves to rq->expired once its slice is used up. When every task on rq->active has exhausted its slice and the array is empty, rq->active and rq->expired are swapped (a sketch of the swap follows the schedule() snippet below).

When picking the next task in schedule(), array->bitmap is first searched for the highest priority that still has runnable tasks, and that index then locates the corresponding priority list. Because the bitmap search can be done in a single instruction on IA processors (e.g. bsfl), the complexity is O(1).

asmlinkage void __sched schedule(void)
{
    ...

    idx = sched_find_first_bit(array->bitmap);          // (1) highest priority with runnable tasks
    queue = array->queue + idx;                         // (2) index into the per-priority list array
    next = list_entry(queue->next, task_t, run_list);   // (3) first task on that list

    ...
}
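For completeness, here is the 2.6-era fragment of schedule() that performs the active/expired swap described above (lightly trimmed; treat it as a reference sketch rather than the exact code of any particular release):

array = rq->active;
if (unlikely(!array->nr_active)) {
    /*
     * Switch the active and expired arrays: every slice on the
     * active array is used up, so expired becomes the new active.
     */
    rq->active = rq->expired;
    rq->expired = array;
    array = rq->active;
    rq->expired_timestamp = 0;
    rq->best_expired_prio = MAX_PRIO;
}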
  • (2) A process has both a static priority (p->static_prio) and a dynamic priority (p->prio);

The static priority (p->static_prio) determines the size of the process's time slice:

/*
 * task_timeslice() scales user-nice values [ -20 ... 0 ... 19 ]
 * to time slice values: [800ms ... 100ms ... 5ms]
 *
 * The higher a thread's priority, the bigger timeslices
 * it gets during one round of execution. But even the lowest
 * priority thread gets MIN_TIMESLICE worth of execution time.
 */

/* By this formula, if nice(0) gets a 100ms time slice, then nice(-20) gets 800ms and nice(19) gets 5ms */

#define SCALE_PRIO(x, prio) \
    max(x * (MAX_PRIO - prio) / (MAX_USER_PRIO/2), MIN_TIMESLICE)

static unsigned int task_timeslice(task_t *p)
{
    if (p->static_prio < NICE_TO_PRIO(0))
        return SCALE_PRIO(DEF_TIMESLICE*4, p->static_prio);
    else
        return SCALE_PRIO(DEF_TIMESLICE, p->static_prio);
}

#define MIN_TIMESLICE       max(5 * HZ / 1000, 1)
#define DEF_TIMESLICE       (100 * HZ / 1000)

The dynamic priority determines the index of the process in the rq->active/rq->expired list arrays:

static void enqueue_task(struct task_struct *p, prio_array_t *array)
{
    sched_info_queued(p);
    list_add_tail(&p->run_list, array->queue + p->prio); // use the dynamic priority p->prio as the index to find the matching list
    __set_bit(p->prio, array->bitmap);
    array->nr_active++;
    p->array = array;
}

The conversion from static to dynamic priority is: dynamic priority = max(100, min(static priority - bonus + 5, 139)), where bonus ∈ [0,10] is derived from the average sleep time. For example, a nice(0) task (static priority 120) with the maximum bonus of 10 ends up with dynamic priority 120 - 10 + 5 = 115.

/*
 * effective_prio - return the priority that is based on the static
 * priority but is modified by bonuses/penalties.
 *
 * We scale the actual sleep average [0 .... MAX_SLEEP_AVG]
 * into the -5 ... 0 ... +5 bonus/penalty range.
 *
 * We use 25% of the full 0...39 priority range so that:
 *
 * 1) nice +19 interactive tasks do not preempt nice 0 CPU hogs.
 * 2) nice -20 CPU hogs do not get preempted by nice 0 tasks.
 *
 * Both properties are important to certain workloads.
 */
static int effective_prio(task_t *p)
{
    int bonus, prio;

    if (rt_task(p))
        return p->prio;

    bonus = CURRENT_BONUS(p) - MAX_BONUS / 2;  // MAX_BONUS = 10

    prio = p->static_prio - bonus;
    if (prio < MAX_RT_PRIO)
        prio = MAX_RT_PRIO;
    if (prio > MAX_PRIO-1)
        prio = MAX_PRIO-1;
    return prio;
}

As the code shows, the dynamic priority takes the static priority as its base and then applies a penalty or reward (bonus). The bonus is not produced at random; it is derived from the process's past average sleep time. The average sleep time (sleep_avg, in task_struct) is the total time the process has spent in the sleeping state; "average" here does not mean a direct arithmetic mean over time.
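For reference, the 2.6 kernel scaled sleep_avg into the bonus range with macros along these lines (a sketch recalled from that era's kernel/sched.c):

#define MAX_BONUS       (MAX_USER_PRIO * PRIO_BONUS_RATIO / 100)   /* 40 * 25 / 100 = 10 */
#define CURRENT_BONUS(p) \
    (NS_TO_JIFFIES((p)->sleep_avg) * MAX_BONUS / MAX_SLEEP_AVG)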

  • (3) The average sleep time decides whether a process counts as interactive (INTERACTIVE);

What does a process gain from being interactive? An interactive process that uses up its time slice re-enters the active queue instead of the expired queue;

void scheduler_tick(void)
{



    if (!--p->time_slice) {     // (1) time slice used up
        dequeue_task(p, rq->active);    // (2) remove from the active queue
        set_tsk_need_resched(p);
        p->prio = effective_prio(p);
        p->time_slice = task_timeslice(p);
        p->first_time_slice = 0;

        if (!rq->expired_timestamp)
            rq->expired_timestamp = jiffies;
        if (!TASK_INTERACTIVE(p) || EXPIRED_STARVING(rq)) {
            enqueue_task(p, rq->expired);       // (3) an ordinary process goes to the expired queue
            if (p->static_prio < rq->best_expired_prio)
                rq->best_expired_prio = p->static_prio;
        } else
            enqueue_task(p, rq->active);    // (4) an interactive process re-enters the active queue
    }


}

The test for whether a process is interactive (INTERACTIVE) works out to: dynamic priority ≤ 3*static priority/4 + 28

#define TASK_INTERACTIVE(p) \
    ((p)->prio <= (p)->static_prio - DELTA(p))

I have not looked closely at the sleep-time averaging algorithm or the interactivity heuristics; the following description, quoted from elsewhere, can serve as a reference:

The average sleep time (sleep_avg, in task_struct) is the total time a process has spent sleeping; the "average" is not a direct arithmetic mean over time. It grows while the process sleeps and shrinks while the process runs, so it records both sleep and execution history and is the key datum for judging how interactive a process is. A process with a large sleep_avg is very likely highly interactive; one with a small sleep_avg has probably been executing continuously. sleep_avg also reflects the process's current interactivity with a fast response: if a process is briefly very interactive, sleep_avg can spike (never above MAX_SLEEP_AVG), and if it then keeps executing, sleep_avg decays again. Once average sleep time is understood, the meaning of bonus is obvious: strongly interactive processes are rewarded by the scheduler (positive bonus), while CPU hogs are penalized accordingly (negative bonus). The bonus is essentially a projection of sleep_avg, scaled down into the bonus value range.
Although the O(1) scheduler's way of separating interactive from batch processes was a big improvement over its predecessors, it still fails in many situations. Several well-known programs can reliably degrade this scheduler and make interactive processes sluggish, e.g. fiftyp.c, thud.c, chew.c, ring-test.c, massive_intr.c. The O(1) scheduler's NUMA support was also incomplete.

2.2 The CFS Scheduling Algorithm

To address the problems of the O(1) algorithm (I cannot claim a deep enough understanding to say exactly which problems), Linux introduced CFS, the Completely Fair Scheduler. The algorithm evolved from the staircase scheduler and RSDL (Rotating Staircase Deadline Scheduler); it drops the complicated active/expired arrays and interactivity calculations, treats all processes alike, and places them all on a single red-black tree keyed by execution time, implementing the idea of complete fairness.

The main ideas of CFS are as follows:

  • A weight is assigned to each normal process based on its nice value; the weight converts the process's actual runtime into virtual runtime (vruntime). It follows that a high-priority process running for more time and a low-priority process running for less time are equivalent when measured in vruntime;
  • A total period is computed from the number of processes on rq->cfs_rq, and each process derives its ideal runtime (ideal_runtime) from its weight's share of the total. scheduler_tick() checks whether the process's actual runtime (exec_runtime) has reached ideal_runtime; if so, the process must be rescheduled: test_tsk_need_resched(curr). With a period in place, every process on the cfs_rq is guaranteed to be scheduled within that period;
  • Processes on rq->cfs_rq are organized into a red-black tree (a self-balancing binary tree) keyed by vruntime, so in pick_next_entity the leftmost node of the tree is the process with the least virtual runtime and the best candidate to schedule;

2.2.1 vruntime

Each process's vruntime = runtime * (NICE_0_LOAD/nice_n_weight)

/* The key idea of this table: each step up in priority multiplies the weight by roughly 1.25 */
/*
 * Nice levels are multiplicative, with a gentle 10% change for every
 * nice level changed. I.e. when a CPU-bound task goes from nice 0 to
 * nice 1, it will get ~10% less CPU time than another CPU-bound task
 * that remained on nice 0.
 *
 * The "10% effect" is relative and cumulative: from _any_ nice level,
 * if you go up 1 level, it's -10% CPU usage, if you go down 1 level
 * it's +10% CPU usage. (to achieve that we use a multiplier of 1.25.
 * If a task goes up by ~10% and another task goes down by ~10% then
 * the relative distance between them is ~25%.)
 */
static const int prio_to_weight[40] = {
 /* -20 */     88761,     71755,     56483,     46273,     36291,
 /* -15 */     29154,     23254,     18705,     14949,     11916,
 /* -10 */      9548,      7620,      6100,      4904,      3906,
 /*  -5 */      3121,      2501,      1991,      1586,      1277,
 /*   0 */      1024,       820,       655,       526,       423,
 /*   5 */       335,       272,       215,       172,       137,
 /*  10 */       110,        87,        70,        56,        45,
 /*  15 */        36,        29,        23,        18,        15,
};

nice(0) has weight NICE_0_LOAD (1024); nice(-1) has weight approximately NICE_0_LOAD*1.25; nice(1) has weight approximately NICE_0_LOAD/1.25.

NICE_0_LOAD (1024) is a magic number throughout the scheduler's arithmetic: it represents the baseline "1". Since the kernel cannot represent fractions, the value 1 is scaled up to 1024.
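As a quick illustration (a minimal userspace sketch, not kernel code; weights copied from the table above), the same 10ms of real runtime advances vruntime at very different rates depending on weight:

#include <stdio.h>

#define NICE_0_LOAD 1024ULL

/* simplified calc_delta_fair(): vruntime delta = delta_exec * NICE_0_LOAD / weight */
static unsigned long long to_vruntime(unsigned long long delta_exec_ns,
                                      unsigned long long weight)
{
    return delta_exec_ns * NICE_0_LOAD / weight;
}

int main(void)
{
    unsigned long long delta = 10000000ULL;  /* 10ms in ns */

    printf("nice  0 (w=1024): vruntime += %llu ns\n", to_vruntime(delta, 1024)); /* 10ms */
    printf("nice -5 (w=3121): vruntime += %llu ns\n", to_vruntime(delta, 3121)); /* ~3.28ms */
    printf("nice  5 (w= 335): vruntime += %llu ns\n", to_vruntime(delta, 335));  /* ~30.6ms */
    return 0;
}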

scheduler_tick() -> task_tick_fair() -> update_curr():

↓

static void update_curr(struct cfs_rq *cfs_rq)
{

    curr->sum_exec_runtime += delta_exec;       // (1) accumulate the current process's actual runtime
    schedstat_add(cfs_rq, exec_clock, delta_exec);

    curr->vruntime += calc_delta_fair(delta_exec, curr);  // (2) accumulate the current process's vruntime
    update_min_vruntime(cfs_rq);

}

↓

static inline u64 calc_delta_fair(u64 delta, struct sched_entity *se)
{
    // (2.1) scale the runtime delta into vruntime according to the process's weight
    if (unlikely(se->load.weight != NICE_0_LOAD))
        delta = __calc_delta(delta, NICE_0_LOAD, &se->load);

    return delta;
}

2.2.2 period and ideal_runtime

In scheduler_tick(), the period and ideal_time are computed from the number of se's on the cfs_rq, to decide whether the current process has used up its time and needs rescheduling:

scheduler_tick() -> task_tick_fair() -> entity_tick() -> check_preempt_tick():

↓

static void
check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
{
    unsigned long ideal_runtime, delta_exec;
    struct sched_entity *se;
    s64 delta;

    /* (1) compute the period and ideal_time */
    ideal_runtime = sched_slice(cfs_rq, curr);  

    /* (2) compute the actual runtime */
    delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;  

    /* (3) if the actual runtime already exceeds ideal_time,
          the current process must be rescheduled: set the TIF_NEED_RESCHED flag
     */
    if (delta_exec > ideal_runtime) {   
        resched_curr(rq_of(cfs_rq));    
        /*
         * The current task ran long enough, ensure it doesn't get
         * re-elected due to buddy favours.
         */
        clear_buddies(cfs_rq, curr);
        return;
    }

    /*
     * Ensure that a task that missed wakeup preemption by a
     * narrow margin doesn't have to wait for a full slice.
     * This also mitigates buddy induced latencies under load.
     */
    if (delta_exec < sysctl_sched_min_granularity)
        return;

    se = __pick_first_entity(cfs_rq);
    delta = curr->vruntime - se->vruntime;

    if (delta < 0)
        return;

    if (delta > ideal_runtime)
        resched_curr(rq_of(cfs_rq));
}

↓

static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
    /* (1.1) compute the period */
    u64 slice = __sched_period(cfs_rq->nr_running + !se->on_rq); 

    /* Question: ideal_runtime is computed here from the lowest-level se and its cfs_rq,
        then scaled level by level up the hierarchy by weight.
        That direction looks wrong; distributing time from the top level downward
        would be more reasonable. Fortunately task_tick_fair() calls entity_tick()
        for every level via for_each_sched_entity(), so the topmost level is checked as well.
     */
    for_each_sched_entity(se) {     
        struct load_weight *load;
        struct load_weight lw;

        cfs_rq = cfs_rq_of(se);
        load = &cfs_rq->load;

        if (unlikely(!se->on_rq)) {
            lw = cfs_rq->load;

            update_load_add(&lw, se->load.weight);
            load = &lw;
        }
        /* (1.2) compute ideal_runtime from the period and this se's weight
            as a share of the cfs_rq's total weight
         */
        slice = __calc_delta(slice, se->load.weight, load);
    }
    return slice;
}

↓

/* (1.1.1) how the period is computed, using the default values:
    if the cfs_rq holds more than 8 (sched_nr_latency) processes, period = n * 0.75ms (sysctl_sched_min_granularity)
    if it holds 8 or fewer, period = 6ms (sysctl_sched_latency)
 */

/*
 * The idea is to set a period in which each task runs once.
 *
 * When there are too many tasks (sched_nr_latency) we have to stretch
 * this period because otherwise the slices get too small.
 *
 * p = (nr <= nl) ? l : l*nr/nl
 */
static u64 __sched_period(unsigned long nr_running)
{
    if (unlikely(nr_running > sched_nr_latency))
        return nr_running * sysctl_sched_min_granularity;
    else
        return sysctl_sched_latency;
}

/*
 * Minimal preemption granularity for CPU-bound tasks:
 * (default: 0.75 msec * (1 + ilog(ncpus)), units: nanoseconds)
 */
unsigned int sysctl_sched_min_granularity = 750000ULL;
unsigned int normalized_sysctl_sched_min_granularity = 750000ULL;

/*
 * is kept at sysctl_sched_latency / sysctl_sched_min_granularity
 */
static unsigned int sched_nr_latency = 8;

/*
 * Targeted preemption latency for CPU-bound tasks:
 * (default: 6ms * (1 + ilog(ncpus)), units: nanoseconds)
 *
 * NOTE: this latency value is not the same as the concept of
 * 'timeslice length' - timeslices in CFS are of variable length
 * and have no persistent notion like in traditional, time-slice
 * based scheduling concepts.
 *
 * (to see the precise effective timeslice length of your workload,
 *  run vmstat and monitor the context-switches (cs) field)
 */
unsigned int sysctl_sched_latency = 6000000ULL;
unsigned int normalized_sysctl_sched_latency = 6000000ULL;
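Plugging the defaults into the formulas above gives a feel for the numbers (assuming equal-weight nice-0 tasks): with 4 runnable tasks, period = 6ms and each task's ideal_runtime = 6ms * 1024/4096 = 1.5ms; with 16 runnable tasks, the period stretches to 16 * 0.75ms = 12ms and each task's ideal_runtime shrinks to 0.75ms.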

2.2.3 The Red-Black Tree


A red-black tree is a kind of self-balancing binary search tree, with these properties:

  • 1. Balanced. No path from the root to a leaf is more than twice as long as any other, so pick_next_task() has complexity O(log n). That is worse than the O(1) algorithm, but the maximum path length is bounded, so the complexity stays controlled.
  • 2. Ordered. For any node, everything in its left subtree is smaller than everything in its right subtree, so the leftmost node holds the minimum value.

The red-black tree is keyed by the processes' vruntime:

enqueue_task_fair() -> enqueue_entity() -> __enqueue_entity():

↓

static void __enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
    struct rb_node **link = &cfs_rq->tasks_timeline.rb_node;
    struct rb_node *parent = NULL;
    struct sched_entity *entry;
    int leftmost = 1;

    /*
     * Find the right place in the rbtree:
     */
    /* (1) find the right insertion point in the rbtree by comparing vruntime */
    while (*link) {
        parent = *link;
        entry = rb_entry(parent, struct sched_entity, run_node);
        /*
         * We dont care about collisions. Nodes with
         * the same key stay together.
         */
        if (entity_before(se, entry)) {
            link = &parent->rb_left;
        } else {
            link = &parent->rb_right;
            leftmost = 0;
        }
    }

    /*
     * Maintain a cache of leftmost tree entries (it is frequently
     * used):
     */
    /* (2) update the cached leftmost (minimum vruntime) entry */
    if (leftmost)
        cfs_rq->rb_leftmost = &se->run_node;

    /* (3) link the node into the rbtree and rebalance */
    rb_link_node(&se->run_node, parent, link);
    rb_insert_color(&se->run_node, &cfs_rq->tasks_timeline);
}
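Thanks to the cached leftmost node, picking the entity with the smallest vruntime is cheap; for reference, the picker of that kernel era is just a cache lookup (lightly trimmed):

static struct sched_entity *__pick_first_entity(struct cfs_rq *cfs_rq)
{
    struct rb_node *left = cfs_rq->rb_leftmost;  /* maintained by __enqueue_entity()/__dequeue_entity() */

    if (!left)
        return NULL;

    return rb_entry(left, struct sched_entity, run_node);
}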

2.2.4 sched_entity and task_group

[Figure: the relationships among rq, cfs_rq, sched_entity and task_group]

Because newer kernels introduced the concept of a task_group, scheduling calculations no longer operate directly on task_struct but on sched_entity. A sched_entity may represent either a task or a task_group->se[cpu]. The figure above illustrates how these structures relate.

The main levels of the hierarchy are:

  • 1. A cpu corresponds to exactly one rq;
  • 2. An rq has one cfs_rq;
  • 3. The cfs_rq uses a red-black tree to organize the sched_entities of one level;
  • 4. If a sched_entity corresponds to a task_struct, the sched_entity and the task map one-to-one;
  • 5. If a sched_entity corresponds to a task_group, it is one of that group's several sched_entities: a task_group keeps an array se[cpu] with one sched_entity per cpu. A sched_entity of this kind owns its own cfs_rq (se->my_q), and that cfs_rq again uses a red-black tree to organize the next level of sched_entities, so the 3-5 hierarchy can recurse. A sketch of the hierarchy walk follows this list.
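A minimal sketch of how fair.c walks this hierarchy (assuming CONFIG_FAIR_GROUP_SCHED; the macro and fields below are the kernel's own, shown here only for orientation):

/* walk from a task's se up through the owning task_groups */
#define for_each_sched_entity(se) \
        for (; se; se = se->parent)

/* cfs_rq_of(se): the cfs_rq this se is queued on
 * se->my_q:      the cfs_rq owned by a group se (NULL when the se is a task) */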

2.2.5 scheduler_tick()

The heart of the algorithm is the scheduler_tick() function, so let's walk through this code in detail.

void scheduler_tick(void)
{
    int cpu = smp_processor_id();
    struct rq *rq = cpu_rq(cpu);
    struct task_struct *curr = rq->curr;

    /* (1) sched_clock_tick() calibration, fixing up an x86 bug */
    sched_clock_tick();     
#ifdef CONFIG_MTK_SCHED_MONITOR
    mt_trace_rqlock_start(&rq->lock);
#endif
    raw_spin_lock(&rq->lock);
#ifdef CONFIG_MTK_SCHED_MONITOR
    mt_trace_rqlock_end(&rq->lock);
#endif

    /* (2) update the cpu-level (rq) clocks:
        rq->clock is the cpu's total running time (question: does this account for cpu hotplug??)
        rq->clock_task is the time actually spent on tasks, = rq->clock total time - rq->prev_irq_time consumed by interrupts
     */
    update_rq_clock(rq);   

    /* (3) call the tick method of the current process's sched_class:
        task_tick_fair() for cfs
        task_tick_rt() for rt
     */
    curr->sched_class->task_tick(rq, curr, 0);

    /* (4) update the cpu-level load */
    update_cpu_load_active(rq);

    /* (5) update the system-level load */
    calc_global_load_tick(rq);

    /* (6) cpufreq_sched governor: compute load for cpu frequency scaling */
    sched_freq_tick(cpu);
    raw_spin_unlock(&rq->lock);

    perf_event_task_tick();
#ifdef CONFIG_MTK_SCHED_MONITOR
    mt_save_irq_counts(SCHED_TICK);
#endif

#ifdef CONFIG_SMP
    /* (7) load balancing */
    rq->idle_balance = idle_cpu(cpu);
    trigger_load_balance(rq);
#endif
    rq_last_tick_reset(rq);
}

|→

static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
{
    struct cfs_rq *cfs_rq;
    struct sched_entity *se = &curr->se;

    /* (3.1) following the parent/child se relationships created by task_group,
        recurse level by level over se and se->parent
     */
    for_each_sched_entity(se) {
        cfs_rq = cfs_rq_of(se);
        /* (3.2) per-se tick handling */
        entity_tick(cfs_rq, se, queued);
    }

    /* (3.3) NUMA load balancing */
    if (static_branch_unlikely(&sched_numa_balancing))
        task_tick_numa(rq, curr);

    if (!rq->rd->overutilized && cpu_overutilized(task_cpu(curr)))
        rq->rd->overutilized = true;
}

||→

static void
entity_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr, int queued)
{
    /*
     * Update run-time statistics of the 'current'.
     */
    /* (3.2.1) update the current se's actual runtime curr->sum_exec_runtime and virtual runtime curr->vruntime,
        and update the cfs_rq's clocks
     */
    update_curr(cfs_rq);

    /*
     * Ensure that runnable average is periodically updated.
     */
    /* (3.2.2) update the entity-level load (PELT calculation) */
    update_load_avg(curr, 1);

    /* (3.2.3) update the task_group's shares */
    update_cfs_shares(cfs_rq);

#ifdef CONFIG_SCHED_HRTICK
    /*
     * queued ticks are scheduled to match the slice, so don't bother
     * validating it and just reschedule.
     */
    if (queued) {
        resched_curr(rq_of(cfs_rq));
        return;
    }
    /*
     * don't let the period tick interfere with the hrtick preemption
     */
    if (!sched_feat(DOUBLE_TICK) &&
            hrtimer_active(&rq_of(cfs_rq)->hrtick_timer))
        return;
#endif

    /* (3.2.4) check whether the current task's ideal_runtime is used up
        and it needs to be rescheduled
     */
    if (cfs_rq->nr_running > 1)
        check_preempt_tick(cfs_rq, curr);
}

|||→

static void update_curr(struct cfs_rq *cfs_rq)
{
    struct sched_entity *curr = cfs_rq->curr;
    u64 now = rq_clock_task(rq_of(cfs_rq));
    u64 delta_exec;

    if (unlikely(!curr))
        return;

    /* (3.2.1.1)  compute the actual execution time of the cfs_rq->curr se */
    delta_exec = now - curr->exec_start;
    if (unlikely((s64)delta_exec <= 0))
        return;

    curr->exec_start = now;

    schedstat_set(curr->statistics.exec_max,
              max(delta_exec, curr->statistics.exec_max));

    curr->sum_exec_runtime += delta_exec;
    // update the cfs_rq's accumulated execution time cfs_rq->exec_clock
    schedstat_add(cfs_rq, exec_clock, delta_exec); 

    /* (3.2.1.2)  compute the virtual execution time (vruntime) of the cfs_rq->curr se */
    curr->vruntime += calc_delta_fair(delta_exec, curr);
    update_min_vruntime(cfs_rq);

    /* (3.2.1.3)  if this se is a task rather than a task_group,
        update the task's time statistics
     */
    if (entity_is_task(curr)) {
        struct task_struct *curtask = task_of(curr);

        trace_sched_stat_runtime(curtask, delta_exec, curr->vruntime);
        // update the per-cpu cpuacct usage of the task's cgroup: ca->cpuusage[cpu]->cpuusage
        cpuacct_charge(curtask, delta_exec);
        // accumulate the runtime of the task's thread group:
        // tsk->signal->cputimer.cputime_atomic.sum_exec_runtime
        account_group_exec_runtime(curtask, delta_exec);
    }

    /* (3.2.1.4)  account the cfs_rq's runtime against the cfs_bandwidth limit:
        cfs_rq->runtime_remaining
     */
    account_cfs_rq_runtime(cfs_rq, delta_exec);
}

2.2.6 How vruntime Changes at a Few Special Moments

Besides the regular scheduler_tick() accounting, cfs scheduling and vruntime need special handling at a few special moments. A few questions tease out the details:

  • 1. What is a new process's vruntime?

If a new process's vruntime started at 0, far below the older processes' values, it would keep the advantage of preempting the CPU for a long time while the old processes starved, which is clearly unfair.

What CFS does: take the maximum of the parent's vruntime (curr->vruntime) and (cfs_rq->min_vruntime + one round's worth of se runtime) and assign it to the newly created process. This minimizes the new process's scheduling impact on existing ones.

_do_fork() -> copy_process() -> sched_fork() -> task_fork_fair():

↓

static void task_fork_fair(struct task_struct *p)
{

    /* (1) if a cfs_rq->curr process exists,
        se->vruntime temporarily starts as curr->vruntime
     */
    if (curr)
        se->vruntime = curr->vruntime;   

    /* (2) set the new se->vruntime */
    place_entity(cfs_rq, se, 1);

    /* (3) if the sysctl_sched_child_runs_first flag is set,
        make sure the forked child runs before its parent */
    if (sysctl_sched_child_runs_first && curr && entity_before(curr, se)) {
        /*
         * Upon rescheduling, sched_class::put_prev_task() will place
         * 'current' within the tree based on its new key value.
         */
        swap(curr->vruntime, se->vruntime);
        resched_curr(rq);
    }

    /* (4) the new process may first run on a different cpu;
        subtracting this cfs_rq's min_vruntime here means that when the se joins
        another cfs_rq, we only have to add that cfs_rq's min_vruntime back
        (see enqueue_entity())
     */
    se->vruntime -= cfs_rq->min_vruntime;

    raw_spin_unlock_irqrestore(&rq->lock, flags);
}

|→

static void
place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
{
    u64 vruntime = cfs_rq->min_vruntime;

    /*
     * The 'current' period is already promised to the current tasks,
     * however the extra weight of the new task will slow them down a
     * little, place the new task so that it fits in the slot that
     * stays open at the end.
     */
    /* (2.1) compute cfs_rq->min_vruntime + one round's worth of se runtime,
        which places the new process at the back of the red-black tree */
    if (initial && sched_feat(START_DEBIT))
        vruntime += sched_vslice(cfs_rq, se);

    /* sleeps up to a single latency don't count. */
    if (!initial) {
        unsigned long thresh = sysctl_sched_latency;

        /*
         * Halve their sleep time's effect, to allow
         * for a gentler effect of sleepers:
         */
        if (sched_feat(GENTLE_FAIR_SLEEPERS))
            thresh >>= 1;

        vruntime -= thresh;
    }

    /* (2.2) take the maximum of (curr->vruntime) and
    (cfs_rq->min_vruntime + one round's worth of runtime)
     */
    /* ensure we never gain time by being placed backwards. */
    se->vruntime = max_vruntime(se->vruntime, vruntime);
}


static void
enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
{
    /*
     * Update the normalized vruntime before updating min_vruntime
     * through calling update_curr().
     */
    /* (4.1) at enqueue time, add cfs_rq->min_vruntime back onto se->vruntime */
    if (!(flags & ENQUEUE_WAKEUP) || (flags & ENQUEUE_WAKING))
        se->vruntime += cfs_rq->min_vruntime;

}
  • 2. Does a sleeping process's vruntime stay unchanged the whole time?

If a sleeper's vruntime stayed frozen while the running processes' vruntime kept advancing, then by the time the sleeper finally woke, its vruntime would be far smaller than everyone else's, giving it the advantage of holding the CPU for a long time while the other processes starved. That is clearly unfairness of another kind.

What CFS does: when a sleeping process is woken, its vruntime is re-set based on the min_vruntime value, with a certain amount of compensation, but not too much.

static void
enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
{

    if (flags & ENQUEUE_WAKEUP) {
        /* (1) compute the process's vruntime after wakeup */
        place_entity(cfs_rq, se, 0);
        enqueue_sleeper(cfs_rq, se);
    }


}

|→

static void
place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
{
    /* (1.1) start from the cfs_rq's current minimum, min_vruntime */
    u64 vruntime = cfs_rq->min_vruntime;

    /*
     * The 'current' period is already promised to the current tasks,
     * however the extra weight of the new task will slow them down a
     * little, place the new task so that it fits in the slot that
     * stays open at the end.
     */
    if (initial && sched_feat(START_DEBIT))
        vruntime += sched_vslice(cfs_rq, se);

    /* sleeps up to a single latency don't count. */
    /* (1.2) compensate relative to the minimum min_vruntime;
        the default compensation is 6ms (sysctl_sched_latency)
     */
    if (!initial) {
        unsigned long thresh = sysctl_sched_latency;

        /*
         * Halve their sleep time's effect, to allow
         * for a gentler effect of sleepers:
         */
        if (sched_feat(GENTLE_FAIR_SLEEPERS))
            thresh >>= 1;

        vruntime -= thresh;
    }

    /* ensure we never gain time by being placed backwards. */
    se->vruntime = max_vruntime(se->vruntime, vruntime);
}
  • 3. Does a sleeping process preempt the CPU immediately when it wakes?

By default a wakeup immediately checks whether preemption is possible. Because the woken process's vruntime was compensated relative to the cfs_rq's minimum min_vruntime, it will normally preempt the current process.

CFS can disable wakeup preemption by clearing WAKEUP_PREEMPTION, though that gives up the preemption behavior.

try_to_wake_up() -> ttwu_queue() -> ttwu_do_activate() -> ttwu_do_wakeup() -> check_preempt_curr() -> check_preempt_wakeup()

↓

static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
{

    /*
     * Batch and idle tasks do not preempt non-idle tasks (their preemption
     * is driven by the tick):
     */
    /* (1) if WAKEUP_PREEMPTION is not set, do not preempt on wakeup */
    if (unlikely(p->policy != SCHED_NORMAL) || !sched_feat(WAKEUP_PREEMPTION))
        return;


preempt:
    resched_curr(rq);


}
  • 4. Does vruntime change when a process migrates from one CPU to another?

Different cpus carry different loads, so the vruntime levels of the se's on different cfs_rq's differ. If vruntime stayed unchanged across a migration, that would also be very unfair.

CFS uses a clever trick: on leaving the old cfs_rq it subtracts the old cfs_rq's min_vruntime, and on joining the new cfs_rq it adds the new cfs_rq's min_vruntime back.

static void
dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
{

    /*
     * Normalize the entity after updating the min_vruntime because the
     * update can refer to the ->curr item and we need to reflect this
     * movement in our normalized position.
     */
    /* (1) leaving the old cfs_rq: subtract the old cfs_rq's min_vruntime */
    if (!(flags & DEQUEUE_SLEEP))
        se->vruntime -= cfs_rq->min_vruntime;

}

static void
enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
{
    /*
     * Update the normalized vruntime before updating min_vruntime
     * through calling update_curr().
     */
    /* (2) joining the new cfs_rq: add the new cfs_rq's min_vruntime */
    if (!(flags & ENQUEUE_WAKEUP) || (flags & ENQUEUE_WAKING))
        se->vruntime += cfs_rq->min_vruntime;

}

2.2.7 cfs bandwidth


  • 1. cfs bandwidth is a per-task_group configuration; a task_group's bandwidth is controlled through a struct cfs_bandwidth *cfs_b data structure.
struct cfs_bandwidth {
#ifdef CONFIG_CFS_BANDWIDTH
    raw_spinlock_t lock;
    ktime_t period;     // the cfs bandwidth monitoring period; defaults to default_cfs_period(), 0.1s
    u64 quota;          // the task_group's runtime quota within one monitoring period; defaults to RUNTIME_INF (unlimited)
    u64 runtime;        // runtime the task_group still has left within the current monitoring period
    s64 hierarchical_quota;
    u64 runtime_expires;

    int idle, period_active;
    struct hrtimer period_timer;
    struct hrtimer slack_timer;
    struct list_head throttled_cfs_rq;

    /* statistics */
    int nr_periods, nr_throttled;
    u64 throttled_time;
#endif
};

The key fields: cfs_b->period is the monitoring period, cfs_b->quota is the tg's runtime quota, and cfs_b->runtime is the tg's remaining runtime. At the start of each monitoring period cfs_b->runtime equals cfs_b->quota, and it decreases as the tg runs; once the runtime is used up, the tg has exceeded its bandwidth and throttling is triggered;

cfs bandwidth exists to serve CGROUP_SCHED, so cfs_b->quota starts out as RUNTIME_INF (unlimited); once CGROUP_SCHED is enabled you have to configure these parameters yourself.
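With the cgroup-v1 cpu controller, quota and period are exposed as cpu.cfs_quota_us and cpu.cfs_period_us (a hypothetical session; the mount point and group name are only examples):

# limit group "demo" to 50% of one CPU: 50ms of quota per 100ms period
mkdir /sys/fs/cgroup/cpu/demo
echo 100000 > /sys/fs/cgroup/cpu/demo/cpu.cfs_period_us
echo 50000 > /sys/fs/cgroup/cpu/demo/cpu.cfs_quota_us
echo $PID > /sys/fs/cgroup/cpu/demo/tasks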

  • 2. Because a task_group creates a cfs_rq on every cpu, cfs_b->quota is shared by the processes of all of those percpu cfs_rq's; while running, each percpu cfs_rq must request time from tg->cfs_bandwidth->runtime;
scheduler_tick() -> task_tick_fair() -> entity_tick() -> update_curr() -> account_cfs_rq_runtime()

↓

static __always_inline
void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec)
{
    if (!cfs_bandwidth_used() || !cfs_rq->runtime_enabled)
        return;

    __account_cfs_rq_runtime(cfs_rq, delta_exec);
}

|→

static void __account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec)
{
    /* (1) subtract the consumed time from the quota this cfs_rq has already claimed (cfs_rq->runtime_remaining) */
    /* dock delta_exec before expiring quota (as it could span periods) */
    cfs_rq->runtime_remaining -= delta_exec;

    /* (2) check whether the claimed time has expired */
    expire_cfs_rq_runtime(cfs_rq);

    /* (3) if the cfs_rq still has claimed quota left, return */
    if (likely(cfs_rq->runtime_remaining > 0))
        return;

    /*
     * if we're unable to extend our runtime we resched so that the active
     * hierarchy can be throttled
     */
    /* (4) the cfs_rq has used up its claimed quota: try to claim a new slice from the tg's cfs_b->runtime.
        If that fails, the whole tg is out of runtime; mark the current process for rescheduling.
        On interrupt return, when schedule() runs and finds cfs_rq->runtime_remaining <= 0,
        throttle_cfs_rq() performs the actual throttling of the cfs_rq
     */
    if (!assign_cfs_rq_runtime(cfs_rq) && likely(cfs_rq->curr))
        resched_curr(rq_of(cfs_rq));
}

||→

static int assign_cfs_rq_runtime(struct cfs_rq *cfs_rq)
{
    struct task_group *tg = cfs_rq->tg;
    struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(tg);
    u64 amount = 0, min_amount, expires;

    /* (4.1) the default slice handed out by cfs_b is 5ms */
    /* note: this is a positive sum as runtime_remaining <= 0 */
    min_amount = sched_cfs_bandwidth_slice() - cfs_rq->runtime_remaining;

    raw_spin_lock(&cfs_b->lock);
    if (cfs_b->quota == RUNTIME_INF)
        /* (4.2) RUNTIME_INF: time can be handed out forever */
        amount = min_amount;
    else {
        start_cfs_bandwidth(cfs_b);

        /* (4.3) subtract the handed-out slice from the remaining cfs_b->runtime */
        if (cfs_b->runtime > 0) {
            amount = min(cfs_b->runtime, min_amount);
            cfs_b->runtime -= amount;
            cfs_b->idle = 0;
        }
    }
    expires = cfs_b->runtime_expires;
    raw_spin_unlock(&cfs_b->lock);

    /* (4.4) credit the handed-out slice to this cfs_rq */
    cfs_rq->runtime_remaining += amount;
    /*
     * we may have advanced our local expiration to account for allowed
     * spread between our sched_clock and the one on which runtime was
     * issued.
     */
    if ((s64)(expires - cfs_rq->runtime_expires) > 0)
        cfs_rq->runtime_expires = expires;

    /* (4.5) did we get enough runtime? */
    return cfs_rq->runtime_remaining > 0;
}
  • 3. At enqueue_task_fair(), put_prev_task_fair() and pick_next_task_fair(), the kernel checks whether the cfs_rq has hit its throttle; if so, the cfs_rq is dequeued and stops running;
enqueue_task_fair() -> enqueue_entity() -> check_enqueue_throttle() -> throttle_cfs_rq()
put_prev_task_fair() -> put_prev_entity() -> check_cfs_rq_runtime() -> throttle_cfs_rq()
pick_next_task_fair() -> check_cfs_rq_runtime() -> throttle_cfs_rq()


static void check_enqueue_throttle(struct cfs_rq *cfs_rq)
{
    if (!cfs_bandwidth_used())
        return;

    /* an active group must be handled by the update_curr()->put() path */
    if (!cfs_rq->runtime_enabled || cfs_rq->curr)
        return;

    /* (1.1) if already throttled, return */
    /* ensure the group is not already throttled */
    if (cfs_rq_throttled(cfs_rq))
        return;

    /* update runtime allocation */
    /* (1.2) bring the cfs runtime accounting up to date */
    account_cfs_rq_runtime(cfs_rq, 0);

    /* (1.3) if cfs_rq->runtime_remaining <= 0, start throttling */
    if (cfs_rq->runtime_remaining <= 0)
        throttle_cfs_rq(cfs_rq);
}

/* conditionally throttle active cfs_rq's from put_prev_entity() */
static bool check_cfs_rq_runtime(struct cfs_rq *cfs_rq)
{
    if (!cfs_bandwidth_used())
        return false;

    /* (2.1) if cfs_rq->runtime_remaining still holds runtime, return */
    if (likely(!cfs_rq->runtime_enabled || cfs_rq->runtime_remaining > 0))
        return false;

    /*
     * it's possible for a throttled entity to be forced into a running
     * state (e.g. set_curr_task), in this case we're finished.
     */
    /* (2.2) if already throttled, return */
    if (cfs_rq_throttled(cfs_rq))
        return true;

    /* (2.3) out of runtime and not yet throttled: perform the throttle */
    throttle_cfs_rq(cfs_rq);
    return true;
}

static void throttle_cfs_rq(struct cfs_rq *cfs_rq)
{
    struct rq *rq = rq_of(cfs_rq);
    struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
    struct sched_entity *se;
    long task_delta, dequeue = 1;
    bool empty;

    se = cfs_rq->tg->se[cpu_of(rq_of(cfs_rq))];

    /* freeze hierarchy runnable averages while throttled */
    rcu_read_lock();
    walk_tg_tree_from(cfs_rq->tg, tg_throttle_down, tg_nop, (void *)rq);
    rcu_read_unlock();

    task_delta = cfs_rq->h_nr_running;
    for_each_sched_entity(se) {
        struct cfs_rq *qcfs_rq = cfs_rq_of(se);
        /* throttled entity or throttle-on-deactivate */
        if (!se->on_rq)
            break;

        /* (3.1) throttle action 1: dequeue the cfs_rq so it stops running */
        if (dequeue)
            dequeue_entity(qcfs_rq, se, DEQUEUE_SLEEP);
        qcfs_rq->h_nr_running -= task_delta;

        if (qcfs_rq->load.weight)
            dequeue = 0;
    }

    if (!se)
        sub_nr_running(rq, task_delta);

    /* (3.2) throttle action 2: mark cfs_rq->throttled */
    cfs_rq->throttled = 1;
    cfs_rq->throttled_clock = rq_clock(rq);
    raw_spin_lock(&cfs_b->lock);
    empty = list_empty(&cfs_b->throttled_cfs_rq);

    /*
     * Add to the _head_ of the list, so that an already-started
     * distribute_cfs_runtime will not see us
     */
    list_add_rcu(&cfs_rq->throttled_list, &cfs_b->throttled_cfs_rq);

    /*
     * If we're the first throttled task, make sure the bandwidth
     * timer is running.
     */
    if (empty)
        start_cfs_bandwidth(cfs_b);

    raw_spin_unlock(&cfs_b->lock);
}
  • 4. For each tg's cfs_b, the system starts a periodic timer cfs_b->period_timer with period cfs_b->period. Its main job is, at the end of each period, to check whether any cfs_rq has been throttled, unthrottle it if so, and begin a new round of monitoring;
sched_cfs_period_timer() -> do_sched_cfs_period_timer()

↓

static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun)
{
    u64 runtime, runtime_expires;
    int throttled;

    /* no need to continue the timer with no bandwidth constraint */
    if (cfs_b->quota == RUNTIME_INF)
        goto out_deactivate;

    throttled = !list_empty(&cfs_b->throttled_cfs_rq);
    cfs_b->nr_periods += overrun;

    /*
     * idle depends on !throttled (for the case of a large deficit), and if
     * we're going inactive then everything else can be deferred
     */
    if (cfs_b->idle && !throttled)
        goto out_deactivate;

    /* (1) a new period begins: refill cfs_b->runtime from cfs_b->quota */
    __refill_cfs_bandwidth_runtime(cfs_b);

    if (!throttled) {
        /* mark as potentially idle for the upcoming period */
        cfs_b->idle = 1;
        return 0;
    }

    /* account preceding periods in which throttling occurred */
    cfs_b->nr_throttled += overrun;

    runtime_expires = cfs_b->runtime_expires;

    /*
     * This check is repeated as we are holding onto the new bandwidth while
     * we unthrottle. This can potentially race with an unthrottled group
     * trying to acquire new bandwidth from the global pool. This can result
     * in us over-using our runtime if it is all used during this loop, but
     * only by limited amounts in that extreme case.
     */
    /* (2) unthrottle every throttled cfs_rq on cfs_b->throttled_cfs_rq */
    while (throttled && cfs_b->runtime > 0) {
        runtime = cfs_b->runtime;
        raw_spin_unlock(&cfs_b->lock);
        /* we can't nest cfs_b->lock while distributing bandwidth */
        runtime = distribute_cfs_runtime(cfs_b, runtime,
                         runtime_expires);
        raw_spin_lock(&cfs_b->lock);

        throttled = !list_empty(&cfs_b->throttled_cfs_rq);

        cfs_b->runtime -= min(runtime, cfs_b->runtime);
    }

    /*
     * While we are ensured activity in the period following an
     * unthrottle, this also covers the case in which the new bandwidth is
     * insufficient to cover the existing bandwidth deficit.  (Forcing the
     * timer to remain active while there are any throttled entities.)
     */
    cfs_b->idle = 0;

    return 0;

out_deactivate:
    return 1;
}

|→

static u64 distribute_cfs_runtime(struct cfs_bandwidth *cfs_b,
        u64 remaining, u64 expires)
{
    struct cfs_rq *cfs_rq;
    u64 runtime;
    u64 starting_runtime = remaining;

    rcu_read_lock();
    list_for_each_entry_rcu(cfs_rq, &cfs_b->throttled_cfs_rq,
                throttled_list) {
        struct rq *rq = rq_of(cfs_rq);

        raw_spin_lock(&rq->lock);
        if (!cfs_rq_throttled(cfs_rq))
            goto next;

        runtime = -cfs_rq->runtime_remaining + 1;
        if (runtime > remaining)
            runtime = remaining;
        remaining -= runtime;

        cfs_rq->runtime_remaining += runtime;
        cfs_rq->runtime_expires = expires;

        /* (2.1) lift the throttle */
        /* we check whether we're throttled above */
        if (cfs_rq->runtime_remaining > 0)
            unthrottle_cfs_rq(cfs_rq);

next:
        raw_spin_unlock(&rq->lock);

        if (!remaining)
            break;
    }
    rcu_read_unlock();

    return starting_runtime - remaining;
}

||→

void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
{
    struct rq *rq = rq_of(cfs_rq);
    struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
    struct sched_entity *se;
    int enqueue = 1;
    long task_delta;

    se = cfs_rq->tg->se[cpu_of(rq)];

    cfs_rq->throttled = 0;

    update_rq_clock(rq);

    raw_spin_lock(&cfs_b->lock);
    cfs_b->throttled_time += rq_clock(rq) - cfs_rq->throttled_clock;
    list_del_rcu(&cfs_rq->throttled_list);
    raw_spin_unlock(&cfs_b->lock);

    /* update hierarchical throttle state */
    walk_tg_tree_from(cfs_rq->tg, tg_nop, tg_unthrottle_up, (void *)rq);

    if (!cfs_rq->load.weight)
        return;

    task_delta = cfs_rq->h_nr_running;
    for_each_sched_entity(se) {
        if (se->on_rq)
            enqueue = 0;

        cfs_rq = cfs_rq_of(se);
        /* (2.1.1) re-enqueue so it runs again */
        if (enqueue)
            enqueue_entity(cfs_rq, se, ENQUEUE_WAKEUP);
        cfs_rq->h_nr_running += task_delta;

        if (cfs_rq_throttled(cfs_rq))
            break;
    }

    if (!se)
        add_nr_running(rq, task_delta);

    /* determine whether we need to wake up potentially idle cpu */
    if (rq->curr == rq->idle && rq->cfs.nr_running)
        resched_curr(rq);
}

2.2.8 sched sysctl Parameters

The kernel registers many sysctl parameters for tuning; they can be seen under "/proc/sys/kernel/":

 # ls /proc/sys/kernel/sched_*
sched_cfs_boost                         
sched_child_runs_first          // whether a forked child runs before its parent
sched_latency_ns                // for computing the cfs period: the period returned when the runnable count is at most sched_nr_latency, in ns
sched_migration_cost_ns                 
sched_min_granularity_ns        // for computing the cfs period: per-process granularity when the runnable count exceeds sched_nr_latency, in ns
                                // cfs period = nr_running * sysctl_sched_min_granularity;
sched_nr_migrate                        
sched_rr_timeslice_ms           // time-slice length of SCHED_RR rt processes, in ms
sched_rt_period_us              // accounting period of the rt-throttle
sched_rt_runtime_us             // how long rt processes may run within one rt-throttle period
sched_shares_window_ns
sched_time_avg_ms               // aging period of the rt load (rq->rt_avg)
sched_tunable_scaling
sched_wakeup_granularity_ns
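These are plain sysctls, so they can be inspected and tuned on the fly (a hypothetical session; the values are illustrative, not recommendations):

# stretch the cfs target latency to 12ms and let forked children run first
echo 12000000 > /proc/sys/kernel/sched_latency_ns
echo 1 > /proc/sys/kernel/sched_child_runs_first

# rt-throttle: allow rt tasks at most 800ms out of every 1s
echo 1000000 > /proc/sys/kernel/sched_rt_period_us
echo 800000 > /proc/sys/kernel/sched_rt_runtime_us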

The corresponding definitions are in kern_table[]:

static struct ctl_table kern_table[] = {
    {
        .procname   = "sched_child_runs_first",
        .data       = &sysctl_sched_child_runs_first,
        .maxlen     = sizeof(unsigned int),
        .mode       = 0644,
        .proc_handler   = proc_dointvec,
    },
#ifdef CONFIG_SCHED_DEBUG
    {
        .procname   = "sched_min_granularity_ns",
        .data       = &sysctl_sched_min_granularity,
        .maxlen     = sizeof(unsigned int),
        .mode       = 0644,
        .proc_handler   = sched_proc_update_handler,
        .extra1     = &min_sched_granularity_ns,
        .extra2     = &max_sched_granularity_ns,
    },
    {
        .procname   = "sched_latency_ns",
        .data       = &sysctl_sched_latency,
        .maxlen     = sizeof(unsigned int),
        .mode       = 0644,
        .proc_handler   = sched_proc_update_handler,
        .extra1     = &min_sched_granularity_ns,
        .extra2     = &max_sched_granularity_ns,
    },
    {
        .procname   = "sched_wakeup_granularity_ns",
        .data       = &sysctl_sched_wakeup_granularity,
        .maxlen     = sizeof(unsigned int),
        .mode       = 0644,
        .proc_handler   = sched_proc_update_handler,
        .extra1     = &min_wakeup_granularity_ns,
        .extra2     = &max_wakeup_granularity_ns,
    },
#ifdef CONFIG_SMP
    {
        .procname   = "sched_tunable_scaling",
        .data       = &sysctl_sched_tunable_scaling,
        .maxlen     = sizeof(enum sched_tunable_scaling),
        .mode       = 0644,
        .proc_handler   = sched_proc_update_handler,
        .extra1     = &min_sched_tunable_scaling,
        .extra2     = &max_sched_tunable_scaling,
    },
    {
        .procname   = "sched_migration_cost_ns",
        .data       = &sysctl_sched_migration_cost,
        .maxlen     = sizeof(unsigned int),
        .mode       = 0644,
        .proc_handler   = proc_dointvec,
    },
    {
        .procname   = "sched_nr_migrate",
        .data       = &sysctl_sched_nr_migrate,
        .maxlen     = sizeof(unsigned int),
        .mode       = 0644,
        .proc_handler   = proc_dointvec,
    },
    {
        .procname   = "sched_time_avg_ms",
        .data       = &sysctl_sched_time_avg,
        .maxlen     = sizeof(unsigned int),
        .mode       = 0644,
        .proc_handler   = proc_dointvec,
    },
    {
        .procname   = "sched_shares_window_ns",
        .data       = &sysctl_sched_shares_window,
        .maxlen     = sizeof(unsigned int),
        .mode       = 0644,
        .proc_handler   = proc_dointvec,
    },
#endif /* CONFIG_SMP */

#endif /* CONFIG_SCHED_DEBUG */
    {
        .procname   = "sched_rt_period_us",
        .data       = &sysctl_sched_rt_period,
        .maxlen     = sizeof(unsigned int),
        .mode       = 0644,
        .proc_handler   = sched_rt_handler,
    },
    {
        .procname   = "sched_rt_runtime_us",
        .data       = &sysctl_sched_rt_runtime,
        .maxlen     = sizeof(int),
        .mode       = 0644,
        .proc_handler   = sched_rt_handler,
    },
    {
        .procname   = "sched_rr_timeslice_ms",
        .data       = &sched_rr_timeslice,
        .maxlen     = sizeof(int),
        .mode       = 0644,
        .proc_handler   = sched_rr_handler,
    },
#ifdef CONFIG_SCHED_AUTOGROUP
    {
        .procname   = "sched_autogroup_enabled",
        .data       = &sysctl_sched_autogroup_enabled,
        .maxlen     = sizeof(unsigned int),
        .mode       = 0644,
        .proc_handler   = proc_dointvec_minmax,
        .extra1     = &zero,
        .extra2     = &one,
    },
#endif
#ifdef CONFIG_CFS_BANDWIDTH
    {
        .procname   = "sched_cfs_bandwidth_slice_us",
        .data       = &sysctl_sched_cfs_bandwidth_slice,
        .maxlen     = sizeof(unsigned int),
        .mode       = 0644,
        .proc_handler   = proc_dointvec_minmax,
        .extra1     = &one,
    },
#endif
#ifdef CONFIG_SCHED_TUNE
    {
        .procname   = "sched_cfs_boost",
        .data       = &sysctl_sched_cfs_boost,
        .maxlen     = sizeof(sysctl_sched_cfs_boost),
#ifdef CONFIG_CGROUP_SCHEDTUNE
        .mode       = 0444,
#else
        .mode       = 0644,
#endif
        .proc_handler   = &sysctl_sched_cfs_boost_handler,
        .extra1     = &zero,
        .extra2     = &one_hundred,
    },
#endif

}

2.2.9 "/proc/sched_debug"

/proc/sched_debug prints detailed scheduler information; the corresponding code lives in "kernel/sched/debug.c":

# cat /proc/sched_debug
Sched Debug Version: v0.11, 4.4.22+ #95
ktime                                   : 1036739325.473178
sched_clk                               : 1036739500.521349
cpu_clk                                 : 1036739500.521888
jiffies                                 : 4554077128

sysctl_sched
  .sysctl_sched_latency                    : 10.000000
  .sysctl_sched_min_granularity            : 2.250000
  .sysctl_sched_wakeup_granularity         : 2.000000
  .sysctl_sched_child_runs_first           : 0
  .sysctl_sched_features                   : 233275
  .sysctl_sched_tunable_scaling            : 0 (none)

cpu#0: Online
  .nr_running                    : 1                    // rq->nr_running, total runnable processes on the rq: cfs_rq + rt_rq + dl_rq
  .load                          : 1024                 // rq->load.weight, the rq's total weight
  .nr_switches                   : 288653745
  .nr_load_updates               : 102586831
  .nr_uninterruptible            : 386195
  .next_balance                  : 4554.077177
  .curr->pid                     : 5839                 // pid of the current process, rq->curr
  .clock                         : 1036739583.441965    // the rq's total running time, in s
  .clock_task                    : 1036739583.441965    // the rq's total task running time, in s
  .cpu_load[0]                   : 178                  // cpu-level load values, rq->cpu_load[]
  .cpu_load[1]                   : 341
  .cpu_load[2]                   : 646
  .cpu_load[3]                   : 633
  .cpu_load[4]                   : 448
  .yld_count                     : 495661
  .sched_count                   : 290639530
  .sched_goidle                  : 95041623
  .avg_idle                      : 66000
  .max_idle_balance_cost         : 33000
  .ttwu_count                    : 169556205
  .ttwu_local                    : 156832675

cfs_rq[0]:/bg_non_interactive                           // leaf cfs_rq, "/bg_non_interactive"
  .exec_clock                    : 2008394.796159       // cfs_rq->exec_clock
  .MIN_vruntime                  : 0.000001
  .min_vruntime                  : 4932671.018182
  .max_vruntime                  : 0.000001
  .spread                        : 0.000000
  .spread0                       : -148755265.877002
  .nr_spread_over                : 5018
  .nr_running                    : 0                    // cfs_rq->nr_running, total runnable processes on this cfs_rq
  .load                          : 0                    // cfs_rq->load.weight
  .load_avg                      : 0                    // cfs_rq->avg.load_avg
  .runnable_load_avg             : 0                    // cfs_rq->runnable_load_avg
  .util_avg                      : 0                    // cfs_rq->avg.util_avg
  .removed_load_avg              : 0
  .removed_util_avg              : 0
  .tg_load_avg_contrib           : 0
  .tg_load_avg                   : 943
  .se->exec_start                : 1036739470.724118    // print_cfs_group_stats(), se = cfs_rq->tg->se[cpu]
  .se->vruntime                  : 153687902.677263
  .se->sum_exec_runtime          : 2008952.798927
  .se->statistics.wait_start     : 0.000000
  .se->statistics.sleep_start    : 0.000000
  .se->statistics.block_start    : 0.000000
  .se->statistics.sleep_max      : 0.000000
  .se->statistics.block_max      : 0.000000
  .se->statistics.exec_max       : 384.672539
  .se->statistics.slice_max      : 110.416539
  .se->statistics.wait_max       : 461.053539
  .se->statistics.wait_sum       : 4583320.426021
  .se->statistics.wait_count     : 4310369
  .se->load.weight               : 2
  .se->avg.load_avg              : 0
  .se->avg.util_avg              : 0

cfs_rq[0]:/                                             // root cfs_rq, "/"
  .exec_clock                    : 148912219.736328
  .MIN_vruntime                  : 0.000001
  .min_vruntime                  : 153687936.895184
  .max_vruntime                  : 0.000001
  .spread                        : 0.000000
  .spread0                       : 0.000000
  .nr_spread_over                : 503579
  .nr_running                    : 1
  .load                          : 1024
  .load_avg                      : 4815
  .runnable_load_avg             : 168
  .util_avg                      : 63
  .removed_load_avg              : 0
  .removed_util_avg              : 0
  .tg_load_avg_contrib           : 4815
  .tg_load_avg                   : 9051

rt_rq[0]:/bg_non_interactive                            // leaf rt_rq, "/bg_non_interactive"
  .rt_nr_running                 : 0
  .rt_throttled                  : 0
  .rt_time                       : 0.000000
  .rt_runtime                    : 700.000000

rt_rq[0]:/                                              // root rt_rq, "/"
  .rt_nr_running                 : 0
  .rt_throttled                  : 0
  .rt_time                       : 0.000000
  .rt_runtime                    : 800.000000

dl_rq[0]:                                               // dl_rq
  .dl_nr_running                 : 0

runnable tasks:                                         // not the rq's current runnable tasks: every process in the system is scanned to see which last ran on this cpu, and many of them are asleep;
                                                        // rq->nr_running=1 above means only one process is runnable, yet dozens of sleeping processes are listed;

                                                        // an "R" in the first column marks the currently running process, rq->curr
                                                        // "tree-key": p->se.vruntime, the process's vruntime
                                                        // "wait-time": p->se.statistics.wait_sum, time spent on the runqueue (runnable + running)
                                                        // "sum-exec": p->se.sum_exec_runtime, accumulated execution (running) time
                                                        // "sum-sleep": p->se.statistics.sum_sleep_runtime, the process's sleep time

            task   PID         tree-key  switches  prio     wait-time             sum-exec        sum-sleep
----------------------------------------------------------------------------------------------------------
            init     1 153554847.251576     11927   120     23938.628500     23714.949808 1036236697.068574 /
        kthreadd     2 153613100.582264      7230   120      4231.780138     11601.352220 1036459940.732829 /
     ksoftirqd/0     3 153687920.598799   2123535   120   2612543.672044    485896.233949 1033641057.952048 /
    kworker/0:0H     5       867.040456         6   100         1.690538         2.306692     13011.504340 /
     rcu_preempt     7 153687932.055261  38012901   120  19389366.435276   4762068.709434 1012596083.722693 /
       rcu_sched     8 153687932.006723   9084101   120   9536442.439335    832973.285818 1026372474.208896 /
          rcu_bh     9        44.056109         2   120         3.062001         0.071692         0.000000 /
     migration/0    10         0.000000    810915     0         0.157999   1405490.418215         0.000000 /
       writeback    41  75740016.734657        11   100         6.979694        22.657923 515725974.508217 /
        cfg80211    68 145389776.829002        16   100        19.614385        22.409536 981346170.111828 /
     pmic_thread    69        -4.853692        59     1         0.000000       416.570075     90503.424677 /
   cfinteractive    70         0.000000   6128239     0         0.000000    835706.912900         0.000000 /
     ion_history    74 153687775.391059   1077475   120    598925.096753    155563.560671 1035979729.028569 /
          vmstat    88 153613102.428420       581   100       628.318084       213.543232 1036470246.848623 /
          bioset   124       413.230915         2   100         0.000000         0.091461         0.065000 /
 mt_gpufreq_inpu   135         0.000000        51     0         0.000000         2.947619         0.000000 /
   kworker/u20:2   165 153687842.212961  19921527   120   1900166.780690   3504856.564055 1031329325.876435 /
      disp_check   168         0.000000    345697    12         0.000000    254109.286291         0.049692 /
 disp_delay_trig   172         0.000000     25049     5         0.000000      8460.322268         0.050769 /
 kpi_wait_4_hw_v   184  77456637.322727     15150   120     17756.340357      2238.503671 525392630.298747 /
 teei_switch_thr   202         0.000000       150    49         0.000000      4726.615934         0.000000 /
     hang_detect   204         0.000000     34740     0         0.000000    190287.797938         0.063154 /
  irq/680-stk_ps   220         0.000000         6    49         0.193308         0.703539         0.000000 /
 sub_touch_resum   224       457.923116         2   100         0.000000         0.085691         0.046539 /
 irq/677-sub_tou   226         0.000000         2    49         0.000000         0.073847         0.000000 /
 irq/672-fuelg_i   228         0.000000         4    49         0.000000         0.415462         0.000000 /
 irq/845-primary   230         0.000000         2    49         0.000000         0.074847         0.000000 /
  dm_bufio_cache   231  61370971.457226         3   100         0.000000         0.924077 410081439.629970 /
 binder_watchdog   233 153687520.358159    529297   120    133539.189144     70472.061275 1036525567.186647 /
    cs35l35_eint   235       624.068320         2   100         0.000000         0.091923         0.049693 /
 ipi_cpu_dvfs_rt   240 153687569.261082   4721359   120   3352016.787765   1096346.814808 1032281259.787992 /
        hps_main   248 153687929.657234  24442793   100  11377751.354137  44892964.862782 980455478.318003 /
           pd_wq   251       692.938392         2   100         0.000000         0.254461         0.050000 /
         pd_task   254         0.000000   9095537    98         0.000000   1645126.407931         0.031231 /
  pvr_defer_free   257 151412505.141936      2089   139      1592.921777      1280.969986 1023235084.781867 /
  pvr_device_wdg   258 153178077.742167       379   120       242.453158        56.183535 1034057592.348265 /
             mwp   259       744.637922         3   100         0.024154         0.092615         0.100154 /
 dsx_rebuild_wor   267       753.712295         2   100         0.018384         0.100231         0.044077 /
          wdtk-0   273         0.000000     51867     0       223.511770     83562.991491         0.044230 /
          wdtk-2   275         0.310307     91023     0         7.662692     10810.010102         0.037154 /
          wdtk-3   276         0.000000     57334     0         0.082539      7904.126023         4.045230 /
          wdtk-4   277         0.000000    163449     0         0.102538     26621.315643         4.056077 /
          wdtk-5   278         0.000000     93033     0         0.771692     11306.508550         4.061615 /
          wdtk-6   280         0.000000     64678     0         0.490461      7293.603126         4.060538 /
          wdtk-7   281         0.000000     56991     0         0.545615      8033.133066         4.036615 /
          wdtk-8   282         0.007228    161917     0         0.490693     24287.956471         4.015231 /
          wdtk-9   283         0.000000     75886     0         0.337153      7588.440841         4.041154 /
 test_report_wor   287       771.952673         2   100         0.000000         0.084769         0.036000 /
    kworker/0:1H   299 153685958.573414    448410   100     58503.103145     82742.839000 1036576533.452228 /
         ueventd   301 153656034.928627     15997   120     11669.503522     30437.800780 1036570321.312576 /
    jbd2/sdc40-8   313 153685266.285982    396029   120    211212.571451    243355.373101 1036254798.655632 /
 ext4-rsv-conver   314      1322.048935         2   100         0.000000         0.449385         0.156692 /
 ext4-rsv-conver   319      1347.134123         2   100         0.000000         0.441845         0.152924 /
 ext4-rsv-conver   334      1417.595511         2   100         1.169616         0.066769         0.151307 /
     logd.daemon   354      9386.287600         9   118         1.011306         6.314232     31502.758998 /
     logd.writer   357 153687530.563333   4564190   118  11079641.119214   3866510.152188 1021777212.794489 /
      logd.klogd   367 153687933.066482  12443269   118  13962178.199616  13536643.532358 1009225310.240325 /
     logd.auditd   368 151566168.672440       664   118       174.122396      1168.889001 1023473784.526020 /
 logd.reader.per 25015 153687529.628782   3044070   118   5119327.722443   3596227.191390 611872098.002749 /
         ut_tlog   351 153687782.507327   1045264   120    281668.596996    111876.661992 1036325851.851125 /
     Secure Call   352      1429.938486         3   100         0.655616         0.292616         4.131307 /
       Bdrv Call   353      1430.041237         3   100         1.754154         0.239693         4.057077 /
         kauditd   359 151566168.493297       673   120       183.702927       116.953237 1023475499.246642 /
            vold   361 153474746.984645      1171   120       606.838765       528.611148 1036003350.481839 /
            vold   369 153656032.048396     13109   120      6961.612001      2795.667768 1036595788.335188 /
         healthd   375 153656044.500243     81262   120    161350.025885     79773.132743 1036363134.366245 /
    wmt_launcher   379 153687737.287803    529032   120    118096.450849    111012.409314 1036487544.290047 /
        ccci_fsd   380     12189.138243      5591   120      1909.518922      1859.451641     30157.790568 /
     ccci_mdinit   639 152601161.552368       235   120       286.497775        85.485541 1031001695.298508 /
  servicemanager   386 153687009.935692    331057   120     60490.993040    225025.657411 1036428012.292706 /
    Binder:387_1   569 153673962.710154     37011   120     16969.489097     10516.923471 1036631468.909174 /
        DispSync   570  77463667.708510     13571   111     11461.844394      2109.549787 525386479.722018 /
      SWWatchDog   571  77456659.518056       472   112       814.071314      1794.955533 525392376.724733 /
 UEventThreadHWC   598 153656031.843303     12856   112      2426.674847      1981.053513 1036597333.247354 /
     EventThread   665 153674020.380964    154583   111     24005.323746     39757.314747 1036594592.994222 /
   POSIX timer 1   666  77463665.520152       912   112      5421.412950       235.493060 525393470.823115 /
     EventThread   667  77463667.698380     12525   111      1056.579887      1495.701448 525396630.173625 /
 FpsPolicyTracke   702  77463667.723313      1187   120      1198.624090        80.131910 525397668.998965 /
        DispSync  1004  77463667.697161      7586   111       664.793393       816.670690 525394660.200500 /
    Binder:387_4  1342 153654956.018618     36048   120     15901.789224      9803.040051 1036556753.030703 /
    Binder:387_5  1940 153674007.822615     35607   120     16223.410715      9976.729711 1036608951.069827 /
          mtkmal   661      3661.416310         4   120         0.000000         1.598692      5697.205091 /
          mtkmal   663  77445022.732913       225   120       155.111313        53.880768 525390350.424944 /
          mtkmal   669  77445023.629606       305   120        94.783850       151.207084 525390304.632178 /
          mtkmal   671  77445022.990682       191   120       113.968152        44.631693 525390385.130178 /
          mtkmal   678 152475995.133042       161   120        39.026462        42.700918 1030000416.276366 /
          mtkmal   682      2778.475734         7   120         8.099923         0.318693      4309.718933 /
          mtkmal   684      3661.244943        13   120        11.790309         2.044921      5597.420014 /
          mtkmal   718  77445022.557837       131   120       153.523075        12.365926 525390133.711180 /
          mtkmal   734 153687521.348390    108018   120    181961.836117    101014.633238 1036429717.927166 /
          mtkmal  1851      6974.069888         2   120         0.512769         0.163308         0.000000 /
   POSIX timer 4  2811 153687521.439237    108769   120     43446.351322    109359.509249 1036527221.276171 /
    atci_service   391 153687674.710468    209757   120     47042.339325     34923.832801 1036634061.208546 /
          flymed   394 153477230.289829      1217   120       273.883295       486.930643 1036007855.567517 /
 mobile_log_d.wc   506 141237736.186348        89   120       289.837463       107.019845 950555096.389882 /
 mobile_log_d.rd 25016 153687930.745569   6882189   120  14953859.136289  13240534.261943 592393177.946476 /
 mobile_log_d.wr 25017 153678844.144937    244541   120    596458.050128    787803.568717 619164112.957003 /
         netdiag   405 141237342.607831       112   120       460.393235       376.842928 950555221.569565 /
       mtk_agpsd   474 153374191.634415      3403   120     12677.808115      5107.769034 1035337663.690361 /
       mtk_agpsd   841 153687840.736730   2147544   120    473399.968559    667501.048409 1035571519.822825 /
       mtk_agpsd  1171 153687770.514828   1088984   120    263201.510383    199509.752374 1036244336.253205 /
    mtkFlpDaemon   415      6840.625397         2   120        33.284307         3.262384     23108.778902 /
 thermalloadalgo   413 153687898.801494    339396   120    199927.385588    403353.957413 1036113673.985265 /
         thermal   422 153687095.661054   1044690   120   4462586.728269    889161.434459 1031361671.163426 /
        thermald   423 153687155.456223     10504   120      2848.238660      1840.668258 1036708867.436449 /
  batterywarning   440 153685708.507369    116056   120     35721.661430    164753.143311 1036506414.448146 /
   POSIX timer 1   630      1585.897410         2   120        18.623538         0.225924         3.299769 /
   POSIX timer 5   636      1589.659892         2   120        13.396539         0.161076         0.000000 /
   POSIX timer 7   643      1604.389803         1   120        12.961769         0.105000         0.000000 /
            mnld   644 141238163.728622        63   120        54.939462        65.431306 950553624.565953 /
            MPED   496 141196342.567062        43   120       895.585007       158.467918 950400044.926280 /
       wifi2agps   505 153374190.351338      2683   120      7730.432798      2788.876705 1035344823.135785 /
     utgate_tlog   514 153687737.255265   1045413   120    281761.049313    113900.496416 1036319696.719148 /
      AudioOut_D   811  77423572.027703      6631   102      6416.165867      5234.485977 525319030.522598 /
  watchDogThread   818 153687899.963725   1059915   120    210673.948928    257611.952139 1036244853.299636 /
    Binder:547_2  4943   6503838.926647       369   120        38.477388       127.843998    980298.123104 /
        installd   549 141220149.178842     17422   120     13418.647450     33493.160461 950409754.346123 /
    Binder:556_2   556 142104903.101144      1316   120       920.215623      2613.435619 957625545.508583 /
            netd  1317 153656032.057473     12911   120      6132.675035      2951.387186 1036578597.684687 /
            netd  1318 153374190.925194      4134   120     11135.394577      4135.800240 1035324323.853964 /
            netd  1319      5431.697961         1   120         0.000000         0.087692         0.000000 /
            netd  1320      5436.799650         1   120         0.000000         0.101692         0.000000 /
            netd  1321      5441.983800         3   120         0.000000         0.184153         0.053539 /
            netd  1323 152478198.472391       105   120        31.755690        93.771539 1030009624.229153 /
            netd  1324      5991.620423         3   120         0.059615         0.512231      2836.775392 /
    Binder:556_1  1326 142104887.497373       218   120       166.588700        60.492451 957614183.463558 /
    Binder:556_3  1477 142104887.497066       193   120       119.942772        40.034222 957608875.916163 /
    Binder:556_4  1479 142104887.518912       188   120       116.159925        43.184541 957608840.780691 /
    Binder:556_5  1481 142104887.516605       193   120       106.840998        47.108921 957608809.474699 /
    Binder:556_6  1492 142104887.495758       175   120       139.239848        41.452539 957608640.656693 /
    Binder:556_7  1553 142104887.496527       171   120       146.211150        62.160151 957607996.415620 /
    Binder:556_8  2094 142104887.520373       133   120       122.075769        36.036701 957601618.554284 /
    Binder:556_9  7057 142104887.517527        57   120        66.348611        13.755926 435000933.554842 /
    Binder:556_A 17406 142104887.514989        14   120        34.960308         4.101462  87600285.819691 /
  mpower_manager   559 153687897.737416   2058840   120    650550.766640    301419.435846 1035763439.272171 /
       perfprofd   562 152351479.437254       163   120        92.279305       176.774769 1029052135.266795 /
   pvr_workqueue   588  26362955.609802         8   100         8.554077         3.945845 135759507.652276 /
   pvr_workqueue   590  26362511.370661         5   100         9.110616         0.915768 135759481.943198 /
  ksdioirqd/mmc0   654         0.000000    117198    98         6.680308     32557.476254         0.000000 /
  md1_tx1_worker   705 153444892.135341       287   100       297.604536        76.314079 1035848754.686920 /
  md1_rx1_worker   721 153583222.762631       106   100       244.276380        18.966851 1036352476.341197 /
       debuggerd   791         0.630538      3574    71        32.939997      7428.480093         0.000000 /
     debuggerd64   792         0.000000      3688    71         3.024383      6583.963700         0.000000 /
   mdl_sock_host   829 141249041.826680        30   120        72.073771        22.491925 950555026.052644 /
     gsm0710muxd   801 153687214.560068    218292   120     75692.773063    184409.357043 1036450373.720321 /
     gsm0710muxd   876 153687527.227620    890806   120   5312280.396414   1466623.069152 1029932424.739024 /
     gsm0710muxd   877 153687522.851544    394180   120    743239.234510    570686.925671 1035397099.085414 /
     gsm0710muxd   880 153687523.645931    156234   120    487674.432388    283788.535172 1035939540.574625 /
     gsm0710muxd   881 139142375.948393        20   120         2.802540        18.009922 936696798.663608 /
     gsm0710muxd   882 153685959.385982    156328   120    170389.685231    301043.854569 1036232458.732792 /
     gsm0710muxd   889     10768.359351         9   120         0.483231         2.677231     27507.196680 /
   mdl_sock_host   838 141249047.147373        30   120        28.144001        20.794536 950554967.232957 /
         viarild   840      2063.699187         2   120         6.901615         0.164154         0.000000 /
         viarild   871 153687310.944561    213639   120     62270.218233     83803.863806 1036564286.141243 /
         viarild   872 153687737.756496    217191   120     63014.313662    146525.224954 1036502324.133367 /
         mtkrild   929 153687520.333774    347988   120    132221.700444     26801.640361 1036551734.599106 /
         mtkrild   931 153687520.331621    348108   120    133117.404515     26691.766296 1036550947.317260 /
         mtkrild   933 153687520.402929    479465   120    331792.023591    345246.430652 1036033729.862808 /
         mtkrild   934 153687520.331775    348069   120    132799.233182     26599.578068 1036551355.411042 /
         mtkrild   935 153687520.341929    348078   120    133441.763279     26741.501223 1036550570.000391 /
         mtkrild   936 153687520.332313    348046   120    133788.852808     26777.614211 1036550184.801253 /
         mtkrild   937 153687520.331775    348043   120    133228.990770     26875.716348 1036550649.353324 /
         mtkrild   938 153687520.332467    348038   120    133501.124365     26662.167095 1036550589.456369 /
         mtkrild   939 153687520.331929    348061   120    132156.945460     26586.273902 1036552009.306371 /
         mtkrild   940 153687520.334005    348030   120    133021.017114     26700.422558 1036551031.429836 /
         mtkrild   941 153687520.332621    348269   120    133436.975066     26864.353104 1036550451.167977 /
         mtkrild   942 153687520.333005    348100   120    133154.415619     26837.357255 1036550760.611566 /
         mtkrild   943 153687520.353082    348038   120    133009.144451     26694.200364 1036551050.495802 /
         mtkrild   944 153687520.337852    348071   120    132950.072433     26756.505593 1036551044.787110 /
         mtkrild   945 153687520.333544    348083   120    133514.303721     26838.344276 1036550399.324903 /
         mtkrild   946 153687520.332620    348052   120    133207.367309     26844.250568 1036550699.317478 /
         mtkrild   947 153687520.330698    348113   120    133400.468989     26808.496775 1036550541.446747 /
         mtkrild   949 153687520.332928    348052   120    133417.180604     26858.239306 1036550472.776347 /
         mtkrild   950 153687520.333160    348027   120    133224.433808     26717.650995 1036550805.224187 /
         mtkrild   951 153687522.374083   2754901   120   2550376.949017    658618.042882 1033501808.790758 /
         mtkrild   953     10785.121762        42   120         7.201923        29.732541     27506.773143 /
         mtkrild   956  77445024.512068       302   120       196.371228       212.220923 525387908.394873 /
         mtkrild   958 139142451.059438        27   120         5.617921        11.390847 936696615.508380 /
         mtkrild   959 153685958.452443    231013   120     90066.001122    145704.528784 1036467955.730875 /
         mtkrild   969     10768.257768        19   120         6.373771         4.768849     27341.750907 /
         mtkrild  1086  77445022.840913       125   120       131.736455        40.676161 525386206.945787 /
        rilproxy  1023      2549.934229         1   120         0.030077         0.166616         0.000000 /
 Ril Proxy reque  1047 153687520.769313    106931   120     17870.276454     52232.745454 1036639095.001220 /
 Ril Proxy reque  1049  77445023.025067       259   120        97.129004        76.282690 525386587.347481 /
        rilproxy  1052 139142374.598801       255   120        75.081840        23.904236 936694971.100527 /
   disp_queue_P0  1067         0.000000      7509     5         0.000000      6663.293293         0.065308 /
 ReferenceQueueD  1081 153682594.530032    211902   120     64560.326125     55233.173286 1036568501.426237 /
 FinalizerDaemon  1082 153682594.765725    170186   120     26702.126008     38259.289937 1036623415.578785 /
 FinalizerWatchd  1083 153687276.748512    102967   120     38357.037158     15650.622791 1036653865.675069 /
  HeapTaskDaemon  1084 153683632.246834    136781   120    126921.006335    456707.330349 1036109382.086557 /
   Binder:1074_1  1088 153682111.709561    122841   120     78872.844413    146079.569769 1036461599.469990 /
   Binder:1074_2  1089 153686856.027901    122796   120     80204.474188    145200.814182 1036480185.545158 /
   Binder:1074_3  1097 153685009.264302    122627   120     76696.735939    146568.223790 1036474099.052120 /
 MessageMonitorS  1100      2830.202676         5   120         0.318616         2.436076         0.560385 /
 ActivityManager  1103 153676383.299721    239709   120    263545.893262    584063.260455 1035815536.534049 /
 batterystats-sy  1108 153656039.928935     14942   120     12781.432829     32676.181237 1036551257.560695 /
    FileObserver  1121 151366817.419279       825   120        65.442466       297.350079 1023092619.021270 /
      android.fg  1123 153683533.983685    297083   120    105632.072176     98216.273033 1036487470.663107 /
 AnrMonitorThrea  1130 153675160.550742     49419   120     52845.788084     11755.937488 1036592795.293345 /
   system_server  1139 153686066.363307     91600   118     57301.637256    299581.131167 1036344126.422679 /
  PackageManager  1151   2123074.948987      4759   130      1240.195620      2312.241251 415972413.085622 /bg_non_interactive
   system_server  1328 153687842.490267  10515299   120   1661287.673016   4523982.148734 1030514093.832906 /
 SensorEventAckR  1329  77380587.602801        25   112        81.965156         3.918613 525299961.227505 /
   SensorService  1330 124660459.026176     11820   112       821.968447      3822.534369 842018700.920537 /
 CameraService_p  1331      5562.977288         2   116         0.000000         1.979615         0.270770 /
    AlarmManager  1333 153673687.683329    323475   120    440593.614441    313146.408706 1035888638.673051 /
 InputDispatcher  1338 153683532.159714     65434   120      2443.774990      9054.607951 1036669512.885258 /
     InputReader  1339 153683532.081450     69434   112       730.607083     18352.768368 1036661926.944220 /
 UsageStatsManag  1352 141279253.303851        33   120       127.911920        11.011152 950737091.215010 /
 RecordEventThre  1357 141279298.263923       305   120       248.339698       168.402070 950736943.730005 /
 ConnectivitySer  1363      9598.972239        33   120        34.935611         9.438923     11220.329723 /
          ranker  1372   4565371.160528       384   130      1349.076921       417.196927 950546588.719088 /bg_non_interactive
        Thread-2  1380 153687523.448852    180049   120    398656.719490    360551.032935 1035936896.261191 /
  UEventObserver  1382 153656032.077319     12889   120      4584.844190      2768.674498 1036577008.248908 /
 LazyTaskWriterT  1399  77424893.172705       581   120       411.879232      3258.867779 525315760.112617 /
        Thread-6  1407  62235633.721656        18   120       433.368077         7.805001 416049482.146046 /
     WifiMonitor  1554 153685586.307786     58189   120     79983.829818     49360.832359 1036555234.885332 /
   Binder:1074_4  1559 153687007.436001    123117   120     80382.236938    146363.086260 1036463979.975185 /
   Binder:1074_5  1581 153685232.851598    122911   120     78403.314388    146576.526616 1036457766.535833 /
        watchdog  1586 153683534.136868    190448   120    379190.269123    130147.942387 1036166833.749468 /
   Binder:1074_6  1702 153684512.972293    121830   120     77062.555248    144379.772867 1036456834.918037 /
   Binder:1074_7  1703 153685926.719162    122133   120     76991.508773    145227.034814 1036462079.016466 /
   Binder:1074_8  1732 153685708.983522    122911   120     79298.708026    148578.064620 1036455350.025026 /
   Binder:1074_9  1733 153683590.358220    122429   120     79839.075065    144017.716199 1036451351.341669 /
   Binder:1074_A  1796 153683159.783045    123352   120     79372.426161    146617.312842 1036447028.127665 /
   Binder:1074_B  1797 153684298.982037    122080   120     78931.959383    146000.895846 1036452104.223656 /
   Binder:1074_C  2206 153686642.452142    120747   120     77223.220336    143600.665373 1036458341.297521 /
   Binder:1074_D  2209 153682945.087426    121870   120     78295.881347    145648.265225 1036440169.904481 /
   Binder:1074_E  2210 153687277.794050    121664   120     78439.285423    146604.981940 1036457122.950540 /
   Binder:1074_F  2239 153684725.913929    121982   120     78355.280914    146470.241330 1036446240.651269 /
  Binder:1074_10  2254 153686185.336303    121722   120     77501.592685    145260.973289 1036454263.745288 /
  Binder:1074_11  2260 153681898.513704    121216   120     77819.367663    144912.080759 1036437254.447294 /
  Binder:1074_12  2261 153685491.839055    121449   120     77906.308399    145427.732519 1036450679.182231 /
  Binder:1074_13  2296 153682326.508649    122286   120     76514.209123    148391.056751 1036436897.203377 /
  Binder:1074_14  2341 153687738.552957    121444   120     76185.676242    145166.095491 1036462300.297987 /
  Binder:1074_15  2343 153687009.028000    120720   120     76485.907350    144045.950438 1036459886.356113 /
  Binder:1074_16  2347 153687073.499568    121852   120     78299.015578    145664.956027 1036456682.439460 /
  Binder:1074_17  2392 153687013.070538    121885   120     77183.446301    146149.328531 1036456944.487662 /
  Binder:1074_18  6053 153682740.943569    118188   120     75940.600043    145263.284358 1036012198.311994 /
  Binder:1074_19  6750 153687487.328692    116103   120     74612.769704    142644.254578 1031536447.077220 /
  Binder:1074_1A 12662 153687007.525154    112196   120     73592.882672    137882.356743 982356495.324826 /
  Binder:1074_1B 25851 153683370.953239     98198   120     63092.674811    120013.296472 870685204.714987 /
 pool-2-thread-1 16027  52771351.371422        30   120        18.764846        15.623616         2.682615 /
  Binder:1074_1C 27751 153683985.135331     65807   120     41683.217693     79776.753019 599143784.201869 /
  Binder:1074_1D 13727 153686430.731973     47989   120     28144.572272     57754.509396 458075585.275814 /
        Timer-27 26862 141218617.463102         1   120         0.263385         0.378461         0.000000 /
  Binder:1074_1E 32757 153687011.000538      3742   120      2301.935495      4479.763321  35565800.170536 /
 ndroid.systemui  1400 153674029.391078    802567   120   1490238.530816   4628618.280948 1030523400.937512 /
 ReferenceQueueD  1436 153673937.126769     73659   120     37821.546018     17272.829365 1036584356.285985 /
 FinalizerDaemon  1437 153673937.308999     73822   120     24221.339337     25821.406277 1036589594.767052 /
 FinalizerWatchd  1438 153673936.982538     40858   120     15566.602241      8121.405435 1036615685.213122 /
  HeapTaskDaemon  1439 153675166.817896     63267   120     43267.817027    111550.189765 1036489383.497280 /
   Binder:1400_1  1447 153673962.942145     49230   120      9503.669122     34213.524599 1036595802.393532 /
   Binder:1400_2  1452 153673939.918923     49655   120      9071.358508     34090.882542 1036596185.833424 /
    RenderThread  1556 153674007.365555     64195   112     27370.470148     14443.562957 1036596480.179177 /
        MyThread  2885 141248504.920054        38   120         3.848845         4.981075 950520975.351108 /
   Binder:1400_4  4923 153673936.151570     50097   120      9323.427787     34303.515061 1036536054.355796 /
   Binder:1400_5  5927 153673933.881075     49374   120      8240.784896     34283.732470 1036165417.973325 /
          sdcard  1408 153686217.961120    104883   120     21650.043972     17745.547089 1036649863.504007 /
     main_thread  1494 153687901.920217   1284074   110    289499.966978    331088.372259 1036074898.881126 /
       rx_thread  1496 153373839.784383      7013   110       930.462637      1284.975880 1035329748.058253 /
  wpa_supplicant  1514 153687072.408030    208118   120    186723.731824    205163.663067 1036299761.764675 /
  HeapTaskDaemon  1541 151566333.012431       114   120       129.560234       157.358381 1023451645.526521 /
 m.android.phone  1671 153456065.341099     64738   120    195994.153122    325378.857233 1035397260.336678 /
 Jit thread pool  1679  77423413.821127        73   129        53.345773       209.406305 525308578.502602 /
 FinalizerWatchd  1685 153463169.693062      6682   120      2438.317711       957.194008 1035934990.536875 /
  HeapTaskDaemon  1688 153457991.293170      6942   120      6695.183963     13460.020589 1035903235.530626 /
   Binder:1671_1  1691 153659557.463568     10613   120      3858.494484      6653.202366 1036580667.771911 /
   Binder:1671_2  1694 153607348.422901     10664   120      3205.023918      6702.303852 1036400964.310939 /
      RILSender0  1861 153329911.871220      2121   120      1607.180561       865.875215 1035083991.782486 /
    RILReceiver0  1865 153321181.240054      2839   120      1860.458266      1432.889846 1035023160.965003 /
      RILSender1  1868  77476210.646389       164   120       200.306774        39.500845 525431599.693432 /
    RILReceiver1  1873  77457270.897605       292   120        64.582928        81.521462 525371714.312672 /
 GsmCellBroadcas  1945      8962.110350         7   120         0.817309         1.174846      3043.696699 /
 CdmaServiceCate  1987      9429.867167        10   120         2.032384         2.115232      3005.261160 /
 GsmCellBroadcas  1988      9432.047638        12   120         3.924230         1.928154      3014.447700 /
   Binder:1671_4  1993 153552758.731165     10401   120      3323.964976      6607.310729 1036218459.324407 /
   Binder:1671_5  2024 153589050.115160     10324   120      3529.102910      6600.056412 1036338014.995942 /
   Binder:1671_6  2026 153641306.051963     10528   120      3122.379892      6619.962136 1036518666.920672 /
   Binder:1671_7  2617 153570598.690730     10588   120      3468.699128      6753.670053 1036270365.111438 /
   Binder:1671_8  7549 153677965.304393      4408   120      1162.151387      2956.835001 511350611.110490 /
 FinalizerWatchd  1698 151576580.353577        42   120         7.126770         5.592614 1023465080.110165 /
  HeapTaskDaemon  1699 151566333.654969        42   120        19.806463        38.214304 1023450037.195515 /
 FinalizerWatchd  1714 151576580.329116        36   120         4.931231         5.621228 1023465095.884322 /
  HeapTaskDaemon  1715 151566332.464584        31   120         5.117924        11.575690 1023450047.112130 /
 FinalizerWatchd  1728 153376151.014309      1621   120       486.819523       243.113634 1035346365.269257 /
 FinalizerWatchd  1790 151576580.539038       229   120        56.336471        30.870218 1023464770.365013 /
  HeapTaskDaemon  1791 151566333.973816       186   120        78.831768       162.237079 1023449615.313051 /
 RxIoScheduler-1  1832 153676477.864536     17754   120      4113.629535      8966.233206 1036632495.957774 /
 RxScheduledExec  1835 153687675.010237     17865   120      4751.072474      9183.679586 1036677067.938482 /
 RxScheduledExec  1838 153687674.928391     15844   120      5789.074048      4375.085370 1036680839.553045 /
 RxNewThreadSche 31106  69087254.271291         5   120        17.820460         8.964001         7.879308 /
   disp_queue_E3  1990         0.000000        86     5         0.000000       376.258772         0.085077 /
 FinalizerDaemon  2072 153674034.688385     51524   120      9512.676458     14715.380138 1036607876.510080 /
 FinalizerWatchd  2073 153679286.754743     43941   120     12494.389720      7128.944986 1036632366.528259 /
  HeapTaskDaemon  2074 153675302.360345     57451   120    149522.657893    139337.098866 1036348083.156224 /
   Binder:2062_1  2075 153673973.768306     11285   120      3139.434684      9548.625798 1036619240.539446 /
   Binder:2062_2  2076 153654957.116080     11190   120      2993.360505      9517.939179 1036559336.023332 /
 RxCachedWorkerP  2149 153678943.910673     17873   120      3839.018438     10731.506063 1036632895.123627 /
 RxComputationTh  2164 153672609.883688     16840   120      4270.907654     13509.856036 1036606176.066075 /
 RxComputationTh  2165 153672609.716765     16410   120      3629.842658     11332.828742 1036609039.139982 /
 nisdk-scheduler  2281 153546949.324479      1805   120       318.169318      1174.158093 1036201387.906121 /
  nisdk-report-1  6142  62424951.596934        22   120        36.502229        56.420464 415801174.001914 /
 Jit thread pool  2085  62378341.351194        10   129        33.833536        27.625772 416103721.586098 /
 FinalizerWatchd  2097 151576580.456347        67   120       111.016693         9.214770 1023460096.280458 /
  HeapTaskDaemon  2098 151566322.173200        68   120        42.300923        23.884536 1023445159.309658 /
         ged_srv  2083  77423135.704185       187   120        44.149235       110.398853 525303294.270431 /
 GpuAppSpectator  2104 153687738.459726   2160184   120    755577.199069   1512779.840696 1034419258.894172 /
           perfd  2084 153687840.601372    211760   119     53153.054786     49705.241204 1036585108.675781 /
 FinalizerWatchd  2126 152546329.433530       188   120        70.506622        26.516007 1030532222.706690 /
  HeapTaskDaemon  2127 152544607.401489       143   120       474.503920       100.198778 1030516752.539432 /
    FileObserver  2409  62201600.153920         7   120        54.950925         1.227537 415941577.149096 /
 UsageStatsManag  2411   6268494.336682        19   120        78.708385         3.489001    480079.140913 /
 RecordEventThre  2420   6268507.562295        22   120        47.589308        15.183999    480079.662145 /
 ReferenceQueueD  2160 151450933.763303        56   120        39.550846         5.384996 1023437149.642796 /
 FinalizerWatchd  2162 151576580.451653        40   120         4.036462         3.982302 1023457129.538612 /
  HeapTaskDaemon  2166 151566332.706585        33   120        13.801077        11.243384 1023442096.833804 /
 FinalizerWatchd  2189 151576580.605347        37   120        14.746846         5.262695 1023456934.182605 /
  HeapTaskDaemon  2190 151566332.461354        30   120        11.163617        10.963231 1023441931.182722 /
   Binder:2177_2  2193 139200814.188103        26   120        12.388310         8.698081 937031713.330632 /
  HeapTaskDaemon  2205 151566332.390354        30   120         8.136229         9.787697 1023441912.484260 /
   Binder:2194_2  2208 139200813.697488        28   120        19.682154         8.721074 937031680.843718 /
 FinalizerWatchd  2221 151576580.519807        38   120        11.323151         4.252845 1023456929.196456 /
  HeapTaskDaemon  2222 151566332.450508        31   120         3.717387        10.947227 1023441845.396880 /
   Binder:2211_2  2227 139200813.629103        24   120        15.103385         8.661615 937031627.576252 /
 ReferenceQueueD  2233 151442919.732524       188   120        79.365155        29.001766 1023436721.333256 /
 FinalizerDaemon  2234 151442919.780140       228   120       108.763538       154.227619 1023436562.439018 /
 FinalizerWatchd  2235 151442919.587756       116   120        51.441159        17.624764 1023436756.013866 /
  HeapTaskDaemon  2236 151566323.557892       117   120        27.663614       131.189233 1023441579.164415 /
   Binder:2224_2  2238 148354476.558177       310   120       207.696612       271.060709 1002071329.768687 /
       broadcast  2332  77699955.616123       460   120      1187.023147       227.887298 527342963.096088 /
 SystemStateMach  2439  77699954.307046       134   120        80.804307        66.102156 527343633.204301 /
   Binder:2224_6  4524 147011030.987069       335   120       329.767461       445.461836 993695051.215202 /
   thread-pool-3  4750   2715120.454983        37   130       137.140383        22.439542 525149496.160915 /bg_non_interactive
 FinalizerWatchd  2250 151576580.618577        37   120         6.792383         4.873234 1023456804.009681 /
  HeapTaskDaemon  2251 151566332.856969        28   120         3.593539        14.216156 1023441797.217721 /
 zu.monitorphone  2255 153374507.053347     23313   120     30658.942099     74396.118788 1035219141.535851 /
 Jit thread pool  2263  60121326.754303        53   129       185.753537        50.066694 400269352.323963 /
 FinalizerWatchd  2269 153376165.871944      1668   120       490.878603       294.966324 1035338231.130007 /
  HeapTaskDaemon  2270 153374487.329086      1615   120      1288.813696       943.774773 1035321780.758123 /
            JDWP  2316    165621.976191        12   120         6.464693         2.573615      5709.260706 /bg_non_interactive
  HeapTaskDaemon  2323   4929615.720344      4105   120     12540.049633      2464.617233 1035899780.482442 /bg_non_interactive
   Binder:2306_2  2325 153455937.357636       572   120       499.884461       356.031295 1035908849.044926 /
   Profile Saver  2368    165627.237879        17   120         9.563385         6.197307     41999.177408 /bg_non_interactive
   Binder:2306_3  2775 153272239.142669       511   120       454.633548       345.553139 1034803416.102666 /
   Binder:2306_4  3580 153214247.576015       567   120       426.434140       372.572249 1034350337.741113 /
   Binder:2306_6  3583 153386539.590118       508   120       434.018464       328.658239 1035427640.735906 /
   Binder:2306_8  4082 153398802.779254       446   120       518.737922       336.860931 1035443926.459100 /
 Jit thread pool  2376 129117639.281574        46   129       306.059766        88.577768 873723055.073558 /
  HeapTaskDaemon  2383 153457988.740632      4632   120      2572.219385      4838.282863 1035907172.107135 /
    PowerService  2494 153455935.916095      3642   112      1243.145921      1767.676394 1035905845.788520 /
 PowerBroadcastC  2498 153455937.470790      4387   120      4546.316927      2240.978539 1035902049.731286 /
 AppManagerThrea  2521  77700159.379244       957   120       788.123930      1250.029387 527341924.509988 /
   Binder:2370_3  2634 153398802.626477      1031   120      1196.700778       754.020696 1035464986.272766 /
 DataBuryManager  2755 153457811.822403      4486   120      4388.477662      3318.852159 1035904451.511173 /
 CalculateHandle  2756 153455936.573637      5196   120      4485.102201     10217.561808 1035892449.315728 /
   Binder:2370_4  4601 153455937.164021      1024   120      1082.080305       746.937702 1035875274.488751 /
   Binder:2370_5 17183 153373172.295877       589   120       642.641694       431.542241 684317939.848991 /
 Jit thread pool  2393 135184516.056864       180   129       450.197538       250.721163 912965606.827107 /
   Binder:2386_1  2400 152941683.503012      1048   120       305.336614       458.842168 1033516019.755495 /
 launcher-loader  2464 147640609.291375      1965   120       480.978096      1410.193368 997499150.594798 /
         GslbLog  2515  77423738.694114         5   120         0.281844         4.914384        65.480924 /
            JDWP  2465   2718229.696057         6   120         5.063693         2.972921      4968.774629 /bg_non_interactive
 FinalizerWatchd  2468   4926896.250988      1707   120      2994.854231       235.679996 1035334808.499933 /bg_non_interactive
  HeapTaskDaemon  2469   4926837.599030      1539   120      9832.421495       502.375003 1035312707.367791 /bg_non_interactive
   Binder:2453_1  2470 153373574.290838      2452   120      2383.459523      1602.088245 1035316282.873895 /
 InternalService  2573   4926837.556569      4404   120     28527.467460      1607.263485 1035292487.911507 /bg_non_interactive
 FinalizerDaemon  2483   4874469.760561        53   120        11.876234        11.116998 1023436226.733171 /bg_non_interactive
 FinalizerWatchd  2484   4875071.360861        41   120        16.975924         5.060533 1023456228.459763 /bg_non_interactive
  HeapTaskDaemon  2485   4874940.515605        38   120         7.086154        33.866692 1023441083.109646 /bg_non_interactive
   Binder:2471_1  2487 139200812.526239        30   120        64.471383        10.477311 937030423.559631 /
            JDWP  2526   4879494.229494         9   120         6.907539         3.492152      4606.546397 /bg_non_interactive
 ReferenceQueueD  2528   4879494.229494       343   120       444.815227        83.319386 1023751969.999695 /bg_non_interactive
 FinalizerDaemon  2529   4879494.229494      1461   120       977.744310      1377.863377 1023750155.018392 /bg_non_interactive
 FinalizerWatchd  2531   4879494.229494       219   120       221.557460        43.858011 1023772233.182348 /bg_non_interactive
  HeapTaskDaemon  2535   4879494.229494       900   120      2340.425708      3026.118835 1023435451.820717 /bg_non_interactive
   Binder:2516_1  2536 152859025.446139      1625   120      1366.220917      2129.038387 1032748974.039854 /
   Binder:2516_2  2537 153019628.312271      1596   120      1606.599778      1969.122159 1033648898.918982 /
 ComputationThre  2624   4879494.229494        49   120        72.545233        25.741844 605101606.088220 /bg_non_interactive
    MmsSpamUtils  2725   4879494.229494        66   120       199.600079        51.753692 950701141.719919 /bg_non_interactive
 ComputationThre  2765   4879494.229494        20   120        61.870923        12.047308 691500801.022200 /bg_non_interactive
 ComputationThre  2766   4879494.229494        30   120        69.029157        12.513998 777900901.637951 /bg_non_interactive
   Binder:2516_3  4024 153592065.417199      1525   120      1130.701234      2088.736609 1036328049.366219 /
   Binder:2516_4  4741 153389149.265276      1411   120       928.597086      2088.525466 1035410703.845248 /
   Binder:2516_5  5465 153235556.413354      1240   120       952.652701      1814.625088 1034458839.176286 /
 ComputationThre  5667   4879494.229494        49   120        68.687153        30.131848 864001108.437218 /bg_non_interactive
 ComputationThre 16641   4879494.229494        47   120       126.310156        21.785538 864001088.180450 /bg_non_interactive
 ComputationThre  6275   4879494.229494        21   120        61.856846        11.021078         1.312538 /bg_non_interactive
 ComputationThre 16592   4879494.229494        15   120        30.146769         9.996615        13.905769 /bg_non_interactive
 ComputationThre  6607   4879494.229494        22   120       104.020385        10.908232         2.720153 /bg_non_interactive
 ReferenceQueueD  2565   4926818.317371      2627   120      3417.470781       304.652145 1035315913.779735 /bg_non_interactive
 FinalizerDaemon  2566   4926818.601678      2136   120      1424.990830       546.505455 1035317669.947364 /bg_non_interactive
 FinalizerWatchd  2567   4926905.075009      1547   120      1058.899160       276.491403 1035338246.872981 /bg_non_interactive
 UsageStatsManag  2583     55273.961031        72   120        27.364000        26.052228     16743.314038 /bg_non_interactive
 FinalizerWatchd  2659   4875071.371554        57   120       222.673691         8.677461 1023454596.091141 /bg_non_interactive
  HeapTaskDaemon  2660   4874936.157603        63   120       188.704770        57.936768 1023439456.924643 /bg_non_interactive
   Binder:2648_1  2661 139200782.447872        79   120        57.664844        33.099161 937029347.862321 /
       EventCore  2715    173164.499246        14   120         7.286692        13.048232       562.757540 /bg_non_interactive
 pool-2-thread-1  2717   2715515.195643        68   120       196.756694        62.666078 525189046.979238 /bg_non_interactive
            adbd  2981 153687909.196880       354   120       286.601932       428.353245 1036678457.142419 /
     ->transport  2983 153687903.166956        75   120        35.155772        25.443308 1036679086.988433 /
     <-transport  2984 153687902.908341        47   120         7.993075        28.124463 1036679110.021974 /
 shell srvc 5914  5915 153684076.539986         1   120        10.262769         0.316693         0.000000 /
 FinalizerDaemon  3013   4874463.747252        69   120        76.556153        21.954766 1023431328.734781 /bg_non_interactive
 FinalizerWatchd  3014   4875071.204631        52   120        92.466612         6.623930 1023451219.747589 /bg_non_interactive
  HeapTaskDaemon  3015   4874940.686758        50   120       323.734614        48.424845 1023435944.795789 /bg_non_interactive
   Binder:3004_2  3017 139200655.179608        60   120        33.663157        30.861542 937025621.225387 /
 m.meizu.account  3235   4874493.568021       463   120      6298.587483      1214.190996 1023421449.367754 /bg_non_interactive
 FinalizerWatchd  3245   4875071.101016        36   120        30.210387         4.336692 1023448688.910201 /bg_non_interactive
  HeapTaskDaemon  3246   4874940.658221        36   120        91.133772        47.492309 1023433585.694240 /bg_non_interactive
 FinalizerDaemon  3297   4874469.753792        42   120       108.651154         8.881619 1023427853.577842 /bg_non_interactive
 FinalizerWatchd  3298   4875071.446246        36   120        15.184770         5.157227 1023447810.395511 /bg_non_interactive
  HeapTaskDaemon  3299   4874941.003912        41   120       127.383078        37.825307 1023432665.853779 /bg_non_interactive
   Binder:3288_2  3301 139200780.983322        32   120        17.559460        11.666309 937022193.168922 /
 FinalizerWatchd  3699 151576580.609577        61   120       464.238694         7.161152 1023437936.517409 /
  HeapTaskDaemon  3700 151566322.811431        54   120       159.171768        32.943850 1023423213.327216 /
 IntentService[S  3720  77463776.013014        64   120       241.037999        28.528075 525350955.662628 /
 FinalizerDaemon  3993   4874403.669944       128   120        86.522923        71.100691 1023415025.929199 /bg_non_interactive
 FinalizerWatchd  3994   4875071.119015        72   120        20.989154         9.073075 1023435005.832326 /bg_non_interactive
  HeapTaskDaemon  3995   4874940.931990       198   120      1681.085695       554.542687 1023417801.220905 /bg_non_interactive
 UsageStats_Logg  4006   2717866.055833        83   120        53.857616        86.775307 525311519.039916 /bg_non_interactive
 pool-1-thread-1  4011   2717369.824107        42   120        18.573846        47.764231 525277923.379144 /bg_non_interactive
 Worker.Thread.A  4045   2717369.824107         6   139         4.305153         1.320000         9.863924 /bg_non_interactive
 pool-10-thread-  4077   2717369.824107         3   120        11.546923         5.484539         8.515847 /bg_non_interactive
 xy_update_pubin  4733   2717369.824107        22   130         8.302540        17.555998      2005.064928 /bg_non_interactive
   MonitorThread  5486   4874319.506486        91   120       185.583844         3.911225 1023336899.373176 /bg_non_interactive
 FinalizerDaemon  4191   4874470.304323        41   120         7.524078         8.646614 1023411084.924190 /bg_non_interactive
 FinalizerWatchd  4192   4875071.109938        37   120         8.843078         4.674229 1023430928.153160 /bg_non_interactive
  HeapTaskDaemon  4193   4874940.546759        35   120        31.777077        35.682696 1023415874.435197 /bg_non_interactive
   Binder:4182_1  4194 139200654.974223        31   120        23.097769        12.786766 937005303.862117 /
 izu.flyme.input  4368   4874413.890944      1848   120     11106.884570      5997.001699 1023390626.755446 /bg_non_interactive
 FinalizerWatchd  4381   4875071.177169       101   120       116.400768        16.009385 1023427369.903230 /bg_non_interactive
  HeapTaskDaemon  4383   4874942.181528       124   120       526.631686       111.917009 1023411856.708576 /bg_non_interactive
   Binder:4368_1  4384 139200655.216223       135   120        43.088457        61.384469 937002006.730256 /
 RecordEventThre  9831   4763572.323871        68   120       147.170854        35.100995 968031669.072467 /bg_non_interactive
 mecommunication  4486   4874824.917215       834   120      8407.227405      2529.794082 1023396259.220537 /bg_non_interactive
 FinalizerWatchd  4500   4875071.363477       103   120        71.722766        17.089162 1023425569.173064 /bg_non_interactive
  HeapTaskDaemon  4502   4874939.004679        93   120       373.878386       165.223309 1023410103.036569 /bg_non_interactive
   Binder:4486_1  4503 151442910.903269        75   120        40.915386        32.767545 1023405240.779553 /
   Binder:4486_2  4504 151558719.429287        67   120        17.583153        28.219616 1023407087.343563 /
 FinalizerDaemon  4516   4874471.660791        59   120         4.393537        14.496154 1023405833.895948 /bg_non_interactive
 FinalizerWatchd  4517   4875071.084169        47   120        37.573462         6.350847 1023425694.079377 /bg_non_interactive
  HeapTaskDaemon  4518   4874943.648913        49   120        10.575691        62.207307 1023410647.703651 /bg_non_interactive
   Binder:4507_3  5469 125822732.500103        20   120         9.547999        11.114385 850479948.852447 /
 Jit thread pool  4562   4932487.687306        58   129       272.066846       122.068154 1023038020.410839 /bg_non_interactive
 ReferenceQueueD  4565   4932487.687306        39   120       389.864691        17.409464 997474450.844815 /bg_non_interactive
 FinalizerDaemon  4566   4932487.687306       171   120       136.802156       607.694072 997474112.467049 /bg_non_interactive
 FinalizerWatchd  4567   4932487.687306        45   120        26.990538         8.104925 997474820.224047 /bg_non_interactive
  HeapTaskDaemon  4568   4932487.687306       197   120       321.725461      1536.712006 979840373.683157 /bg_non_interactive
   Binder:4556_2  4570   4932487.687306     12588   120      3170.012024      9894.166677 1036462901.908176 /bg_non_interactive
 pool-2-thread-1  4581   4932487.687306        63   120        16.174769        33.924854     60121.161290 /bg_non_interactive
 pool-3-thread-1  4585   4932487.687306      1002   120      3354.127324       400.006363 1033834562.641973 /bg_non_interactive
   Binder:4556_3 30310   4932487.687306       449   120        54.693382       345.959694  56698532.472949 /bg_non_interactive
 FinalizerDaemon  4595   4874473.639945       106   120        16.911463        26.406768 1023404570.199635 /bg_non_interactive
 FinalizerWatchd  4596   4875071.255862        94   120        92.491845        12.901697 1023424383.698603 /bg_non_interactive
  HeapTaskDaemon  4597   4874944.279836       105   120       464.528156       145.433234 1023408878.431796 /bg_non_interactive
 UsageStatsManag  5454    166013.123625        28   120         3.736692         7.303461      2259.492547 /bg_non_interactive
 ReferenceQueueD  4676   4910959.490989       411   120       352.937851        44.359372 1031682076.525765 /bg_non_interactive
 FinalizerWatchd  4678   4911158.926681       159   120       173.740456        27.154920 1031702277.606272 /bg_non_interactive
   Binder:4667_4  4690 152544946.497651       602   120       399.673303       329.037221 1030480462.449134 /
 Jit thread pool  4775   4676806.937632        46   129       584.051773        42.575534 977649836.816554 /bg_non_interactive
 ReferenceQueueD  4778   4929598.950938     10766   120     12425.656847      1177.020436 1035855083.415536 /bg_non_interactive
 FinalizerWatchd  4780   4929683.948035      5397   120      9073.970941      1432.005829 1035878083.124398 /bg_non_interactive
  HeapTaskDaemon  4781   4929617.753216      4944   120     17246.513660      3386.164492 1035852959.703211 /bg_non_interactive
   Binder:4770_1  4782 153455937.324790      1999   120      1364.448929      1482.585784 1035865597.476943 /
         GslbLog  4788   2718236.732285        29   120         4.840078        35.634228    181009.474510 /bg_non_interactive
   Picasso-Stats  4789   4932671.018182   1135788   130   1156678.064264    388576.250197 1035097223.872074 /bg_non_interactive
 Picasso-Dispatc  4790   4932668.013223   1136455   130   1153701.173067    390023.714073 1035098268.297291 /bg_non_interactive
 Picasso-refQueu  4791   4932668.680573   1105559   130   1108971.651174    357981.978098 1035175806.485399 /bg_non_interactive
 Auto Update Han  4795   2718236.732285       126   120        18.471618        43.528536 525322997.882021 /bg_non_interactive
 UpdateCheckerDb  4811   2718236.732285        22   120         5.695768        22.241387         3.827460 /bg_non_interactive
 pool-5-thread-1  4822   4580680.916630        89   120       144.582386        83.838150 953955259.782065 /bg_non_interactive
     checkThread  4833   2718236.732285         4   120        30.073615         5.264231    180196.976738 /bg_non_interactive
        Thread-5  4855    167204.271450        19   130        20.805920        14.629311         2.252384 /bg_non_interactive
        Thread-7  4903    170543.208437       118   130        50.924775       109.628070    177290.785962 /bg_non_interactive
        Thread-9  4905    167204.271450        60   130        13.859845        31.225922        72.239310 /bg_non_interactive
 UsageStats_Logg  4909   4926760.340217      5964   120     22053.790798      5761.618276 1035245874.753318 /bg_non_interactive
 StatsUploadThre  4912   4926759.146834     12999   120     54240.817566      9151.491249 1035210290.136870 /bg_non_interactive
    RenderThread  5557   4926807.417005      1889   112      1250.415468       357.573920 1035200281.077223 /bg_non_interactive
 ConditionReceiv  5860   2718236.732285         1   120         0.000000         0.452077         0.000000 /bg_non_interactive
 ConditionReceiv  7537   2718236.732285         1   120         0.870385         0.589769         0.000000 /bg_non_interactive
 ReferenceQueueD  5119   4874824.395524       623   120       247.946313        60.611858 1023381907.948260 /bg_non_interactive
 FinalizerDaemon  5120   4874824.636218       472   120       439.513615       151.602623 1023381629.713423 /bg_non_interactive
 FinalizerWatchd  5121   4875071.132092       321   120       175.652843        46.923088 1023400547.552312 /bg_non_interactive
  HeapTaskDaemon  5122   4874941.189912       336   120      1268.151007       282.455313 1023384219.224809 /bg_non_interactive
   Binder:5111_1  5123 151562740.236414       620   120       234.087695       534.092152 1023381599.064658 /
 RecordEventThre  5128   2717244.946023         3   120         0.000000         1.235154       351.413308 /bg_non_interactive
 ContactsProvide  5130    164725.945930        77   130         3.492922       355.248080        59.653076 /bg_non_interactive
 ShadowCallLogPr  5131    164199.748667         2   130         0.221615         0.885231         0.000000 /bg_non_interactive
   Binder:5111_3  5550 151588383.731263       566   120       138.195767       338.813849 1023377208.928262 /
 FinalizerDaemon  5940   4926802.467679      2996   120      1211.203446       921.037418 1034895345.976093 /bg_non_interactive
 FinalizerWatchd  5941   4926896.333372      1551   120      1382.823089       259.089685 1034915738.585768 /bg_non_interactive
  HeapTaskDaemon  5942   4926837.855107      1400   120      7606.161641       747.696530 1034894159.126035 /bg_non_interactive
    RenderThread  5958   4926817.971183      2040   112       256.337921       364.432690 1034896104.914733 /bg_non_interactive
 pool-1-thread-4  5978   2718422.397701         4   120         0.094307         2.826308         0.099539 /bg_non_interactive
 FinalizerWatchd  6034   4926896.301757      1625   120      1309.874313       246.169316 1034910838.694057 /bg_non_interactive
  HeapTaskDaemon  6035   4926837.642415      1549   120      7602.939161       607.713152 1034889208.386257 /bg_non_interactive
 pool-3-thread-1  6105    176683.651475         5   120         2.044231         6.268077         1.515308 /bg_non_interactive
            JDWP  6363    189383.083640         5   120         2.577309         3.248076         1.101769 /bg_non_interactive
 FinalizerWatchd  6366   4875071.128477        33   120         4.719311         5.176692 1021571573.815724 /bg_non_interactive
  HeapTaskDaemon  6367   4874949.929220        54   120        28.675620       137.207768 1021556426.199150 /bg_non_interactive
   Binder:6356_1  6368 139200655.027685        27   120        13.769384        17.090851 935145903.597830 /
   Profile Saver  6370    502518.947288        16   120         0.525539        12.477845  53705005.221652 /bg_non_interactive
 ActivatePhone-E  6371   4932663.825490    116283   120     29060.985439     25698.987260 1034743433.584097 /bg_non_interactive
 pool-1-thread-1  6372   4932551.087247    379405   120    493087.468487    117425.493907 1034146432.703501 /bg_non_interactive
 u.flyme.weather  8508   4905811.948491      5163   120     32263.811569     12932.488929 1011798084.250727 /bg_non_interactive
 Jit thread pool  8515   3406445.199813        11   129       515.446693        21.770309 663649283.376569 /bg_non_interactive
 FinalizerWatchd  8520   4905919.728867       330   120       540.873161        51.405691 1011862451.056423 /bg_non_interactive
  HeapTaskDaemon  8521   4905823.533323       370   120      2163.653919       173.199539 1011845714.509549 /bg_non_interactive
   Binder:8508_1  8522 152543520.689689       517   120       254.493609       357.255224 1011842468.014930 /
 izu.filemanager 22235   4926838.092492     14687   120     88515.185914     40241.578170 900070775.600047 /bg_non_interactive
 FinalizerWatchd 22245   4926896.305372      1210   120      1057.282075       176.189841 900213058.864104 /bg_non_interactive
  HeapTaskDaemon 22246   4926837.523491      1058   120      6561.753559       396.577236 900192486.234959 /bg_non_interactive
  Binder:22235_1 22247 153373763.845287      1679   120       643.323002       972.536144 900194930.243593 /
  Binder:22235_2 22249 153374509.821315      1681   120       927.923146       945.608908 900197520.775242 /
   Profile Saver 22254   4404127.941539         6   120        11.033231         7.011000      2006.106389 /bg_non_interactive
 RecordEventThre 22278   4404127.941539         7   120        11.476077        10.047846       486.063540 /bg_non_interactive
         netdiag 25003 153686677.010546     72720   120     23727.429479    138365.018860 620420216.077324 /
         tcpdump 25007 153687840.646576    661806   120    136663.612588    141003.425955 620309679.813657 /
 Jit thread pool 32200   4905688.355315       114   129       531.165236       262.794842 548860639.510215 /bg_non_interactive
 ReferenceQueueD 32203   4931408.375734      4990   120      2664.916953       735.110252 560697699.213534 /bg_non_interactive
 FinalizerDaemon 32204   4931408.910788      5162   120      3258.085998      2519.146186 560695332.192561 /bg_non_interactive
  Binder:32195_1 32207   4905688.355315       288   120       965.093920       144.552158 548099770.003167 /bg_non_interactive
 eServiceManager 32217   4905692.405392        37   120        61.290693       100.176846     29858.050073 /bg_non_interactive
 load task queue 32225   4905692.405392         2   120         1.388846         1.530538         0.055461 /bg_non_interactive
 RxIoScheduler-1 32230   4932632.476062     10111   120      6998.219256      4827.876811 561007206.105590 /bg_non_interactive
 ndHandlerThread 32233   4905692.405392         1   120         1.320077         1.304769         0.000000 /bg_non_interactive
         Timer-0 32239   4905692.405392         8   120         9.669155         2.807460 518400098.723286 /bg_non_interactive
 ConnectivityThr 32255   4905692.405392         1   120         0.578231         3.009769         0.000000 /bg_non_interactive
  Binder:32195_C 21697   4905700.773777        47   120        38.927382        16.638927 117194938.540326 /bg_non_interactive
 eizu.net.search  7374   4874526.505708       563   120      3817.714257      1792.029295 498315682.619555 /bg_non_interactive
 FinalizerDaemon  7384   4874487.087331        75   120        38.659155       619.698844 498320574.611186 /bg_non_interactive
 FinalizerWatchd  7385   4875071.103016        41   120        38.801924         6.240234 498340998.706537 /bg_non_interactive
  HeapTaskDaemon  7386   4874941.416989        45   120         6.312769       189.854542 498325847.371117 /bg_non_interactive
   Binder:7374_1  7387 151388627.489882        60   120        22.970614        24.918078 497952907.628462 /
 RecordEventThre  7395   2708751.986536         3   120         0.000000         1.197384        52.350001 /bg_non_interactive
        Thread-5  7399   2708733.441875         6   130         0.000000         1.308616         2.880846 /bg_non_interactive
 xiaoyuan_taskqu  7413   2711349.434600        47   130         7.209923        83.617154       216.662232 /bg_non_interactive
   Binder:7374_3  7419 150603778.359888        41   120        27.433076        17.922618 492202363.782363 /
 tcontactservice  7438   4926822.916447      9328   120     34532.478719     25485.498125 510128204.376491 /bg_non_interactive
 FinalizerWatchd  7448   4926896.504680       801   120       648.024688       124.142688 510207191.928763 /bg_non_interactive
  HeapTaskDaemon  7449   4926837.782569       690   120      2899.197323       396.883075 510189813.455169 /bg_non_interactive
   Binder:7438_1  7450 153361658.893905       782   120       231.238922       408.341308 510187290.110555 /
   Profile Saver  7452   2717244.946023         3   120        16.336847        10.979385      1999.089389 /bg_non_interactive
 StatsUploadThre  7465   4926768.377294      6690   120     21795.181484      4891.672419 510160959.193273 /bg_non_interactive
   kworker/u21:1  7528  77456579.022522        29   100         2.079615         2.696386     79018.115341 /
  Signal Catcher 26897 141247121.961602         1   120         0.072384         1.705077         0.000000 /
 FinalizerWatchd 26901 151576580.448038        11   120        12.091385         1.284922  72933188.080650 /
  HeapTaskDaemon 26902 151566336.109277        14   120         9.638461        21.348229  72918114.231922 /
 pool-1-thread-1 26907 141247124.102425        20   120         4.840536         4.694618       595.173231 /
        Thread-2 26908 141247165.375599         5   120         2.529615         9.752539         4.382000 /
        Thread-3 26909 141247289.081469         5   120         3.092537         1.270462         0.344539 /
        Thread-5 26911 141247289.081469         1   120         0.281538         0.917385         0.000000 /
        Thread-7 26913 141247616.858611         2   120         1.230692         1.083308         0.000000 /
        Thread-8 26914 141249054.473219        39   120       102.416694        62.661384      3005.702622 /
        Thread-9 26915 153687017.260693     22712   120     35826.112336     14820.009771  86105692.741905 /
       Thread-10 26916 141247616.858611         2   120        33.137616         3.859154         0.000000 /
       Thread-11 26917 141249050.508089         2   120        13.911385         1.536846         0.000000 /
       Thread-12 26918 141249059.488678         1   120         5.745231         1.408308         0.000000 /
 ReferenceQueueD  4204   4926810.873294        57   120        38.779382         7.243154  11881209.951559 /bg_non_interactive
 FinalizerWatchd  4206   4926896.324680        20   120        40.476922         4.574999  11900937.558912 /bg_non_interactive
  HeapTaskDaemon  4207   4926837.998953        29   120       163.482307        92.142231  11886043.970876 /bg_non_interactive
   Binder:4185_1  4208 152682089.243315        17   120        22.831307        19.882538   8279296.350892 /
   Binder:4185_2  4209 153358867.541611        12   120        19.765231         7.029537  11880834.676019 /
   Binder:4185_3  4210 152941815.748155        11   120         6.345691         7.885924  10079893.571877 /
   Profile Saver  4211   4874938.486445         5   120         6.651539        22.292616      1999.705158 /bg_non_interactive
 AsyncQueryWorke  4213   4874809.596223        13   120        10.297616         8.320231        37.206691 /bg_non_interactive
 RxScheduledExec  4216   4932517.764388       207   120       188.374771       114.464999  13198706.400390 /bg_non_interactive
 RxScheduledExec  4217   4932517.485080       216   120       294.389230        50.240241  13198661.174919 /bg_non_interactive
 RxIoScheduler-1  4218   4932520.674991       239   120       100.250163       112.263457  13200160.377083 /bg_non_interactive
 RecordEventThre  4223   4874916.402142         2   120         0.000000         1.628307       133.367924 /bg_non_interactive
 pool-3-thread-1  4225   4874899.422758         2   120         0.515923         0.997692         0.000000 /bg_non_interactive
     kworker/0:3  5730 153681617.910271      2437   120      1631.211916      2204.719392   1580708.999166 /
   kworker/u20:3  5821 153687520.776697       694   120       570.427698       306.502469    862766.286894 /
     kworker/0:2  5839 153687946.923183      8382   120      6399.375723      9281.080705    700862.506975 /
     kworker/0:0  5871 153687675.097160      1551   120       755.621222      1569.718394    413031.572758 /
   kworker/u20:1  5878 153687902.579266       340   120       281.762769       133.692472    361061.771160 /
     kworker/0:1  5888 153613107.461324         3   120         3.720154         0.158000         4.040308 /
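
上面 runnable tasks 表中每一行的字段依次为:task(进程名 comm)、PID、tree-key(即 p->se.vruntime)、switches(上下文切换次数 nvcsw+nivcsw)、prio(动态优先级)、wait-time/sum-exec/sum-sleep(调度统计,单位 ms,需要打开 CONFIG_SCHEDSTATS),最后一列是所属 task_group 的 cgroup 路径:"/" 为根组,"/bg_non_interactive" 是 Android 用来限制后台任务的 cpu cgroup。下面给出一个最小的解析示意程序(假设 dump 已保存为 sched_debug.txt;文件名和解析方式都只是示例,为了简单,含空格的进程名会被直接跳过):

/* 从 sched_debug 的 runnable tasks 表中找出 sum-exec 最大的任务(示意代码) */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("sched_debug.txt", "r");   /* 假设的文件名 */
    char line[512], comm[64], path[64], best_comm[64] = "";
    int pid, switches, prio, best_pid = 0;
    double key, wait, exec, sleep, best_exec = 0;

    if (!f)
        return 1;
    while (fgets(line, sizeof(line), f)) {
        /* 只匹配 9 列的任务行,其它行(统计项、表头等)都会被 sscanf 拒绝 */
        if (sscanf(line, "%63s %d %lf %d %d %lf %lf %lf %63s",
                   comm, &pid, &key, &switches, &prio,
                   &wait, &exec, &sleep, path) != 9)
            continue;
        if (exec > best_exec) {
            best_exec = exec;
            best_pid = pid;
            strcpy(best_comm, comm);
        }
    }
    fclose(f);
    printf("max sum-exec: %s (pid %d) %.3f ms\n",
           best_comm, best_pid, best_exec);
    return 0;
}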


cpu#1: Online
  .nr_running                    : 2
  .load                          : 2048
  .nr_switches                   : 50330891
  .nr_load_updates               : 18465962
  .nr_uninterruptible            : -282929
  .next_balance                  : 4554.077177
  .curr->pid                     : 5914
  .clock                         : 1036739631.224580
  .clock_task                    : 1036739631.224580
  .cpu_load[0]                   : 304
  .cpu_load[1]                   : 271
  .cpu_load[2]                   : 371
  .cpu_load[3]                   : 442
  .cpu_load[4]                   : 451
  .yld_count                     : 328297
  .sched_count                   : 52031170
  .sched_goidle                  : 13402190
  .avg_idle                      : 157078
  .max_idle_balance_cost         : 78539
  .ttwu_count                    : 28394891
  .ttwu_local                    : 19995708

cfs_rq[1]:/bg_non_interactive
  .exec_clock                    : 577939.399761
  .MIN_vruntime                  : 0.000001
  .min_vruntime                  : 879925.689476
  .max_vruntime                  : 0.000001
  .spread                        : 0.000000
  .spread0                       : -152808035.003624
  .nr_spread_over                : 3440
  .nr_running                    : 0
  .load                          : 0
  .load_avg                      : 0
  .runnable_load_avg             : 0
  .util_avg                      : 0
  .removed_load_avg              : 0
  .removed_util_avg              : 0
  .tg_load_avg_contrib           : 0
  .tg_load_avg                   : 943
  .se->exec_start                : 1036722623.319617
  .se->vruntime                  : 56084769.222524
  .se->sum_exec_runtime          : 578346.130491
  .se->statistics.wait_start     : 0.000000
  .se->statistics.sleep_start    : 0.000000
  .se->statistics.block_start    : 0.000000
  .se->statistics.sleep_max      : 0.000000
  .se->statistics.block_max      : 0.000000
  .se->statistics.exec_max       : 268.577308
  .se->statistics.slice_max      : 158.383846
  .se->statistics.wait_max       : 449.603155
  .se->statistics.wait_sum       : 474626.775818
  .se->statistics.wait_count     : 405003
  .se->load.weight               : 2
  .se->avg.load_avg              : 0
  .se->avg.util_avg              : 1

cfs_rq[1]:/
  .exec_clock                    : 42386409.280566
  .MIN_vruntime                  : 0.000001
  .min_vruntime                  : 56084976.869536
  .max_vruntime                  : 0.000001
  .spread                        : 0.000000
  .spread0                       : -97602983.874333
  .nr_spread_over                : 104638
  .nr_running                    : 1
  .load                          : 1024
  .load_avg                      : 2629
  .runnable_load_avg             : 303
  .util_avg                      : 194
  .removed_load_avg              : 230
  .removed_util_avg              : 55
  .tg_load_avg_contrib           : 2629
  .tg_load_avg                   : 8008

rt_rq[1]:/bg_non_interactive
  .rt_nr_running                 : 0
  .rt_throttled                  : 0
  .rt_time                       : 0.000000
  .rt_runtime                    : 700.000000

rt_rq[1]:/
  .rt_nr_running                 : 0
  .rt_throttled                  : 0
  .rt_time                       : 0.240462
  .rt_runtime                    : 800.000000

dl_rq[1]:
  .dl_nr_running                 : 0

runnable tasks:
            task   PID         tree-key  switches  prio     wait-time             sum-exec        sum-sleep
----------------------------------------------------------------------------------------------------------
        kthreadd     2  56084963.882383      7231   120      4231.780138     11602.057297 1036723552.983919 /
     rcu_preempt     7  56084971.464306  38012914   120  19389373.432429   4762070.363433 1012596123.004465 /
     migration/1    11         0.000000    920361     0         0.195462   1119760.329035         0.000000 /
     ksoftirqd/1    12  56084941.917306   1503624   120   2051568.246464    208690.852721  84770006.156114 /
     kworker/1:0    13  56084928.574845   1593806   120   2819612.156152   3042907.328028 1030879705.296110 /
    kworker/1:0H    14  56084928.506641    769028   100     87134.568064     44580.172387 1036607480.393581 /
  conn-md-thread    66  56032835.789987      1290   120       897.904397       265.639693 1035373385.155010 /
     ion_mm_heap    71  33752638.207281     11146   120       880.444199      1770.794336 525416463.703157 /
 gpu_dvfs_host_r   134  33752390.546093       127   120      1065.321542        11.329230 525417797.621171 /
     kworker/1:1   156  56084933.544153   1480308   120   2649053.709056   2762215.380543 1031325843.420206 /
 present_fence_w   174         0.000000      9298    12         0.000000      1922.959458         0.047924 /
      ccci_ipc_3   192     19044.703054         9   120         0.000000         1.116384     47963.967653 /
 sub_touch_suspe   225       186.599518         2   100         0.000000         0.084307         0.045385 /
          binder   234  21304801.244018        10   100        13.929847         5.653460 294186699.162210 /
    cs43130_eint   236       300.244655         2   100         0.000000         0.093307         0.049693 /
         deferwq   239       312.671963         2   100         0.000000         0.447693         0.026615 /
 ipi_cpu_dvfs_rt   240  56084971.606382   4721361   120   3352016.787765   1096347.393886 1032282573.766225 /
        hps_main   248  56084961.982216  24442794   100  11377751.354137  44892965.375321 980455518.350080 /
     kworker/1:2   253  56084928.546614    396668   120   1203003.599743     57758.489925 1035468059.093820 /
 tspdrv_workqueu   264       354.950809         2   100         0.000000         0.087309         0.043307 /
    charger_pe30   271       388.244648         2   120         0.023230         0.076770         0.044384 /
          wdtk-1   274         0.000000    431664     0         0.066230     47351.786026         4.055385 /
 irq/681-inv_irq   286         0.000000     42650    49         0.000000      7706.968076         0.000000 /
    kworker/1:1H   298  56084928.506220    934189   100   1239206.746474    130105.400793 1035357234.967516 /
 ext4-rsv-conver   339       539.640231         2   100         0.000000         0.064077         0.071000 /
 ext4-rsv-conver   344       550.112265         2   100         0.008846         0.432692         0.189308 /
     teei_daemon   347      2195.663032       161   120        90.583849       116.936697      6587.161548 /
     logd.reader   355  56032980.178008      4583   118      3264.198461      7344.057700 1035348005.371874 /
  surfaceflinger   387  33752399.837783     10181   112      4546.362293     40562.144613 525356774.143072 /
    Binder:387_2   572  56081748.643697     36132   120     17463.768519     10239.252491 1036571219.337219 /
         ged-swd   591  33719968.867936      1395   112        17.363224       110.643873 525334077.194950 /
    Dispatcher_0   600  33737930.380606      4578   112       227.688896      1541.349714 525392407.423655 /
  surfaceflinger   601      1331.493639         3   112         0.134923         0.380923         0.061846 /
   POSIX timer 0   664  56083429.748978     54059   112     32477.850144     18576.148154 1036607201.578018 /
     SceneThread  1005    667002.152976       125   130       287.304917         6.417922 525395419.028202 /bg_non_interactive
    Binder:387_3  1020  56083383.295191     36625   120     16490.325650     10295.973974 1036627444.532135 /
          mtkmal   674      2885.809776        11   120         2.375308         2.284615      4352.146780 /
          mtkmal   681  56084916.647847    216612   120    189561.945738    122792.805958 1036400650.395202 /
          mtkmal   691      6116.721328         9   120        15.282844         1.301617     22385.949054 /
          mtkmal   695  56084916.437540    107969   120     19794.837281     45825.955143 1036647275.325944 /
          mtkmal   703      6088.699496        12   120         5.913077         4.226999     22134.655515 /
          mtkmal   706  56084916.313308    107384   120     37367.077785     35296.931397 1036640172.506429 /
          mtkmal   733  56084917.083001    183879   120     56196.013705    120004.179848 1036536562.988102 /
 mobile_log_d.rd 25016  56084963.608614   6882190   120  14953859.358904  13240534.693251 592393228.035553 /
    Binder:402_1   605  33674260.181807        22   120        13.631922        11.849771 525315422.061848 /
    Binder:402_3  5381  33673729.693961         8   120         2.242077         3.615307 525223555.280785 /
  AALServiceMain   550  33744414.178102     21323   116     14364.783386     30645.313177 525349866.786645 /
    Binder:403_1   551  33731747.557534        87   120        29.063926        22.879080 525394222.701955 /
    Binder:406_2  1485   6436626.900390      1437   120        84.571312       654.617231     72026.841326 /
 OMXCallbackDisp  2019      8318.329531        92   118        97.916078         8.736992      1231.909242 /
       mtk_agpsd   517      1084.858079         4   120        12.049385         1.693538       337.371155 /
  POSIX timer 24   538      1103.398786         1   120         0.000000         0.132462         0.000000 /
  POSIX timer 25   539      1106.293924         1   120         0.000000         0.077307         0.000000 /
  POSIX timer 26   540      2529.862698         4   120        25.179538         3.483001      3981.455856 /
       mtk_agpsd  1174      6034.783670         7   120         0.020153         0.918078     13910.320725 /
    mtkFlpDaemon   414      6034.966768         4   120        27.891771         1.919154     23114.983054 /
 nvram_agent_bin   412      1046.923260        26   120       836.205694        51.939692         5.564923 /
     mtk_stp_psm   430  56032836.320371      4293   120      5001.201496       535.902918 1035349932.070863 /
     mtk_stp_btm   450  56032836.042372      1703   120      5220.534637      2280.545230 1035347923.092032 /
            mnld   628      1477.884085         6   120         8.254462         1.145461        10.269154 /
     audioserver   546  56084917.923701    171755   120     11857.801841    106574.623287 1036595591.446154 /
        ApmAudio   620  33718836.887086       138   104        18.553383        41.281532 525329069.429581 /
    Binder:546_1   819  56084918.966086    171246   120     10954.483405    105797.817593 1036594962.559220 /
    Binder:546_2  1378  56084918.195386    171057   120     10723.934008    105760.220095 1036579702.363216 /
    Binder:546_3  1379  56084918.462393    171242   120     11209.847543    105840.313395 1036579136.780463 /
    Binder:546_4  2033  56084860.297265    171247   120     10728.922457    106000.168897 1036562034.898080 /
    Binder:547_1  1193   6856488.335266       435   120        41.523615       419.513922   1051334.737197 /
        keystore   552   6640621.380343        35   120        47.693384        73.121536    444400.458292 /
       NPDecoder  2015      8365.785309       103   104        12.976155        11.382152      1575.555620 /
    NPDecoder-CL  2016      8365.784179       328   104        43.169999        44.040685      1511.266779 /
            netd  1322  55796180.978847       757   120       929.629850       742.595078 1030525637.599231 /
            netd  1325  55802490.599096       115   120        77.339538       120.681461 1030532294.556477 /
    Binder:556_2  1327  56009784.674785      2855   120      5756.591947      7942.906612 1035321139.251679 /
     gatekeeperd   561      1211.097265        25   120        69.088234        62.568842         6.285386 /
  stp_sdio_tx_rx   655  56032838.212448      6228   120      8626.732756      2927.322581 1035341675.696095 /
  md1_rx0_worker   720      1771.193680         4   100         0.095308         0.076231         4.023307 /
  md1_rx3_worker   723      1781.226273         2   100         0.000000         0.084076         0.053539 /
      cldma_rxq3   724      1786.333962         3   120         0.023231         0.107692         0.102385 /
      rx1_worker   740      1797.484147         4   100         7.173154         0.167001         4.110923 /
      rx5_worker   744      1799.266610         3   100         2.276538         0.181155         3.744000 /
      emdlogger3   833      2272.858487        27   120        11.823074        56.420155         3.626386 /
         viarild   859      2394.923245         6   120         0.270616         4.706925       349.505615 /
         viarild   928  33744440.692595       177   120        33.165922        38.994162 525391645.720026 /
         viarild   948      2392.951474         2   120         2.479846         1.405999        35.829231 /
         viarild   955  33672778.596621       184   120        64.992847        80.666316 525312612.229837 /
         mtkrild   901      2391.007168        95   120         6.483773        93.618459        20.904234 /
         mtkrild   930  56084917.935957    579410   120    672630.096924    303377.183368 1035734813.591619 /
         mtkrild   957  56084916.729881    218023   120     54871.580685    138727.014655 1036517202.838331 /
         mtkrild   960      8393.666937        15   120         8.117076         1.977388     23764.926979 /
         mtkrild   964      3347.244361       179   120        21.393846        56.120012      3287.045534 /
         mtkrild   966  33672845.577626        91   120        23.128384        44.858461 525312682.366841 /
         mtkrild   968      2392.262859         5   120         4.087923         0.583923         9.196077 /
         mtkrild   973      2392.920531         4   120         2.636769         0.535000         0.480538 /
        rilproxy  1042  55995455.819203      1134   120       509.634301       736.952397 1035044007.965842 /
 Ril Proxy Main   1045  56084917.853693    378235   120    383504.315587    317343.468843 1036008437.445670 /
 Ril Proxy reque  1048  56084916.522232    110603   120     21405.950388     59619.857487 1036628229.168552 /
        rilproxy  1050  56084916.352078    274756   120    241832.597538     54217.691363 1036413213.716060 /
     StateThread  5649  56084916.041231       515   120       977.280146       398.396472   2378692.334362 /
 android.display  1126  56033064.723570     16435   120      8984.296074      8017.095512 1035331623.792675 /
         Scanner  1291  56084920.981847   1134864   120   1716767.334229   4642037.257448 1030334663.149114 /
   system_server  1311  33731654.076557      2195   120       572.202625       563.199516 525378661.383784 /
   NetdConnector  1347  56032840.369217      9707   120     20449.771483      9725.707400 1035307228.786430 /
    NetworkStats  1354  56009792.254610     12568   120     42826.472921     20716.647153 1035269129.920753 /
   NetworkPolicy  1355  56033064.606031      8445   120      9703.243873      2762.269520 1035325421.396398 /
     WifiService  1358  56051587.806532      6453   120      4912.990426      3459.065001 1035915378.949281 /
 notification-sq  1373    843199.427417      1856   130      3926.209923      1073.862834 950533385.828158 /bg_non_interactive
       intercept  1374  27364674.588161        18   118         1.953539         8.846612 416111875.421965 /
    AudioService  1376  33717662.807677       217   120       110.230613        59.044003 525311743.419377 /
 PhotonicModulat  1391  33744500.025595       325   120       204.420156       284.282093 525376155.377218 /
 NetworkStatsObs  1501  56009482.741520       993   120       254.506925       215.775040 1035328228.342561 /
        Thread-7  1513  56032836.816371      2632   120      5037.018218       552.389622 1035328187.929843 /
   MonitorThread  1558   6724125.901720       556   120      1293.751714        25.241840    484036.380374 /
 UsageStatsManag  1677  33752235.347986        43   120         3.085000         8.958466 525377323.973147 /
         ged-swd  1969  33752390.246457      2074   112       137.238021       155.401375 525375503.744679 /
        MyThread  3070  52248501.979094      5563   120      1863.725947      1680.411822 950515136.032666 /
   Binder:1400_6  7473  56083412.858083     22852   120      3892.280790     15923.404363 511472757.682134 /
      hif_thread  1495  56032836.220330    175874   110     59579.713866     87697.692535 1035186727.668677 /
   Binder:1671_3  1949  56079125.099768     10534   120      3044.774771      6646.219846 1036459779.411004 /
 ReferenceQueueD  1712  55372150.642508        58   120        10.395077         6.460153 1023445115.112964 /
 Jit thread pool  1723  49827287.387788        35   129       367.794229        37.047156 896766777.170798 /
   Binder:1717_2  1731  56009786.034402       570   120       384.757617       385.461928 1035326176.435974 /
 Jit thread pool  1768  53531939.000335        28   129        20.202768        37.296691 979877338.655096 /
 ReferenceQueueD  1786  55373094.329253       341   120       131.093085        43.539385 1023444761.311879 /
 FinalizerDaemon  1788  55373094.298715       322   120        62.295772       131.639779 1023444740.041949 /
 RxNewThreadSche  1840      6119.851604        26   120         3.419463        14.600922        23.673923 /
   Binder:1753_3  3021  53784801.228364       215   120       153.913072       127.704533 986068421.052480 /
 RxNewThreadSche 10997   9439014.868541         5   120         3.811076         6.952154        30.429232 /
 RxNewThreadSche 21098  13494962.180242         4   120         0.107923         7.587538        12.558923 /
 RxNewThreadSche 31291  17537336.349316         3   120         9.650924         6.877922         4.645384 /
 e.systemuitools  2062  56083504.148183    215715   120    301648.420330   1953993.205708 1034376431.451683 /
 ReferenceQueueD  2092  55379666.962636       105   120        55.141998        12.229083 1023440389.673409 /
   Binder:2077_1  2099  52248473.912729       417   120       235.546998       247.534690 950528189.597516 /
         ged_srv  2103  33719180.903614       115   120        38.081692        23.509616 525303284.086359 /
 FinalizerDaemon  2125  55802309.698736       274   120        96.433851       103.236384 1030512152.509959 /
   Binder:2177_1  2191  55341260.904215        37   120        27.967996        12.658083 1023436701.924710 /
 FinalizerWatchd  2204  55372679.127459        36   120         3.917000         3.638614 1023437016.050484 /
   Binder:2194_1  2207  55341262.013907        38   120        40.090463        12.871308 1023436657.741634 /
   Binder:2224_1  2237  52840358.835322       298   120       119.847542       320.998538 963843199.328171 /
 PowerStateThrea  2458  33847236.869347      5759   120      4796.854263      3483.595340 527336075.580471 /
   Binder:2224_3  4026  52860138.850868       289   120       175.050225       233.801392 964420595.304321 /
 ReferenceQueueD  2248  55376226.178730        52   120         8.831307         5.933614 1023436902.906330 /
   Binder:2240_1  2252  51593020.581062        42   120        63.704311        33.440381 937031490.822944 /
   Binder:2255_1  2271  56033154.562491      1671   120       962.166530      1343.565687 1035321796.022525 /
   Binder:2255_3  2404  56032004.813208      1639   120       923.998699      1365.694478 1035318382.826100 /
 Jit thread pool  2314    840760.926861       401   129       636.509446       598.174388 944615143.691963 /bg_non_interactive
   Binder:2306_1  2324  56027408.099977       552   120       455.434147       356.177158 1035319812.922354 /
 downloadProvice  2763    178132.398145        47   120        14.689535        16.208308     21739.367670 /bg_non_interactive
   Binder:2306_5  3581  56045598.148836       519   120       415.198611       336.687553 1035673768.634104 /
   Binder:2306_7  3584  55995714.222897       493   120       435.668924       510.319458 1035011806.970693 /
 m.meizu.battery  2370  56053359.922863     45451   120    160503.684888    224282.216651 1035524983.063264 /
   Binder:2370_1  2384  56045598.193297      1082   120      1151.307926       831.756778 1035686997.443211 /
 UsageStatsManag  2496  55078712.476754       131   120        23.518922        35.905766 1017318302.990434 /
 RecordEventThre  2501  55078733.528312       420   120       405.771683       738.061080 1017317238.547050 /
   Binder:2370_6 27759  56035894.128884       501   120       591.378993       360.080085 598035997.279291 /
 .flyme.launcher  2386  56010160.110427     22479   120     54606.318514     83348.324293 1035180494.096128 /
   Binder:2386_2  2401  56009786.082311      1001   120       304.616536       434.117314 1035317407.078800 /
   Binder:2386_3  2445  55829592.869138       959   120       447.393002       387.808844 1031715452.154068 /
 UsageStatsManag  2508  54248979.366717        85   120         6.142921        34.401921 997500729.044958 /
 UsageStats_Logg  2509  56009809.946065      4695   120     12749.501791      6244.805194 1035298548.627207 /
 ndroid.location  2453    879441.504554     25067   120    172125.929571     57697.885234 1035093322.990406 /bg_non_interactive
   Binder:2453_2  2476  56033064.928108      2441   120      2363.857383      1541.900081 1035319107.345273 /
 ConnectivityThr  2580    666797.790817         1   120         0.000000         0.628846         0.000000 /bg_non_interactive
 trafficPollingT  2722    874614.094853       115   120        65.401389        33.299768 525283562.975460 /bg_non_interactive
 ComputationThre 26893    874614.094853        19   120        97.621847        10.583385        25.370769 /bg_non_interactive
 ComputationThre 27152    874618.017314        20   120       107.045311        11.177075         2.826846 /bg_non_interactive
 u.mzsyncservice  2557    879443.225708     18720   120     45340.206087     69031.127777 1035208250.288716 /bg_non_interactive
  HeapTaskDaemon  2568    879441.558707      1491   120      1773.618784       837.339138 1035321975.979135 /bg_non_interactive
   Binder:2557_1  2569  56031944.102234      1427   120       452.224756      1366.056482 1035317857.740724 /
   Binder:2557_2  2570  56033065.499878      1461   120       451.642318      1434.576522 1035320632.156899 /
 UsageStats_Logg  2586    873023.866714       407   120      1244.391839       517.626861 1023070746.628069 /bg_non_interactive
 RecordEventThre  2589     59246.247671         3   120         0.125692         1.544616     16777.821424 /bg_non_interactive
 StatsUploadThre  2602    873020.640639       990   120      8619.253869       829.768078 1023063005.462051 /bg_non_interactive
            JDWP  2656    184202.989590        12   120         8.830693         3.254845      3848.387855 /bg_non_interactive
 FinalizerDaemon  2658    873804.763280        90   120       134.453234        29.832462 1023434665.586243 /bg_non_interactive
   Profile Saver  2680    184202.989590         8   120         2.344231         6.287692      1999.479082 /bg_non_interactive
 FinalizerDaemon  3244    873933.193119        43   120       130.746692         9.167776 1023428731.903995 /bg_non_interactive
   Profile Saver  3256     26637.242212        10   120         5.319307         6.626540      1999.544697 /bg_non_interactive
 com.meizu.cloud  3288    873963.986808       486   120      6609.332174      1175.332080 1023420288.479594 /bg_non_interactive
 pool-1-thread-1  3304    177796.982104        14   120         1.710077         8.185537     60058.760683 /bg_non_interactive
 Jit thread pool  3694   6336221.838045        13   129         4.989386        25.141231     28019.900604 /
 UsageStatsManag  4005    666446.370581        30   120         9.254616        16.404769 525281525.981613 /bg_non_interactive
 RecordEventThre  4007    666592.986501        59   120        11.412001        21.220767 525311575.022840 /bg_non_interactive
 xiaoyuan_taskqu  4146    666445.965658        70   130        32.387618        65.221074     14545.066189 /bg_non_interactive
    RenderThread  5485    873753.210070       668   112       826.587074       404.358791 1023335862.970144 /bg_non_interactive
 ReferenceQueueD  4377    873842.548267       200   120       163.293619        26.327696 1023407430.025945 /bg_non_interactive
 FinalizerDaemon  4378    873842.579344       174   120       109.215692        59.261463 1023407449.739333 /bg_non_interactive
    input_worker  4874    666932.530464        45   125       112.201770       714.529155     60134.851374 /bg_non_interactive
 FinalizerDaemon  4499    873808.280352       108   120         8.892074        29.538537 1023405691.530181 /bg_non_interactive
   Binder:4507_2  4520  51592920.101288        44   120         7.668691        13.387922 937000053.623719 /
   Profile Saver  4525    178132.398145        12   120         0.000000         5.718693      1999.577004 /bg_non_interactive
 UsageStatsManag  4527    178132.398145        15   120         8.885155        12.735612        14.708695 /bg_non_interactive
 RecordEventThre  4528    178132.398145         1   120         5.583000         0.965000         0.000000 /bg_non_interactive
 u.net.pedometer  4556    879917.915367     27654   120     29182.461965     31458.080646 1036535463.372010 /bg_non_interactive
   Binder:4556_1  4569    879917.915367     12524   120      4035.936612      9933.564723 1036582099.286509 /bg_non_interactive
 pool-1-thread-1  4574    879917.915367        78   120       231.954385        93.533460 968654758.328956 /bg_non_interactive
 UsageStatsManag  4576    879917.915367        69   120         8.294002        42.423770        56.545690 /bg_non_interactive
 UsageStats_Logg  4578    879917.915367         6   120         1.695847         1.279999         0.000000 /bg_non_interactive
 RecordEventThre  4579    879917.915367         4   120         4.040537         1.019694         0.000000 /bg_non_interactive
 StatsUploadThre  4582    879917.915367       721   120      9082.140566       701.243915 1023031913.555590 /bg_non_interactive
 Jit thread pool  4672    844089.347915        99   129       219.860997       259.771081 951215140.006606 /bg_non_interactive
 FinalizerDaemon  4677    877719.567480       411   120       357.755000       122.855144 1031681992.910303 /bg_non_interactive
  HeapTaskDaemon  4679    877980.730985       169   120      1445.022084       131.623924 1031685964.605067 /bg_non_interactive
   Binder:4667_3  4682  55803735.812095       738   120       410.726773       385.035131 1030480627.597531 /
   Binder:4667_5  4692  55804292.061990       600   120       154.926688       340.477836 1030502748.706264 /
   Binder:4667_6  4693  55804292.501836       639   120       516.432305       419.259317 1030502299.149784 /
   Binder:4667_7  5069  55804292.599837       424   120       310.832921       295.324386 1030484320.120593 /
 FinalizerDaemon  4779    879649.286904      8457   120      8624.741429      2411.221798 1035857640.758367 /bg_non_interactive
   Binder:4770_2  4784  56040636.909456      1998   120      1417.464308      1494.671607 1035424488.218074 /
   Profile Saver  4785    666861.514442        18   120       178.477925        14.420152 273671965.767069 /bg_non_interactive
 ConditionReceiv  4794    879588.434399      3360   120     43331.675105      1305.894766 1035823588.266414 /bg_non_interactive
        Thread-8  4904    181262.101046        56   130        20.672769        33.496619        48.157075 /bg_non_interactive
 RecordEventThre  4910    666861.514442        49   120        17.687151        27.858386    103904.768787 /bg_non_interactive
   Binder:4770_3  5561  56045598.341527      1909   120      1528.026623      1455.445159 1035569117.759708 /
       Thread-16  5575    181248.254014         7   130         3.434847         6.637921         2.019385 /bg_non_interactive
 d.process.acore  5111    874184.523887      4096   120     21237.654214     10989.644702 1023350227.407823 /bg_non_interactive
   Binder:5111_2  5124  55483396.307495       596   120       169.122607       325.324772 1023395587.511312 /
   kworker/u21:0  5687  33744440.423756       102   100        40.813461        14.185077 524973868.959958 /
   Binder:5931_1  5943  56009944.247145       627   120       562.623919       337.453074 1034896321.554501 /
 StatsUploadThre  5975    879102.820921     12964   120     56768.383155     10396.350566 1034828927.067921 /bg_non_interactive
 Jit thread pool  6029    427550.143762        29   129       152.545075        54.487619 300292195.461302 /bg_non_interactive
   Profile Saver  6041    185485.753638         5   120         0.283693         8.097461      1999.721390 /bg_non_interactive
     kworker/1:3  6056  56084928.535768    448944   120   1279332.124868    159772.562317 1034815615.660702 /
 Jit thread pool  6361    342485.585058        33   129        83.270536        61.345310 160816105.186243 /bg_non_interactive
 ReferenceQueueD  8518    877556.086560       686   120       559.598381        68.911077 1011842445.257233 /bg_non_interactive
   Binder:8508_2  8523  55802328.453773       525   120       225.832153       331.340910 1011842507.836394 /
   Profile Saver  8526    264257.453028        11   120        28.885769         7.460693      1999.320081 /bg_non_interactive
 pool-1-thread-1 22257    827072.120048        26   120       164.788613       236.237080      2974.943391 /bg_non_interactive
 UsageStatsManag 22276    827072.120048        26   120        47.843462        33.031229       512.854541 /bg_non_interactive
 UsageStats_Logg 22277    827072.120048         3   120         0.206154        19.030845       449.737925 /bg_non_interactive
     kworker/1:4  3613  56084928.545460    466968   120   1049002.550413    368339.677530 794046072.191231 /
 fe:MzSecService 32195    879888.601495     22266   120     58229.455896    151346.434086 560491639.947313 /bg_non_interactive
 FinalizerWatchd 32205    879888.204058      1875   120      1640.302002       380.980634 560718993.852163 /bg_non_interactive
  HeapTaskDaemon 32206    879884.804111      2065   120      3651.827370      3054.816190 560699306.952971 /bg_non_interactive
   Profile Saver 32209    877466.255866        31   120        14.999769        33.448307  42300740.930388 /bg_non_interactive
 pool-1-thread-1 32211    877466.255866        58   120        17.607848        35.805462     13491.527491 /bg_non_interactive
 TMS_THREAD_POOL 32216    877466.255866         6   120        17.231384         4.979692      3223.318085 /bg_non_interactive
 RxComputationSc 32231    879700.294340     12062   120     46238.573074     13109.753767 560640684.713513 /bg_non_interactive
  Binder:32195_3  4536    877466.255866       217   120       178.859617        98.341914 518701970.511164 /bg_non_interactive
  Binder:32195_5 11799    877466.255866       225   120        49.848075        88.434609 461756243.003943 /bg_non_interactive
  Binder:32195_6 11800    877466.255866       199   120        78.836919        82.805016 461756167.823158 /bg_non_interactive
  Binder:32195_7 16999    877466.641636       168   120        47.848227        73.005323 425629509.722716 /bg_non_interactive
  Binder:32195_8 21958  55802342.747157       155   120        43.053926        55.632616 382425104.003803 /
  Binder:32195_9 32130    877466.255866       129   120        57.017692        30.586010 288948228.980481 /bg_non_interactive
  Binder:32195_A 11525    877466.255866        75   120        41.355383        21.630611 202529484.478545 /bg_non_interactive
  Binder:32195_B 11526    877466.255866        66   120         8.906075        27.998465 202529490.582460 /bg_non_interactive
  Binder:32195_D 31879    877470.176251        32   120         2.393538         5.008769  29704629.972819 /bg_non_interactive
 pool-1-thread-1  7402    659166.582163        10   120         0.423385         6.709615        56.299231 /bg_non_interactive
 StatsUploadThre  7403    873024.303640       283   120      4181.570620       329.628774 497953384.596307 /bg_non_interactive
 pool-10-thread-  7411    659200.182852        17   120         2.788923         7.052540        21.836768 /bg_non_interactive
 xiaoyuan-ipool2  7416    660020.226869        71   139        24.325766        84.926388        72.681924 /bg_non_interactive
 xy_update_pubin  7427    663033.103871        25   130        10.895387        21.876152      2000.079235 /bg_non_interactive
 Jit thread pool  7443    872285.772842        25   129       416.071385        34.537771 497582451.763115 /bg_non_interactive
   Binder:7438_2  7451  56011891.861905       792   120       359.527154       444.379243 510187118.842846 /
 RecordEventThre  7457    666234.679271        53   120        20.782694        21.278077       227.769233 /bg_non_interactive
 pool-1-thread-1  7464    847435.328289        57   120       117.316072        51.842235 435111434.820875 /bg_non_interactive
         GslbLog  7466    666234.679271         5   120        14.529231         5.980308        36.965692 /bg_non_interactive
 iatek.mtklogger 26890  55341346.736371       196   120       400.568619       554.115002  72912363.573059 /
            JDWP 26898  52248143.841881         3   120        10.140000         2.687847         0.337077 /
  Binder:26890_2 26904  52248147.021651         4   120         0.979847         3.568539      1442.756541 /
        Thread-4 26910  52248145.619651         9   120         9.059616         7.356615         0.000000 /
        Thread-6 26912  52248145.619651         8   120         6.151538         7.693923        12.106539 /
 calendar:remote  4185    879173.282384       405   120      1205.021839      1177.336782  11878945.360857 /bg_non_interactive
 UsageStatsManag  4221    874255.869878        22   120        16.338923        10.703769       180.499846 /bg_non_interactive
 UsageStats_Logg  4222    879077.960539        66   120       434.468075        78.005699  11880008.743782 /bg_non_interactive
 StatsUploadThre  4226    879083.960922       151   120       749.630460       139.289086  11879550.132628 /bg_non_interactive
         GslbLog  4227    874243.192648        14   120         5.584462         5.669845        22.937154 /bg_non_interactive
R             sh  5914  56084976.869536        76   120        55.309925       185.528918     16060.396349 /

2.2.10、"/proc/schedstat" & "/proc/pid/schedstat"

We can read cpu-level scheduling statistics from "/proc/schedstat"; the implementation is in show_schedstat() in kernel/sched/stats.c. Per Documentation/scheduler/sched-stats.txt (version 15), each cpu line carries nine fields: yld_count, a legacy always-zero field, sched_count, sched_goidle, ttwu_count, ttwu_local, total time spent running (ns), total time spent waiting to run (ns), and the number of timeslices run:

# cat /proc/schedstat
version 15
timestamp 4555707576
cpu0 498206 0 292591647 95722605 170674079 157871909 155819980602662 147733290481281 195127878      /* runqueue-specific stats */
domain0 003 5 5 0 0 0 0 0 5 0 0 0 0 0 0 0 0 7 7 0 0 0 0 7 0 0 0 0 0 0 0 0 0 0 14 1 0                /* domain-specific stats */
domain1 113 5 5 0 0 0 0 0 5 0 0 0 0 0 0 0 0 7 7 0 0 0 0 0 7 0 0 0 0 0 0 0 0 0 17 0 0
cpu1 329113 0 52366034 13481657 28584254 20127852 44090575379688 34066018366436 37345579
domain0 003 4 4 0 0 0 0 1 3 0 0 0 0 0 0 0 0 4 3 0 2 1 0 2 1 0 0 0 0 0 0 0 0 0 9 3 0
domain1 113 4 4 0 0 0 0 0 1 0 0 0 0 0 0 0 0 3 3 0 0 0 0 0 3 0 0 0 0 0 0 0 0 0 7 0 0
cpu4 18835 0 13439942 5205662 8797513 2492988 14433736408037 4420752361838 7857723
domain0 113 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 1 3 2 1 201 0 0 0 2 1 0 1 0 0 0 0 0 0 8 7 0
cpu8 32417 0 13380391 4938475 9351290 2514217 10454988559488 3191584640696 7933881
domain0 113 1 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 7 8 0

Process-level scheduling statistics can be read from "/proc/pid/schedstat"; the implementation is in proc_pid_schedstat() in fs/proc/base.c:

# cat /proc/824/schedstat
29099619 5601999 20             /* task->se.sum_exec_runtime, task->sched_info.run_delay, task->sched_info.pcount */
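
These three fields are cumulative, so sampling them twice gives a task's consumed CPU time and runnable delay over an interval. Below is a minimal user-space sketch (not kernel code; error handling is mostly elided and the 1-second interval is just for illustration):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Read the three fields of /proc/<pid>/schedstat shown above */
static int read_schedstat(pid_t pid, unsigned long long *exec_ns,
                          unsigned long long *delay_ns, unsigned long *pcount)
{
    char path[64];
    FILE *fp;

    snprintf(path, sizeof(path), "/proc/%d/schedstat", (int)pid);
    fp = fopen(path, "r");
    if (!fp)
        return -1;
    if (fscanf(fp, "%llu %llu %lu", exec_ns, delay_ns, pcount) != 3) {
        fclose(fp);
        return -1;
    }
    fclose(fp);
    return 0;
}

int main(int argc, char *argv[])
{
    unsigned long long e1, d1, e2, d2;
    unsigned long p1, p2;
    pid_t pid = (argc > 1) ? atoi(argv[1]) : getpid();

    read_schedstat(pid, &e1, &d1, &p1);
    sleep(1);
    read_schedstat(pid, &e2, &d2, &p2);

    /* over ~1s: ns spent on-cpu, ns spent runnable-but-waiting, timeslices */
    printf("ran %llu ns, waited %llu ns, %lu timeslices\n",
           e2 - e1, d2 - d1, p2 - p1);
    return 0;
}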

2.3、The RT scheduling algorithm

Having analyzed the cfs algorithm for normal tasks, let's look at the scheduling of rt tasks (SCHED_RR/SCHED_FIFO). The RT algorithm has changed very little: tasks are still organized in the old array of linked lists, rq->rt_rq.active.queue[MAX_RT_PRIO], which contains 100 list heads (priorities 0-99) holding the runnable rt tasks. rt scheduling is implemented by the rt_sched_class family of functions.

  • SCHED_FIFO rt tasks are scheduled simply: the highest-priority task keeps running until it voluntarily gives up the cpu.

  • SCHED_RR rt tasks of the same priority are time-sliced round-robin; the timeslice length is controlled by the sched_rr_timeslice variable (a user-space query sketch follows the snippet below):

# cat /proc/sys/kernel/sched_rr_timeslice_ms  // the SCHED_RR timeslice is 25ms
25
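
From user space, the effective RR timeslice can also be queried per task with the POSIX sched_rr_get_interval() syscall. The sketch below switches itself to SCHED_RR (priority 10 is an arbitrary choice; root privilege is normally required) and prints its own timeslice:

#include <stdio.h>
#include <sched.h>
#include <time.h>

int main(void)
{
    struct sched_param param = { .sched_priority = 10 };
    struct timespec ts;

    /* switch ourselves to SCHED_RR so the interval is meaningful */
    if (sched_setscheduler(0, SCHED_RR, &param) == -1) {
        perror("sched_setscheduler");   /* usually needs root */
        return 1;
    }

    /* ask the kernel for our round-robin timeslice */
    if (sched_rr_get_interval(0, &ts) == -1) {
        perror("sched_rr_get_interval");
        return 1;
    }

    printf("SCHED_RR timeslice: %ld.%03ld ms\n",
           (long)(ts.tv_sec * 1000 + ts.tv_nsec / 1000000),
           ts.tv_nsec % 1000000 / 1000);
    return 0;
}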

2.3.1、task_tick_rt()

scheduler_tick() -> task_tick_rt()

↓

static void task_tick_rt(struct rq *rq, struct task_struct *p, int queued)
{
    struct sched_rt_entity *rt_se = &p->rt;

    /* (1) Update runtime statistics and do the rt-throttle accounting */
    update_curr_rt(rq);

    /* (2) Update the rt capacity request */
    sched_rt_update_capacity_req(rq);

    watchdog(rq, p);

    /*
     * RR tasks need a special form of timeslice management.
     * FIFO tasks have no timeslices.
     */
    /* (3) For a SCHED_FIFO rt task there is no timeslice scheduling; return directly */
    if (p->policy != SCHED_RR)
        return;

    if (--p->rt.time_slice)
        return;

    /* (4) A SCHED_RR task whose timeslice has run out gets it reset;
        the timeslice size is sched_rr_timeslice
     */
    p->rt.time_slice = sched_rr_timeslice;

    /*
     * Requeue to the end of queue if we (and all of our ancestors) are not
     * the only element on the queue
     */
    /* (5) When a SCHED_RR task's timeslice is used up, do the Round-Robin:
        move the current task to the tail of its priority list and let
        the head of the list run instead
     */
    for_each_sched_rt_entity(rt_se) {
        if (rt_se->run_list.prev != rt_se->run_list.next) {
            requeue_task_rt(rq, p, 0);
            resched_curr(rq);
            return;
        }
    }
}

|→

static void update_curr_rt(struct rq *rq)
{
    struct task_struct *curr = rq->curr;
    struct sched_rt_entity *rt_se = &curr->rt;
    u64 delta_exec;
    int cpu = rq_cpu(rq);
#ifdef CONFIG_MTK_RT_THROTTLE_MON
    struct rt_rq *cpu_rt_rq;
    u64 runtime;
    u64 old_exec_start;

    old_exec_start = curr->se.exec_start;
#endif

    if (curr->sched_class != &rt_sched_class)
        return;

    per_cpu(update_exec_start, rq->cpu) = curr->se.exec_start;
    /* (1.1) Compute the delta time since the last update */
    delta_exec = rq_clock_task(rq) - curr->se.exec_start;
    if (unlikely((s64)delta_exec <= 0))
        return;

    schedstat_set(curr->se.statistics.exec_max,
              max(curr->se.statistics.exec_max, delta_exec));

    /* sched:update rt exec info*/
    /* (1.2) Record the current rt exec info, to be dumped on a fault */
    per_cpu(exec_task, cpu).pid = curr->pid;
    per_cpu(exec_task, cpu).prio = curr->prio;
    strncpy(per_cpu(exec_task, cpu).comm, curr->comm, sizeof(per_cpu(exec_task, cpu).comm));
    per_cpu(exec_delta_time, cpu) = delta_exec;
    per_cpu(clock_task, cpu) = rq->clock_task;
    per_cpu(exec_start, cpu) = curr->se.exec_start;

    /* (1.3) Accumulate runtime for the task's thread group:
        tsk->signal->cputimer.cputime_atomic.sum_exec_runtime
     */
    curr->se.sum_exec_runtime += delta_exec;
    account_group_exec_runtime(curr, delta_exec);

    /* (1.4) Update the per-cpu cpuacct runtime of the task's cgroup: ca->cpuusage[cpu]->cpuusage */
    curr->se.exec_start = rq_clock_task(rq);
    cpuacct_charge(curr, delta_exec);

    /* (1.5) Accumulate time*freq_capacity into rq->rt_avg */
    sched_rt_avg_update(rq, delta_exec);

    per_cpu(sched_update_exec_start, rq->cpu) = per_cpu(update_curr_exec_start, rq->cpu);
    per_cpu(update_curr_exec_start, rq->cpu) = sched_clock_cpu(rq->cpu);

    /* (1.6) If rt bandwidth control is enabled, do the throttle accounting */
    if (!rt_bandwidth_enabled())
        return;

#ifdef CONFIG_MTK_RT_THROTTLE_MON
    cpu_rt_rq = rt_rq_of_se(rt_se);
    runtime = sched_rt_runtime(cpu_rt_rq);
    if (cpu_rt_rq->rt_time == 0 && !(cpu_rt_rq->rt_throttled)) {
        if (old_exec_start < per_cpu(rt_period_time, cpu) &&
            (per_cpu(old_rt_time, cpu) + delta_exec) > runtime) {
            save_mt_rt_mon_info(cpu, delta_exec, curr);
            mt_rt_mon_switch(MON_STOP, cpu);
            mt_rt_mon_print_task(cpu);
        }
        mt_rt_mon_switch(MON_RESET, cpu);
        mt_rt_mon_switch(MON_START, cpu);
        update_mt_rt_mon_start(cpu, delta_exec);
    }
    save_mt_rt_mon_info(cpu, delta_exec, curr);
#endif

    for_each_sched_rt_entity(rt_se) {
        struct rt_rq *rt_rq = rt_rq_of_se(rt_se);

        if (sched_rt_runtime(rt_rq) != RUNTIME_INF) {
            raw_spin_lock(&rt_rq->rt_runtime_lock);
            /* (1.7) Throttle accounting:
                rt_rq->rt_time: time the rt_rq has already run in this period
                rt_rq->rt_runtime: time the rt_rq may run in this period  // 950ms
                rt_rq->tg->rt_bandwidth.rt_period: the length of one period  // 1s
                if rt_rq->rt_time > rt_rq->rt_runtime, rt-throttle happens
             */
            rt_rq->rt_time += delta_exec;
            if (sched_rt_runtime_exceeded(rt_rq))
                resched_curr(rq);
            raw_spin_unlock(&rt_rq->rt_runtime_lock);
        }
    }
}

|→

static inline void sched_rt_avg_update(struct rq *rq, u64 rt_delta)
{
    /* (1.5.1) Accumulate time*freq_capacity into rq->rt_avg;
        note that the time is in ns
     */
    rq->rt_avg += rt_delta * arch_scale_freq_capacity(NULL, cpu_of(rq));
}

2.3.2、rq->rt_avg

We compute rq->rt_avg (the accumulated time*freq_capacity) mainly for use by CPU_FREQ_GOV_SCHED.

The main idea of CONFIG_CPU_FREQ_GOV_SCHED is that cfs and rt each compute their own part of cpu_sched_capacity_reqs; update_cpu_capacity_request() then combines the cfs and rt freq_capacity requests and asks the cpufreq framework to set a suitable cpu frequency. CPU_FREQ_GOV_SCHED is intended to replace the interactive governor.

/* (1) The cfs request for cpu freq capacity:
    per_cpu(cpu_sched_capacity_reqs, cpu).cfs
 */
static inline void set_cfs_cpu_capacity(int cpu, bool request,
                    unsigned long capacity, int type)
{
#ifdef CONFIG_CPU_FREQ_SCHED_ASSIST
    if (true) {
#else
    if (per_cpu(cpu_sched_capacity_reqs, cpu).cfs != capacity) {
#endif
        per_cpu(cpu_sched_capacity_reqs, cpu).cfs = capacity;
        update_cpu_capacity_request(cpu, request, type);
    }
}

/* (2) The rt request for cpu freq capacity:
    per_cpu(cpu_sched_capacity_reqs, cpu).rt
 */
static inline void set_rt_cpu_capacity(int cpu, bool request,
                       unsigned long capacity,
                    int type)
{
#ifdef CONFIG_CPU_FREQ_SCHED_ASSIST
    if (true) {
#else
    if (per_cpu(cpu_sched_capacity_reqs, cpu).rt != capacity) {
#endif
        per_cpu(cpu_sched_capacity_reqs, cpu).rt = capacity;
        update_cpu_capacity_request(cpu, request, type);
    }
}

|→


/* (3) Combine the cfs and rt requests and
    adjust the cpu frequency accordingly
 */
void update_cpu_capacity_request(int cpu, bool request, int type)
{
    unsigned long new_capacity;
    struct sched_capacity_reqs *scr;

    /* The rq lock serializes access to the CPU's sched_capacity_reqs. */
    lockdep_assert_held(&cpu_rq(cpu)->lock);

    scr = &per_cpu(cpu_sched_capacity_reqs, cpu);

    new_capacity = scr->cfs + scr->rt;
    new_capacity = new_capacity * capacity_margin_dvfs
        / SCHED_CAPACITY_SCALE;
    new_capacity += scr->dl;

#ifndef CONFIG_CPU_FREQ_SCHED_ASSIST
    if (new_capacity == scr->total)
        return;
#endif

    scr->total = new_capacity;
    if (request)
        update_fdomain_capacity_request(cpu, type);
}
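
To make the combination above concrete, here is a hedged numeric sketch; capacity_margin_dvfs is a platform tuning value that is assumed here to be 1280 (a 25% margin over SCHED_CAPACITY_SCALE = 1024) purely for illustration:

/* Sketch of the arithmetic in update_cpu_capacity_request().
 * CAPACITY_MARGIN_DVFS = 1280 is an assumed platform tuning value.
 */
#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024
#define CAPACITY_MARGIN_DVFS 1280   /* assumed: +25% headroom */

int main(void)
{
    unsigned long cfs = 300, rt = 100, dl = 0;   /* example requests */
    unsigned long new_capacity;

    new_capacity = (cfs + rt) * CAPACITY_MARGIN_DVFS / SCHED_CAPACITY_SCALE;
    new_capacity += dl;

    /* (300 + 100) * 1280 / 1024 = 500 -> ~49% of max capacity (1024) */
    printf("requested capacity = %lu / %d\n", new_capacity, SCHED_CAPACITY_SCALE);
    return 0;
}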

For CONFIG_CPU_FREQ_GOV_SCHED, rt has 3 key computation paths:

  • 1、Accumulating the rt load (rq->rt_avg): scheduler_tick() -> task_tick_rt() -> update_curr_rt() -> sched_rt_avg_update()
  • 2、Aging the rt load: scheduler_tick() -> update_cpu_load_active() -> __update_cpu_load() -> sched_avg_update(),
    or scheduler_tick() -> task_tick_rt() -> sched_rt_update_capacity_req() -> sched_avg_update()

  • 3、Updating the rt request: scheduler_tick() -> task_tick_rt() -> sched_rt_update_capacity_req() -> set_rt_cpu_capacity()

Similarly, cfs has 3 key computation paths:

  • 1、Accumulating the cfs load:
  • 2、Aging the cfs load:
  • 3、Updating the cfs request: scheduler_tick() -> sched_freq_tick() -> set_cfs_cpu_capacity()

There is related computation during smp load balancing as well: run_rebalance_domains() -> rebalance_domains() -> load_balance() -> find_busiest_group() -> update_sd_lb_stats() -> update_group_capacity() -> update_cpu_capacity() -> scale_rt_capacity()

Let's analyze the rt paths first:

  • Aging of the rt load in sched_avg_update():
void sched_avg_update(struct rq *rq)
{
    /* (1) The default aging period is 1s/2 = 500ms */
    s64 period = sched_avg_period();

    while ((s64)(rq_clock(rq) - rq->age_stamp) > period) {
        /*
         * Inline assembly required to prevent the compiler
         * optimising this loop into a divmod call.
         * See __iter_div_u64_rem() for another example of this.
         */
        asm("" : "+rm" (rq->age_stamp));
        rq->age_stamp += period;
        /* (2) Each aging period, the load decays to half its value */
        rq->rt_avg /= 2;
        rq->dl_avg /= 2;
    }
}

|→

static inline u64 sched_avg_period(void)
{
    /* (1.1) aging period = sysctl_sched_time_avg/2 = 500ms */
    return (u64)sysctl_sched_time_avg * NSEC_PER_MSEC / 2;
}

/*
 * period over which we average the RT time consumption, measured
 * in ms.
 *
 * default: 1s
 */
const_debug unsigned int sysctl_sched_time_avg = MSEC_PER_SEC;
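
This halving bounds rq->rt_avg: under a constant 100% rt load, each 500ms aging period adds roughly period*capacity and then halves the sum, which converges geometrically to period*capacity. A minimal simulation sketch (max capacity 1024 assumed):

/* Sketch: steady state of rq->rt_avg under a continuous rt load.
 * Each 500ms aging period accumulates period*capacity, then halves.
 */
#include <stdio.h>

int main(void)
{
    const double period_ms = 500.0, capacity = 1024.0;
    double rt_avg = 0.0;

    for (int i = 1; i <= 10; i++) {
        rt_avg += period_ms * capacity;   /* per-tick accumulation over one period */
        rt_avg /= 2;                      /* the sched_avg_update() halving */
        printf("period %2d: rt_avg = %.0f (converges to %.0f)\n",
               i, rt_avg, period_ms * capacity);
    }
    return 0;
}
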
  • Updating the rt freq_capacity request: scheduler_tick() -> task_tick_rt() -> sched_rt_update_capacity_req() -> set_rt_cpu_capacity()
static void sched_rt_update_capacity_req(struct rq *rq)
{
    u64 total, used, age_stamp, avg;
    s64 delta;

    if (!sched_freq())
        return;

    /* (1) First age the accumulated load */
    sched_avg_update(rq);
    /*
     * Since we're reading these variables without serialization make sure
     * we read them once before doing sanity checks on them.
     */
    age_stamp = READ_ONCE(rq->age_stamp);
    /* (2) avg = the load after aging */
    avg = READ_ONCE(rq->rt_avg);
    delta = rq_clock(rq) - age_stamp;

    if (unlikely(delta < 0))
        delta = 0;

    /* (3) total time = one aging period + the leftover since the last aging */
    total = sched_avg_period() + delta;

    /* (4) request = avg/total (max frequency = 1024) */
    used = div_u64(avg, total);
    if (unlikely(used > SCHED_CAPACITY_SCALE))
        used = SCHED_CAPACITY_SCALE;

    /* (5) Update the request */
    set_rt_cpu_capacity(rq->cpu, true, (unsigned long)(used), SCHE_ONESHOT);
}
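
A hedged numeric walk-through of steps (1)-(4): assume the cpu's max capacity is 1024, rt tasks have accumulated 250ms worth of (already aged) runtime in rq->rt_avg, and we are 100ms into the current 500ms aging period:

/* Sketch: the request computed by sched_rt_update_capacity_req().
 * All values below are assumed for illustration.
 */
#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024ULL
#define NSEC_PER_MSEC        1000000ULL

int main(void)
{
    unsigned long long period = 500 * NSEC_PER_MSEC;   /* sched_avg_period() */
    unsigned long long avg    = 250 * NSEC_PER_MSEC * SCHED_CAPACITY_SCALE; /* rq->rt_avg */
    unsigned long long delta  = 100 * NSEC_PER_MSEC;   /* time since last aging */
    unsigned long long total  = period + delta;
    unsigned long long used   = avg / total;           /* 0..1024 */

    /* 250ms*1024 / 600ms ~= 426 -> rt requests ~42% of max capacity */
    printf("rt capacity request: %llu / %llu\n", used, SCHED_CAPACITY_SCALE);
    return 0;
}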

2.3.3、rt bandwidth (rt-throttle)

Based on time, we can also apply bandwidth control to rt tasks: once they exceed their quota they are forbidden to run. This is called rt-throttle.

  • The principle of rt-throttle: define a monitoring period; within this period the runtime of rt tasks must not exceed a quota. Once the quota is exceeded, the rt_rq enters the rt-throttle state: the running task is forcibly stopped and dequeued from the rt_rq, and the rt_rq accepts no new tasks, until the next period begins; only then is the rt-throttle state left and the bandwidth accounting of the next period started. This achieves the bandwidth control.
# cat /proc/sys/kernel/sched_rt_period_us  // the rt-throttle period is 1s
1000000

# cat /proc/sys/kernel/sched_rt_runtime_us // runnable time within one rt-throttle period is 950ms
950000

This example means: the rt-throttle period is 1s, and within that 1s rt tasks may run for at most 950ms. If an rt task reaches 950ms of runtime within the second, it is forcibly stopped and only resumed once the 1s period expires, so the task is kept off the cpu for 50ms.
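
The following is a minimal simulation sketch of this behavior (not kernel code; it assumes a 4ms tick, i.e. HZ=250, and an rt task that wants the cpu continuously), showing the throttle firing shortly after 950ms and releasing at the period boundary:

/* Sketch: simulate the per-tick rt-throttle accounting.
 * Assumed numbers: 4ms tick, rt_runtime=950ms, rt_period=1s.
 */
#include <stdio.h>

int main(void)
{
    const int tick_ms = 4;             /* assumed HZ=250 */
    const int rt_runtime_ms = 950;     /* sched_rt_runtime_us / 1000 */
    const int rt_period_ms = 1000;     /* sched_rt_period_us / 1000 */
    int rt_time_ms = 0;                /* rt_rq->rt_time */
    int throttled = 0;                 /* rt_rq->rt_throttled */

    for (int now = tick_ms; now <= 2 * rt_period_ms; now += tick_ms) {
        if (now % rt_period_ms == 0) { /* do_sched_rt_period_timer() */
            rt_time_ms = (rt_time_ms > rt_runtime_ms) ?
                         rt_time_ms - rt_runtime_ms : 0;
            if (throttled && rt_time_ms < rt_runtime_ms) {
                throttled = 0;
                printf("%4dms: RT throttling inactivated\n", now);
            }
        }
        if (!throttled) {
            rt_time_ms += tick_ms;     /* update_curr_rt() accumulation */
            if (rt_time_ms > rt_runtime_ms) {
                throttled = 1;         /* sched_rt_runtime_exceeded() */
                printf("%4dms: RT throttling activated\n", now);
            }
        }
    }
    return 0;
}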


Let's now walk through the code in detail:

scheduler_tick() -> task_tick_rt()

↓

static void task_tick_rt(struct rq *rq, struct task_struct *p, int queued)
{

    /* (1) Update runtime statistics and do the rt-throttle accounting */
    update_curr_rt(rq);


}

|→

static void update_curr_rt(struct rq *rq)
{


    /* (1.6) If rt bandwidth control is enabled, do the throttle accounting */
    if (!rt_bandwidth_enabled())
        return;

#ifdef CONFIG_MTK_RT_THROTTLE_MON
    cpu_rt_rq = rt_rq_of_se(rt_se);
    runtime = sched_rt_runtime(cpu_rt_rq);
    if (cpu_rt_rq->rt_time == 0 && !(cpu_rt_rq->rt_throttled)) {
        if (old_exec_start < per_cpu(rt_period_time, cpu) &&
            (per_cpu(old_rt_time, cpu) + delta_exec) > runtime) {
            save_mt_rt_mon_info(cpu, delta_exec, curr);
            mt_rt_mon_switch(MON_STOP, cpu);
            mt_rt_mon_print_task(cpu);
        }
        mt_rt_mon_switch(MON_RESET, cpu);
        mt_rt_mon_switch(MON_START, cpu);
        update_mt_rt_mon_start(cpu, delta_exec);
    }
    save_mt_rt_mon_info(cpu, delta_exec, curr);
#endif

    for_each_sched_rt_entity(rt_se) {
        struct rt_rq *rt_rq = rt_rq_of_se(rt_se);

        if (sched_rt_runtime(rt_rq) != RUNTIME_INF) {
            raw_spin_lock(&rt_rq->rt_runtime_lock);
            /* (1.7) Throttle accounting:
                rt_rq->rt_time: time the rt_rq has already run in this period
                rt_rq->rt_runtime: time the rt_rq may run in this period  // 950ms
                rt_rq->tg->rt_bandwidth.rt_period: the length of one period  // 1s
                if rt_rq->rt_time > rt_rq->rt_runtime, rt-throttle happens
             */
            rt_rq->rt_time += delta_exec;
            if (sched_rt_runtime_exceeded(rt_rq))
                resched_curr(rq);
            raw_spin_unlock(&rt_rq->rt_runtime_lock);
        }
    }
}

||→

static int sched_rt_runtime_exceeded(struct rt_rq *rt_rq)
{
    u64 runtime = sched_rt_runtime(rt_rq);
    u64 runtime_pre;

    if (rt_rq->rt_throttled)
        return rt_rq_throttled(rt_rq);

    if (runtime >= sched_rt_period(rt_rq))
        return 0;

    /* sched:get runtime*/
    /* (1.7.1) If the condition (rt_rq->rt_time > rt_rq->rt_runtime) is met,
        try to borrow runtime from other cpus in the same root_domain
        to balance the load, // presumably those cpus must be running rt tasks too?
        borrowing shrinks the other cpu's rt quota: iter->rt_runtime -= diff,
        and grows this cpu's rt quota: rt_rq->rt_runtime += diff
     */
    runtime_pre = runtime;
    balance_runtime(rt_rq);
    runtime = sched_rt_runtime(rt_rq);
    if (runtime == RUNTIME_INF)
        return 0;

    /* (1.7.2) If the consumed time still exceeds the quota after the balancing,
        rt-throttle has been reached
     */
    if (rt_rq->rt_time > runtime) {
        struct rt_bandwidth *rt_b = sched_rt_bandwidth(rt_rq);
#ifdef CONFIG_RT_GROUP_SCHED
        int cpu = rq_cpu(rt_rq->rq);
        /* sched:print throttle*/
        printk_deferred("[name:rt&]sched: initial rt_time %llu, start at %llu\n",
                per_cpu(init_rt_time, cpu), per_cpu(rt_period_time, cpu));
        printk_deferred("[name:rt&]sched: cpu=%d rt_time %llu <-> runtime",
                cpu, rt_rq->rt_time);
        printk_deferred(" [%llu -> %llu], exec_task[%d:%s], prio=%d, exec_delta_time[%llu]",
                runtime_pre, runtime,
                per_cpu(exec_task, cpu).pid,
                per_cpu(exec_task, cpu).comm,
                per_cpu(exec_task, cpu).prio,
                per_cpu(exec_delta_time, cpu));
        printk_deferred(", clock_task[%llu], exec_start[%llu]\n",
                per_cpu(clock_task, cpu), per_cpu(exec_start, cpu));
        printk_deferred("[name:rt&]update[%llu,%llu], pick[%llu, %llu], set_curr[%llu, %llu]\n",
                per_cpu(update_exec_start, cpu), per_cpu(sched_update_exec_start, cpu),
                per_cpu(pick_exec_start, cpu), per_cpu(sched_pick_exec_start, cpu),
                per_cpu(set_curr_exec_start, cpu), per_cpu(sched_set_curr_exec_start, cpu));
#endif

        /*
         * Don't actually throttle groups that have no runtime assigned
         * but accrue some time due to boosting.
         */
        if (likely(rt_b->rt_runtime)) {
            /* (1.7.3) Set the rt-throttle flag */
            rt_rq->rt_throttled = 1;
            /* sched:print throttle every time*/
            printk_deferred("sched: RT throttling activated\n");
#ifdef CONFIG_RT_GROUP_SCHED
            mt_sched_printf(sched_rt_info, "cpu=%d rt_throttled=%d",
                    cpu, rt_rq->rt_throttled);
            per_cpu(rt_throttling_start, cpu) = rq_clock_task(rt_rq->rq);
#ifdef CONFIG_MTK_RT_THROTTLE_MON
            /* sched:rt throttle monitor */
            mt_rt_mon_switch(MON_STOP, cpu);
            mt_rt_mon_print_task(cpu);
#endif
#endif
        } else {
            /*
             * In case we did anyway, make it go away,
             * replenishment is a joke, since it will replenish us
             * with exactly 0 ns.
             */
            rt_rq->rt_time = 0;
        }

        /* (1.7.4) If rt-throttle is reached, forcibly dequeue the rt_rq */
        if (rt_rq_throttled(rt_rq)) {
            sched_rt_rq_dequeue(rt_rq);
            return 1;
        }
    }

    return 0;
}

From the code above, the rt-throttle accounting roughly works like this: each tick accumulates the runtime into rt_rq->rt_time; the runnable quota within one period is rt_rq->rt_runtime (950ms), and one period is rt_rq->tg->rt_bandwidth.rt_period (default 1s). If rt_rq->rt_time > rt_rq->rt_runtime, rt-throttle happens.

After rt-throttle happens, the rt_rq is forcibly dequeued and its rt tasks are forcibly stopped. With a 1s period and 950ms runtime, a task is forcibly stopped for 50ms. When the next period arrives, the task has to leave the rt-throttle state; both the period timekeeping and the exit from rt-throttle are done in the hrtimer handler do_sched_rt_period_timer().

All rt_rqs of one rt task_group share a single hrtimer, sched_rt_period_timer(), which is allocated when the rt task_group is created, started when a task enqueues into any rt_rq of the tg, and stopped when no task is running.

void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 runtime)
{
    rt_b->rt_period = ns_to_ktime(period);
    rt_b->rt_runtime = runtime;

    raw_spin_lock_init(&rt_b->rt_runtime_lock);

    /* (1) Initialize the hrtimer; the expiry period is rt_period, default 1s */
    hrtimer_init(&rt_b->rt_period_timer,
            CLOCK_MONOTONIC, HRTIMER_MODE_REL);
    rt_b->rt_period_timer.function = sched_rt_period_timer;
}

static void start_rt_bandwidth(struct rt_bandwidth *rt_b)
{
    if (!rt_bandwidth_enabled() || rt_b->rt_runtime == RUNTIME_INF)
        return;

    raw_spin_lock(&rt_b->rt_runtime_lock);
    if (!rt_b->rt_period_active) {
        rt_b->rt_period_active = 1;
        /* (2) Start the hrtimer */
        hrtimer_forward_now(&rt_b->rt_period_timer, rt_b->rt_period);
        hrtimer_start_expires(&rt_b->rt_period_timer, HRTIMER_MODE_ABS_PINNED);
    }
    raw_spin_unlock(&rt_b->rt_runtime_lock);
}

Let's look at what happens when the timer expires:

static enum hrtimer_restart sched_rt_period_timer(struct hrtimer *timer)
{
    struct rt_bandwidth *rt_b =
        container_of(timer, struct rt_bandwidth, rt_period_timer);
    int idle = 0;
    int overrun;

    raw_spin_lock(&rt_b->rt_runtime_lock);
    for (;;) {
        overrun = hrtimer_forward_now(timer, rt_b->rt_period);
        if (!overrun)
            break;

        raw_spin_unlock(&rt_b->rt_runtime_lock);
        /* (1) The actual timer handling */
        idle = do_sched_rt_period_timer(rt_b, overrun);
        raw_spin_lock(&rt_b->rt_runtime_lock);
    }
    if (idle)
        rt_b->rt_period_active = 0;
    raw_spin_unlock(&rt_b->rt_runtime_lock);

    /* (2) If no rt task is running (idle state), the hrtimer exits */
    return idle ? HRTIMER_NORESTART : HRTIMER_RESTART;
}

|→

static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun)
{
    int i, idle = 1, throttled = 0;
    const struct cpumask *span;

    span = sched_rt_period_mask();
#ifdef CONFIG_RT_GROUP_SCHED
    /*
     * FIXME: isolated CPUs should really leave the root task group,
     * whether they are isolcpus or were isolated via cpusets, lest
     * the timer run on a CPU which does not service all runqueues,
     * potentially leaving other CPUs indefinitely throttled.  If
     * isolation is really required, the user will turn the throttle
     * off to kill the perturbations it causes anyway.  Meanwhile,
     * this maintains functionality for boot and/or troubleshooting.
     */
    if (rt_b == &root_task_group.rt_bandwidth)
        span = cpu_online_mask;
#endif
    /* (1.1) Iterate over every cpu in the root domain */
    for_each_cpu(i, span) {
        int enqueue = 0;
        struct rt_rq *rt_rq = sched_rt_period_rt_rq(rt_b, i);
        struct rq *rq = rq_of_rt_rq(rt_rq);

        raw_spin_lock(&rq->lock);
        per_cpu(rt_period_time, i) = rq_clock_task(rq);

        if (rt_rq->rt_time) {
            u64 runtime;
            /* sched:get runtime*/
            u64 runtime_pre = 0, rt_time_pre = 0;

            raw_spin_lock(&rt_rq->rt_runtime_lock);
            per_cpu(old_rt_time, i) = rt_rq->rt_time;

            /* (1.2) If already rt_throttled, first try load balancing */
            if (rt_rq->rt_throttled) {
                runtime_pre = rt_rq->rt_runtime;
                balance_runtime(rt_rq);
                rt_time_pre = rt_rq->rt_time;
            }
            runtime = rt_rq->rt_runtime;

            /* (1.3) Decrease rt_rq->rt_time; normally the subtraction yields
                rt_rq->rt_time = 0, i.e. the new period starts counting afresh
             */
            rt_rq->rt_time -= min(rt_rq->rt_time, overrun*runtime);
            per_cpu(init_rt_time, i) = rt_rq->rt_time;
            /* sched:print throttle*/
            if (rt_rq->rt_throttled) {
                printk_deferred("[name:rt&]sched: cpu=%d, [%llu -> %llu]",
                        i, rt_time_pre, rt_rq->rt_time);
                printk_deferred(" -= min(%llu, %d*[%llu -> %llu])\n",
                        rt_time_pre, overrun, runtime_pre, runtime);
            }

            /* (1.4) If we were rt-throttled and the throttle condition no
                longer holds (rt_rq->rt_time < runtime), leave rt-throttle
             */
            if (rt_rq->rt_throttled && rt_rq->rt_time < runtime) {
                /* sched:print throttle*/
                printk_deferred("sched: RT throttling inactivated cpu=%d\n", i);
                rt_rq->rt_throttled = 0;
                mt_sched_printf(sched_rt_info, "cpu=%d rt_throttled=%d",
                        rq_cpu(rq), rq->rt.rt_throttled);
                enqueue = 1;
#ifdef CONFIG_MTK_RT_THROTTLE_MON
                if (rt_rq->rt_time != 0) {
                    mt_rt_mon_switch(MON_RESET, i);
                    mt_rt_mon_switch(MON_START, i);
                }
#endif
                /*
                 * When we're idle and a woken (rt) task is
                 * throttled check_preempt_curr() will set
                 * skip_update and the time between the wakeup
                 * and this unthrottle will get accounted as
                 * 'runtime'.
                 */
                if (rt_rq->rt_nr_running && rq->curr == rq->idle)
                    rq_clock_skip_update(rq, false);
            }
            if (rt_rq->rt_time || rt_rq->rt_nr_running)
                idle = 0;
            raw_spin_unlock(&rt_rq->rt_runtime_lock);
        } else if (rt_rq->rt_nr_running) {
            idle = 0;
            if (!rt_rq_throttled(rt_rq))
                enqueue = 1;
        }
        if (rt_rq->rt_throttled)
            throttled = 1;

        /* (1.5) Leaving rt-throttle: re-enqueue the rt_rq to run */
        if (enqueue)
            sched_rt_rq_enqueue(rt_rq);
        raw_spin_unlock(&rq->lock);
    }

    if (!throttled && (!rt_bandwidth_enabled() || rt_b->rt_runtime == RUNTIME_INF))
        return 1;

    return idle;
}
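
When the throttle fires, the "sched: RT throttling activated" message printed by sched_rt_runtime_exceeded() above shows up in the kernel log. For debugging, the quota can be relaxed, or disabled entirely by writing -1 (RUNTIME_INF); note that disabling it lets a runaway rt task starve the whole system:

# dmesg | grep "RT throttling"
sched: RT throttling activated

# echo -1 > /proc/sys/kernel/sched_rt_runtime_us   // -1 = RUNTIME_INF, disables rt-throttle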
