Understanding Processes in One Article

Copyright notice: feel free to repost! https://blog.csdn.net/m0_38032942/article/details/82862352

Modern Computer Architecture

The von Neumann Architecture

To understand what a process is, we have to start with computer architecture. Let's begin with the most widely used architecture in the world: the von Neumann architecture (others exist, such as the Harvard architecture).
(Figure: the von Neumann architecture)
A von Neumann machine has the following components: a memory; a control unit; an arithmetic/logic unit, which performs arithmetic and logical operations; and input and output devices, which communicate with the outside world.

Comparing Storage Devices

(Figure: storage devices compared by capacity, transfer speed, and price)
The figure above compares storage by capacity, transfer speed, and price. It explains why a typical computer has a hard disk of hundreds of gigabytes or even several terabytes, yet only 8 GB or 16 GB of RAM. Memory I/O is orders of magnitude faster than disk I/O, and registers are orders of magnitude faster still than memory. Caches were introduced to bridge this gap: as of this writing (2018/09/27), CPUs ship with three levels of cache totaling only a few megabytes. When the CPU executes instructions, it pulls the data it needs into the cache, which effectively acts as a staging layer between registers and main memory.

Where the Operating System Fits

(Figure: the operating system sitting between user programs and hardware)
The operating system is, at its core, just a piece of software: software whose job is management. It manages other software and it manages the hardware. For safety, the OS never lets users operate the hardware directly; instead it exposes a set of interfaces, the system calls we use every day.

What Is a Process

Early computers had very little memory, but applications (think of an installed program) kept growing. Today a machine with only 500 MB of RAM is rare; almost every computer has far more. Why does a bigger application need more memory? The answer lies in the von Neumann architecture:

We have all written C programs. A C program is a file, and files live on disk. Note that the disk is not part of the von Neumann core: it is a peripheral. The von Neumann model consists only of the arithmetic unit, the control unit, memory, and input/output devices; the arithmetic and control units are integrated into the CPU, and the "memory" is RAM. This means a computer can run perfectly well without a hard disk. See, for example, "Why do Internet-cafe PCs have no hard disk, and how do they run without one?"
(Figure: a program being loaded from disk into memory, then executed by the CPU)
When the computer runs a task, it loads the application into memory; the CPU then fetches data and instructions from memory and executes them. That is why a diskless Internet-cafe PC still works: at boot it loads the operating system (itself just software) into memory, and when you launch a game it asks the server to load the game into memory as well.
From all this we can draw a first conclusion: before the CPU can execute an application, the application must be loaded into memory, and a program that has been loaded into memory is called a process.

How the Operating System Manages Processes

You can listen to music while editing a document, with TIM running in the background, so clearly more than one program is executing. Since running a program means loading it into memory, there must be many processes in memory at once: every program becomes a process the moment it is loaded. How does the OS keep track of them all?

PCB

PCB stands for Process Control Block.
The operating system controls and manages concurrently executing processes through their PCBs. A PCB holds all the information the OS needs to describe a process and to control its execution. In essence, a PCB is a struct that packages up everything describing the process; it is what turns a program, which cannot run on its own in a multiprogramming environment, into an independently schedulable unit that can execute alongside other processes. This is what we call concurrency.
What about parallelism, then? Parallelism means multiple CPU cores literally executing multiple tasks at the same instant. Concurrency, by contrast, is not true simultaneous execution; only parallelism is.
Merely copying a program's code and data into memory achieves nothing by itself: the OS would have no way to manage the process. Hence the PCB. Processes are independent of one another, and each process has exactly one PCB.
On Linux, the struct that describes a process is called task_struct. Let's look at its source:
https://elixir.bootlin.com/linux/latest/source/include/linux/sched.h
At the time of writing (2019/09/27), the definition of task_struct starts at line 593.

struct task_struct {
#ifdef CONFIG_THREAD_INFO_IN_TASK
	/*
	 * For reasons of header soup (see current_thread_info()), this
	 * must be the first element of task_struct.
	 */
	struct thread_info		thread_info;
#endif
	/* -1 unrunnable, 0 runnable, >0 stopped: */
	volatile long			state;

	/*
	 * This begins the randomizable portion of task_struct. Only
	 * scheduling-critical items should be added above here.
	 */
	randomized_struct_fields_start

	void				*stack;
	atomic_t			usage;
	/* Per task flags (PF_*), defined further below: */
	unsigned int			flags;
	unsigned int			ptrace;

#ifdef CONFIG_SMP
	struct llist_node		wake_entry;
	int				on_cpu;
#ifdef CONFIG_THREAD_INFO_IN_TASK
	/* Current CPU: */
	unsigned int			cpu;
#endif
	unsigned int			wakee_flips;
	unsigned long			wakee_flip_decay_ts;
	struct task_struct		*last_wakee;

	/*
	 * recent_used_cpu is initially set as the last CPU used by a task
	 * that wakes affine another task. Waker/wakee relationships can
	 * push tasks around a CPU where each wakeup moves to the next one.
	 * Tracking a recently used CPU allows a quick search for a recently
	 * used CPU that may be idle.
	 */
	int				recent_used_cpu;
	int				wake_cpu;
#endif
	int				on_rq;

	int				prio;
	int				static_prio;
	int				normal_prio;
	unsigned int			rt_priority;

	const struct sched_class	*sched_class;
	struct sched_entity		se;
	struct sched_rt_entity		rt;
#ifdef CONFIG_CGROUP_SCHED
	struct task_group		*sched_task_group;
#endif
	struct sched_dl_entity		dl;

#ifdef CONFIG_PREEMPT_NOTIFIERS
	/* List of struct preempt_notifier: */
	struct hlist_head		preempt_notifiers;
#endif

#ifdef CONFIG_BLK_DEV_IO_TRACE
	unsigned int			btrace_seq;
#endif

	unsigned int			policy;
	int				nr_cpus_allowed;
	cpumask_t			cpus_allowed;

#ifdef CONFIG_PREEMPT_RCU
	int				rcu_read_lock_nesting;
	union rcu_special		rcu_read_unlock_special;
	struct list_head		rcu_node_entry;
	struct rcu_node			*rcu_blocked_node;
#endif /* #ifdef CONFIG_PREEMPT_RCU */

#ifdef CONFIG_TASKS_RCU
	unsigned long			rcu_tasks_nvcsw;
	u8				rcu_tasks_holdout;
	u8				rcu_tasks_idx;
	int				rcu_tasks_idle_cpu;
	struct list_head		rcu_tasks_holdout_list;
#endif /* #ifdef CONFIG_TASKS_RCU */

	struct sched_info		sched_info;

	struct list_head		tasks;
#ifdef CONFIG_SMP
	struct plist_node		pushable_tasks;
	struct rb_node			pushable_dl_tasks;
#endif

	struct mm_struct		*mm;
	struct mm_struct		*active_mm;

	/* Per-thread vma caching: */
	struct vmacache			vmacache;

#ifdef SPLIT_RSS_COUNTING
	struct task_rss_stat		rss_stat;
#endif
	int				exit_state;
	int				exit_code;
	int				exit_signal;
	/* The signal sent when the parent dies: */
	int				pdeath_signal;
	/* JOBCTL_*, siglock protected: */
	unsigned long			jobctl;

	/* Used for emulating ABI behavior of previous Linux versions: */
	unsigned int			personality;

	/* Scheduler bits, serialized by scheduler locks: */
	unsigned			sched_reset_on_fork:1;
	unsigned			sched_contributes_to_load:1;
	unsigned			sched_migrated:1;
	unsigned			sched_remote_wakeup:1;
	/* Force alignment to the next boundary: */
	unsigned			:0;

	/* Unserialized, strictly 'current' */

	/* Bit to tell LSMs we're in execve(): */
	unsigned			in_execve:1;
	unsigned			in_iowait:1;
#ifndef TIF_RESTORE_SIGMASK
	unsigned			restore_sigmask:1;
#endif
#ifdef CONFIG_MEMCG
	unsigned			memcg_may_oom:1;
#ifndef CONFIG_SLOB
	unsigned			memcg_kmem_skip_account:1;
#endif
#endif
#ifdef CONFIG_COMPAT_BRK
	unsigned			brk_randomized:1;
#endif
#ifdef CONFIG_CGROUPS
	/* disallow userland-initiated cgroup migration */
	unsigned			no_cgroup_migration:1;
#endif

	unsigned long			atomic_flags; /* Flags requiring atomic access. */

	struct restart_block		restart_block;

	pid_t				pid;
	pid_t				tgid;

#ifdef CONFIG_STACKPROTECTOR
	/* Canary value for the -fstack-protector GCC feature: */
	unsigned long			stack_canary;
#endif
	/*
	 * Pointers to the (original) parent process, youngest child, younger sibling,
	 * older sibling, respectively.  (p->father can be replaced with
	 * p->real_parent->pid)
	 */

	/* Real parent process: */
	struct task_struct __rcu	*real_parent;

	/* Recipient of SIGCHLD, wait4() reports: */
	struct task_struct __rcu	*parent;

	/*
	 * Children/sibling form the list of natural children:
	 */
	struct list_head		children;
	struct list_head		sibling;
	struct task_struct		*group_leader;

	/*
	 * 'ptraced' is the list of tasks this task is using ptrace() on.
	 *
	 * This includes both natural children and PTRACE_ATTACH targets.
	 * 'ptrace_entry' is this task's link on the p->parent->ptraced list.
	 */
	struct list_head		ptraced;
	struct list_head		ptrace_entry;

	/* PID/PID hash table linkage. */
	struct pid_link			pids[PIDTYPE_MAX];
	struct list_head		thread_group;
	struct list_head		thread_node;

	struct completion		*vfork_done;

	/* CLONE_CHILD_SETTID: */
	int __user			*set_child_tid;

	/* CLONE_CHILD_CLEARTID: */
	int __user			*clear_child_tid;

	u64				utime;
	u64				stime;
#ifdef CONFIG_ARCH_HAS_SCALED_CPUTIME
	u64				utimescaled;
	u64				stimescaled;
#endif
	u64				gtime;
	struct prev_cputime		prev_cputime;
#ifdef CONFIG_VIRT_CPU_ACCOUNTING_GEN
	struct vtime			vtime;
#endif

#ifdef CONFIG_NO_HZ_FULL
	atomic_t			tick_dep_mask;
#endif
	/* Context switch counts: */
	unsigned long			nvcsw;
	unsigned long			nivcsw;

	/* Monotonic time in nsecs: */
	u64				start_time;

	/* Boot based time in nsecs: */
	u64				real_start_time;

	/* MM fault and swap info: this can arguably be seen as either mm-specific or thread-specific: */
	unsigned long			min_flt;
	unsigned long			maj_flt;

#ifdef CONFIG_POSIX_TIMERS
	struct task_cputime		cputime_expires;
	struct list_head		cpu_timers[3];
#endif

	/* Process credentials: */

	/* Tracer's credentials at attach: */
	const struct cred __rcu		*ptracer_cred;

	/* Objective and real subjective task credentials (COW): */
	const struct cred __rcu		*real_cred;

	/* Effective (overridable) subjective task credentials (COW): */
	const struct cred __rcu		*cred;

	/*
	 * executable name, excluding path.
	 *
	 * - normally initialized setup_new_exec()
	 * - access it with [gs]et_task_comm()
	 * - lock it with task_lock()
	 */
	char				comm[TASK_COMM_LEN];

	struct nameidata		*nameidata;

#ifdef CONFIG_SYSVIPC
	struct sysv_sem			sysvsem;
	struct sysv_shm			sysvshm;
#endif
#ifdef CONFIG_DETECT_HUNG_TASK
	unsigned long			last_switch_count;
#endif
	/* Filesystem information: */
	struct fs_struct		*fs;

	/* Open file information: */
	struct files_struct		*files;

	/* Namespaces: */
	struct nsproxy			*nsproxy;

	/* Signal handlers: */
	struct signal_struct		*signal;
	struct sighand_struct		*sighand;
	sigset_t			blocked;
	sigset_t			real_blocked;
	/* Restored if set_restore_sigmask() was used: */
	sigset_t			saved_sigmask;
	struct sigpending		pending;
	unsigned long			sas_ss_sp;
	size_t				sas_ss_size;
	unsigned int			sas_ss_flags;

	struct callback_head		*task_works;

	struct audit_context		*audit_context;
#ifdef CONFIG_AUDITSYSCALL
	kuid_t				loginuid;
	unsigned int			sessionid;
#endif
	struct seccomp			seccomp;

	/* Thread group tracking: */
	u32				parent_exec_id;
	u32				self_exec_id;

	/* Protection against (de-)allocation: mm, files, fs, tty, keyrings, mems_allowed, mempolicy: */
	spinlock_t			alloc_lock;

	/* Protection of the PI data structures: */
	raw_spinlock_t			pi_lock;

	struct wake_q_node		wake_q;

#ifdef CONFIG_RT_MUTEXES
	/* PI waiters blocked on a rt_mutex held by this task: */
	struct rb_root_cached		pi_waiters;
	/* Updated under owner's pi_lock and rq lock */
	struct task_struct		*pi_top_task;
	/* Deadlock detection and priority inheritance handling: */
	struct rt_mutex_waiter		*pi_blocked_on;
#endif

#ifdef CONFIG_DEBUG_MUTEXES
	/* Mutex deadlock detection: */
	struct mutex_waiter		*blocked_on;
#endif

#ifdef CONFIG_TRACE_IRQFLAGS
	unsigned int			irq_events;
	unsigned long			hardirq_enable_ip;
	unsigned long			hardirq_disable_ip;
	unsigned int			hardirq_enable_event;
	unsigned int			hardirq_disable_event;
	int				hardirqs_enabled;
	int				hardirq_context;
	unsigned long			softirq_disable_ip;
	unsigned long			softirq_enable_ip;
	unsigned int			softirq_disable_event;
	unsigned int			softirq_enable_event;
	int				softirqs_enabled;
	int				softirq_context;
#endif

#ifdef CONFIG_LOCKDEP
# define MAX_LOCK_DEPTH			48UL
	u64				curr_chain_key;
	int				lockdep_depth;
	unsigned int			lockdep_recursion;
	struct held_lock		held_locks[MAX_LOCK_DEPTH];
#endif

#ifdef CONFIG_UBSAN
	unsigned int			in_ubsan;
#endif

	/* Journalling filesystem info: */
	void				*journal_info;

	/* Stacked block device info: */
	struct bio_list			*bio_list;

#ifdef CONFIG_BLOCK
	/* Stack plugging: */
	struct blk_plug			*plug;
#endif

	/* VM state: */
	struct reclaim_state		*reclaim_state;

	struct backing_dev_info		*backing_dev_info;

	struct io_context		*io_context;

	/* Ptrace state: */
	unsigned long			ptrace_message;
	siginfo_t			*last_siginfo;

	struct task_io_accounting	ioac;
#ifdef CONFIG_TASK_XACCT
	/* Accumulated RSS usage: */
	u64				acct_rss_mem1;
	/* Accumulated virtual memory usage: */
	u64				acct_vm_mem1;
	/* stime + utime since last update: */
	u64				acct_timexpd;
#endif
#ifdef CONFIG_CPUSETS
	/* Protected by ->alloc_lock: */
	nodemask_t			mems_allowed;
	/* Seqence number to catch updates: */
	seqcount_t			mems_allowed_seq;
	int				cpuset_mem_spread_rotor;
	int				cpuset_slab_spread_rotor;
#endif
#ifdef CONFIG_CGROUPS
	/* Control Group info protected by css_set_lock: */
	struct css_set __rcu		*cgroups;
	/* cg_list protected by css_set_lock and tsk->alloc_lock: */
	struct list_head		cg_list;
#endif
#ifdef CONFIG_INTEL_RDT
	u32				closid;
	u32				rmid;
#endif
#ifdef CONFIG_FUTEX
	struct robust_list_head __user	*robust_list;
#ifdef CONFIG_COMPAT
	struct compat_robust_list_head __user *compat_robust_list;
#endif
	struct list_head		pi_state_list;
	struct futex_pi_state		*pi_state_cache;
#endif
#ifdef CONFIG_PERF_EVENTS
	struct perf_event_context	*perf_event_ctxp[perf_nr_task_contexts];
	struct mutex			perf_event_mutex;
	struct list_head		perf_event_list;
#endif
#ifdef CONFIG_DEBUG_PREEMPT
	unsigned long			preempt_disable_ip;
#endif
#ifdef CONFIG_NUMA
	/* Protected by alloc_lock: */
	struct mempolicy		*mempolicy;
	short				il_prev;
	short				pref_node_fork;
#endif
#ifdef CONFIG_NUMA_BALANCING
	int				numa_scan_seq;
	unsigned int			numa_scan_period;
	unsigned int			numa_scan_period_max;
	int				numa_preferred_nid;
	unsigned long			numa_migrate_retry;
	/* Migration stamp: */
	u64				node_stamp;
	u64				last_task_numa_placement;
	u64				last_sum_exec_runtime;
	struct callback_head		numa_work;

	struct list_head		numa_entry;
	struct numa_group		*numa_group;

	/*
	 * numa_faults is an array split into four regions:
	 * faults_memory, faults_cpu, faults_memory_buffer, faults_cpu_buffer
	 * in this precise order.
	 *
	 * faults_memory: Exponential decaying average of faults on a per-node
	 * basis. Scheduling placement decisions are made based on these
	 * counts. The values remain static for the duration of a PTE scan.
	 * faults_cpu: Track the nodes the process was running on when a NUMA
	 * hinting fault was incurred.
	 * faults_memory_buffer and faults_cpu_buffer: Record faults per node
	 * during the current scan window. When the scan completes, the counts
	 * in faults_memory and faults_cpu decay and these values are copied.
	 */
	unsigned long			*numa_faults;
	unsigned long			total_numa_faults;

	/*
	 * numa_faults_locality tracks if faults recorded during the last
	 * scan window were remote/local or failed to migrate. The task scan
	 * period is adapted based on the locality of the faults with different
	 * weights depending on whether they were shared or private faults
	 */
	unsigned long			numa_faults_locality[3];

	unsigned long			numa_pages_migrated;
#endif /* CONFIG_NUMA_BALANCING */

#ifdef CONFIG_RSEQ
	struct rseq __user *rseq;
	u32 rseq_len;
	u32 rseq_sig;
	/*
	 * RmW on rseq_event_mask must be performed atomically
	 * with respect to preemption.
	 */
	unsigned long rseq_event_mask;
#endif

	struct tlbflush_unmap_batch	tlb_ubc;

	struct rcu_head			rcu;

	/* Cache last used pipe for splice(): */
	struct pipe_inode_info		*splice_pipe;

	struct page_frag		task_frag;

#ifdef CONFIG_TASK_DELAY_ACCT
	struct task_delay_info		*delays;
#endif

#ifdef CONFIG_FAULT_INJECTION
	int				make_it_fail;
	unsigned int			fail_nth;
#endif
	/*
	 * When (nr_dirtied >= nr_dirtied_pause), it's time to call
	 * balance_dirty_pages() for a dirty throttling pause:
	 */
	int				nr_dirtied;
	int				nr_dirtied_pause;
	/* Start of a write-and-pause period: */
	unsigned long			dirty_paused_when;

#ifdef CONFIG_LATENCYTOP
	int				latency_record_count;
	struct latency_record		latency_record[LT_SAVECOUNT];
#endif
	/*
	 * Time slack values; these are used to round up poll() and
	 * select() etc timeout values. These are in nanoseconds.
	 */
	u64				timer_slack_ns;
	u64				default_timer_slack_ns;

#ifdef CONFIG_KASAN
	unsigned int			kasan_depth;
#endif

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
	/* Index of current stored address in ret_stack: */
	int				curr_ret_stack;

	/* Stack of return addresses for return function tracing: */
	struct ftrace_ret_stack		*ret_stack;

	/* Timestamp for last schedule: */
	unsigned long long		ftrace_timestamp;

	/*
	 * Number of functions that haven't been traced
	 * because of depth overrun:
	 */
	atomic_t			trace_overrun;

	/* Pause tracing: */
	atomic_t			tracing_graph_pause;
#endif

#ifdef CONFIG_TRACING
	/* State flags for use by tracers: */
	unsigned long			trace;

	/* Bitmask and counter of trace recursion: */
	unsigned long			trace_recursion;
#endif /* CONFIG_TRACING */

#ifdef CONFIG_KCOV
	/* Coverage collection mode enabled for this task (0 if disabled): */
	unsigned int			kcov_mode;

	/* Size of the kcov_area: */
	unsigned int			kcov_size;

	/* Buffer for coverage collection: */
	void				*kcov_area;

	/* KCOV descriptor wired with this task or NULL: */
	struct kcov			*kcov;
#endif

#ifdef CONFIG_MEMCG
	struct mem_cgroup		*memcg_in_oom;
	gfp_t				memcg_oom_gfp_mask;
	int				memcg_oom_order;

	/* Number of pages to reclaim on returning to userland: */
	unsigned int			memcg_nr_pages_over_high;
#endif

#ifdef CONFIG_UPROBES
	struct uprobe_task		*utask;
#endif
#if defined(CONFIG_BCACHE) || defined(CONFIG_BCACHE_MODULE)
	unsigned int			sequential_io;
	unsigned int			sequential_io_avg;
#endif
#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
	unsigned long			task_state_change;
#endif
	int				pagefault_disabled;
#ifdef CONFIG_MMU
	struct task_struct		*oom_reaper_list;
#endif
#ifdef CONFIG_VMAP_STACK
	struct vm_struct		*stack_vm_area;
#endif
#ifdef CONFIG_THREAD_INFO_IN_TASK
	/* A live task holds one reference: */
	atomic_t			stack_refcount;
#endif
#ifdef CONFIG_LIVEPATCH
	int patch_state;
#endif
#ifdef CONFIG_SECURITY
	/* Used by LSM modules for access restriction: */
	void				*security;
#endif

	/*
	 * New fields for task_struct should be added above here, so that
	 * they are included in the randomized portion of task_struct.
	 */
	randomized_struct_fields_end

	/* CPU-specific state of this task: */
	struct thread_struct		thread;

	/*
	 * WARNING: on x86, 'thread_struct' contains a variable-sized
	 * structure.  It *MUST* be at the end of 'task_struct'.
	 *
	 * Do not put anything below here!
	 */
};

The source is long. To manage this many processes, the OS has to track every detail of each one, much like a school managing its students: student ID, address, phone number, national ID number, and so on. With that much information, it has to be packaged into a struct.
Let's look at some of the most important members:

Member | Purpose
Identifier | A unique identifier for this process, distinguishing it from all others
State | Task state, exit code, exit signal, etc.
Priority | The process's priority relative to other processes
Program counter | The address of the next instruction to be executed
Memory pointers | Pointers to the program code and process data, plus pointers to memory blocks shared with other processes
Context data | The contents of the processor's registers while the process executes
I/O status information | Outstanding I/O requests, I/O devices assigned to the process, and the list of files the process has open
Accounting information | May include total processor time used, total clock ticks used, time limits, accounting numbers, etc.
Other information | —

Every process running on the system exists in the kernel as a node on a linked list of task_struct entries.
Process information can be inspected through the /proc filesystem. To get information about the process with PID 400, look in the directory /proc/400. Most of the same information is also available through user-level tools such as top and ps.

Getting the Process ID and Parent Process ID

#include <unistd.h>
#include <stdio.h>

int main() {
        printf("pid=%d ppid=%d\n", getpid(), getppid());
        return 0;
}

A First Look at fork

From man 2 fork: "fork() creates a new process by duplicating the calling process. The new process is referred to as the child process. The calling process is referred to as the parent process. The child process and the parent process run in separate memory spaces. At the time of fork() both memory spaces have the same content. Memory writes, file mappings (mmap(2)), and unmappings (munmap(2)) performed by one of the processes do not affect the other."
In short, fork creates a child process. On success, fork() returns the child's PID to the parent and 0 to the child; a return value of -1 means the fork failed. So a fork is usually followed by an if/else that splits the two paths:

#include <unistd.h>
#include <stdio.h>

int main()
{
        int id = 0;

        // Print the current process's PID and its parent's PID
        printf("pid:%d ppid=%d\n", getpid(), getppid());

        // Create the child process
        id = fork();
        if(id < 0){
                perror("fork failed");
                return -1;
        }
        else if(id == 0){
                printf("Child, id = %d, ppid = %d\n", getpid(), getppid());
        }
        else{
                printf("Parent, id = %d, ppid = %d\n", getpid(), getppid());
        }
        return 0;
}

(Figure: program output showing the parent's and child's PIDs)
As expected, the child's ppid matches the parent's pid. But who is the parent's parent?
It is Bash. Recall from early Linux lessons that the shell is the "outer shell" program, and the standard shell on Linux is Bash, the command-line interpreter. Every time Bash executes a command for us, it spawns a child process to carry that command out.

Process States

As the Linux kernel source explains: to understand what a "running" process really means, we need to know the different states a process can be in. A process can be in one of several states; inside the Linux kernel, processes are also called tasks. The following states are defined in the kernel source:

/*
* The task state array is a strange "bitmap" of reasons to sleep. Thus "running" is zero, and 
*  you can test for combinations of others with simple bit tests.
*/
static const char * const task_state_array[] = {
"R (running)", /* 0 */
"S (sleeping)", /* 1 */
"D (disk sleep)", /* 2 */
"T (stopped)", /* 4 */
"t (tracing stop)", /* 8 */
"X (dead)", /* 16 */
"Z (zombie)", /* 32 */
};

Process State Descriptions

  • R, running
    This does not necessarily mean the process is executing right now; it means the process is either running or sitting in the run queue. This is easy to understand: on a single-core CPU, only one process can run at any instant. To appear to run many tasks at once, the CPU switches between processes so quickly that we never notice, which creates the illusion of simultaneous execution.
  • S, sleeping
    The process is waiting for an event to complete. This kind of sleep is also called interruptible sleep: the process can be woken up, or even killed, at any time.
  • D, disk sleep
    Also called uninterruptible sleep. A process in this state is usually waiting for I/O to finish, and it cannot terminate until that I/O completes. It ignores all external signals; even kill -9 cannot kill it, and if it stays unresponsive for a long time, the I/O has probably gone wrong. For example, suppose eight processes access the same I/O resource, protected by a lock. While one process holds the lock, the others waiting on it enter disk sleep; a kill sent to them gets no immediate response, and the signal can only be delivered once the lock becomes available.
  • T, stopped
    A process can be stopped (T) by sending it SIGSTOP, and a stopped process can be resumed by sending it SIGCONT.
  • X, dead
    This is only a return state; you will never see it in the task list.

Changing Process State

Use kill -l to list the signals that can be sent:
(Figure: the output of kill -l)
For example, the familiar kill -9 pid sends signal 9 to the process with that PID; signal 9 is SIGKILL.
(Figure: killing a process with kill -9)

Z (zombie): Zombie Processes

The zombie state is a special one. A zombie process is produced when a child exits but its parent has not read the child's exit code (via the wait() system call, covered later).
A zombie remains in the process table in its terminated state, waiting for the parent to read its exit status.
So: whenever a child exits while its parent is still running but has not read the child's status, the child enters the Z state.
Simulating a zombie process:
(Figure: source and output of the zombie simulation)
Next, a shell one-liner to watch the two processes:

while :; do ps aux|grep test.out|grep -v grep;sleep 1; echo "#######################";done

(Figure: the monitor output showing the child in the Z state)
You can see the child dies while the parent is still alive. When the child dies, its PCB is not released, and since nothing reaps the child, the result is a memory leak.
The process's exit status must be kept around, because it has to tell the process that cares (the parent): here is how the task you gave me turned out. And if the parent never reads it, does the child stay in the Z state forever? Yes!
Maintaining the exit status takes data, and that data is part of the process's basic information, so it lives in the task_struct (the PCB). In other words, as long as the Z state persists, the PCB must be kept alive too.
So if a parent creates many children and never reaps them, memory is wasted? Yes!
The data structure itself occupies memory: think of defining a struct variable (an object) in C, which requires carving out space somewhere in memory.

Orphan Processes

Now for orphan processes. As the name suggests, an orphan is a process whose parent is gone: if the parent exits before the child, the child becomes an orphan. The following code simulates one:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>

int main(int argc, char *argv[])
{
	int pid = fork();
	if(pid < 0){
		perror("fork failed...");
		return -1;
	}else if(pid == 0){
		/* Child: outlive the parent, printing ppid before and after it exits */
		printf("child[%d], my parentid[%d]..\n", getpid(), getppid());
		sleep(5);
		printf("child[%d], my parentid[%d]..\n", getpid(), getppid());
	}
	else{
		/* Parent: exit first, orphaning the child */
		printf("parent[%d]...\n", getpid());
		sleep(2);
		exit(0);
	}
	return 0;
}

(Figure: the child's ppid changing after the parent exits)
As the output shows, the parent exited while the child was still alive, and the child's new parent became process 2915; process 2915 is /lib/systemd/systemd --user. Over a remote connection, however, the result is different:
(Figure: over SSH, the orphan's new parent is PID 1)

This is because Ubuntu's built-in terminal is a desktop application; run outside the graphical session, the orphan's new parent is simply PID 1!

Process Priority

Priority Overview

A process's priority determines the order in which it is granted CPU time.
Higher-priority processes get to run first. Configuring process priorities is useful in a multitasking environment like Linux and can improve system performance. You can also pin processes to specific CPUs; relegating unimportant processes to one particular CPU can improve overall performance considerably.
(Figure: a process listing showing the PRI and NI columns)

PRI and NI

PRI is easy enough to understand: it is the process's priority, or informally, its place in line for the CPU; the smaller the value, the higher the priority. So what is NI? That is the nice value: a correction applied to the priority. Since a smaller PRI means earlier execution, adding the nice value gives PRI(new) = PRI(old) + nice. With a negative nice value, the priority number shrinks, so the priority rises and the process runs sooner.
So on Linux, adjusting a process's priority means adjusting its nice value, which ranges from -20 to 19: 40 levels in all.

Commands to Change Process Priority

  • Set the nice value when launching a process: nice
nice -n -5 ./test.out
  • Adjust the nice value of an existing process: renice
renice -5 -p 5200  // set the nice value of PID 5200 to -5
  • Change an existing process's nice value with top
    Run top, press r, enter the PID, then enter the new nice value

Other Concepts

  • Competition: There are many processes but few CPUs, perhaps only one, so processes inherently compete with each other. Priorities exist so that tasks complete efficiently and resources are contended for sensibly.
  • Independence: Each running process needs exclusive use of its resources; concurrently running processes do not interfere with one another.
  • Parallelism: Multiple processes running simultaneously on multiple CPUs is called parallelism.
