Processes, the join method, daemon processes, and mutex locks

  • History of operating system development
    • Early history
      1. In 1946 the first computer was born. Through the 1950s computers were still operated entirely by hand, and there was no concept of an operating system yet. 2. By the late 1950s a sharp contradiction had formed between slow manual operation and high-speed computers: manual operation badly damaged the utilization of system resources (utilization dropped to a few percent or even lower), which could no longer be tolerated. 3. The only solution was to remove manual operation and switch between jobs automatically, which is how batch processing appeared.
    • Batch
      On-line batch processing (jobs are input to and output from the CPU directly, e.g. via tape) and off-line batch processing (I/O is handled by a satellite machine and moved to the host via high-speed tape).
    • Multiprogramming (multi-channel technology)
      • What multiprogramming is
        Several programs are loaded into memory at the same time and take turns running on the CPU, sharing the system's hardware and software resources. When one program suspends because of an I/O request, the CPU immediately switches to running another program.
      • What multiprogramming achieves
        1. Multiple programs are multiplexed onto one set of computer hardware. 2. Multiplexing in space and in time (plus saving state when switching): a. When a program hits an I/O operation, the operating system takes away its right to execute on the CPU (this raises CPU utilization without hurting that program's performance). b. When a program occupies the CPU for too long, the operating system likewise takes the CPU away and switches to another program (this lowers that program's own execution efficiency).
    • After multiprogramming appeared
      Multiprogrammed systems mark the operating system's move into maturity: job scheduling, processor management, memory management, external device management, and file system management all appear. Because multiple programs run on the computer at once, the idea of spatial isolation arises: only by isolating memory spaces can data be kept safe and stable. Besides spatial isolation, multiprogramming also embodies multiplexing in both time and space: switching when a program hits an I/O operation raises CPU utilization, and the computer's overall efficiency rises with it.
    • Time-sharing systems, real-time systems, general-purpose operating systems
    • Personal computer operating systems - network operating systems - distributed operating systems
    • The operating system
      • What it is
        An operating system is a control program that coordinates, manages, and controls the computer's hardware and software resources.
      • Its role
        1. It hides the ugly hardware call interfaces and gives application programmers a simpler, clearer model for using hardware resources (the system call interface). 2. It turns applications' competing, disorderly requests for hardware resources into orderly ones.
  • Processes
    • What is a process
      A process is a program in execution; it is an entity, and each process has its own address space.
    • Processes vs. programs
      A program is an ordered collection of instructions and data; by itself it has no runtime meaning and is a static concept. A process is the execution of a program on a processor and is a dynamic concept. A program can exist long-term as data, while a process has a limited life cycle: the program is permanent, the process is temporary.
    • Process scheduling algorithms
      1. First Come First Served (FCFS) is a simple scheduling algorithm that can be used for job scheduling as well as process scheduling. FCFS favors long jobs (processes) and works against short ones; it suits CPU-bound work and is unfavorable to I/O-heavy jobs (processes).
      2. Shortest Job First / Shortest Process First (SJF/SPF) gives priority to short jobs or short processes and can likewise be used for job or process scheduling. Its drawbacks: it works against long jobs, it cannot guarantee that urgent jobs (processes) are handled in time, and the length of a job can only be estimated.
      3. Round robin (RR): the basic idea is that each process in the ready queue receives service time in proportion to its waiting time. CPU time is divided into fixed-size time slices, for example tens to hundreds of milliseconds. If a scheduled process uses up its time slice without finishing its task, it releases the CPU it occupied, goes to the end of the ready queue, and waits for the next round, while the scheduler picks the process currently at the head of the ready queue. A process joins the ready queue in three situations: its time slice ran out before it finished, so it returns to the end of the queue to wait for its next turn; its time slice was not used up, but it blocked on an I/O request or on synchronization with another process, and it returns to the ready queue once unblocked; or it is a newly created process entering the ready queue.
      4. Multilevel feedback queue. The algorithms above all have limitations: SJF, for example, only favors short processes and ignores long ones, and if process lengths are not given in advance, neither SJF nor length-based preemptive scheduling can be used. The multilevel feedback queue algorithm does not need to know in advance how long each process will take, yet still meets the needs of all kinds of processes, so it is now regarded as one of the better process scheduling algorithms. It works as follows (see the sketch under the next heading):
      (1) Several ready queues are set up, each with a different priority: the first queue has the highest priority, the second the next highest, and each remaining queue one level lower. The time slice assigned to processes also differs per queue: the higher a queue's priority, the shorter its time slice. For example, the second queue's time slice is twice as long as the first queue's, and in general queue i+1's time slice is twice as long as queue i's.
      (2) When a new process enters memory it is placed at the end of the first queue and waits to be scheduled FCFS. When its turn comes, if it can finish within that time slice it leaves the system; if it has not finished when the time slice ends, the scheduler moves it to the end of the second queue, where it again waits to be scheduled FCFS; if it still has not finished after running for a time slice in the second queue, it is moved to the third queue, and so on. Once a long job (process) has dropped queue by queue down to the n-th queue, it runs in the n-th queue round-robin.
      (3) The scheduler runs processes in the second queue only when the first queue is empty, and runs processes in the i-th queue only when queues 1 through i-1 are all empty. If the processor is serving a process from queue i and a new process arrives in a higher-priority queue (any of queues 1 through i-1), the new process preempts the processor: the running process is put back at the end of queue i, and the processor is handed to the newly arrived higher-priority process.
    • Multilevel Feedback Queue illustrated
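      The original illustration is not reproduced here. As a stand-in, this is a minimal, simplified simulation sketch: the three queues, the 1/2/4-tick time slices, and the sample jobs are all illustrative assumptions, and preemption on new arrivals is omitted.

      from collections import deque

      # Each job is (name, remaining_ticks). Three ready queues with time
      # slices of 1, 2 and 4 ticks; higher queues are served first.
      queues = [deque(), deque(), deque()]
      slices = [1, 2, 4]
      for name, need in [('A', 1), ('B', 3), ('C', 7)]:
          queues[0].append((name, need))        # new jobs enter the top queue

      while any(queues):
          # pick the highest-priority non-empty queue
          level = next(i for i, q in enumerate(queues) if q)
          name, remaining = queues[level].popleft()
          run = min(slices[level], remaining)   # run for at most one time slice
          remaining -= run
          print('run %s at level %d for %d tick(s), %d left' % (name, level, run, remaining))
          if remaining > 0:
              # unfinished: demote to the next queue (stay in the last one)
              nxt = min(level + 1, len(queues) - 1)
              queues[nxt].append((name, remaining))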
    • Parallelism and concurrency
      Concurrency: the programs only appear to run at the same time. Parallelism: the programs truly run at the same time. A single-core computer cannot achieve parallelism in the true sense, but it can achieve concurrency.
    • Blocking and non-blocking
      Blocking and non-blocking describe the running state of a program. Blocking: the blocked state (an I/O operation, a file operation, or sleep moves a process from running to blocked). Non-blocking: the ready state or the running state.
    • Process three-state transition diagram (ready ⇄ running; running → blocked on I/O; blocked → ready when the I/O completes)
    • Three-state code sample
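      The original code sample is missing here. The following minimal sketch (the task name and the 3-second sleep are just assumptions, with time.sleep standing in for a blocking I/O call) walks a child process through the three states:

      from multiprocessing import Process
      import time

      def task(name):
          print('%s is running' % name)   # running: the child holds the CPU
          time.sleep(3)                   # blocked: waiting, as if on I/O
          print('%s is over' % name)      # back to ready, then running again

      if __name__ == '__main__':
          p = Process(target=task, args=('egon',))
          p.start()                       # the new child starts out in the ready state
          print('the main process keeps running while the child sleeps')
          p.join()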
    • Synchronous and asynchronous
      Synchronous and asynchronous describe how tasks are submitted. Synchronous: after submitting a task, wait in place for its result before moving on, doing nothing else in the meantime (at the program level this looks like being stuck). Asynchronous: after submitting a task, do not wait in place but go straight on to the next line of code; the result is still wanted, but it is obtained by other means.
  • Two ways to create a process
    • Understanding process creation
      Creating a process means opening up a new block of memory and putting the code to be run into it; in memory, one process corresponds to one independent block of memory. Data between processes is isolated and cannot be exchanged directly, but processes can interact indirectly through certain techniques.
    • Method 1 (use the Process class from multiprocessing directly)
      from multiprocessing import Process
      import time

      def test(name):
          print('%s is running' % name)
          time.sleep(3)
          print('%s is over' % name)

      """
      On Windows, creating a process imports the code as a module and runs it top to bottom;
      on Linux, the child gets a complete copy of the parent's code.
      On Windows, processes must be created inside the if __name__ == '__main__': block,
      otherwise an error is raised.
      """

      if __name__ == '__main__':
          p = Process(target=test, args=('egon',))  # create a process object
          p.start()  # ask the operating system to create the process for you
          print('主')  # '主' means "main (process)"
    • Method 2 (subclass Process, override __init__, and define a run method)
      # the second way to create a process
      from multiprocessing import Process
      import time

      class MyProcess(Process):
          def __init__(self, name):
              super().__init__()
              self.name = name

          def run(self):
              print('%s is running' % self.name)
              time.sleep(3)
              print('%s is over' % self.name)

      if __name__ == '__main__':
          p = MyProcess('egon')
          p.start()
          print('主')
    • The Process class
      Process is used to create processes. You normally pass target (the function to run) and args (that function's arguments).

      Notes:
      1. Parameters must be passed by keyword.
      2. args holds the positional arguments passed to target, as a tuple; the trailing comma is required.

      Parameters:
      1 group is unused; its value should always be None.
      2 target is the callable object, i.e. the task the child process will execute.
      3 args is the tuple of positional arguments for the callable, e.g. args=(1, 2, 'egon',).
      4 kwargs is the dictionary of keyword arguments for the callable, e.g. kwargs={'name': 'egon', 'age': 18}.
      5 name is the name of the child process.

      Methods:
      1 p.start(): starts the process and calls p.run() in the child process.
      2 p.run(): the method run when the process starts; it is what calls the function given by target, and a custom subclass must implement it.
      3 p.terminate(): forcibly terminates process p without any cleanup. If p created child processes of its own, they become zombie processes, so be especially careful with this method; if p holds a lock, the lock will never be released, which can lead to deadlock.
      4 p.is_alive(): returns True if p is still running.
      5 p.join([timeout]): the main process waits for p to terminate (note: the main process is the one waiting, while p keeps running). timeout is an optional timeout. Note that p.join can only join a process started with start, not one run directly with run.

      Attributes:
      1 p.daemon: defaults to False. If set to True, p is a daemon process running in the background that terminates when its parent terminates; a daemon cannot create child processes of its own, and the attribute must be set before p.start().
      2 p.name: the process's name.
      3 p.pid: the process's pid.
      4 p.exitcode: None while the process is running; -N means it was terminated by signal N (for reference only).
      5 p.authkey: the process's authentication key, by default 32 random bytes from os.urandom(). It is used to secure low-level inter-process communication that involves network connections; such connections succeed only if both ends hold the same authentication key (for reference only).

      Get the current process's pid: os.getpid()
      Get the parent process's pid: os.getppid()
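      A small sketch (the process name 'worker-1' and the one-second sleep are just assumptions) that ties a few of these pieces together; the pids printed will differ on every run:

      from multiprocessing import Process
      import os
      import time

      def task():
          # pid of this child process and of its parent (the main process)
          print('child pid:', os.getpid(), 'parent pid:', os.getppid())
          time.sleep(1)

      if __name__ == '__main__':
          p = Process(target=task, name='worker-1')
          p.start()
          print('name:', p.name, 'pid:', p.pid, 'alive:', p.is_alive())
          p.join()                         # wait for the child to finish
          print('exitcode:', p.exitcode)   # 0 means a normal exit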
  • The join method
    join guarantees that the main process does not continue until the joined child processes have finished; until then it stays blocked.

    from multiprocessing import Process
    import time

    def test(name, i):
        print('%s is running' % name)
        time.sleep(i)
        print('%s is over' % name)

    if __name__ == '__main__':
        p_list = []
        p = Process(target=test, args=('egon', 1))
        p1 = Process(target=test, args=('kevin', 2))
        p2 = Process(target=test, args=('jason', 3))
        start_time = time.time()
        p.start()   # this only asks the operating system to create the process; exactly when it is created is up to the OS
        p1.start()
        p2.start()
        p2.join()
        p.join()
        p1.join()   # the main process's code waits here until the child processes have finished
        # p.join()  # the main process's code waits for this child to finish
        print('主')
        print(time.time() - start_time)

    '''
    Output:
    jason is running
    egon is running
    kevin is running
    egon is over
    kevin is over
    jason is over
    主
    3.094853162765503
    '''
  • Data is isolated between processes
    '''
    In this example, when PyCharm runs this .py file, that run is itself a process; the code
    in the .py file then creates another process via Process. In terms of the relationship,
    the process created by PyCharm running the file is the parent, and the one created by
    Process is the child.
    '''

    from multiprocessing import Process
    import time

    money = 100

    def test():
        global money
        money = 99999999

    if __name__ == '__main__':
        p = Process(target=test)
        p.start()
        p.join()
        print(money)
    # Output: 100 -- the child changed only its own copy of money
  • Zombie processes and orphan processes
    • Two ways a parent reclaims a child's resources
      1. The join method. 2. The parent process dying normally. (Every child process passes through the zombie state before its resources are reclaimed; a sketch of the zombie window follows below.)
    • Orphan processes
      The child is still alive while the parent dies unexpectedly. On Linux there is a "children's welfare home" (init): if a parent dies unexpectedly, all the child processes it created are adopted by init.
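    • Zombie-state sketch
      A minimal Linux-only sketch (the ten-second window and the ps command are illustrative assumptions): the child exits immediately while the parent has not yet called join, so during the sleep the dead child sits in the process table as a zombie; join then reaps it.

      from multiprocessing import Process
      import os
      import time

      def quick_task():
          print('child %s exiting immediately' % os.getpid())

      if __name__ == '__main__':
          p = Process(target=quick_task)
          p.start()
          print('run `ps -ef | grep defunct` in another terminal within ten seconds')
          time.sleep(10)   # the finished child is a zombie during this window
          p.join()         # join reaps the child and frees its process-table entry
          print('child reaped, exitcode =', p.exitcode)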
  • Daemon processes
    • Understanding
      A daemon process ends along with the main process. The main process creates the daemon process. First: the daemon terminates as soon as the main process's code has finished executing. Second: a daemon cannot start child processes of its own, otherwise an exception is raised: AssertionError: daemonic processes are not allowed to have children. Note: processes are independent of one another; the moment the main process's code finishes running, the daemon is terminated.
    • Code example
      from multiprocessing import Process
      import time

      def test(name):
          print('steward %s is alive and well' % name)
          time.sleep(3)
          print('steward %s has died a natural death' % name)

      if __name__ == '__main__':
          p = Process(target=test, args=('egon',))
          p.daemon = True  # make this process a daemon; this must come before start(), otherwise an error is raised
          p.start()
          time.sleep(0.1)
          print('emperor jason has passed away')
          # because p is a daemon, it is killed here and the "died a natural death" line never prints
  • Mutex locks
    • Why mutex locks exist
      When multiple processes operate on the same piece of data, the data can end up corrupted, so the operation must be protected with a lock. In essence a lock turns concurrency into serial execution: it lowers efficiency but makes the data safer.
    • Notes
      1. Do not use locks casually; they easily lead to deadlock. 2. Only lock the part that handles the data; do not lock everything. 3. The lock must be created in the main process and handed to the child processes to use.
    • Code example
      from multiprocessing import Process, Lock
      import time, json

      # check the remaining tickets
      def search(i):
          with open('data', 'r', encoding='utf-8') as f:
              data = f.read()
          t_d = json.loads(data)
          print('user %s sees %s ticket(s) left' % (i, t_d.get('ticket')))

      # buy a ticket
      def buy(i):
          with open('data', 'r', encoding='utf-8') as f:
              data = f.read()
          t_d = json.loads(data)
          time.sleep(1)
          if t_d.get('ticket') > 0:
              # decrement the ticket count
              t_d['ticket'] -= 1
              # write the new count back
              with open('data', 'w', encoding='utf-8') as f:
                  json.dump(t_d, f)
              print('user %s got a ticket' % i)
          else:
              print('no tickets left')

      def run(i, mutex):
          search(i)
          mutex.acquire()  # grab the lock; once one process holds it, the others must wait for it to be released
          buy(i)
          mutex.release()  # release the lock

      if __name__ == '__main__':
          mutex = Lock()  # create one lock
          for i in range(3):
              p = Process(target=run, args=(i, mutex))
              p.start()

      '''
      Output:
      user 0 sees 0 ticket(s) left
      user 2 sees 0 ticket(s) left
      user 1 sees 0 ticket(s) left
      no tickets left
      no tickets left
      no tickets left
      '''
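    • Setting up the data file
      The example above assumes a file named data already exists in the working directory, containing a JSON object with a ticket field. A minimal setup (the starting count of 1 is just an assumption) could be:

      import json

      # create the ticket file the example reads; the initial count is an assumption
      with open('data', 'w', encoding='utf-8') as f:
          json.dump({'ticket': 1}, f)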

 

Origin www.cnblogs.com/buzaiyicheng/p/11329417.html