Thread scheduling mechanism and thread synchronization issues

Scheduling mechanism

  • Thread scheduling is implemented at two levels: the operating system and the JVM

    • Operating systems use many scheduling mechanisms; two common ones are time-slicing (typical of Unix) and preemptive scheduling (typical of Windows)
  • Every Java virtual machine has a thread scheduler that decides which thread runs at any given moment. There are two main scheduling models: the time-sharing model and the preemptive model

    • In preemptive scheduling based on time-slice rotation, the scheduler gives higher-priority threads more chances to run; threads of equal priority are chosen at random, and a thread is rescheduled once its time slice is used up
  • Thread scheduling is not cross-platform: it depends not only on the Java virtual machine but also on the operating system. On some operating systems a running thread will yield the CPU after running for a while even if it is not blocked, giving other threads a chance to run

Thread priority

Java uses a preemptive scheduling mechanism and provides a thread scheduler that monitors every started thread that has entered the ready state. The scheduler decides which thread to run according to each thread's priority.

A thread's priority is represented by a number from 1 to 10; the default is 5. Note that the actual handling of priorities depends on the operating system: Java priorities must be mapped to OS priorities, and since an OS may support fewer priority levels than Java, several Java priorities can map to the same OS priority. In practice, choose priorities that differ widely, and prefer the predefined constants:
t2.setPriority(Thread.MAX_PRIORITY);  // 10
t1.setPriority(Thread.NORM_PRIORITY); // 5, the default
t3.setPriority(Thread.MIN_PRIORITY);  // 1

getPriority():int — returns the priority of the current thread object

setPriority(int):void — sets the priority of the current thread object. Note: the priority should be set before start() is called
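A minimal sketch of the two methods above (the class name PriorityDemo is just an illustrative choice). A new thread inherits the priority of the thread that created it, so a thread created from main starts at the normal priority 5:

```java
public class PriorityDemo {
    public static void main(String[] args) {
        Thread t = new Thread(() -> {});
        System.out.println(t.getPriority());  // inherited from main: 5
        t.setPriority(Thread.MAX_PRIORITY);   // set before start(), as recommended
        System.out.println(t.getPriority());  // 10
        t.start();
    }
}
```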

Summary

1. When no priority is specified, a thread has normal priority.

2. Thread priorities range from 1 to 10: 10 is the highest, 1 is the lowest, and the normal priority is 5.

3. The thread with the highest priority is favored at run time, but there is no guarantee that a thread enters the running state immediately after it is started.

4. Compared with the threads waiting in the runnable pool, the running thread has a higher priority.

5. The scheduler decides which thread to execute.

6. Use setPriority(int) to set a thread's priority.

7. A thread's priority should be specified before its start() method is called.

Thread synchronization problem

Multi-threaded programs are error-prone because the system schedules threads nondeterministically, and these errors must be eliminated.
- The execution order of threads cannot be reproduced, but the execution result must be reproducible
- The solution is synchronization

Synchronization method 1

When multiple threads operate on shared data, it must be guaranteed that one and only one thread operates on the data at any moment; the other threads must wait until that thread has finished processing. This mechanism is called a mutual exclusion lock (mutex): a lock that enforces exclusive access. When the thread currently accessing the shared data holds the mutex, all other threads can only wait until the current thread finishes and releases the lock.

In Java code, a mutual exclusion lock is used via the synchronized keyword.

synchronized turns parallel execution into serial execution, which inevitably affects execution speed. In addition, when synchronized blocks a thread, the operating system must switch the CPU context, and that switch is itself time-consuming. So using the synchronized keyword reduces program efficiency.

Programming realization

Requirement: define 4 threads that perform addition and subtraction on the same num — 2 threads each perform 50 additions, and 2 threads each perform 50 subtractions

1. Define a class that encapsulates the data to be operated on and the corresponding operations

// Encapsulates the data and the operations on it
public class NumOps {

	private int num; // the data being operated on

	// business methods
	public void add() {
		num = num + 1;
		System.out.println(Thread.currentThread().getName() + ":add...." + this.num);
	}

	public void sub() {
		num = num - 1;
		System.out.println(Thread.currentThread().getName() + ":sub...." + this.num);
	}

	// needed by the main method in step 3 to read the final result
	public int getNum() {
		return num;
	}
}

2. Define the corresponding thread implementations.
There are 4 ways to define a thread: extending Thread, implementing Runnable, Callable with Future, and thread pools

  • Thread is rarely extended, because Java only allows single inheritance
  • Runnable is generally used when no return value is needed, and it is the most common form. If you need to record the thread's execution result, you must program that yourself
  • Callable is generally used when a return value is needed; it must be wrapped in a FutureTask to run, and its call method is allowed to throw exceptions
  • Of the 5 predefined thread pools, ThreadPoolExecutor is generally recommended; the predefined factories are not recommended and are intended for large numbers of short-lived tasks arriving in a short time

In real development, threads are rarely used directly; they are generally used through frameworks
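To illustrate the Callable point above, here is a minimal sketch (the class name CallableDemo and the summing task are illustrative): call() returns a value and may throw checked exceptions, neither of which Runnable's run() can do, and a FutureTask adapts the Callable so a Thread can run it:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class CallableDemo {
    public static void main(String[] args) throws Exception {
        // call() may return a value and may throw exceptions
        Callable<Integer> task = () -> {
            int sum = 0;
            for (int i = 1; i <= 100; i++) sum += i;
            return sum;
        };
        // FutureTask implements Runnable, so a Thread can execute the Callable
        FutureTask<Integer> future = new FutureTask<>(task);
        new Thread(future).start();
        // get() blocks until the result is available
        System.out.println("sum = " + future.get());  // sum = 5050
    }
}
```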

public class Test1 {

	private static NumOps ops = new NumOps();

	static class AddThread extends Thread {
		@Override
		public void run() {
			for (int i = 0; i < 50; i++) {
				ops.add();
			}
		}
	}

	static class SubRunnable implements Runnable {
		@Override
		public void run() {
			for (int i = 0; i < 50; i++) {
				ops.sub();
			}
		}
	}
}

3. Start 2 addition threads and 2 subtraction threads in the main method of the same Test1 class

public class Test1 {

	private static NumOps ops = new NumOps();

	public static void main(String[] args) throws Exception {
		Thread[] arr = new Thread[4];
		for (int i = 0; i < 4; i++) {
			if (i % 2 == 0) {
				AddThread at = new AddThread();
				arr[i] = at;
				at.start();
			} else {
				Thread st = new Thread(new SubRunnable());
				arr[i] = st;
				st.start();
			}
		}
		// wait for all 4 threads to finish before reading the result
		for (Thread tmp : arr) {
			tmp.join();
		}
		System.out.println("Main:" + ops.getNum());
	}
}

Conclusion: the printed output interleaves incorrectly, and because num = num + 1 is not an atomic operation, updates can be lost, so the final result is not guaranteed either

Solution: add the synchronized keyword to the add and sub methods of the NumOps class

public class NumOps {

	private int num; // the data being operated on

	// business methods, now mutually exclusive
	public synchronized void add() {
		num = num + 1;
		System.out.println(Thread.currentThread().getName() + ":add...." + this.num);
	}

	public synchronized void sub() {
		num = num - 1;
		System.out.println(Thread.currentThread().getName() + ":sub...." + this.num);
	}

	public int getNum() {
		return num;
	}
}

Understanding the meaning of synchronized

1. synchronized attaches a mutual exclusion lock mechanism to the current NumOps object: each object has exactly one lock

2. While one thread is executing a synchronized method, other threads cannot enter any synchronized method of the same object, but they can enter non-synchronized methods

The 4 common kinds of Java thread locks: atomic classes such as AtomicInteger, the semaphore Semaphore, synchronized, and the reentrant lock ReentrantLock
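As an illustration of the first kind, the NumOps example could be reworked with AtomicInteger instead of synchronized — a sketch under the same 4-thread requirement (the class name AtomicDemo is illustrative): incrementAndGet() and decrementAndGet() are atomic, so no lock is needed on the counter itself:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {
    // atomic updates replace the synchronized methods on num
    private static final AtomicInteger num = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Thread[] arr = new Thread[4];
        for (int i = 0; i < 4; i++) {
            final boolean add = i % 2 == 0; // 2 adders, 2 subtractors
            arr[i] = new Thread(() -> {
                for (int j = 0; j < 50; j++) {
                    if (add) num.incrementAndGet();
                    else num.decrementAndGet();
                }
            });
            arr[i].start();
        }
        for (Thread t : arr) t.join();
        System.out.println("num = " + num.get());  // num = 0
    }
}
```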

Before JDK 6, synchronized was always a heavyweight lock. Starting with JDK 6 it was optimized into four lock states: lock-free (no synchronized), biased lock, lightweight lock, and heavyweight lock. The lock state changes with the degree of contention: with almost no contention a biased lock is used; under light contention it is upgraded from a biased lock to a lightweight lock; under heavy contention it is upgraded to a heavyweight lock. Lock upgrading is one-way: a lock can only move from lower to higher states and is never downgraded.

When a synchronized block is compiled to bytecode, two instructions appear, monitorenter and monitorexit, which can be understood as acquiring the lock before the block executes and releasing it when the block exits.
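A minimal sketch of the block form of synchronized (the class name SyncBlock is illustrative). Compiling this class and running `javap -c SyncBlock` shows the monitorenter/monitorexit pair bracketing the block body:

```java
public class SyncBlock {
    private final Object lock = new Object();
    private int count;

    public void increment() {
        // in the bytecode this block is bracketed by monitorenter/monitorexit
        synchronized (lock) {
            count++;
        }
    }

    public int getCount() {
        synchronized (lock) {
            return count;
        }
    }

    public static void main(String[] args) {
        SyncBlock s = new SyncBlock();
        s.increment();
        System.out.println(s.getCount());  // 1
    }
}
```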

Origin blog.csdn.net/qq_43480434/article/details/114104183