The basic principles of multithreading and its challenges

Starting from a problem

Using threads appropriately can improve a program's processing performance, mainly in two ways: first, multi-core CPUs and Hyper-Threading technology make it possible to execute threads in parallel; second, compared with synchronous execution, asynchronous execution can nicely optimize processing performance and improve the program's concurrent throughput.

At the same time, multithreading also brings plenty of trouble. Here is a simple example.

Safety problems caused by multi-threaded access to shared variables

Take a variable i. If only one thread accesses and modifies the variable, there is no problem with the modification and access. However, if multiple threads modify this same variable, a data-safety problem arises.

(Figure: i++ under multiple threads)

Thread safety is essentially about managing access to state, and that state is usually a shared, mutable variable. Shared means the variable's data can be accessed by multiple threads; mutable means the variable's value can change during its lifetime.

Whether an object needs to be thread-safe depends on whether it is accessed by multiple threads and on how the program uses the object. Therefore, if multiple threads access the same shared object and, without any extra synchronization in the calling code and without any other coordination, the shared object's state is still correct (correctness meaning the results are consistent with what we expect), then the object is thread-safe.

public class IncreaseDemo {
    private static int count = 0;

    public static void increase() {
        try {
            // sleep briefly so the threads are more likely to interleave
            Thread.sleep(1);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        count++; // not atomic: read, add, write back
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 1000; i++) {
            new Thread(() -> IncreaseDemo.increase()).start();
        }

        Thread.sleep(5000); // crude wait for all the threads to finish
        System.out.println("Result: count = " + count);
    }
}
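The printed count is usually less than 1000 because count++ is not atomic. It is effectively three steps that different threads can interleave; the temporary variable below is only for illustration and is not literally what the compiler generates, but count++ inside increase() behaves as if it were written like this:

public static void increase() {
    int tmp = count;   // 1. read the current value of the shared variable
    tmp = tmp + 1;     // 2. increment the local copy
    count = tmp;       // 3. write it back; an update made by another thread in between is lost
}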

So how do we solve the data-safety problem caused by parallel threads in Java?

How to think about ensuring data safety under parallel threads

Think about the essence of the problem: it is concurrent access to shared data. If there were a way to turn the parallel threads into serial execution at the point of access, wouldn't the problem simply disappear?

Based on what we already know, the first thing that comes to mind should be a lock.

After all, locking is not an unfamiliar idea: when interacting with a database, you have probably already come across the concepts of pessimistic locking and optimistic locking. What is a lock? It is a means of synchronizing concurrent processing; saying that an object must be locked before it can be used really means that access to it must be mutually exclusive.

The locking mechanism Java provides is the synchronized keyword.

Basic understanding of synchronized

synchronized has long played a veteran role in multi-threaded programming, and many people call it a heavyweight lock. However, after the various optimizations made to synchronized in Java SE 1.6, in some cases it is not so bad any more: to reduce the performance overhead of acquiring and releasing locks, Java SE 1.6 introduced biased locking and lightweight locking. We will go into these step by step later.

The basic syntax of synchronized

There are three ways to apply synchronized as a lock (a sketch of each form follows the list):

  • Modifying an instance method: the lock applies to the current instance; the current instance's lock must be obtained before entering the synchronized code
  • Modifying a static method: the lock applies to the current class object; the class object's lock must be obtained before entering the synchronized code
  • Modifying a code block: the lock object is specified explicitly; the given object's lock must be obtained before entering the synchronized code
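A minimal sketch of the three forms; the class and method names here are made up purely for illustration:

public class SyncForms {
    private final Object lock = new Object();

    // 1. Instance method: the lock is the current instance (this)
    public synchronized void instanceMethod() {
        // critical section
    }

    // 2. Static method: the lock is the class object (SyncForms.class)
    public static synchronized void staticMethod() {
        // critical section
    }

    // 3. Code block: the lock is whatever object is named in the parentheses
    public void blockMethod() {
        synchronized (lock) {
            // critical section
        }
    }
}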

Applying synchronized

After modifying the earlier example to use the synchronized keyword, the data is accessed safely:

public class IncreaseDemo {
    private static int total = 0;

    public static void increaseBySynchronized() {
        // lock on the class object, so every thread contends for the same lock
        synchronized (IncreaseDemo.class) {
            try {
                Thread.sleep(1);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            total++; // executed by at most one thread at a time
        }
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 1000; i++) {
            new Thread(() -> IncreaseDemo.increaseBySynchronized()).start();
        }

        Thread.sleep(5000); // crude wait for all the threads to finish
        System.out.println("synchronized result: total = " + total);
    }
}
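Since the lock object in this example is the class object IncreaseDemo.class, the same effect could also be expressed with the static-method form from the list above. A sketch of that variant (my own rewrite, not a change made in the original article):

public static synchronized void increaseBySynchronized() {
    // a static synchronized method also locks on IncreaseDemo.class
    try {
        Thread.sleep(1);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    total++;
}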

Thinking about how the lock is stored

Think about it: to achieve mutual exclusion among multiple threads, what does this lock need?

  • Something is needed to represent the lock, for example what state means "acquired" and what state means "free"
  • That state needs to be shared by multiple threads

So how is the synchronized lock stored? Looking at the synchronized syntax, synchronized (lock) controls the granularity of locking through the lifecycle of the lock object. Could the lock state be stored in the lock object itself?

So let's take how the JVM stores objects in memory as a starting point and see what inside an object could be used to implement a lock.

Object layout in memory

In the HotSpot virtual machine, the layout of an object stored in memory can be divided into three regions: the object header (Header), instance data (Instance Data), and alignment padding (Padding).

(Figure: object layout in memory)
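If you want to see these three regions for a concrete object, the OpenJDK JOL tool can print an object's layout. This sketch assumes the org.openjdk.jol:jol-core dependency is on the classpath; the exact output depends on the JVM version and on flags such as compressed oops:

import org.openjdk.jol.info.ClassLayout;

public class LayoutDemo {
    private int value;      // instance data
    private boolean flag;   // instance data

    public static void main(String[] args) {
        // prints the object header, the instance fields, and any alignment padding
        System.out.println(ClassLayout.parseInstance(new LayoutDemo()).toPrintable());
    }
}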

Exploring the JVM source code implementation

When we use new in Java code to create an object instance, the JVM (the HotSpot virtual machine) actually creates an instanceOopDesc object at the VM level.

HotSpot uses the OOP-Klass model to describe Java object instances: OOP (Ordinary Object Pointer) is an ordinary object pointer, and Klass describes the concrete type of the object instance. HotSpot uses instanceOopDesc and arrayOopDesc to describe objects, where arrayOopDesc is used to describe array-type objects.

instanceOopDesc is defined in the HotSpot source file instanceOop.hpp; correspondingly, arrayOopDesc is defined in arrayOop.hpp:

class instanceOopDesc : public oopDesc {
 public:
  // aligned header size.
  static int header_size() { return sizeof(instanceOopDesc)/HeapWordSize; }

  // If compressed, the offset of the fields of the instance may not be aligned.
  static int base_offset_in_bytes() {
    // offset computation code breaks if UseCompressedClassPointers
    // only is true
    return (UseCompressedOops && UseCompressedClassPointers) ?
             klass_gap_offset_in_bytes() :
             sizeof(instanceOopDesc);
  }

  static bool contains_field_offset(int offset, int nonstatic_field_size) {
    int base_in_bytes = base_offset_in_bytes();
    return (offset >= base_in_bytes &&
            (offset-base_in_bytes) < nonstatic_field_size * heapOopSize);
  }
};

#endif // SHARE_VM_OOPS_INSTANCEOOP_HPP

From the code you can see that instanceOopDesc inherits from oopDesc; the definition of oopDesc is contained in the HotSpot source file oop.hpp.

For an ordinary object instance, the oopDesc definition contains two members: _mark and _metadata.

_mark represents the object's mark, of type markOop; this is the Mark Word explained next, and it records the object's lock-related information.

_metadata represents the class metadata; it stores a pointer to the start address of the object's class metadata (Klass). _klass is an ordinary Klass pointer, and _compressed_klass is a compressed class pointer.

Mark Word

In HotSpot, markOop is defined in the markOop.hpp file, as follows:

class markOopDesc: public oopDesc {
 private:
  // Conversion
  uintptr_t value() const { return (uintptr_t) this; }

 public:
  // Constants
  enum { age_bits                 = 4,
         lock_bits                = 2,
         biased_lock_bits         = 1,
         max_hash_bits            = BitsPerWord - age_bits - lock_bits - biased_lock_bits,
         hash_bits                = max_hash_bits > 31 ? 31 : max_hash_bits,
         cms_bits                 = LP64_ONLY(1) NOT_LP64(0),
         epoch_bits               = 2
  };
......

The Mark Word records the object's lock-related information. When an object is used as a synchronization lock by the synchronized keyword, the series of operations around that lock are all related to the Mark Word. The Mark Word is 32 bits long on a 32-bit virtual machine and 64 bits long on a 64-bit virtual machine.

The data stored in the Mark Word changes as the lock flag changes; the Mark Word can store data in the following five states.
(Figure: Mark Word storage in each of the five states)
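The low-order lock bits of the Mark Word can be observed with the same JOL tool: the first word printed for an object is its Mark Word, and it changes while the object is held as a lock. Again this assumes the jol-core dependency, and the exact bit pattern depends on the JVM version and on whether biased locking is enabled:

import org.openjdk.jol.info.ClassLayout;

public class MarkWordDemo {
    public static void main(String[] args) {
        Object lock = new Object();
        // header in the unlocked state
        System.out.println(ClassLayout.parseInstance(lock).toPrintable());
        synchronized (lock) {
            // header while the lock is held: the lock bits are different
            System.out.println(ClassLayout.parseInstance(lock).toPrintable());
        }
    }
}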

Why any object can be used as a lock

  • First, every Java object derives from the Object class, and every Java Object has a corresponding native C++ object, oop/oopDesc, inside the JVM
  • When a thread acquires a lock, it actually acquires a monitor object; the monitor can be thought of as a synchronization object, and every Java object is born carrying a monitor. In the HotSpot source file markOop.hpp you can see the following code
ObjectMonitor* monitor() const {
  assert(has_monitor(), "check");
  // Use xor instead of &~ to provide one extra tag-bit check.
  return (ObjectMonitor*) (value() ^ monitor_value);
}

When multiple threads access a synchronized block, they are in effect competing to modify the lock object's mark so that it points to the monitor; the ObjectMonitor in the code above is closely tied to the logic by which threads compete for the lock.

synchronized lock escalation

When analyzing the Mark Word we mentioned biased locks, lightweight locks, and heavyweight locks. Before analyzing the differences between these lock types, think about a problem: using a lock achieves data safety but degrades performance, while not using a lock keeps the performance of thread-parallel code but cannot guarantee thread safety. Between the two there seems to be no way to satisfy both the performance requirement and the safety requirement.

After investigating the HotSpot virtual machine, its authors found that in most cases locked code is not only free of multi-thread contention, it is also always acquired by the same thread multiple times. Based on this probability, synchronized was optimized after JDK 1.6: the concepts of biased locking and lightweight locking were introduced to reduce the performance overhead of acquiring and releasing locks. So in synchronized you will find four lock states:

  • No lock
  • Biased lock
  • Lightweight lock
  • Heavyweight lock

The lock state escalates from low to high as contention becomes fiercer.

The basic principle of biased locking

As mentioned above, in most cases a lock is not only free of multi-thread contention but is always acquired by the same thread; biased locking was introduced to let that thread acquire the lock at a lower cost. So how should we understand biased locking?

When a thread accesses a code block protected by a synchronization lock, the current thread's ID is stored in the object header. When the thread subsequently enters and exits that synchronized block, it does not need to acquire and release the lock again; it simply compares the biased thread ID stored in the object header with its own. If they are equal, the biased lock is biased toward the current thread, and there is no need to try to acquire the lock.

The logic of acquiring and revoking a biased lock

  • First, read the lock object's Mark Word and determine whether it is in the biasable state (biased_lock = 1 and ThreadId empty)
  • If it is in the biasable state, use a CAS operation to write the current thread's ID into the Mark Word
  • If the CAS succeeds, the Mark Word now indicates that the biased lock on this object has been obtained, and the synchronized block is executed
  • If the CAS fails, another thread has already obtained the biased lock; this means the lock is contended, so the existing bias must be revoked and the lock held by that thread upgraded to a lightweight lock (this operation has to wait for a global safepoint, that is, a point at which no thread is executing bytecode)
  • If the object is already in the biased state, check whether the ThreadID stored in the Mark Word equals the current thread's ThreadID
  • If they are equal, the lock does not need to be acquired again and the synchronized block can be executed directly
  • If they are not equal, the lock is biased toward another thread, so the bias must be revoked and the lock upgraded to a lightweight lock (the whole flow is sketched in code below)
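The flow above can be summarized with a simplified sketch. The Mark Word state here is modeled with a single AtomicLong holding the biased thread ID (0 meaning "biasable, not yet biased to any thread"); this is only a conceptual model of the steps in the list, not how HotSpot is actually implemented:

import java.util.concurrent.atomic.AtomicLong;

// Conceptual model of biased-lock acquisition.
class BiasedLockSketch {
    private final AtomicLong biasedThreadId = new AtomicLong(0);

    boolean tryEnter(long currentThreadId) {
        long owner = biasedThreadId.get();
        if (owner == currentThreadId) {
            return true;    // already biased to us: no atomic operation needed
        }
        if (owner == 0 && biasedThreadId.compareAndSet(0, currentThreadId)) {
            return true;    // CAS succeeded: run the synchronized block
        }
        // CAS failed or the lock is biased to another thread: contention exists,
        // so the bias must be revoked (at a safepoint) and the lock upgraded.
        revokeBiasAndUpgradeToLightweight();
        return false;
    }

    private void revokeBiasAndUpgradeToLightweight() {
        // upgrade logic omitted in this sketch
    }
}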

Biased locking revocation

Revoking a biased lock does not mean restoring the object to a biasable, lock-free state (a biased lock has no explicit release step). Rather, when acquiring the biased lock we find that another thread is competing, that is, the CAS failed, the object holding the biased lock is upgraded directly to the lightweight-lock state.

When the bias held by the original thread is revoked, there are two cases for that original thread:

  • If the thread that originally obtained the biased lock has already left the critical section, that is, the synchronized block has finished executing, then the object header is reset to the lock-free state and the competing thread can re-bias the lock to itself via CAS
  • If the thread that originally obtained the biased lock has not yet finished the synchronized block and is still inside the critical section, then that thread's biased lock is upgraded to a lightweight lock and it continues executing the synchronized block

In real application development there will, in most cases, be two or more threads competing. If biased locking is enabled in that situation, acquiring the lock carries the extra cost of revoking the bias. Biased locking can therefore be turned on or off with the JVM parameter UseBiasedLocking.
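For example (these are standard HotSpot option names; by default HotSpot also waits a few seconds after startup before enabling biasing, which BiasedLockingStartupDelay controls):

# enable biased locking and remove the startup delay
java -XX:+UseBiasedLocking -XX:BiasedLockingStartupDelay=0 IncreaseDemo

# disable biased locking entirely
java -XX:-UseBiasedLocking IncreaseDemo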

The basic principle of the lightweight lock

The locking and unlocking logic of the lightweight lock

Once the lock has been upgraded to a lightweight lock, the object's Mark Word changes accordingly. The process of upgrading to a lightweight lock is as follows (a simplified sketch appears after the figure below):

  • The thread creates a lock record (LockRecord) in its own stack frame
  • The Mark Word in the lock object's header is copied into the lock record just created
  • The Owner pointer of the lock record is made to point at the lock object
  • The Mark Word in the lock object's header is replaced, via CAS, with a pointer to the lock record

(Figure: lightweight locking)
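A simplified sketch of these four steps, with made-up LockRecord and header fields standing in for the real stack frame and Mark Word; again, this is only a conceptual model, not HotSpot code:

import java.util.concurrent.atomic.AtomicReference;

// Conceptual model of lightweight-lock acquisition.
class LightweightLockSketch {
    static class LockRecord {              // 1. created in the locking thread's stack frame
        long displacedMarkWord;            // copy of the lock object's original Mark Word
        Object owner;                      // pointer to the locked object
    }

    private long markWord = 0x1L;          // pretend "unlocked" header value
    private final AtomicReference<LockRecord> header = new AtomicReference<>(null);

    boolean tryLock(LockRecord record) {
        record.displacedMarkWord = markWord;        // 2. copy the object's Mark Word into the record
        record.owner = this;                        // 3. point the record's owner at the lock object
        // 4. CAS the object header so it points to the lock record in the thread's stack
        return header.compareAndSet(null, record);
        // if the CAS fails, another thread holds the lock and escalation follows
    }
}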

Spinlocks

The lightweight locking process uses a so-called spin lock. Spinning means that when another thread is competing for the lock, this thread waits in a loop on the spot instead of blocking, so that as soon as the thread holding the lock releases it, this thread can acquire the lock immediately.

Note that looping on the spot consumes CPU; it is essentially executing a for loop that does nothing.

So lightweight locks are suited to scenarios where the synchronized block executes quickly, so that the waiting thread only has to spin for a very short time before it can acquire the lock.

The use of spin locks also has a probabilistic background: most of the time, the code inside a synchronized block executes very quickly, so a seemingly pointless loop can in fact improve the performance of the lock.

However, spinning must be bounded by some condition; otherwise, if a thread holds the synchronized block for a very long time, the spinning thread would keep looping and waste CPU resources. The default number of spins is 10, and it can be changed with the -XX:PreBlockSpin parameter.
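A toy illustration of bounded spinning at the Java level, using AtomicBoolean; this is not the JVM's internal mechanism, and the spin limit of 10 here simply mirrors the default mentioned above:

import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

// Toy bounded spin lock: spin a limited number of times, then give up the CPU instead.
class BoundedSpinLock {
    private static final int SPIN_LIMIT = 10;
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        int spins = 0;
        while (!locked.compareAndSet(false, true)) {
            if (++spins < SPIN_LIMIT) {
                Thread.onSpinWait();          // busy-wait (Java 9+): burns CPU but avoids a context switch
            } else {
                LockSupport.parkNanos(1_000); // stop spinning and yield the CPU for a while
                spins = 0;
            }
        }
    }

    void unlock() {
        locked.set(false);
    }
}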

After JDK 1.6, adaptive spinning was introduced. Adaptive means that the number of spins is not fixed but is decided based on the previous spin time on the same lock and the state of the lock's owner.

If a spin-wait on a given lock has just succeeded, and the thread holding the lock is running, the virtual machine assumes that this spin is likely to succeed again and allows the spin-wait to last a relatively longer time. If spinning on a given lock has rarely succeeded, then when acquiring that lock in the future the VM may skip the spin entirely and block the thread directly, to avoid wasting processor resources.

Lightweight lock unlocking

The release logic of a lightweight lock is simply the reverse of the acquisition logic: a CAS operation replaces the lock object's Mark Word with the LockRecord saved in the thread's stack frame. If the CAS succeeds, there was no contention. If it fails, the current lock is contended, and the lightweight lock is inflated into a heavyweight lock.
(Figure: lightweight lock unlocking process)

The basic principle of the heavyweight lock

Once a lightweight lock has been inflated into a heavyweight lock, a thread that fails to acquire it can only be blocked: it is suspended and waits to be woken up.
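A small demonstration of that blocking behavior at the Java level: while one thread holds the lock for a long time, a second thread contending for the same lock is reported as BLOCKED (the sleep durations here are arbitrary):

public class HeavyweightBlockDemo {
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread holder = new Thread(() -> {
            synchronized (lock) {
                try { Thread.sleep(3000); } catch (InterruptedException ignored) { }
            }
        });
        Thread waiter = new Thread(() -> {
            synchronized (lock) { }
        });
        holder.start();
        Thread.sleep(100);   // let the holder acquire the lock first
        waiter.start();
        Thread.sleep(100);   // give the waiter time to hit the contended lock
        System.out.println(waiter.getState()); // typically prints BLOCKED
    }
}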


Source: blog.csdn.net/csharpqiuqiu/article/details/105234255