Reading notes on "The Art of Java Concurrency Programming"

1. Multithreading Semantics

  • Even a single-core processor supports multithreaded code execution: the CPU allocates a time slice to each thread, switches to the next task when the current slice expires, and so executes all threads by constantly switching between them.

  • Below a certain scale of work, concurrent execution is actually slower than serial execution, because of the overhead of thread creation and context switching.

  • How do you reduce thread-creation and context-switch overhead? (Use "vmstat 1" and watch the cs column to observe context switches.)

    • Lock-free concurrent programming. Lock contention between threads causes context switches; when processing data with multiple threads, locks can sometimes be avoided, e.g., by using CAS algorithms, or by hashing IDs modulo the number of segments so that different threads process different segments of the data.
    • Use as few threads as possible. Avoid creating unnecessary threads, for example by using a thread pool.
    • Coroutines. Schedule multiple tasks within a single thread and switch between them without ever leaving that thread.
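The lock-free approach above can be sketched with a CAS loop (a minimal sketch; the class name and structure are mine, not from the book):

```java
import java.util.concurrent.atomic.AtomicLong;

// Lock-free counter: increment via a CAS retry loop instead of a lock,
// so contention never causes a thread to block (and thus never forces a
// lock-induced context switch).
class CasCounter {
    private final AtomicLong value = new AtomicLong(0);

    public long increment() {
        long current;
        do {
            current = value.get();
            // compareAndSet succeeds only if no other thread changed the value
            // between our read and our write; otherwise we retry.
        } while (!value.compareAndSet(current, current + 1));
        return current + 1;
    }

    public long get() { return value.get(); }
}
```

Under heavy contention the retry loop burns CPU, so this trades blocking for spinning; it wins when critical sections are very short.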
  • How to avoid deadlocks?

    • Avoid having one thread acquire multiple locks at the same time.
    • Prefer timed locks: use lock.tryLock(timeout) instead of the intrinsic locking mechanism.
    • For database locks, locking and unlocking must happen on the same database connection; otherwise the unlock will fail.
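A sketch of the timed-lock idea with ReentrantLock.tryLock(timeout) (class and method names are illustrative, not from the book):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Timed lock acquisition: give up after a bounded wait instead of blocking
// forever, which breaks the "hold and wait" condition that deadlocks need.
class TimedLockExample {
    private final ReentrantLock lock = new ReentrantLock();

    public boolean runWithLock(Runnable action) throws InterruptedException {
        if (lock.tryLock(1, TimeUnit.SECONDS)) {   // wait at most 1 second
            try {
                action.run();
                return true;
            } finally {
                lock.unlock();                     // always release in finally
            }
        }
        return false;   // could not acquire: caller can back off and retry
    }
}
```

On failure the caller releases any locks it already holds before retrying; that back-off is what actually prevents the deadlock.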
  • In Java SE 1.6 and later, a lock has four states, from lowest to highest: no-lock, biased lock, lightweight lock, and heavyweight lock (i.e., mutex). These states escalate gradually as contention increases. A lock can be upgraded but never downgraded; the purpose of this policy is to make acquiring and releasing locks more efficient. Interestingly, apart from biased locking, the JVM's lock implementations all use a CAS loop: a thread acquires the lock with a CAS loop when entering a synchronized block and releases it with a CAS loop when exiting.

    If only one thread ever enters a synchronized block, it first obtains a "biased lock"; when contention between threads arises, the biased lock is revoked and replaced with a "lightweight lock"; when a thread keeps spinning without acquiring the lightweight lock (for example, because the lock holder runs too long), the lightweight lock inflates into a "heavyweight lock."

2. Java Memory Model

  • The JMM (Java Memory Model) is a shared-memory model: it controls the interaction between main memory (Main Memory) and each thread's local memory (Local Memory) in order to provide memory-visibility guarantees to Java programmers.

  • On the way from Java source code to the instruction sequence that is actually executed, instructions go through "compiler-optimization reordering", "instruction-level-parallelism (ILP) reordering", and "memory-system reordering"; the first is a compile-time reordering, while the latter two are processor reorderings. These reorderings can cause memory-visibility problems in multithreaded programs. To guarantee visibility, the Java compiler inserts memory-barrier instructions at appropriate places in the generated instruction sequence to forbid specific types of processor reordering. The JMM divides memory barriers into four categories: LoadLoad, StoreStore, LoadStore, and StoreLoad. As for reordering itself:

    • For reorderings that would change the program's result, the JMM requires the compiler and processor to forbid them.
    • For reorderings that do not change the program's result, the JMM imposes no requirement on the compiler or processor.
    • Therefore, from the programmer's point of view the program's semantics (its results) never change as it executes; in essence, the happens-before relationship and as-if-serial semantics are the same idea.
  • happens-before is the core concept of the JMM. One happens-before rule corresponds to several compiler and processor reordering rules; the JMM hides the complexity of those reordering rules behind happens-before, so that programmers need only the happens-before rules to reason about memory visibility.

    • Program-order rule: every operation in a thread happens-before any subsequent operation in that thread. This rule guarantees as-if-serial semantics.
    • Monitor-lock rule: unlocking a lock happens-before every subsequent lock of that same lock.
    • volatile-variable rule: a write to a volatile field happens-before every subsequent read of that volatile field.
    • Transitivity: if A happens-before B, and B happens-before C, then A happens-before C.
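The rules above compose; a sketch of publication through a volatile flag (class and field names are mine, not from the book):

```java
// Publication via a volatile flag: the program-order rule, the volatile rule,
// and transitivity together guarantee that a reader which observes
// ready == true also sees data == 42.
class Publication {
    int data;                 // plain field
    volatile boolean ready;   // volatile flag

    void writer() {
        data = 42;            // (1) happens-before (2) by program order
        ready = true;         // (2) volatile write
    }

    Integer reader() {
        if (ready) {          // (3) volatile read; (2) happens-before (3)
            return data;      // (4) by transitivity, (1) happens-before (4)
        }
        return null;          // flag not set yet; data may not be visible
    }
}
```

Without volatile on ready, the two writes in writer() could be reordered and reader() could observe ready == true but data == 0.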
  • The sequential-consistency memory model is a theoretical reference model; the JMM and processor memory models are usually designed with the sequential-consistency model as their reference.

  • Ordered from strongest to weakest: the sequential-consistency model, the JMM, then processor memory models. The more a design pursues performance, the weaker the memory model it adopts, so as to loosen the constraints the memory model imposes.

3. Memory Semantics of synchronized, volatile, and final

  • A good way to understand volatile is to treat a single read or write of a volatile variable as if the same lock were used to synchronize that single read/write. In short, a volatile variable has the following properties:

    • Visibility. A read of a volatile variable always sees the last write (by any thread) to that volatile variable.
    • Atomicity. A single read or write of any volatile variable is atomic, but compound operations such as volatile++ are not atomic.
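A sketch contrasting the two properties (class and field names are mine): count++ on a volatile field can lose updates under contention, while AtomicInteger performs the same update atomically:

```java
import java.util.concurrent.atomic.AtomicInteger;

// volatile guarantees visibility of single reads/writes, but unsafeCount++
// is a read-modify-write compound operation and is NOT atomic even though
// the field is volatile. AtomicInteger does the same update atomically via CAS.
class Counters {
    volatile int unsafeCount;                 // lost updates possible under contention
    final AtomicInteger safeCount = new AtomicInteger();

    void increment() {
        unsafeCount++;                        // three steps: read, add, write
        safeCount.incrementAndGet();          // one atomic CAS-based step
    }
}
```

With two threads racing, safeCount always ends exactly right, while unsafeCount may end lower because two threads can read the same old value.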
  • How does the volatile keyword guarantee visibility (i.e., what are volatile's memory semantics)?

    • When a volatile variable is written, the cache line holding that variable in the current processor is written back to system memory.
    • When a volatile variable is read, the processor invalidates the cache line holding that variable and re-reads the value from system memory.
  • How does the volatile keyword guarantee ordering?

    • A StoreStore barrier is inserted before each volatile write (forbids reordering the normal writes above with the volatile write below).
    • A StoreLoad barrier is inserted after each volatile write (forbids reordering the volatile write above with any volatile reads/writes that may follow).
    • A LoadLoad barrier is inserted after each volatile read (forbids reordering the volatile read above with all normal reads below).
    • A LoadStore barrier is inserted after each volatile read (forbids reordering the volatile read above with all normal writes below).
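The barrier placement can be annotated directly on code (a conceptual sketch of the JMM's conservative strategy; real JIT compilers optimize many of these barriers away, and the class is mine):

```java
// Conceptual placement of the four barriers around volatile accesses.
class Barriers {
    int a;              // plain field
    volatile int v;     // volatile field

    void writer() {
        a = 1;          // normal write
        // StoreStore barrier: the plain write above cannot sink below the volatile write
        v = 2;          // volatile write
        // StoreLoad barrier: the volatile write cannot swap with later volatile reads/writes
    }

    int reader() {
        int r1 = v;     // volatile read
        // LoadLoad barrier: later plain reads cannot float above the volatile read
        // LoadStore barrier: later plain writes cannot float above the volatile read
        int r2 = a;     // guaranteed to see a == 1 whenever r1 == 2
        return r1 + r2;
    }
}
```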
  • Why does the JDK documentation say that CAS has the memory semantics of both a volatile read and a volatile write?

    • The compiler does not reorder a volatile read with any memory operation that follows it.
    • The compiler does not reorder a volatile write with any memory operation that precedes it.
    • A CAS (compare-and-swap) operation on a variable carries the semantics of both a volatile read and a volatile write, so the compiler cannot arbitrarily reorder the memory operations before and after the CAS with the CAS itself.
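Because CAS carries both volatile-read and volatile-write semantics, it can serve as a lock by itself. A minimal spin lock as a sketch (not production code; the class is mine, and Thread.onSpinWait requires JDK 9+):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal spin lock built on CAS. compareAndSet has volatile read+write
// semantics, so everything written while holding the lock is visible to the
// next thread that acquires it.
class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();   // hint to the CPU that we are busy-waiting (JDK 9+)
        }
    }

    void unlock() {
        locked.set(false);         // volatile write: publishes our updates and releases
    }
}
```

This is essentially what the JVM's lightweight lock does before inflating to a heavyweight lock; a real lock would add fairness and blocking instead of pure spinning.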
  • How does the synchronized keyword work? The JVM implements both method synchronization and block synchronization by entering and exiting a Monitor object, but the two differ in implementation detail. Block synchronization uses the monitorenter and monitorexit instructions, while method synchronization is implemented another way, whose details the JVM specification does not spell out to the same degree.
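The two forms side by side (a sketch; the class name is mine):

```java
// A synchronized block compiles to explicit monitorenter/monitorexit
// bytecodes; a synchronized method is instead marked with the
// ACC_SYNCHRONIZED flag and the JVM enters/exits the monitor implicitly.
// Both forms below use the same monitor (this object), so they are
// mutually exclusive with each other.
class SyncForms {
    private int count;

    void blockIncrement() {
        synchronized (this) {   // monitorenter
            count++;
        }                       // monitorexit (a second one is emitted for the exception path)
    }

    synchronized void methodIncrement() {  // ACC_SYNCHRONIZED, monitor is `this`
        count++;
    }

    synchronized int get() { return count; }
}
```

Running `javap -c` on the compiled class shows the monitorenter/monitorexit pair only for the block form.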

  • What are the memory semantics of lock release and lock acquisition?

    • When a thread releases a lock, the JMM flushes the shared variables in that thread's local memory to main memory. (Same as the memory semantics of a volatile write.)
    • When a thread acquires a lock, the JMM invalidates that thread's local memory, so shared variables are re-read from main memory. (Same as the memory semantics of a volatile read.)
  • For final fields, the compiler and processor follow two reordering rules (provided the object reference does not "escape" from the constructor):

    • A write to a final field inside the constructor, and the subsequent assignment of the constructed object's reference to a reference variable, must not be reordered with each other.
    • The first read of a reference to an object containing a final field, and the subsequent first read of that final field, must not be reordered with each other.
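A sketch of a class whose final field enjoys these guarantees (the class and field names are mine, not from the book):

```java
// The write to the final field x cannot be reordered past the publication of
// the object reference, so any thread that sees the reference sees x == 3.
// The plain field y has no such guarantee: without other synchronization,
// another thread could observe y == 0.
class FinalFieldExample {
    final int x;   // protected by the final-field reordering rules
    int y;         // plain field

    FinalFieldExample() {
        x = 3;     // StoreStore barrier before the constructor returns
        y = 4;
    }
}
```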
  • What are the memory semantics of final fields?

    • The write-reordering rule for final fields requires the compiler to insert a StoreStore barrier after the write to the final field and before the constructor returns.
    • The read-reordering rule for final fields requires the compiler to insert a LoadLoad barrier before the read of the final field.
  • Inside a constructor, the this reference of the object under construction must not be made visible to other threads; that is, the object reference must not "escape" from the constructor. If it does, the JMM cannot guarantee that the writes to the object's fields in the constructor will not be reordered with the publication of the reference.
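A sketch of the anti-pattern, with names of my own choosing:

```java
import java.util.ArrayList;
import java.util.List;

// Anti-pattern: the constructor publishes `this` before construction
// finishes. Another thread reading INSTANCES could observe the object with
// `value` still at its default, because the final-field guarantee only holds
// when the reference does not escape the constructor.
class ThisEscape {
    static final List<ThisEscape> INSTANCES = new ArrayList<>();
    final int value;

    ThisEscape() {
        INSTANCES.add(this);  // BUG: `this` escapes before `value` is written
        this.value = 42;
    }
}
```

The fix is to publish only after construction, e.g. from a static factory method that first finishes `new ThisEscape()` and then registers the instance.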

4. Miscellaneous

  • The jps and jstack commands?

    • jps: lists JVM processes.
    • jstack: prints the thread stack traces of a JVM process.
    • Dump the thread information with jstack, e.g., to see what the threads of the process with pid 18023 are doing:
    jstack 18023 > /home/wwwroot/dump18023
    • Count how many threads are in each state:
    grep java.lang.Thread.State dump18023 | awk '{print $2$3$4$5}' | sort | uniq -c
  • You can use ODPS, a Hadoop cluster, or your own server hardware to work around hardware resource constraints.

  • An object reference is four bytes (on a 32-bit JVM, or on a 64-bit JVM with compressed oops enabled).


Origin www.cnblogs.com/jmcui/p/11570366.html