Distributed lock implementation

First, understand local locks (Lock):
http://572327713.iteye.com/blog/2407789

1. Distributed locks based on a database:
poor performance, prone to single point of failure;
the lock has no expiration time, so deadlock is easy;
non-blocking is impossible;
not reentrant.

2. Distributed locks based on a cache (Redis):
good performance;
the lock expiration time is hard to set, so deadlock is easy;
non-blocking (worked around by having threads wait and retry);
not reentrant.

3. Distributed locks based on ZooKeeper:
relatively simple to implement;
high reliability and good performance;
reentrancy is easy to support.

database:
Poor performance, prone to single point of failure:
MySQL hits a concurrency bottleneck at roughly 300-700 requests. To lock, a thread inserts a row on access; after processing, it deletes the row (see the sketch after this list). Everything hangs off one database, so a single point of failure is likely, and the connection overhead makes performance poor.
The lock has no expiration time, so deadlock is easy:
if a thread suddenly crashes, its row is never deleted and other threads wait forever -- deadlock.
Non-blocking:
when locking fails, the result is returned immediately and the thread goes off to do other things.
Non-reentrant:
a thread that already holds the lock cannot acquire it again, and other threads cannot lock either.
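A minimal sketch of the insert-a-row approach described above, assuming a hypothetical distributed_lock table with a unique index on lock_name (the table, JDBC URL and credentials are illustrative, not from the original):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class DbLock {

	// illustrative connection settings
	private static final String URL = "jdbc:mysql://localhost:3306/test";

	// Lock by inserting a row: the unique index on lock_name makes the
	// insert fail when another thread already holds the lock.
	public boolean tryLock(String lockName) {
		try (Connection conn = DriverManager.getConnection(URL, "user", "password");
				PreparedStatement ps = conn.prepareStatement(
						"INSERT INTO distributed_lock (lock_name) VALUES (?)")) {
			ps.setString(1, lockName);
			return ps.executeUpdate() == 1;
		} catch (SQLException e) {
			// duplicate key: someone else holds the lock
			return false;
		}
	}

	// Unlock by deleting the row. If the holder crashes before this runs,
	// the row stays forever -- the deadlock problem described above.
	public void unlock(String lockName) {
		try (Connection conn = DriverManager.getConnection(URL, "user", "password");
				PreparedStatement ps = conn.prepareStatement(
						"DELETE FROM distributed_lock WHERE lock_name = ?")) {
			ps.setString(1, lockName);
			ps.executeUpdate();
		} catch (SQLException e) {
			e.printStackTrace();
		}
	}
}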

redis:
Good performance:
easily handles 100,000 concurrent requests.
The lock expiration time is hard to set, so deadlock is easy.
Non-blocking (worked around by having threads wait and retry).
Non-reentrant:
a thread that holds the lock cannot acquire it again.

The correct way to lock and unlock (as summarized by antirez, the author of Redis):
1. Locking:
1.1 Generate a unique random value (a UUID or timestamp). 1.2 Write the random value to Redis to complete the lock, and 1.3 set the expiration time in the same command.
SETNX must be used: SET if Not eXists (SET only if the key does not exist):
SET resource_name my_random_value NX PX 30000
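The same command through the Jedis client (a minimal sketch; the key name and 30000ms timeout mirror the command above, and the full RedisLock class appears later in this article):

import java.util.UUID;
import redis.clients.jedis.Jedis;

public class LockSnippet {
	public static void main(String[] args) {
		Jedis jedis = new Jedis("localhost");
		String myRandomValue = UUID.randomUUID().toString();
		// NX: set only if the key is absent; PX: expiration in milliseconds.
		// Returns "OK" on success, null when the key exists (lock already held).
		String ret = jedis.set("resource_name", myRandomValue, "NX", "PX", 30000);
		System.out.println("locked: " + "OK".equals(ret));
	}
}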
2. Unlock:
2.1 Take the random value saved by this thread, 2.2 compare it with the value in Redis, 2.3 if they are consistent, delete the key.
Execute the following Lua script:
if redis.call("get",KEYS[1]) == ARGV[1] then
    return redis.call("del",KEYS[1])
else
    return 0
end
Note!:
1. A distributed lock must have an expiration time (and unlock must be placed in a finally block).
2. A random string my_random_value must be set.
3. Setting the key and setting its expiration time must be one atomic operation.
4. Releasing the lock must keep its three steps (GET, compare the value, DEL) atomic, which can be done with a Lua script.
Reason: thread A, while unlocking, GETs the value and finds it equal to its own, but at that exact moment the key expires and thread B writes its own value; thread A then proceeds to DEL the key, destroying thread B's lock and making B fail. That is why the three steps of releasing the lock must be atomic.
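To make the race concrete, this is the unsafe, non-atomic unlock (a sketch of what NOT to do, assuming myRandomValue was saved at locking time; the Lua script above is the correct, atomic form):

Jedis jedis = new Jedis("localhost");
// UNSAFE: three separate round trips to Redis
String value = jedis.get("resource_name");      // step 1: GET
if (myRandomValue.equals(value)) {              // step 2: compare
	// <-- if the key expires right here and thread B sets its own value,
	//     the DEL below deletes B's lock, not ours
	jedis.del("resource_name");                 // step 3: DEL
}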


Figure 2: thread A deliberately pauses for 3 seconds, so its lock's timeout expires.
Figure 3: after thread A's timeout, Redis clears the key and thread B acquires the lock.
Figure 4: when thread A unlocks, it finds the value in Redis is thread B's random value, so it quietly walks away -- this is why the random value is set.

The second step in the figure: compare whether the Redis value is consistent with the local value, and only delete the key (unlock) if they match.
The local value here is kept in a ThreadLocal variable.

Redis lock specific implementation:
pom.xml
    <!-- https://mvnrepository.com/artifact/redis.clients/jedis -->
     <dependency>
         <groupId>redis.clients</groupId>
         <artifactId>jedis</artifactId>
         <version>2.9.0</version>
     </dependency>


RedisLock.java
package com.hailong.yu.dongnaoxuexi.lock;

import java.util.ArrayList;
import java.util.List;
import java.util.UUID;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;

import redis.clients.jedis.Jedis;

public class RedisLock implements Lock{

	private static final String LOCK_KEY = "lock";
	
	// Per-thread context: each thread keeps its own random lock value here and carries it from lock to unlock
	private ThreadLocal<String> local = new ThreadLocal<String>();


    /**
     * Blocking lock (synchronized is blocking)
     */
    public void lock() {
    	// Redis has no built-in blocking primitive, so poll: retry every 200ms.
    	// A loop is used instead of recursion to avoid unbounded stack growth.
    	while (!tryLock()) {
    		try {
    			Thread.sleep(200);
    		} catch (InterruptedException e) {
    			e.printStackTrace();
    		}
    	}
    }

    /**
	 * non-blocking lock
     */
    public boolean tryLock() {
    	
    	String uuid = UUID.randomUUID().toString();

    	Jedis redis = new Jedis("localhost");
    	// key
    	// value
    	// nxxx: NX|XX -- NX: only set the key if it does not already exist;
    	//       XX: only set the key if it already exists
    	// expx: EX|PX -- expire time units: EX = seconds, PX = milliseconds
    	// 100ms validity period
    	String ret = redis.set(LOCK_KEY, uuid, "NX", "PX", 100);
    	if (ret !=null && ret.equals("OK")) {
        	local.set(uuid);
			return true;
		}
		return false;
	}

    /**
     * Unlock
     */
    public void unlock() {
    	// the unlock.lua script below; it could also be read from a file:
    	// FileUtils.readFileByLines("E:/workspaces/.../unlock.lua");
    	String script = "if redis.call('get',KEYS[1]) == ARGV[1] then "
    			+ "return redis.call('del',KEYS[1]) else return 0 end";
    	// execute the script: GET, compare and DEL happen atomically on the server
    	Jedis redis = new Jedis("localhost");
    	List<String> keys = new ArrayList<String>();
    	keys.add(LOCK_KEY);
    	List<String> args = new ArrayList<String>();
    	args.add(local.get());
    	redis.eval(script, keys, args);
	}

	public void lockInterruptibly() throws InterruptedException {
		// TODO Auto-generated method stub
		
	}

	public boolean tryLock(long time, TimeUnit unit) throws InterruptedException {
		// TODO Auto-generated method stub
		return false;
	}

	public Condition newCondition() {
		// TODO Auto-generated method stub
		return null;
	}
}


unlock.lua
if redis.call("get".KEYS[1]) == ARGV[1] then
  return redis.call("del",KEYS[1]);
else
	return 0;
end
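When unlocking happens often, the script can also be cached on the Redis server once and invoked by its SHA1 digest, which avoids resending the script body on every call (a minimal sketch using the Jedis scriptLoad/evalsha methods; the key name and myRandomValue are assumed from the locking step):

import java.util.Collections;
import redis.clients.jedis.Jedis;

Jedis jedis = new Jedis("localhost");
// load the script once; Redis returns its SHA1 digest
String sha1 = jedis.scriptLoad(
		"if redis.call('get',KEYS[1]) == ARGV[1] then "
		+ "return redis.call('del',KEYS[1]) else return 0 end");
// later calls invoke the cached script by digest
Object result = jedis.evalsha(sha1,
		Collections.singletonList("lock"),
		Collections.singletonList(myRandomValue));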


Problems with the Redis-based distributed lock scheme:
1. The lock expiration time is hard to choose; a common rule of thumb is two to three times the single-request processing time.
The timeout can be neither too long nor too short; the 100ms used above follows that rule of thumb.
2. The lock can fail.
If thread A's lock expires while A still believes it holds the lock, two threads end up competing for the shared resource.
3. This lock cannot be used on a Redis cluster; in a cluster environment RedLock is needed.
It works for a single Redis node, but RedLock for clusters is considerably more complicated, so the ZooKeeper distributed lock is recommended instead.

zookeeper:
1. Memory-based
2. Simple to implement

Linux (zkCli commands):
persistent node:            create /temp(path) temp(value)
ephemeral node:             create -e /temp(path) temp(value)
sequential node:            create -s /temp(path) temp(value)
ephemeral sequential node:  create -s -e /temp(path) temp(value)
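The same four node types through the ZkClient library that the lock implementation below uses (a minimal sketch; paths and values are illustrative):

import org.I0Itec.zkclient.ZkClient;

public class NodeTypes {
	public static void main(String[] args) {
		ZkClient client = new ZkClient("localhost:2181");
		client.createPersistent("/temp-p", "temp");            // persistent node
		client.createEphemeral("/temp-e", "temp");             // ephemeral node: removed when the session ends
		client.createPersistentSequential("/temp-s", "temp");  // sequential: name gets an incrementing suffix, e.g. /temp-s0000000001
		client.createEphemeralSequential("/temp-se", "temp");  // ephemeral + sequential (used by the lock below)
		client.close();
	}
}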

zk Application scenarios:
Data publishing and subscription (configuration center)
naming service
Master election
Cluster management
Distributed queues
Distributed locks

The herd effect is serious in environments with large distributed clusters:
1. Huge server performance loss: when the lock node is deleted, the server must serialize and send an event to every watcher, and clients receive many notification events that have nothing to do with them.
2. Network impact: every watcher gets a network event, so bandwidth consumption is very large.
3. When the herd effect occurs frequently, a whole node can hang and may even go down.
This only gets worse as the cluster grows past a couple of nodes.

With 10 nodes in the cluster, only 1 will win the lock, yet all 10 get notified.
There is also a risk of deadlock: if the node holding the lock (say, the production order-number service) goes down without releasing it, the others wait forever.

Ephemeral sequential nodes:
each client creates an ephemeral sequential node under the lock path and watches only its immediate predecessor, so releasing the lock wakes exactly one waiter; and since a crashed client's node disappears with its session, the deadlock above cannot happen.


package com.baozun.util.locks;

import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;

import org.I0Itec.zkclient.IZkDataListener;
import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.serialize.SerializableSerializer;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class ZookeeperLock implements Lock{

	private static String LOCK_PATH = "/LOCK";
	
//	@Value("${dubbo.registry.address}")
//	private static String ZK_IP_PORT;
	private static String ZK_IP_PORT = "123.206.*.*:2181";
	
	private static final Log logger = LogFactory.getLog(ZookeeperLock.class);
	
//	private ZkClient client = new ZkClient(ZK_IP_PORT, 1000, 10, new SerializableSerializer());
	private ZkClient client = new ZkClient(ZK_IP_PORT);

	private CountDownLatch cdl;

	// previous node
	private String beforePath;

	// currently requested node
	private String currentPath;

	public ZookeeperLock() {
		if(!client.exists(LOCK_PATH)){
			client.createPersistent(LOCK_PATH);
		}
	}

    /**
     * non-blocking lock
     */
	@Override
	public boolean tryLock() {
		
		// If currentPath is empty, it is the first attempt to lock, and the first lock is assigned currentPath
		if(currentPath == null || currentPath.length() <=0) {
			// create a temporary sequence node
			currentPath = client.createEphemeralSequential(LOCK_PATH + '/', "lock");
		}
		// Get all temporary nodes and sort, the temporary node name is an auto-incrementing string: 0000000400
		List<String> childrens = client.getChildren(LOCK_PATH);
		Collections.sort(childrens);
		// If the current node ranks first among all nodes, the lock is acquired successfully
		if(currentPath.equals(LOCK_PATH + '/' + childrens.get(0))) {
			return true;
		// If the current node is not ranked first among all nodes, get the previous node name and assign it to beforePath
		} else {
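			// strip the "/LOCK/" prefix (6 characters) so the bare node name can be searched in childrens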
			int wz = Collections.binarySearch(childrens, currentPath.substring(6));
			beforePath = LOCK_PATH + '/' + childrens.get(wz-1);
		}
		return false;
	}

	@Override
	public void unlock() {
		// delete this thread's node; its successor's watcher fires and wakes it
		client.delete(currentPath);
	}

    /**
     * blocking lock
     */
	@Override
	public void lock() {

		if(tryLock()) {
			logger.info(Thread.currentThread().getName() + " acquired the distributed lock!");
		} else {
			// not first in line: block until the predecessor node is deleted, then retry
			waitForLock();
			lock();
		}
	}

	/**
	 * Block until the predecessor node is deleted
	 */
	public void waitForLock() {
		
		// listener
		IZkDataListener iZkDataListener = new IZkDataListener() {
			
			@Override
			public void handleDataDeleted(String arg0) throws Exception {

				// node data is deleted
				if(cdl != null) {
					cdl.countDown();
				}
			}
			
			@Override
			public void handleDataChange(String arg0, Object arg1) throws Exception {
				// no-op: only deletion of the predecessor node matters here
			}
		};
		// Add a watcher for data deletion on the predecessor node
		client.subscribeDataChanges(beforePath, iZkDataListener);
		if(client.exists(beforePath)) {
			cdl = new CountDownLatch(1);
			try {
				cdl.await();
			} catch (InterruptedException e) {
				e.printStackTrace();
			}
		}
		client.unsubscribeDataChanges(beforePath, iZkDataListener);
	}

	// ========================== Not implemented yet ==========================
	@Override
	public boolean tryLock(long time, TimeUnit unit) throws InterruptedException {
		// TODO Auto-generated method stub
		return false;
	}

    /**
     * Interrupt mechanism (interruptible lock)
     */
	@Override
	public void lockInterruptibly() throws InterruptedException {
		// TODO Auto-generated method stub
		
	}
	
    /**
    * Set conditions to lock or unlock
    * Multiple condition variables
    */
	@Override
	public Condition newCondition() {
		// TODO Auto-generated method stub
		return null;
	}
}


import java.util.concurrent.CountDownLatch;

import com.baozun.util.locks.ZookeeperLock;


/**
 * @author hailong.yu1
 *@date Jan 25, 2018 10:36:57 AM
 */
public class SecurityProcessorTest implements Runnable {
	
	public static final int NUM = 10;
	public static CountDownLatch countDownLatch = new CountDownLatch(NUM);
	public static OrderCodeGenerator orderCodeGenerator = new OrderCodeGenerator();
	public ZookeeperLock lock = new ZookeeperLock();
	
	@Override
	public void run() {
		try {
			countDownLatch.await();
		} catch (InterruptedException e) {
			e.printStackTrace();
		}
		createOrder();
	}

	/**
	 * @param args
	 */
	public static void main(String[] args) {
		
		for(int i=0; i<NUM; i++) {
			
			new Thread(new SecurityProcessorTest()).start();
			countDownLatch.countDown();
		}
	}
	
	public void createOrder() {
		String orderNum = null;
		
		try {
			lock.lock();
			orderNum = orderCodeGenerator.getOrderCode();
		} catch (Exception e) {
			// TODO: handle exception
		} finally {
			lock.unlock();
		}

		System.out.println(Thread.currentThread().getName()+"===="+orderNum);
	}
}
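The test references an OrderCodeGenerator that is not shown in the original; here is a minimal sketch that stands in for the shared order-number service (the number format is illustrative):

OrderCodeGenerator.java
public class OrderCodeGenerator {

	// deliberately not thread-safe: without the distributed lock,
	// concurrent callers could be handed duplicate order numbers
	private int i = 0;

	public String getOrderCode() {
		return "ORDER-" + (++i);
	}
}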


ps aux|grep java
zkServer.sh start

Access the zk service on the Linux server (e.g. zkCli.sh -server localhost:2181; screenshot omitted):

View the znodes in the zk service (e.g. ls /LOCK; screenshot omitted):


Contention for a resource can also be resolved by queueing the contenders, which is exactly what the sequential-node scheme does: each thread lines up and acquires the lock in turn.
