Using Canal to Synchronize MySQL Data to Redis

How the process works
Canal works by pretending to be a MySQL slave: it sends a dump request to the master and parses the binlog it receives. It does not persist the parsed result; events are kept in memory and removed once a client has read (acked) them. The "client" here is a program we write ourselves that connects to Canal and continuously pulls data from it.
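In practice the client drives a getWithoutAck / ack loop against the Canal server. A minimal sketch of that loop is shown below (the host, port and destination name "example" are assumptions; the full client, including the Redis writes, follows in step 2):

// Minimal consume/ack loop (sketch only; see ClientSample in step 2 for the full version)
CanalConnector connector = CanalConnectors.newSingleConnector(
        new InetSocketAddress("127.0.0.1", 11111), "example", "", "");
connector.connect();
connector.subscribe(".*\\..*");      // schema.table filter
connector.rollback();                // start from the last acked position
while (true) {
    Message message = connector.getWithoutAck(1000);   // fetch up to 1000 entries
    if (message.getId() != -1 && !message.getEntries().isEmpty()) {
        // ... process the entries, e.g. write them to Redis ...
    }
    connector.ack(message.getId());  // tell Canal this batch has been consumed
}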

Step 1. Configure Canal (refer to https://github.com/alibaba/canal)


[mysql configuration]
1. Configuration parameters

[mysqld]
log-bin=mysql-bin   # enable the binlog
binlog-format=ROW   # use ROW format
server_id=1         # required for MySQL replication; must not clash with Canal's slaveId

2. Create the canal user in MySQL and grant it the required replication privileges

    CREATE USER canal IDENTIFIED BY 'canal';    
    GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';  
    -- GRANT ALL PRIVILEGES ON *.* TO 'canal'@'%' ;  
    FLUSH PRIVILEGES;  

[canal download and configuration]
1. Download canal https://github.com/alibaba/canal/releases 
2. Unzip it

mkdir /tmp/canal
tar zxvf canal.deployer-$version.tar.gz  -C /tmp/canal

3. Modify the configuration file

vi conf/example/instance.properties
#################################################
## mysql serverId
canal.instance.mysql.slaveId = 1234

# position info, need to be changed to your own database information
canal.instance.master.address = 127.0.0.1:3306
canal.instance.master.journal.name =
canal.instance.master.position =
canal.instance.master.timestamp =

#canal.instance.standby.address =
#canal.instance.standby.journal.name =
#canal.instance.standby.position =
#canal.instance.standby.timestamp =

# username/password, need to be changed to your own database information
canal.instance.dbUsername = canal  
canal.instance.dbPassword = canal
canal.instance.defaultDatabaseName =
canal.instance.connectionCharset = UTF-8

# table regex
canal.instance.filter.regex = .*\\..*

#################################################
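The canal.instance.filter.regex value is a comma-separated list of schema.table regular expressions; the client's subscribe() call in step 2 accepts the same syntax (and, per the Canal docs, a non-empty client filter generally overrides this server-side setting). A few illustrative patterns, where the schema and table names are placeholders:

// Illustrative subscribe filters; "test", "user" and "order" are placeholder names.
connector.subscribe(".*\\..*");               // all tables in all schemas
connector.subscribe("test\\..*");             // every table in the test schema
connector.subscribe("test.user,test.order");  // only these two tables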


[canal startup and shutdown]
1. Start

sh bin/startup.sh

2. View the log

vi logs/canal/canal.log
2013-02-05 22:45:27.967 [main] INFO  com.alibaba.otter.canal.deployer.CanalLauncher - ## start the canal server.
2013-02-05 22:45:28.113 [main] INFO  com.alibaba.otter.canal.deployer.CanalController - ## start the canal server[10.1.29.120:11111]
2013-02-05 22:45:28.210 [main] INFO  com.alibaba.otter.canal.deployer.CanalLauncher - ## the canal server is running now ......

The log of the specific instance:

vi logs/example/example.log
2013-02-05 22:50:45.636 [main] INFO  c.a.o.c.i.spring.support.PropertyPlaceholderConfigurer - Loading properties file from class path resource [canal.properties]
2013-02-05 22:50:45.641 [main] INFO  c.a.o.c.i.spring.support.PropertyPlaceholderConfigurer - Loading properties file from class path resource [example/instance.properties]
2013-02-05 22:50:45.803 [main] INFO  c.a.otter.canal.instance.spring.CanalInstanceWithSpring - start CannalInstance for 1-example
2013-02-05 22:50:45.810 [main] INFO  c.a.otter.canal.instance.spring.CanalInstanceWithSpring - start successful....

3. Close

sh bin/stop.sh


Note:
1. Only the parameters above need to be configured; Canal can then be started directly.
2. Canal does not write the parsed binlog to files; nothing is persisted.

Step 2. Create the client
Refer to https://github.com/alibaba/canal/wiki/ClientExample

The client consists of two classes: one that connects to Canal and processes its events, and a utility class for Redis. Maven is used mainly to pull in the dependencies, which keeps setup simple.

1. pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.alibaba.otter</groupId>
  <artifactId>canal.sample</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <dependencies>
    <dependency>  
        <groupId>com.alibaba.otter</groupId>  
        <artifactId>canal.client</artifactId>  
        <version>1.0.12</version>  
    </dependency>  
    
    <dependency>  
        <groupId>org.springframework</groupId>  
        <artifactId>spring-test</artifactId>  
        <version>3.1.2.RELEASE</version>  
        <scope>test</scope>  
    </dependency>  
      
    <dependency>  
        <groupId>redis.clients</groupId>  
        <artifactId>jedis</artifactId>  
        <version>2.4.2</version>  
    </dependency>

    <!-- fastjson provides the JSONObject used in ClientSample; the version below is illustrative -->
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>fastjson</artifactId>
        <version>1.2.83</version>
    </dependency>

  </dependencies>
  <build/>
</project>

2. ClientSample code
This class does two main things: it fetches data from Canal in a loop, and it applies the changes to Redis.

package canal.sample;

import java.net.InetSocketAddress;  
import java.util.List;  

import com.alibaba.fastjson.JSONObject;
import com.alibaba.otter.canal.client.CanalConnector;  
import com.alibaba.otter.canal.common.utils.AddressUtils;  
import com.alibaba.otter.canal.protocol.Message;  
import com.alibaba.otter.canal.protocol.CanalEntry.Column;  
import com.alibaba.otter.canal.protocol.CanalEntry.Entry;  
import com.alibaba.otter.canal.protocol.CanalEntry.EntryType;  
import com.alibaba.otter.canal.protocol.CanalEntry.EventType;  
import com.alibaba.otter.canal.protocol.CanalEntry.RowChange;  
import com.alibaba.otter.canal.protocol.CanalEntry.RowData;  
import com.alibaba.otter.canal.client.CanalConnectors;
 
public class ClientSample {  

   public static void main(String args[]) {  
	   
       // create link  
       CanalConnector connector = CanalConnectors.newSingleConnector(new InetSocketAddress(AddressUtils.getHostIp(),  
               11111), "example", "", "");  
       int batchSize = 1000;  
       try {  
           connector.connect();  
           connector.subscribe(".*\\..*");  
           connector.rollback();    
           while (true) {  
               Message message = connector.getWithoutAck(batchSize); // Get the specified amount of data  
               long batchId = message.getId();  
               int size = message.getEntries().size();  
               if (batchId == -1 || size == 0) {  
                   try {  
                       Thread.sleep(1000);  
                   } catch (InterruptedException e) {  
                       e.printStackTrace();
                   }  
               } else {  
                   printEntry(message.getEntries());  
               }  
 
               connector.ack(batchId); // submit confirmation  
               // connector.rollback(batchId); // processing failed, rollback data  
           }  
 
       } finally {  
           connector.disconnect();  
       }  
   }  
 
   private static void printEntry( List<Entry> entrys) {  
       for (Entry entry : entrys) {  
           if (entry.getEntryType() == EntryType.TRANSACTIONBEGIN || entry.getEntryType() == EntryType.TRANSACTIONEND) {  
               continue;  
           }  
 
           RowChange rowChage = null;  
           try {  
               rowChage = RowChange.parseFrom(entry.getStoreValue());  
           } catch (Exception e) {  
               throw new RuntimeException("ERROR ## parser of eromanga-event has an error , data:" + entry.toString(),  
                       e);  
           }  
 
           EventType eventType = rowChage.getEventType();  
           System.out.println(String.format("================> binlog[%s:%s] , name[%s,%s] , eventType : %s",  
                   entry.getHeader().getLogfileName(), entry.getHeader().getLogfileOffset(),  
                   entry.getHeader().getSchemaName(), entry.getHeader().getTableName(),  
                   eventType));  
 
           for (RowData rowData : rowChage.getRowDatasList()) {
               if (eventType == EventType.DELETE) {
                   redisDelete(rowData.getBeforeColumnsList());
               } else if (eventType == EventType.INSERT) {
                   redisInsert(rowData.getAfterColumnsList());
               } else {
                   System.out.println("-------> before");
                   printColumn(rowData.getBeforeColumnsList());
                   System.out.println("-------> after");
                   redisUpdate(rowData.getAfterColumnsList());
               }
           }
       }  
   }  
 
   private static void printColumn( List<Column> columns) {  
       for (Column column : columns) {  
           System.out.println(column.getName() + " : " + column.getValue() + "    update=" + column.getUpdated());  
       }  
   }  
   
   private static void redisInsert(List<Column> columns) {
       // Serialize the whole row as JSON and store it under "user:<value of the first column>".
       // This assumes the first column is the row's primary key.
       JSONObject json = new JSONObject();
       for (Column column : columns) {
           json.put(column.getName(), column.getValue());
       }
       if (columns.size() > 0) {
           RedisUtil.stringSet("user:" + columns.get(0).getValue(), json.toJSONString());
       }
   }

   private static void redisUpdate(List<Column> columns) {
       // An update is handled like an insert: rebuild the JSON and overwrite the existing value.
       JSONObject json = new JSONObject();
       for (Column column : columns) {
           json.put(column.getName(), column.getValue());
       }
       if (columns.size() > 0) {
           RedisUtil.stringSet("user:" + columns.get(0).getValue(), json.toJSONString());
       }
   }

   private static void redisDelete(List<Column> columns) {
       // For a delete only the key matters: remove the corresponding Redis entry.
       if (columns.size() > 0) {
           RedisUtil.delKey("user:" + columns.get(0).getValue());
       }
   }

   
}  
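As a concrete illustration of the key scheme used above (assuming a hypothetical user table whose first column is the id), an insert and a delete boil down to calls like:

// Hypothetical example: after INSERT INTO user(id, name) VALUES (1, 'alice') the client effectively runs
RedisUtil.stringSet("user:1", "{\"id\":\"1\",\"name\":\"alice\"}");
// an UPDATE overwrites the same key, and after DELETE FROM user WHERE id = 1:
RedisUtil.delKey("user:1");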


3. RedisUtil code

package canal.sample;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class RedisUtil {

	// Redis server IP
	private static String ADDR = "10.1.2.190";

	// Port number of Redis
	private static int PORT = 6379;

	// access password
	private static String AUTH = "admin";

	// Maximum number of connections in the pool (default 8).
	// A value of -1 means no limit; once that many jedis instances have been allocated, the pool is exhausted.
	private static int MAX_ACTIVE = 1024;

	// Maximum number of idle jedis instances in the pool (default 8).
	private static int MAX_IDLE = 200;

	// Maximum time, in milliseconds, to wait for an available connection (default -1, i.e. wait forever).
	// If the wait is exceeded, a JedisConnectionException is thrown.
	private static int MAX_WAIT = 10000;

	// Key expiration time, in seconds (24 hours)
	protected static int expireTime = 60 * 60 * 24;
	
	// connection pool
	protected static JedisPool pool;

	/**
	 * Static initializer; runs once when the class is first loaded
	 */
	static {
		JedisPoolConfig config = new JedisPoolConfig();
		// maximum number of connections
		config.setMaxTotal(MAX_ACTIVE);
		// maximum number of idle instances
		config.setMaxIdle(MAX_IDLE);
		// maximum wait time for a connection
		config.setMaxWaitMillis(MAX_WAIT);
		// do not validate connections when they are borrowed
		config.setTestOnBorrow(false);
		// pass AUTH as a fifth constructor argument if the Redis server requires a password
		pool = new JedisPool(config, ADDR, PORT, 1000);
	}

	/**
	 * Get the jedis instance
	 */
	protected static synchronized Jedis getJedis() {
		Jedis jedis = null;
		try {
			jedis = pool.getResource();
		} catch (Exception e) {
			e.printStackTrace();
			if (jedis != null) {
				pool.returnBrokenResource(jedis);
			}
		}
		return jedis;
	}

	/**
	 * Release jedis resources
	 *
	 * @param jedis
	 * @param isBroken
	 */
	protected static void closeResource(Jedis jedis, boolean isBroken) {
		try {
			if (isBroken) {
				pool.returnBrokenResource(jedis);
			} else {
				pool.returnResource(jedis);
			}
		} catch (Exception e) {

		}
	}

	/**
	 * whether the key exists
	 *
	 * @param key
	 */
	public static boolean existKey(String key) {
		Jedis jedis = null;
		boolean isBroken = false;
		try {
			jedis = getJedis();
			jedis.select(0);
			return jedis.exists(key);
		} catch (Exception e) {
			isBroken = true;
		} finally {
			closeResource(jedis, isBroken);
		}
		return false;
	}

	/**
	 * delete key
	 *
	 * @param key
	 */
	public static void delKey(String key) {
		Jedis jedis = null;
		boolean isBroken = false;
		try {
			jedis = getJedis();
			jedis.select(0);
			jedis.del(key);
		} catch (Exception e) {
			isBroken = true;
		} finally {
			closeResource(jedis, isBroken);
		}
	}

	/**
	 * Get the value of the key
	 *
	 * @param key
	 */
	public static String stringGet(String key) {
		Jedis jedis = null;
		boolean isBroken = false;
		String lastVal = null;
		try {
			jedis = getJedis();
			jedis.select(0);
			lastVal = jedis.get(key);
			jedis.expire(key, expireTime);
		} catch (Exception e) {
			isBroken = true;
		} finally {
			closeResource(jedis, isBroken);
		}
		return lastVal;
	}

	/**
	 * Add string data
	 *
	 * @param key
	 * @param value
	 */
	public static String stringSet(String key, String value) {
		Jedis jedis = null;
		boolean isBroken = false;
		String lastVal = null;
		try {
			jedis = getJedis();
			jedis.select(0);
			lastVal = jedis.set(key, value);
			jedis.expire(key, expireTime);
		} catch (Exception e) {
			e.printStackTrace();
			isBroken = true;
		} finally {
			closeResource(jedis, isBroken);
		}
		return lastVal;
	}

	/**
	 * Add hash data
	 *
	 * @param key
	 * @param field
	 * @param value
	 */
	public static void hashSet(String key, String field, String value) {
		boolean isBroken = false;
		Jedis jedis = null;
		try {
			jedis = getJedis();
			if (jedis != null) {
				jedis.select(0);
				jedis.hset(key, field, value);
				jedis.expire(key, expireTime);
			}
		} catch (Exception e) {
			isBroken = true;
		} finally {
			closeResource(jedis, isBroken);
		}
	}

}

Notice:

1. The Jedis usage here differs from the way Jedis is usually wired into a project, where the connection pool is injected through Spring annotations; in this standalone client the static utility methods can be used directly.
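For comparison, a minimal sketch of the Spring-managed wiring mentioned above (not needed for the standalone client; it assumes spring-context is on the classpath, and the class name is hypothetical):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

// Hypothetical Spring configuration exposing a JedisPool bean that can be @Autowired elsewhere.
@Configuration
public class RedisConfig {
    @Bean
    public JedisPool jedisPool() {
        JedisPoolConfig config = new JedisPoolConfig();
        config.setMaxTotal(1024);
        return new JedisPool(config, "10.1.2.190", 6379, 1000);
    }
}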

Run
1. Start the canal server: sh bin/startup.sh (or startup.bat on Windows)
2. Run the client program

Note
1. Although the canal server does not persist the data it parses from the binlog, it does record how far each client has consumed (the pos point is saved every time the client acks a batch). So if the canal server and the client both go down while data is being updated, MySQL keeps inserting rows while Redis stops being updated, and the two stores temporarily diverge. The fix is simply to restart the canal server and the client: the server resumes parsing from the last pos point acked by the client, the client replays those changes into Redis, and the data becomes consistent again.
2. With a single canal server and a single client, availability is obviously limited. One approach is to monitor both processes and restart whichever one dies. Another is to run multiple canal servers together with ZooKeeper: the instance configuration (including the recorded consumption pos point) is stored in ZooKeeper, so when one canal server goes down another takes over with the same configuration, keeping the service highly available. The client can be made highly available in the same way.
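With the ZooKeeper-based setup described in point 2, the client side only needs to switch from the single connector to the cluster connector; the consume loop itself is unchanged. A minimal sketch (the ZooKeeper address is a placeholder):

// HA client sketch: locate the active canal server through ZooKeeper.
// "127.0.0.1:2181" and the destination "example" are placeholder values.
CanalConnector connector = CanalConnectors.newClusterConnector(
        "127.0.0.1:2181", "example", "", "");
connector.connect();
connector.subscribe(".*\\..*");
connector.rollback();   // resume from the last acked position
// ... same getWithoutAck / ack loop as in ClientSample ...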

http://m.blog.csdn.net/article/details?id=50371405
