A trial of Dangdang's open-source job scheduling framework, elastic-job 1.1.1
Install ZooKeeper
http://zookeeper.apache.org/doc/r3.4.6/zookeeperStarted.html
A ZooKeeper startup script for Windows (I use Ubuntu, so I have not tested this script).
Source: http://www.ibm.com/developerworks/cn/opensource/os-cn-zookeeper/index.html
setlocal
set ZOOCFGDIR=%~dp0%..\conf
set ZOO_LOG_DIR=%~dp0%..
set ZOO_LOG4J_PROP=INFO,CONSOLE
set CLASSPATH=%ZOOCFGDIR%
set CLASSPATH=%~dp0..\*;%~dp0..\lib\*;%CLASSPATH%
set CLASSPATH=%~dp0..\build\classes;%~dp0..\build\lib\*;%CLASSPATH%
set ZOOCFG=%ZOOCFGDIR%\zoo.cfg
set ZOOMAIN=org.apache.zookeeper.server.ZooKeeperServerMain
java "-Dzookeeper.log.dir=%ZOO_LOG_DIR%" "-Dzookeeper.root.logger=%ZOO_LOG4J_PROP%" -cp "%CLASSPATH%" %ZOOMAIN% "%ZOOCFG%" %*
endlocal
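The script above expects a zoo.cfg under conf\. A minimal standalone configuration looks like the following (the dataDir path is an example; adjust it to your installation):

```properties
# Minimal zoo.cfg for a standalone ZooKeeper instance
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
```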
Add the Maven dependencies (the usual Spring dependencies are assumed and omitted here):
<!-- Introduce elastic-job core module -->
<dependency>
<groupId>com.dangdang</groupId>
<artifactId>elastic-job-core</artifactId>
<version>1.1.1</version>
</dependency>
<!-- Introduced when using springframework to customize the namespace -->
<dependency>
<groupId>com.dangdang</groupId>
<artifactId>elastic-job-spring</artifactId>
<version>1.1.1</version>
</dependency>
The artifacts may not be found. If you use a local Nexus Maven repository, you can log in to its web interface (e.g. http://192.168.1.250:8081/nexus/#view-repositories;central~browsestorage), delete the corresponding cached folder, and try again.
If that still does not work, try Maven's -U flag to force an update of the repository metadata.
Registering jobs in the Spring configuration file: change the ZooKeeper IP to match your environment; the namespace dd-job will be created in ZooKeeper.
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:reg="http://www.dangdang.com/schema/ddframe/reg"
xmlns:job="http://www.dangdang.com/schema/ddframe/job"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.dangdang.com/schema/ddframe/reg
http://www.dangdang.com/schema/ddframe/reg/reg.xsd
http://www.dangdang.com/schema/ddframe/job
http://www.dangdang.com/schema/ddframe/job/job.xsd
">
<!-- Configure the job registry center -->
<reg:zookeeper id="regCenter" server-lists="192.168.1.251:2181" namespace="dd-job" base-sleep-time-milliseconds="1000" max-sleep-time-milliseconds="3000" max-retries="3" />
<!-- Configure the job -->
<job:simple id="myElasticJob" class="test.MyElasticJob" registry-center-ref="regCenter" cron="0/10 * * * * ?" sharding-total-count="3" sharding-item-parameters="0=A,1=B,2=C" />
</beans>
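The sharding-item-parameters attribute maps each shard item to a business parameter (here 0=A, 1=B, 2=C). As an illustrative sketch of how such a string can be parsed into a map (this is plain Java for explanation only, not elastic-job's internal parser):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ShardingItemParametersParser {

    // Parse a string like "0=A,1=B,2=C" into a shard-item -> parameter map.
    public static Map<Integer, String> parse(String value) {
        Map<Integer, String> result = new LinkedHashMap<>();
        for (String pair : value.split(",")) {
            String[] kv = pair.trim().split("=", 2);
            result.put(Integer.parseInt(kv[0].trim()), kv[1].trim());
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(parse("0=A,1=B,2=C")); // {0=A, 1=B, 2=C}
    }
}
```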
Job class test.MyElasticJob:
package test;

import java.util.List;
import java.util.Map;

import com.dangdang.ddframe.job.api.JobExecutionMultipleShardingContext;
import com.dangdang.ddframe.job.plugin.job.type.simple.AbstractSimpleElasticJob;

public class MyElasticJob extends AbstractSimpleElasticJob {

    public MyElasticJob() {
        System.out.println("MyElasticJob");
    }

    @Override
    public void process(JobExecutionMultipleShardingContext context) {
        System.out.println("context:" + context);
        // String param = context.getJobParameter();
        // Map<Integer, String> map = context.getShardingItemParameters();
        // List<Integer> list = context.getShardingItems();
        // String name = context.getJobName();
        // Map<Integer, String> offset = context.getOffsets();
        // System.out.println(System.currentTimeMillis() / 1000 + ":" + param + ",map:" + map + ",list:" + list + ",name:" + name + ",offset:" + offset);
    }
}
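Inside process, a job instance typically uses context.getShardingItems() to decide which slice of the data it owns. A common convention is modulo partitioning by record id; the following is an illustrative sketch in plain Java (the method and class names are mine, not elastic-job API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ModuloShardingDemo {

    // Keep only the record ids this instance owns: id % totalShards must be one of its shard items.
    public static List<Integer> ownedRecords(List<Integer> recordIds, List<Integer> shardingItems, int totalShards) {
        List<Integer> owned = new ArrayList<>();
        for (int id : recordIds) {
            if (shardingItems.contains(id % totalShards)) {
                owned.add(id);
            }
        }
        return owned;
    }

    public static void main(String[] args) {
        List<Integer> all = Arrays.asList(0, 1, 2, 3, 4, 5, 6, 7, 8);
        // An instance holding shard items 0 and 2 out of 3 total shards:
        System.out.println(ownedRecords(all, Arrays.asList(0, 2), 3)); // [0, 2, 3, 5, 6, 8]
    }
}
```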
Loading the Spring configuration:
package test;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class MainBusinessProcess {

    public static void main(String[] args) {
        ApplicationContext ctx = new ClassPathXmlApplicationContext("classpath:spring-bean.xml");
        System.out.println(ctx);
    }
}
Registering jobs in Java code: change the ZooKeeper IP to match your environment; the namespace elastic-job-example will be created in ZooKeeper.
package test;

import com.dangdang.ddframe.job.api.JobScheduler;
import com.dangdang.ddframe.job.api.config.impl.DataFlowJobConfiguration;
import com.dangdang.ddframe.job.api.config.impl.DataFlowJobConfiguration.DataFlowJobConfigurationBuilder;
import com.dangdang.ddframe.job.api.config.impl.SimpleJobConfiguration;
import com.dangdang.ddframe.job.api.config.impl.SimpleJobConfiguration.SimpleJobConfigurationBuilder;
import com.dangdang.ddframe.reg.base.CoordinatorRegistryCenter;
import com.dangdang.ddframe.reg.zookeeper.ZookeeperConfiguration;
import com.dangdang.ddframe.reg.zookeeper.ZookeeperRegistryCenter;

public class JobRegDemo {

    // ZooKeeper registry configuration: server list, namespace, base/max sleep (ms), max retries
    private ZookeeperConfiguration zkConfig = new ZookeeperConfiguration("192.168.1.251:2181", "elastic-job-example", 1000, 3000, 3);

    // ZooKeeper registry center
    private CoordinatorRegistryCenter regCenter = new ZookeeperRegistryCenter(zkConfig);

    // Configuration builder for job 1
    private SimpleJobConfigurationBuilder jobConfig1build = new SimpleJobConfiguration.SimpleJobConfigurationBuilder("simpleJobDemo", SimpleJobDemo.class, 10, "0/5 * * * * ?");

    // Configuration builder for job 2
    private DataFlowJobConfigurationBuilder jobConfig2build = new DataFlowJobConfiguration.DataFlowJobConfigurationBuilder("dataFlowElasticJobDemo", DataFlowElasticJobDemo.class, 10, "0/5 * * * * ?");

    // Configuration for job 3
    //private JobConfiguration jobConfig3build = new JobConfiguration("sequencePerpetualElasticDemoJob", SequencePerpetualElasticDemoJob.class, 10, "0/5 * * * * ?");

    public static void main(final String[] args) {
        new JobRegDemo().init();
    }

    private void init() {
        // Connect to the registry center
        regCenter.init();
        // Start job 1
        new JobScheduler(regCenter, jobConfig1build.build()).init();
        // Start job 2
        new JobScheduler(regCenter, jobConfig2build.build()).init();
        // Start job 3
        //new JobScheduler(regCenter, jobConfig3build.build()).init();
    }
}
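The cron expressions above use Quartz syntax; "0/5 * * * * ?" fires every 5 seconds, and the "0/10 * * * * ?" used in the Spring example fires every 10 seconds. A tiny sketch that expands a seconds field of the form "start/step" (illustrative only, not a Quartz parser):

```java
import java.util.ArrayList;
import java.util.List;

public class CronSecondsFieldDemo {

    // Expand a Quartz-style seconds field "start/step" into the matching seconds within 0-59.
    public static List<Integer> expand(String field) {
        String[] parts = field.split("/");
        int start = Integer.parseInt(parts[0]);
        int step = Integer.parseInt(parts[1]);
        List<Integer> seconds = new ArrayList<>();
        for (int s = start; s < 60; s += step) {
            seconds.add(s);
        }
        return seconds;
    }

    public static void main(String[] args) {
        System.out.println(expand("0/10")); // [0, 10, 20, 30, 40, 50]
    }
}
```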
Job class test.DataFlowElasticJobDemo:
package test;

import java.util.List;
import java.util.concurrent.ExecutorService;

import com.dangdang.ddframe.job.api.DataFlowElasticJob;
import com.dangdang.ddframe.job.api.JobExecutionMultipleShardingContext;
import com.dangdang.ddframe.job.exception.JobException;
import com.dangdang.ddframe.job.internal.schedule.JobFacade;

public class DataFlowElasticJobDemo implements DataFlowElasticJob<String, JobExecutionMultipleShardingContext> {

    @Override
    public void execute() {
        System.out.println("DataFlowElasticJobDemo");
    }

    @Override
    public void handleJobExecutionException(JobException jobException) {
        // TODO Auto-generated method stub
    }

    @Override
    public JobFacade getJobFacade() {
        // TODO Auto-generated method stub
        return null;
    }

    @Override
    public void setJobFacade(JobFacade jobFacade) {
        // TODO Auto-generated method stub
    }

    @Override
    public List<String> fetchData(JobExecutionMultipleShardingContext shardingContext) {
        // TODO Auto-generated method stub
        return null;
    }

    @Override
    public void updateOffset(int item, String offset) {
        // TODO Auto-generated method stub
    }

    @Override
    public ExecutorService getExecutorService() {
        // TODO Auto-generated method stub
        return null;
    }
}
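The dataflow job type is built around a fetch, process, update-offset cycle: fetchData returns the next batch, the framework processes it, and updateOffset records progress. The following is a simplified plain-Java sketch of that contract (illustrative only; in elastic-job the real scheduler drives this loop, and the class and method bodies here are my own):

```java
import java.util.ArrayList;
import java.util.List;

public class DataFlowLoopSketch {

    private final List<String> source;
    private int offset; // position of the next unprocessed record

    public DataFlowLoopSketch(List<String> source) {
        this.source = source;
    }

    // Fetch the next batch of records, or an empty list when the source is drained.
    public List<String> fetchData(int batchSize) {
        int end = Math.min(offset + batchSize, source.size());
        return new ArrayList<>(source.subList(offset, end));
    }

    // Record that a batch was processed so the next fetch starts after it.
    public void updateOffset(int processedCount) {
        offset += processedCount;
    }

    // Drive fetch -> process -> updateOffset until no data remains.
    public List<String> runOnce(int batchSize) {
        List<String> processed = new ArrayList<>();
        List<String> batch;
        while (!(batch = fetchData(batchSize)).isEmpty()) {
            processed.addAll(batch); // "process" each record
            updateOffset(batch.size());
        }
        return processed;
    }

    public static void main(String[] args) {
        List<String> data = List.of("a", "b", "c", "d", "e");
        System.out.println(new DataFlowLoopSketch(data).runOnce(2)); // [a, b, c, d, e]
    }
}
```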
Job class test.SimpleJobDemo:
package test;

import java.util.List;
import java.util.Map;

import com.dangdang.ddframe.job.api.JobExecutionMultipleShardingContext;
import com.dangdang.ddframe.job.plugin.job.type.simple.AbstractSimpleElasticJob;

public class SimpleJobDemo extends AbstractSimpleElasticJob {

    public SimpleJobDemo() {
        System.out.println("SimpleJobDemo");
    }

    @Override
    public void process(JobExecutionMultipleShardingContext context) {
        System.out.println("context:" + context);
    }
}
Either main method can be run; after startup, the jobs execute periodically.
Source code: https://github.com/dangdangdotcom/elastic-job
elastic-job-console is the monitoring console, a web interface for monitoring and operating the currently running jobs.
Packaging with Maven produces elastic-job-console-1.1.1.war.
Deploy it to Tomcat and access http://localhost:8080/elastic-job-console-1.1.1 directly.
The username and password are both root.
After logging in, register the ZooKeeper connection in the registry center (enter the IP, port, and namespace); you can then view the three jobs started by the two methods above.
Note that there are two namespaces: dd-job (1 job) and elastic-job-example (2 jobs).
With a single machine you cannot observe the sharding, because all shard items are assigned to that one machine.
With several machines you can see the difference in the shardingItems of each job's context. If a machine goes down, the shards are reassigned to the remaining instances. This open-source job framework is quite useful.