Quartz 2.2.x Study Series (3): Tutorials - Lesson 3: More About Jobs and Job Details

Lesson 3 summary:

I. The Job implementation class:

  1. It has only one method, execute(); everything the Job needs is passed in through the execute() parameter, the JobExecutionContext.
  2. A Job must have a no-argument constructor (when using the default JobFactory implementation).
  3. It makes no sense to define state data fields on the job class, because their values are not preserved between job executions.

 

Data fields defined on the job class are not preserved between job executions. If you want to pass values from one execution to the next, use the JobDetail's JobDataMap.

The values are passed by serializing the data, so if you put JDBC-JobStore and JobDataMap into the mode where only primitives and strings may be stored in the map, serialization problems are eliminated.

 

Main members of JobExecutionContext (see the source code for the rest):

  • The Job instance, along with its run-time environment information
  • Scheduler: a handle to the scheduler that is executing this Job
  • The JobDetail's JobDataMap: used to pass values to the Job. By adding @PersistJobDataAfterExecution to the Job class, changes made to this map are carried over to the next execution of the same JobDetail (same composite key, name + group), i.e. values can be passed across executions of that JobDetail. If you use @PersistJobDataAfterExecution, you should strongly consider also using @DisallowConcurrentExecution, to avoid confusion (race conditions) over what data ends up stored when two instances of the same job (JobDetail) execute concurrently.
  • The Trigger's JobDataMap: also used to pass values to the Job. @PersistJobDataAfterExecution has no effect on it.
  • The merged JobDataMap: the JobDetail's JobDataMap merged with the Trigger's JobDataMap; during the merge, values from the Trigger's map override any same-named values from the JobDetail's map. @PersistJobDataAfterExecution has no effect on it either.

 

A Job can obtain JobDataMap values in two ways:

1. Explicitly, by fetching values from the JobDataMap by key.

2. Implicitly, by defining setter methods on the Job class that correspond to the JobDataMap keys; Quartz then calls those setters automatically. This usually reduces the amount of code in execute() that pulls values out of the JobDataMap.

 

On concurrency control:

We disallow concurrent execution by adding @DisallowConcurrentExecution to the Job implementation class.

This only guarantees that the same JobDetail (the same JobKey, i.e. the name + group combination) will not run concurrently.

Different JobDetails (different JobKeys) are not affected and can still execute concurrently.

 

II. The JobDetail class:

We give the scheduler a JobDetail instance; it knows the type of job to execute simply because we supply the job's class when building the JobDetail. Each time the scheduler executes the job, it creates a new instance of that class before calling its execute() method. When execution completes, references to the job instance are dropped and the instance is garbage collected.

 

III. The relationship between the Job implementation class and the JobDetail class:

The Job implementation class contains the code that does the actual work, and that code can obtain data from the JobDetail or from the Trigger associated with the JobDetail. The JobDetail class carries the configuration for the Job class it is tied to, and it is the JobDetail that is associated with the Scheduler and Trigger (the Job class itself is never associated with the scheduler or trigger directly).

Each time a Trigger fires the JobDetail, a new instance of the Job class is created; after execute() finishes, the reference to that instance is dropped and the instance is garbage collected.

 

 

Lesson 3: More About Jobs and Job Details

As you saw in Lesson 2, Jobs are rather easy to implement, having just a single ‘execute’ method in the interface. There are just a few more things that you need to understand about the nature of jobs, about the execute(..) method of the Job interface, and about JobDetails.

While a job class that you implement has the code that knows how to do the actual work of the particular type of job, Quartz needs to be informed about various attributes that you may wish an instance of that job to have. This is done via the JobDetail class, which was mentioned briefly in the previous section.

JobDetail instances are built using the JobBuilder class. You will typically want to use a static import of all of its methods, in order to have the DSL-feel within your code.

 


 

import static org.quartz.JobBuilder.*;

Let’s take a moment now to discuss a bit about the ‘nature’ of Jobs and the life-cycle of job instances within Quartz. First let’s take a look back at some of that snippet of code we saw in Lesson 1:

 


 

  // define the job and tie it to our HelloJob class
  JobDetail job = newJob(HelloJob.class)
      .withIdentity("myJob", "group1") // name "myJob", group "group1"
      .build();

  // Trigger the job to run now, and then every 40 seconds
  Trigger trigger = newTrigger()
      .withIdentity("myTrigger", "group1")
      .startNow()
      .withSchedule(simpleSchedule()
          .withIntervalInSeconds(40)
          .repeatForever())
      .build();

  // Tell quartz to schedule the job using our trigger
  sched.scheduleJob(job, trigger);

 

Now consider the job class “HelloJob” defined as such:


  public class HelloJob implements Job {

    public HelloJob() {
    }

    public void execute(JobExecutionContext context)
      throws JobExecutionException
    {
      System.err.println("Hello!  HelloJob is executing.");
    }
  }

 

Notice that we give the scheduler a JobDetail instance, and that it knows the type of job to be executed by simply providing the job’s class as we build the JobDetail. Each (and every) time the scheduler executes the job, it creates a new instance of the class before calling its execute(..) method. When the execution is complete, references to the job class instance are dropped, and the instance is then garbage collected. One of the ramifications of this behavior is the fact that jobs must have a no-argument constructor (when using the default JobFactory implementation). Another ramification is that it does not make sense to have state data-fields defined on the job class - as their values would not be preserved between job executions.

 

You may now be wanting to ask “how can I provide properties/configuration for a Job instance?” and “how can I keep track of a job’s state between executions?” The answer to these questions are the same: the key is the JobDataMap, which is part of the JobDetail object.

 


 

The issue: because a new instance of the Job implementation class is created before every call to execute(), and that instance is discarded as soon as execute() finishes, repeated executions simply keep instantiating and discarding instances, so no values can be passed between two different instances on their own.

 


 

 

JobDataMap

The JobDataMap can be used to hold any amount of (serializable) data objects which you wish to have made available to the job instance when it executes. JobDataMap is an implementation of the Java Map interface, and has some added convenience methods for storing and retrieving data of primitive types.

Here’s some quick snippets of putting data into the JobDataMap while defining/building the JobDetail, prior to adding the job to the scheduler:

 


 

  // define the job and tie it to our DumbJob class
  JobDetail job = newJob(DumbJob.class)
      .withIdentity("myJob", "group1") // name "myJob", group "group1"
      .usingJobData("jobSays", "Hello World!")
      .usingJobData("myFloatValue", 3.141f)
      .build();

 

Here’s a quick example of getting data from the JobDataMap during the job’s execution:


  public class DumbJob implements Job {

    public DumbJob() {
    }

    public void execute(JobExecutionContext context)
      throws JobExecutionException
    {
      JobKey key = context.getJobDetail().getKey();

      JobDataMap dataMap = context.getJobDetail().getJobDataMap();

      String jobSays = dataMap.getString("jobSays");
      float myFloatValue = dataMap.getFloat("myFloatValue");

      System.err.println("Instance " + key + " of DumbJob says: " + jobSays + ", and val is: " + myFloatValue);
    }
  }

 

If you use a persistent JobStore (discussed in the JobStore section of this tutorial) you should use some care in deciding what you place in the JobDataMap, because the object in it will be serialized, and they therefore become prone to class-versioning problems. Obviously standard Java types should be very safe, but beyond that, any time someone changes the definition of a class for which you have serialized instances, care has to be taken not to break compatibility. Optionally, you can put JDBC-JobStore and JobDataMap into a mode where only primitives and strings are allowed to be stored in the map, thus eliminating any possibility of later serialization problems.

 

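This primitives-and-strings-only mode is switched on in the JDBC job store configuration. A minimal, hedged sketch of the relevant quartz.properties entries (the JobStoreTX choice here is only illustrative; the key setting is useProperties):

  # illustrative excerpt from quartz.properties
  org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
  # store JobDataMap entries only as name/value string pairs,
  # so map contents are never Java-serialized
  org.quartz.jobStore.useProperties = true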

 

If you add setter methods to your job class that correspond to the names of keys in the JobDataMap (such as a setJobSays(String val) method for the data in the example above), then Quartz’s default JobFactory implementation will automatically call those setters when the job is instantiated, thus preventing the need to explicitly get the values out of the map within your execute method.

 

Triggers can also have JobDataMaps associated with them. This can be useful in the case where you have a Job that is stored in the scheduler for regular/repeated use by multiple Triggers, yet with each independent triggering, you want to supply the Job with different data inputs.

The JobDataMap that is found on the JobExecutionContext during Job execution serves as a convenience. It is a merge of the JobDataMap found on the JobDetail and the one found on the Trigger, with the values in the latter overriding any same-named values in the former.

 

(For a working example of this setter-based injection, see Lab 2 further below.)

 


 

Note that during execution you read this merged map through the JobExecutionContext's getMergedJobDataMap() method.

 

The relevant Quartz source code (JobExecutionContextImpl) looks like this:

--------------------

public JobExecutionContextImpl(Scheduler scheduler,
        TriggerFiredBundle firedBundle, Job job) {

    ...

    this.jobDataMap = new JobDataMap();
    this.jobDataMap.putAll(jobDetail.getJobDataMap());
    // if a key exists in both maps, the trigger's value overrides the jobDetail's value
    this.jobDataMap.putAll(trigger.getJobDataMap());
}

/**
 * {@inheritDoc}
 */
public JobDataMap getMergedJobDataMap() {
    return jobDataMap;
}

------------------

 

Here’s a quick example of getting data from the JobExecutionContext’s merged JobDataMap during the job’s execution:


 

---------------------------------------------------------------

package com.practice.quartz.job;

import java.util.ArrayList;
import java.util.Date;

import org.quartz.Job;
import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.JobKey;

public class DumbJob2 implements Job {

    public DumbJob2() {
    }

    public void execute(JobExecutionContext context) throws JobExecutionException {
        JobKey key = context.getJobDetail().getKey();

        JobDataMap dataMap = context.getMergedJobDataMap(); // Note the difference from the previous example

        String jobSays = dataMap.getString("jobSays");
        float myFloatValue = dataMap.getFloat("myFloatValue");
        ArrayList state = (ArrayList) dataMap.get("myStateData");
        state.add(new Date());

        System.err.println("Instance " + key + " of DumbJob says: " + jobSays + ", and val is: " + myFloatValue);
    }
}

---------------------------------------------------------------

 

Or if you wish to rely on the JobFactory “injecting” the data map values onto your class, it might look like this instead:

That is, the Job implementation class declares setter methods matching the JobDataMap keys, and the default JobFactory injects the values.

 

--------------------------------------------------------------

public class DumbJob3 implements Job {

    String jobSays;
    float myFloatValue;
    ArrayList state;

    public DumbJob3() {
    }

    public void execute(JobExecutionContext context) throws JobExecutionException {
        JobKey key = context.getJobDetail().getKey();

        JobDataMap dataMap = context.getMergedJobDataMap(); // Note the difference from the previous example

        state.add(new Date());

        System.err.println("Instance " + key + " of DumbJob says: " + jobSays + ", and val is: " + myFloatValue);
    }

    public void setJobSays(String jobSays) {
        this.jobSays = jobSays;
    }

    public void setMyFloatValue(float myFloatValue) {
        this.myFloatValue = myFloatValue;
    }

    public void setState(ArrayList state) {
        this.state = state;
    }
}

--------------------------------------------------------------

 

 

You’ll notice that the overall code of the class is longer, but the code in the execute() method is cleaner. One could also argue that although the code is longer, that it actually took less coding, if the programmer’s IDE was used to auto-generate the setter methods, rather than having to hand-code the individual calls to retrieve the values from the JobDataMap. The choice is yours.

 


 

 

Job “Instances”

Many users spend time being confused about what exactly constitutes a “job instance”. We’ll try to clear that up here and in the section below about job state and concurrency.

You can create a single job class, and store many ‘instance definitions’ of it within the scheduler by creating multiple instances of JobDetails - each with its own set of properties and JobDataMap - and adding them all to the scheduler.

 


 

In other words, each JobDetail you define is an independent "instance definition" with its own properties and JobDataMap; each time one of them is executed, a fresh Job object is instantiated that has no connection to any other JobDetail or to previous executions.

 

For example, you can create a class that implements the Job interface called “SalesReportJob”. The job might be coded to expect parameters sent to it (via the JobDataMap) to specify the name of the sales person that the sales report should be based on. They may then create multiple definitions (JobDetails) of the job, such as “SalesReportForJoe” and “SalesReportForMike” which have “joe” and “mike” specified in the corresponding JobDataMaps as input to the respective jobs.

 

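As a hedged sketch of what such a job class and its definitions might look like (the key name "salesPersonName" and the group "reports" are illustrative assumptions, not taken from the tutorial):

  public class SalesReportJob implements Job {

    public void execute(JobExecutionContext context) throws JobExecutionException {
      // "salesPersonName" is an assumed key supplied through the JobDataMap
      String salesPerson = context.getMergedJobDataMap().getString("salesPersonName");
      System.out.println("Generating sales report for " + salesPerson);
    }
  }

  // two independent job definitions sharing the same job class
  JobDetail joeReport = newJob(SalesReportJob.class)
      .withIdentity("SalesReportForJoe", "reports")
      .usingJobData("salesPersonName", "joe")
      .build();

  JobDetail mikeReport = newJob(SalesReportJob.class)
      .withIdentity("SalesReportForMike", "reports")
      .usingJobData("salesPersonName", "mike")
      .build();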

 

 

When a trigger fires, the JobDetail (instance definition) it is associated to is loaded, and the job class it refers to is instantiated via the JobFactory configured on the Scheduler. The default JobFactory simply calls newInstance() on the job class, then attempts to call setter methods on the class that match the names of keys within the JobDataMap. You may want to create your own implementation of JobFactory to accomplish things such as having your application’s IoC or DI container produce/initialize the job instance.

 

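For reference, a minimal sketch of a custom JobFactory (the class name MyJobFactory and the container lookup are assumptions; real IoC integrations such as Spring's factory do considerably more):

  public class MyJobFactory implements org.quartz.spi.JobFactory {

    public Job newJob(TriggerFiredBundle bundle, Scheduler scheduler) throws SchedulerException {
      Class<? extends Job> jobClass = bundle.getJobDetail().getJobClass();
      try {
        // here you could ask your IoC/DI container for an instance instead,
        // e.g. return myContainer.getInstance(jobClass);  (hypothetical container API)
        return jobClass.newInstance();
      } catch (Exception e) {
        throw new SchedulerException("Failed to instantiate job class " + jobClass, e);
      }
    }
  }

  // register it on the scheduler before jobs start firing:
  // sched.setJobFactory(new MyJobFactory());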

 

In “Quartz speak”, we refer to each stored JobDetail as a “job definition” or “JobDetail instance”, and we refer to a each executing job as a “job instance” or “instance of a job definition”. Usually if we just use the word “job” we are referring to a named definition, or JobDetail. When we are referring to the class implementing the job interface, we usually use the term “job class”.

 


 

Job State and Concurrency

Now, some additional notes about a job’s state data (aka JobDataMap) and concurrency. There are a couple annotations that can be added to your Job class that affect Quartz’s behavior with respect to these aspects.

 


 

@DisallowConcurrentExecution is an annotation that can be added to the Job class that tells Quartz not to execute multiple instances of a given job definition (that refers to the given job class) concurrently.

Notice the wording there, as it was chosen very carefully. In the example from the previous section, if “SalesReportJob” has this annotation, then only one instance of “SalesReportForJoe” can execute at a given time, but it can execute concurrently with an instance of “SalesReportForMike”. The constraint is based upon an instance definition (JobDetail), not on instances of the job class. However, it was decided (during the design of Quartz) to have the annotation carried on the class itself, because it does often make a difference to how the class is coded.

 


 

@PersistJobDataAfterExecution is an annotation that can be added to the Job class that tells Quartz to update the stored copy of the JobDetail’s JobDataMap after the execute() method completes successfully (without throwing an exception), such that the next execution of the same job (JobDetail) receives the updated values rather than the originally stored values. Like the @DisallowConcurrentExecution annotation, this applies to a job definition instance, not a job class instance, though it was decided to have the job class carry the attribute because it does often make a difference to how the class is coded (e.g. the ‘statefulness’ will need to be explicitly ‘understood’ by the code within the execute method).

If you use the @PersistJobDataAfterExecution annotation, you should strongly consider also using the @DisallowConcurrentExecution annotation, in order to avoid possible confusion (race conditions) of what data was left stored when two instances of the same job (JobDetail) executed concurrently.

 


 

 

Other Attributes Of Jobs

Here’s a quick summary of the other properties which can be defined for a job instance via the JobDetail object:

  • Durability - if a job is non-durable, it is automatically deleted from the scheduler once there are no longer any active triggers associated with it. In other words, non-durable jobs have a life span bounded by the existence of its triggers.
  • RequestsRecovery - if a job “requests recovery”, and it is executing during the time of a ‘hard shutdown’ of the scheduler (i.e. the process it is running within crashes, or the machine is shut off), then it is re-executed when the scheduler is started again. In this case, the JobExecutionContext.isRecovering() method will return true.

 

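Both attributes are normally set while building the JobDetail. A short sketch, reusing ColorJob from the labs below (the identity values are illustrative):

  JobDetail job = newJob(ColorJob.class)
      .withIdentity("recoverableJob", "group1")
      .storeDurably()       // durable: keep the job stored even with no triggers pointing at it
      .requestRecovery()    // requests recovery: re-execute after a hard shutdown mid-run
      .build();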

 

JobExecutionException

Finally, we need to inform you of a few details of the Job.execute(..) method. The only type of exception (including RuntimeExceptions) that you are allowed to throw from the execute method is the JobExecutionException. Because of this, you should generally wrap the entire contents of the execute method with a ‘try-catch’ block. You should also spend some time looking at the documentation for the JobExecutionException, as your job can use it to provide the scheduler various directives as to how you want the exception to be handled.

 

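A hedged sketch of the usual pattern inside execute(): wrap the work in try-catch and rethrow as a JobExecutionException, optionally giving the scheduler a directive such as setRefireImmediately (the empty body is a placeholder for your real work):

  public void execute(JobExecutionContext context) throws JobExecutionException {
    try {
      // ... do the actual work here ...
    } catch (Exception e) {
      JobExecutionException jee = new JobExecutionException(e);
      // one possible directive: ask the scheduler to fire this job again immediately
      jee.setRefireImmediately(true);
      throw jee;
    }
  }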

 

Pasted from <http://www.quartz-scheduler.org/documentation/quartz-2.2.x/tutorials/tutorial-lesson-03.html>

Lab 1: Using the JobDataMap to pass values

1. The JobDetail passes values via its JobDataMap, and execute() reads them through the JobDetail object.

2. The Trigger passes values via its JobDataMap, and execute() reads them through the Trigger object.

3. In execute(), the merged JobDataMap of the JobDetail and the Trigger is read through JobExecutionContext.getMergedJobDataMap().

4. In execute(), we fetch the values manually by key rather than letting Quartz inject them through setters.

The demo code is as follows:

1. Create a Job implementation class, DumbJob

-------------------------------------------------------------------------------------------

package com.practice.quartz.job;

 

import java.util.ArrayList;

import java.util.Date;

 

import org.quartz.Job;

import org.quartz.JobDataMap;

import org.quartz.JobExecutionContext;

import org.quartz.JobExecutionException;

import org.quartz.JobKey;

import org.quartz.TriggerKey;

 

public class DumbJob implements Job {

public DumbJob() {

}

 

public void execute(JobExecutionContext context) throws JobExecutionException {

JobKey jobKey = context.getJobDetail().getKey();

TriggerKey triggerKey = context.getTrigger().getKey();

 

System.out.println("------------Name and Group for Job and Trigger-----------------------");

System.out.println("Jobkey-name:" + jobKey.getName() + ";Jobkey-group:" + jobKey.getGroup());

System.out.println("triggerKey-name:" + triggerKey.getName() + ";triggerKey-group:" + triggerKey.getGroup());

 

System.out.println("------------jobDataMap for jobDetail-----------------------");

JobDataMap jobDataMap = context.getJobDetail().getJobDataMap();

for(String keyEle : jobDataMap.getKeys()) {

System.out.println("key:" + keyEle + ";value:" + jobDataMap.get(keyEle));

}

 

System.out.println("------------jobDataMap for Trigger-----------------------");

JobDataMap triggerJobDataMap = context.getTrigger().getJobDataMap();

for(String keyEle : triggerJobDataMap.getKeys()) {

System.out.println("key:" + keyEle + ";value:" + triggerJobDataMap.get(keyEle));

}

 

System.out.println("------------jobDataMap for JobDetail and Trigger-----------------------");

JobDataMap mergedJobDataMap = context.getMergedJobDataMap();

for(String keyEle : mergedJobDataMap.getKeys()) {

System.out.println("key:" + keyEle + ";value:" + mergedJobDataMap.get(keyEle));

}

 

}

}

-------------------------------------------------------------------------------------------

 

2. Set up the JobDetail, Trigger, and Scheduler

-------------------------------------------------------------------------------------------

package com.practice.quartz.lesson3;

 

import static org.quartz.JobBuilder.newJob;

import static org.quartz.TriggerBuilder.newTrigger;

import org.junit.Test;

import static org.quartz.SimpleScheduleBuilder.simpleSchedule;

import org.quartz.JobDetail;

import org.quartz.Scheduler;

import org.quartz.SchedulerException;

import org.quartz.SchedulerFactory;

import org.quartz.Trigger;

 

import com.practice.quartz.job.DumbJob;

import com.practice.quartz.job.HelloJob;

 

public class Example1 {

public static void main(String[] args) throws Exception {

 

SchedulerFactory schedFact = new org.quartz.impl.StdSchedulerFactory();

 

Scheduler sched = schedFact.getScheduler();

 

sched.start();

 

// define the job and tie it to our DumbJob class

JobDetail job = newJob(DumbJob.class)//

.withIdentity("myJob", "group1")//

.usingJobData("jobDetail1", "Hello World!")//

.usingJobData("jobDetail2", 3.141f)//

.build();

 

// Trigger the job to run now, and then every 5 seconds

Trigger trigger = newTrigger()//

.withIdentity("myTrigger", "group1")//

.startNow()//

.usingJobData("trigger1","trigger1_string")//

.usingJobData("trigger2",100)//

.withSchedule(simpleSchedule()//

.withIntervalInSeconds(5)//

.repeatForever())//

.build();

 

// Tell quartz to schedule the job using our trigger

sched.scheduleJob(job, trigger);

}

}

-------------------------------------------------------------------------------------------

 

3. Execution output

15:06:55.956 INFO  org.quartz.core.QuartzScheduler 575 start - Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.

------------Name and Group for Job and Trigger-----------------------

Jobkey-name:myJob;Jobkey-group:group1

triggerKey-name:myTrigger;triggerKey-group:group1

------------jobDataMap for jobDetail-----------------------

key:jobDetail2;value:3.141

key:jobDetail1;value:Hello World!

------------jobDataMap for Trigger-----------------------

key:trigger2;value:100

key:trigger1;value:trigger1_string

------------jobDataMap for JobDetail and Trigger-----------------------

key:jobDetail1;value:Hello World!

key:jobDetail2;value:3.141

key:trigger2;value:100

key:trigger1;value:trigger1_string

 

 

 

Lab 2: Comparing Quartz's automatic setter injection with reading values manually by key

1. DumbJob has no fields or setter methods; the values are read manually from the JobDataMap by key

-------------------------------------------------------------------------------------------

package cn.ss.quartz.exam1;

 

import static org.quartz.JobBuilder.*;

import static org.quartz.SimpleScheduleBuilder.*;

import static org.quartz.CronScheduleBuilder.*;

import static org.quartz.CalendarIntervalScheduleBuilder.*;

import static org.quartz.TriggerBuilder.*;

import static org.quartz.DateBuilder.*;

 

import org.quartz.JobDetail;

import org.quartz.Scheduler;

import org.quartz.SchedulerFactory;

import org.quartz.Trigger;

 

public class Lesson3 {

public static void main(String[] args) throws Exception {

 

SchedulerFactory schedFact = new org.quartz.impl.StdSchedulerFactory();

 

Scheduler sched = schedFact.getScheduler();

 

sched.start();

 

// define the job and tie it to our DumbJob class

JobDetail job = newJob(DumbJob.class)//

.withIdentity("myJob", "group1")//

.usingJobData("jobSays", "Hello World!")//

.usingJobData("myFloatValue", 3.141f)//

.build();

 

// Trigger the job to run now, and then every 5 seconds

Trigger trigger = newTrigger().withIdentity("myTrigger", "group1").startNow()

.withSchedule(simpleSchedule().withIntervalInSeconds(5).repeatForever()).build();

 

// Tell quartz to schedule the job using our trigger

sched.scheduleJob(job, trigger);

 

}

}

-------------------------------------------------------------------------------------------

-------------------------------------------------------------------------------------------

package cn.ss.quartz.exam1;

 

import java.util.ArrayList;

import java.util.Date;

 

import org.quartz.Job;

import org.quartz.JobDataMap;

import org.quartz.JobExecutionContext;

import org.quartz.JobExecutionException;

import org.quartz.JobKey;

 

public class DumbJob implements Job {

 

public DumbJob() {

}

 

public void execute(JobExecutionContext context) throws JobExecutionException {

JobKey key = context.getJobDetail().getKey();

 

//JobDataMap dataMap = context.getJobDetail().getJobDataMap();

JobDataMap dataMap = context.getMergedJobDataMap();

 

String jobSays = dataMap.getString("jobSays");

float myFloatValue = dataMap.getFloat("myFloatValue");

 

System.err.println("Instance " + key + " of DumbJob says: " + jobSays + ", and val is: " + myFloatValue);

}

}

-------------------------------------------------------------------------------------------

Execute Result:

2017-3-26 12:12:17 org.quartz.core.QuartzScheduler start

信息: Scheduler MyScheduler_$_NON_CLUSTERED started.

Instance group1.myJob of DumbJob says: Hello World!, and val is: 3.141

Instance group1.myJob of DumbJob says: Hello World!, and val is: 3.141

Instance group1.myJob of DumbJob says: Hello World!, and val is: 3.141

Instance group1.myJob of DumbJob says: Hello World!, and val is: 3.141

2. Change DumbJob to JavaBean style: we only add setter methods for the JobDataMap keys, and Quartz fills in the values automatically, so we no longer need hand-written code to read the values out of the JobDataMap.

------------------------------------------------------------------------------------------

package cn.ss.quartz.exam1;

 

import java.util.ArrayList;

import java.util.Date;

 

import org.quartz.Job;

import org.quartz.JobDataMap;

import org.quartz.JobExecutionContext;

import org.quartz.JobExecutionException;

import org.quartz.JobKey;

 

public class DumbJob implements Job {

String jobSays;

float myFloatValue;

 

public void setJobSays(String jobSays) {

this.jobSays = jobSays;

}

 

public void setMyFloatValue(float myFloatValue) {

this.myFloatValue = myFloatValue;

}

 

public DumbJob() {

}

 

public void execute(JobExecutionContext context) throws JobExecutionException {

JobKey key = context.getJobDetail().getKey();

// Note the difference from the previous example

JobDataMap dataMap = context.getMergedJobDataMap();

System.err.println("Instance " + key + " of DumbJob says: " + jobSays + ", and val is: " + myFloatValue);

}

}

------------------------------------------------------------------------------------------

2017-3-26 12:39:01 org.quartz.impl.StdSchedulerFactory instantiate

信息: Quartz scheduler version: 2.2.3

2017-3-26 12:39:01 org.quartz.core.QuartzScheduler start

信息: Scheduler MyScheduler_$_NON_CLUSTERED started.

Instance group1.myJob of DumbJob says: Hello World!, and val is: 3.141

Instance group1.myJob of DumbJob says: Hello World!, and val is: 3.141

Instance group1.myJob of DumbJob says: Hello World!, and val is: 3.141

Lab 3: Testing the @DisallowConcurrentExecution annotation

@DisallowConcurrentExecution: adding this annotation to the job class tells Quartz not to execute multiple instances of a given job definition (based on that job class) concurrently. Note the wording carefully. Using the example from the previous section: if SalesReportJob carries this annotation, only one instance of "SalesReportForJoe" may execute at a given time, but it may run concurrently with an instance of "SalesReportForMike". The constraint is therefore on the JobDetail, not on the job class. It was nevertheless decided (when Quartz was designed) to put the annotation on the job class itself, because it usually makes a difference to how the class is coded.

 

Test plan: the Job implementation prints the numbers 1 to 8, pausing one second after each number, so a single run of execute() takes 8 seconds. A trigger fires the job every 5 seconds.

Test 1: without @DisallowConcurrentExecution on the Job class, 5 seconds into the first run Quartz fires the trigger again and a second execution of the same JobDetail starts concurrently.

Test 2: with @DisallowConcurrentExecution on the Job class, Quartz does not start a concurrent execution of the same JobDetail; the next run begins only after the current one has finished (in the output below the second run starts 9 seconds after the first).

Test 3: with @DisallowConcurrentExecution on the Job class, we define two JobDetails (both tied to the same Job class), each associated with its own Trigger, and add both to the Scheduler.

When the triggers fire, the two jobs run at the same time, because @DisallowConcurrentExecution only prevents concurrent execution of JobDetails with the same key (the name + group combination); JobDetails with different keys can still execute concurrently.

 

The test code is as follows:

1. The Job implementation class ColorJob, without @DisallowConcurrentExecution

-----------------------------------------------

package com.practice.quartz.job;

 

import java.util.Date;

 

import org.quartz.DisallowConcurrentExecution;

import org.quartz.Job;

import org.quartz.JobDataMap;

import org.quartz.JobExecutionContext;

import org.quartz.JobExecutionException;

import org.quartz.JobKey;

 

public class ColorJob implements Job {

 

    public ColorJob() {

    }

 

    public void execute(JobExecutionContext context)

        throws JobExecutionException {

            JobKey jobKey = context.getJobDetail().getKey();

            System.out.println("==============start:" + new Date() + "================");

            System.out.println("-----------jobKey--------------------");

            System.out.println("Name:" + jobKey.getName() + ";Group:" + jobKey.getGroup());

            

            System.out.println("-----------JobDataMap--------------------");

            JobDataMap jobDataMap = context.getMergedJobDataMap();

            System.out.println("color:" + jobDataMap.getString("color"));

            try {

                    for(int i = 1;i<=8;i++) {

                            System.out.println("i=" + i);

                            Thread.sleep(1000);

                    }

} catch (InterruptedException e) {

// TODO Auto-generated catch block

e.printStackTrace();

}

    }

 

   

}

-----------------------------------------------

2. The scheduling code

-----------------------------------------------

package com.practice.quartz.lesson3;

 

import static org.quartz.JobBuilder.newJob;

import static org.quartz.TriggerBuilder.newTrigger;

import static org.quartz.SimpleScheduleBuilder.simpleSchedule;

import org.quartz.JobDetail;

import org.quartz.Scheduler;

import org.quartz.SchedulerFactory;

import org.quartz.Trigger;

 

import com.practice.quartz.job.ColorJob;

import com.practice.quartz.job.DumbJob2;

 

public class ConcurrentExample {

public static void main(String[] args) throws Exception {

 

SchedulerFactory schedFact = new org.quartz.impl.StdSchedulerFactory();

 

Scheduler sched = schedFact.getScheduler();

 

sched.start();

 

// define the job and tie it to our ColorJob class

JobDetail job = newJob(ColorJob.class)//

.withIdentity("myJob", "group1")//

.usingJobData("color", "red")//

.build();

 

Trigger trigger = newTrigger()//

.withIdentity("myTrigger", "group1")//

.startNow()//

.withSchedule(simpleSchedule()//

.withIntervalInSeconds(5)//

.repeatForever())//

.build();

 

// Tell quartz to schedule the job using our trigger

sched.scheduleJob(job, trigger);

}

}

-----------------------------------------------

The output is as follows:

16:57:40.472 INFO  org.quartz.core.QuartzScheduler 575 start - Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.

==============start:Sun Aug 05 16:57:40 CST 2018================

-----------jobKey--------------------

Name:myJob;Group:group1

-----------JobDataMap--------------------

color:red

i=1

i=2

i=3

i=4

i=5

==============start:Sun Aug 05 16:57:45 CST 2018================

-----------jobKey--------------------

Name:myJob;Group:group1

-----------JobDataMap--------------------

color:red

i=1

i=6

i=2

i=7

i=3

i=8

i=4

i=5

 

After 5 seconds, the JobDetail Name:myJob;Group:group1 is executing concurrently with itself.

 

3. The Job implementation class ColorJob, with @DisallowConcurrentExecution added

-----------------------------------------------

package com.practice.quartz.job;

 

import java.util.Date;

 

import org.quartz.DisallowConcurrentExecution;

import org.quartz.Job;

import org.quartz.JobDataMap;

import org.quartz.JobExecutionContext;

import org.quartz.JobExecutionException;

import org.quartz.JobKey;

 

@DisallowConcurrentExecution

public class ColorJob implements Job {

 

    public ColorJob() {

    }

 

    public void execute(JobExecutionContext context)

        throws JobExecutionException {

            JobKey jobKey = context.getJobDetail().getKey();

            System.out.println("==============start:" + new Date() + "================");

            System.out.println("-----------jobKey--------------------");

            System.out.println("Name:" + jobKey.getName() + ";Group:" + jobKey.getGroup());

            

            System.out.println("-----------JobDataMap--------------------");

            JobDataMap jobDataMap = context.getMergedJobDataMap();

            System.out.println("color:" + jobDataMap.getString("color"));

            try {

                    for(int i = 1;i<=8;i++) {

                            System.out.println("i=" + i);

                            Thread.sleep(1000);

                    }

} catch (InterruptedException e) {

// TODO Auto-generated catch block

e.printStackTrace();

}

    }

 

   

}

-----------------------------------------------

The output is as follows:

==============start:Sun Aug 05 17:01:38 CST 2018================

-----------jobKey--------------------

Name:myJob;Group:group1

-----------JobDataMap--------------------

color:red

i=1

i=2

i=3

i=4

i=5

i=6

i=7

i=8

==============start:Sun Aug 05 17:01:47 CST 2018================

-----------jobKey--------------------

Name:myJob;Group:group1

-----------JobDataMap--------------------

color:red

i=1

i=2

i=3

i=4

i=5

i=6

i=7

i=8

No concurrent execution occurs.

Note: within a scheduler, the JobDetail's group + name combination is its unique identifier; a JobDetail whose key already exists cannot be added to the scheduler:

Exception in thread "main" org.quartz.ObjectAlreadyExistsException: Unable to store Job : 'group1.myJob', because one already exists with this identification.

at org.quartz.simpl.RAMJobStore.storeJob(RAMJobStore.java:279)

4. The ColorJob class keeps @DisallowConcurrentExecution, and we create a second JobDetail with a different name, so its name + group combination differs from the first JobDetail's

-----------------------------------------------

package com.practice.quartz.lesson3;

 

import static org.quartz.JobBuilder.newJob;

import static org.quartz.TriggerBuilder.newTrigger;

import static org.quartz.SimpleScheduleBuilder.simpleSchedule;

import org.quartz.JobDetail;

import org.quartz.Scheduler;

import org.quartz.SchedulerFactory;

import org.quartz.Trigger;

 

import com.practice.quartz.job.ColorJob;

import com.practice.quartz.job.DumbJob2;

 

public class ConcurrentExample {

public static void main(String[] args) throws Exception {

 

SchedulerFactory schedFact = new org.quartz.impl.StdSchedulerFactory();

 

Scheduler sched = schedFact.getScheduler();

 

sched.start();

 

// define the job and tie it to our ColorJob class

JobDetail job = newJob(ColorJob.class)//

.withIdentity("myJob", "group1")//

.usingJobData("color", "red")//

.build();

// define a second job, also tied to our ColorJob class

JobDetail job2 = newJob(ColorJob.class)//

.withIdentity("myJob2", "group1")//

.usingJobData("color", "red")//

.build();

 

Trigger trigger = newTrigger()//

.withIdentity("myTrigger", "group1")//

.startNow()//

.withSchedule(simpleSchedule()//

.withIntervalInSeconds(5)//

.repeatForever())//

.build();

 

Trigger trigger2 = newTrigger()//

.withIdentity("myTrigger2", "group1")//

.startNow()//

.withSchedule(simpleSchedule()//

.withIntervalInSeconds(5)//

.repeatForever())//

.build();

// Tell quartz to schedule the job using our trigger

sched.scheduleJob(job, trigger);

sched.scheduleJob(job2, trigger2);

 

}

}

-----------------------------------------------

The output is as follows: the two jobs execute concurrently

17:17:21.864 INFO  org.quartz.core.QuartzScheduler 575 start - Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.

==============start:Sun Aug 05 17:17:21 CST 2018================

-----------jobKey--------------------

==============start:Sun Aug 05 17:17:21 CST 2018================

-----------jobKey--------------------

Name:myJob2;Group:group1

-----------JobDataMap--------------------

Name:myJob;Group:group1

-----------JobDataMap--------------------

color:red

i=1

color:red

i=1

i=2

i=2

i=3

i=3

i=4

i=4

i=5

i=5

i=6

i=6

i=7

i=7

i=8

i=8

Lab 4: Testing the @PersistJobDataAfterExecution annotation

@PersistJobDataAfterExecution: adding this annotation to the job class tells Quartz that after execute() completes successfully (without throwing an exception), the stored copy of the JobDetail's JobDataMap should be updated, so the next execution of the same job (JobDetail) sees the updated values rather than the originally stored ones. Like @DisallowConcurrentExecution, its effect applies to the job definition instance (JobDetail), not the job class instance; the annotation is nevertheless placed on the job class because it usually affects how the class is coded (for example, the code in execute() has to explicitly "understand" this statefulness).

 

Test 1: for the JobExecutionContext's merged JobDataMap (obtained via context.getMergedJobDataMap(), the merge of the JobDetail's and Trigger's maps), values written into it are not saved back, even with @PersistJobDataAfterExecution.

Test 2: persisting values written to the JobDetail's JobDataMap.

Test 3: persisting values written to the Trigger's JobDataMap.

Test code:

Test 1:

-----------------------------------------------

package com.practice.quartz.job;

 

import java.util.Date;

 

import org.quartz.DisallowConcurrentExecution;

import org.quartz.Job;

import org.quartz.JobDataMap;

import org.quartz.JobExecutionContext;

import org.quartz.JobExecutionException;

import org.quartz.JobKey;

import org.quartz.PersistJobDataAfterExecution;

 

@PersistJobDataAfterExecution

@DisallowConcurrentExecution

public class ColorJob implements Job {

 

    public ColorJob() {

    }

 

    public void execute(JobExecutionContext context)

        throws JobExecutionException {

            JobKey jobKey = context.getJobDetail().getKey();

            System.out.println("==============start:" + new Date() + "================");

            System.out.println("-----------jobKey--------------------");

            System.out.println("Name:" + jobKey.getName() + ";Group:" + jobKey.getGroup());

            

            System.out.println("-----------JobDataMap--------------------");

            JobDataMap jobDataMap = context.getMergedJobDataMap();

            System.out.println("color:" + jobDataMap.getString("color"));

            int i = 0;

            if(jobDataMap.containsKey("value")) {

                    i = jobDataMap.getInt("value");

            }

            

            System.out.println("step:" + jobDataMap.getInt("step"));

            if(jobDataMap.containsKey("step")) {

                    i = i + jobDataMap.getInt("step");

                    jobDataMap.put("value", i);

            }

            System.out.println("i=" + i);

    }

}

 

-----------------------------------------------

 

-----------------------------------------------

package com.practice.quartz.lesson3;

 

import static org.quartz.JobBuilder.newJob;

import static org.quartz.TriggerBuilder.newTrigger;

import static org.quartz.SimpleScheduleBuilder.simpleSchedule;

import org.quartz.JobDetail;

import org.quartz.Scheduler;

import org.quartz.SchedulerFactory;

import org.quartz.Trigger;

 

import com.practice.quartz.job.ColorJob;

import com.practice.quartz.job.DumbJob2;

 

public class ConcurrentExample {

public static void main(String[] args) throws Exception {

 

SchedulerFactory schedFact = new org.quartz.impl.StdSchedulerFactory();

 

Scheduler sched = schedFact.getScheduler();

 

sched.start();

 

// define the job and tie it to our ColorJob class

JobDetail job = newJob(ColorJob.class)//

.withIdentity("myJob", "group1")//

.usingJobData("color", "red")//

.usingJobData("value", 0)//

.usingJobData("step", 1)//

.build();

 

Trigger trigger = newTrigger()//

.withIdentity("myTrigger", "group1")//

.startNow()//

.withSchedule(simpleSchedule()//

.withIntervalInSeconds(5)//

.repeatForever())//

.build();

 

// Tell quartz to schedule the job using our trigger

sched.scheduleJob(job, trigger);

 

}

}

-----------------------------------------------

Result: the value is not preserved between executions

==============start:Sun Aug 05 17:50:02 CST 2018================

-----------jobKey--------------------

Name:myJob;Group:group1

-----------JobDataMap--------------------

color:red

step:1

i=1

==============start:Sun Aug 05 17:50:07 CST 2018================

-----------jobKey--------------------

Name:myJob;Group:group1

-----------JobDataMap--------------------

color:red

step:1

i=1

Test 2: save the value into the JobDetail's JobDataMap

-----------------------------------------------

@PersistJobDataAfterExecution

@DisallowConcurrentExecution

public class ColorJob implements Job {

 

    public ColorJob() {

    }

 

    public void execute(JobExecutionContext context)

        throws JobExecutionException {

            JobKey jobKey = context.getJobDetail().getKey();

            System.out.println("==============start:" + new Date() + "================");

            System.out.println("-----------jobKey--------------------");

            System.out.println("Name:" + jobKey.getName() + ";Group:" + jobKey.getGroup());

            

            System.out.println("-----------JobDataMap--------------------");

            JobDataMap jobDataMap = context.getJobDetail().getJobDataMap();

            System.out.println("color:" + jobDataMap.getString("color"));

            int i = 0;

            if(jobDataMap.containsKey("value")) {

                    i = jobDataMap.getInt("value");

            }

            

            System.out.println("step:" + jobDataMap.getInt("step"));

            if(jobDataMap.containsKey("step")) {

                    i = i + jobDataMap.getInt("step");

                    jobDataMap.put("value", i);

            }

            System.out.println("i=" + i);

    }

}

-----------------------------------------------

Result: the value is preserved and keeps incrementing between executions

==============start:Sun Aug 05 17:52:32 CST 2018================

-----------jobKey--------------------

Name:myJob;Group:group1

-----------JobDataMap--------------------

color:red

step:1

i=1

==============start:Sun Aug 05 17:52:37 CST 2018================

-----------jobKey--------------------

Name:myJob;Group:group1

-----------JobDataMap--------------------

color:red

step:1

i=2

==============start:Sun Aug 05 17:52:42 CST 2018================

-----------jobKey--------------------

Name:myJob;Group:group1

-----------JobDataMap--------------------

color:red

step:1

i=3

Test 3: save the value into the Trigger's JobDataMap

----------------------------------------------

public class ConcurrentExample {

public static void main(String[] args) throws Exception {

SchedulerFactory schedFact = new org.quartz.impl.StdSchedulerFactory();

Scheduler sched = schedFact.getScheduler();

sched.start();

 

// define the job and tie it to our ColorJob class

JobDetail job = newJob(ColorJob.class)//

.withIdentity("myJob", "group1")//

.usingJobData("color", "red")//

.usingJobData("value", 0)//

.usingJobData("step", 1)//

.build();

 

Trigger trigger = newTrigger()//

.withIdentity("myTrigger", "group1")//

.startNow()//

.usingJobData("color", "red")//

.usingJobData("value", 0)//

.usingJobData("step", 1)//

.withSchedule(simpleSchedule()//

.withIntervalInSeconds(5)//

.repeatForever())//

.build();

// Tell quartz to schedule the job using our trigger

sched.scheduleJob(job, trigger);

 

}

}

----------------------------------------------

 

----------------------------------------------

@PersistJobDataAfterExecution

@DisallowConcurrentExecution

public class ColorJob implements Job {

 

    public ColorJob() {

    }

 

    public void execute(JobExecutionContext context)

        throws JobExecutionException {

            JobKey jobKey = context.getJobDetail().getKey();

            System.out.println("==============start:" + new Date() + "================");

            System.out.println("-----------jobKey--------------------");

            System.out.println("Name:" + jobKey.getName() + ";Group:" + jobKey.getGroup());

            

            System.out.println("-----------JobDataMap--------------------");

            JobDataMap jobDataMap = context.getTrigger().getJobDataMap();

            System.out.println("color:" + jobDataMap.getString("color"));

            int i = 0;

            if(jobDataMap.containsKey("value")) {

                    i = jobDataMap.getInt("value");

            }

            

            System.out.println("step:" + jobDataMap.getInt("step"));

            if(jobDataMap.containsKey("step")) {

                    i = i + jobDataMap.getInt("step");

                    jobDataMap.put("value", i);

            }

            System.out.println("i=" + i);

    }

}

----------------------------------------------

Result: the value is not preserved between executions

==============start:Sun Aug 05 17:57:21 CST 2018================

-----------jobKey--------------------

Name:myJob;Group:group1

-----------JobDataMap--------------------

color:red

step:1

i=1

==============start:Sun Aug 05 17:57:26 CST 2018================

-----------jobKey--------------------

Name:myJob;Group:group1

-----------JobDataMap--------------------

color:red

step:1

i=1


Reposted from blog.csdn.net/arnolian/article/details/82528043