Lesson 3: More About Jobs and Job Details

      As you saw in Lesson 2, Jobs are rather easy to implement, having just a single “execute” method in the interface. There are just a few more things you need to understand about the nature of Jobs, about the execute(..) method of the Job interface, and about JobDetails.

      While the job class you implement contains the code that knows how to do the actual work of the particular type of job, Quartz needs to be informed about various attributes that you may wish an instance of that job to have. This is done via the JobDetail class, which was mentioned briefly in the previous section.

      JobDetail instances are built using the JobBuilder class. You will typically want to use a static import of all of its methods, in order to have the DSL-feel within your code:

import static org.quartz.JobBuilder.*;

Now let’s take a moment to discuss the “nature” of Jobs and the life-cycle of job instances within Quartz. First, let’s look back at one of the code snippets we saw in Lesson 1:


  // define the job and tie it to our HelloJob class
  JobDetail job = newJob(HelloJob.class)
      .withIdentity("myJob", "group1") // name "myJob", group "group1"
      .build();

  // Trigger the job to run now, and then every 40 seconds
  Trigger trigger = newTrigger()
      .withIdentity("myTrigger", "group1")
      .startNow()
      .withSchedule(simpleSchedule()
          .withIntervalInSeconds(40)
          .repeatForever())            
      .build();

  // Tell quartz to schedule the job using our trigger
  sched.scheduleJob(job, trigger);

Now consider the job class “HelloJob”, defined as such:


  public class HelloJob implements Job {

    public HelloJob() {
    }

    public void execute(JobExecutionContext context)
      throws JobExecutionException
    {
      System.err.println("Hello!  HelloJob is executing.");
    }
  }

      Notice that we give the scheduler a JobDetail instance, and that it knows the type of job to be executed simply by the job’s class being provided when the JobDetail is built. Each (and every) time the scheduler executes the job, it creates a new instance of the class before calling its execute(..) method. When the execution is complete, references to the job class instance are dropped, and the instance is then garbage collected. One ramification of this behavior is that jobs must have a no-argument constructor (when the default JobFactory implementation is used). Another ramification is that it does not make sense to define state data-fields on the job class, as their values would not be preserved between job executions.

      You may now want to ask “how can I provide properties/configuration for a Job instance?” and “how can I keep track of a job’s state between executions?” The answer to both questions is the same: the key is the JobDataMap, which is part of the JobDetail object.


JobDataMap

      The JobDataMap can be used to hold any amount of (serializable) data objects which you wish to have made available to the job instance when it executes. JobDataMap is an implementation of the Java Map interface, and has some added convenience methods for storing and retrieving data of primitive types.


      Here’s some quick snippets of putting data into the JobDataMap while defining/building the JobDetail, prior to adding the job to the scheduler:



  // define the job and tie it to our DumbJob class
  JobDetail job = newJob(DumbJob.class)
      .withIdentity("myJob", "group1") // name "myJob", group "group1"
      .usingJobData("jobSays", "Hello World!")
      .usingJobData("myFloatValue", 3.141f)
      .build();

      Here’s a quick example of getting data from the JobDataMap during the job’s execution:



public class DumbJob implements Job {

    public DumbJob() {
    }

    public void execute(JobExecutionContext context)
      throws JobExecutionException
    {
      JobKey key = context.getJobDetail().getKey();

      JobDataMap dataMap = context.getJobDetail().getJobDataMap();

      String jobSays = dataMap.getString("jobSays");
      float myFloatValue = dataMap.getFloat("myFloatValue");

      System.err.println("Instance " + key + " of DumbJob says: " + jobSays + ", and val is: " + myFloatValue);
    }
  }

      If you use a persistent JobStore (discussed in the JobStore section of this tutorial) you should use some care in deciding what you place in the JobDataMap, because the objects in it will be serialized, and they therefore become prone to class-versioning problems. Obviously standard Java types should be very safe, but beyond that, any time someone changes the definition of a class for which you have serialized instances, care has to be taken not to break compatibility. Optionally, you can put JDBC-JobStore and JobDataMap into a mode where only primitives and strings are allowed to be stored in the map, thus eliminating any possibility of later serialization problems.

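      For the JDBC-JobStore, the “only primitives and strings” mode mentioned above is switched on through the scheduler configuration. A minimal quartz.properties sketch (the rest of the JDBC-JobStore setup, such as the data source and driver delegate, is omitted here):

  # store JobDataMap contents as simple name-value pairs rather than serialized objects
  org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
  org.quartz.jobStore.useProperties = true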

      If you add setter methods to your job class that correspond to the names of keys in the JobDataMap (such as a setJobSays(String val) method for the data in the example above), then Quartz’s default JobFactory implementation will automatically call those setters when the job is instantiated, thus preventing the need to explicitly get the values out of the map within your execute method.


      Triggers can also have JobDataMaps associated with them. This can be useful in the case where you have a Job that is stored in the scheduler for regular/repeated use by multiple Triggers, yet with each independent triggering, you want to supply the Job with different data inputs.

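      For example, the data can be put into the Trigger’s JobDataMap while building the trigger, just as it is for the JobDetail. A quick sketch (the key name “triggerSays” is only a placeholder):

  Trigger trigger = newTrigger()
      .withIdentity("myTrigger", "group1")
      .usingJobData("triggerSays", "Hello from this particular trigger!")
      .startNow()
      .build();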

      The JobDataMap that is found on the JobExecutionContext during Job execution serves as a convenience. It is a merge of the JobDataMap found on the JobDetail and the one found on the Trigger, with the values in the latter overriding any same-named values in the former.


      Here’s a quick example of getting data from the JobExecutionContext’s merged JobDataMap during the job’s execution:


 
public class DumbJob implements Job {

    public DumbJob() {
    }

    public void execute(JobExecutionContext context)
      throws JobExecutionException
    {
      JobKey key = context.getJobDetail().getKey();

      JobDataMap dataMap = context.getMergedJobDataMap();  // Note the difference from the previous example

      String jobSays = dataMap.getString("jobSays");
      float myFloatValue = dataMap.getFloat("myFloatValue");
      ArrayList state = (ArrayList)dataMap.get("myStateData");
      state.add(new Date());

      System.err.println("Instance " + key + " of DumbJob says: " + jobSays + ", and val is: " + myFloatValue);
    }
  }

 

Or if you wish to rely on the JobFactory “injecting” the data map values onto your class, it might look like this instead:



  public class DumbJob implements Job {


    String jobSays;
    float myFloatValue;
    ArrayList state;

    public DumbJob() {
    }

    public void execute(JobExecutionContext context) throws JobExecutionException
    {
      JobKey key = context.getJobDetail().getKey();

      JobDataMap dataMap = context.getMergedJobDataMap();  // Note the difference from the previous example

      state.add(new Date());

      System.err.println("Instance " + key + " of DumbJob says: " + jobSays + ", and val is: " + myFloatValue);
    }

    public void setJobSays(String jobSays) {
      this.jobSays = jobSays;
    }

    public void setMyFloatValue(float myFloatValue) {
      this.myFloatValue = myFloatValue;
    }

    public void setState(ArrayList state) {
      this.state = state;
    }

  }

      You’ll notice that the overall code of the class is longer, but the code in the execute() method is cleaner. One could also argue that although the code is longer, it actually took less coding, if the programmer’s IDE was used to auto-generate the setter methods, rather than having to hand-code the individual calls to retrieve the values from the JobDataMap. The choice is yours.


Job “Instances”

Many users spend time being confused about what exactly constitutes a “job instance”. We’ll try to clear that up here and in the section below about job state and concurrency.


      You can create a single job class, and store many ‘instance definitions’ of it within the scheduler by creating multiple instances of JobDetails - each with its own set of properties and JobDataMap - and adding them all to the scheduler.


      For example, you can create a class that implements the Job interface called “SalesReportJob”. The job might be coded to expect parameters sent to it (via the JobDataMap) to specify the name of the sales person that the sales report should be based on. You may then create multiple definitions (JobDetails) of the job, such as “SalesReportForJoe” and “SalesReportForMike”, which have “joe” and “mike” specified in the corresponding JobDataMaps as input to the respective jobs.

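      A quick sketch of what those two job definitions might look like (the group name “reports” and the key “salesPerson” are only illustrative):

  // two instance definitions of the same job class, each with its own input data
  JobDetail salesReportForJoe = newJob(SalesReportJob.class)
      .withIdentity("SalesReportForJoe", "reports")
      .usingJobData("salesPerson", "joe")
      .build();

  JobDetail salesReportForMike = newJob(SalesReportJob.class)
      .withIdentity("SalesReportForMike", "reports")
      .usingJobData("salesPerson", "mike")
      .build();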

      When a trigger fires, the JobDetail (instance definition) it is associated to is loaded, and the job class it refers to is instantiated via the JobFactory configured on the Scheduler. The default JobFactory simply calls newInstance() on the job class, then attempts to call setter methods on the class that match the names of keys within the JobDataMap. You may want to create your own implementation of JobFactory to accomplish things such as having your application’s IoC or DI container produce/initialize the job instance.

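      A rough sketch of such a custom JobFactory is shown below; the JobContainer interface is a hypothetical stand-in for your application’s IoC/DI container, not part of Quartz:

  import org.quartz.Job;
  import org.quartz.Scheduler;
  import org.quartz.SchedulerException;
  import org.quartz.spi.JobFactory;
  import org.quartz.spi.TriggerFiredBundle;

  // hypothetical facade over your IoC/DI container
  interface JobContainer {
    <T extends Job> T getInstance(Class<T> jobClass) throws SchedulerException;
  }

  public class ContainerJobFactory implements JobFactory {

    private final JobContainer container;

    public ContainerJobFactory(JobContainer container) {
      this.container = container;
    }

    public Job newJob(TriggerFiredBundle bundle, Scheduler scheduler)
        throws SchedulerException {
      // let the container create and wire the job instead of calling newInstance()
      return container.getInstance(bundle.getJobDetail().getJobClass());
    }
  }

      A factory like this would then be installed on the scheduler with sched.setJobFactory(..) before any jobs fire.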

      In “Quartz speak”, we refer to each stored JobDetail as a “job definition” or “JobDetail instance”, and we refer to each executing job as a “job instance” or “instance of a job definition”. Usually if we just use the word “job” we are referring to a named definition, or JobDetail. When we are referring to the class implementing the job interface, we usually use the term “job class”.


    job: a JobDetail (a stored job definition)

    job class: the class that implements the Job interface

Job State and Concurrency

      Now, some additional notes about a job’s state data (aka JobDataMap) and concurrency. There are a couple annotations that can be added to your Job class that affect Quartz’s behavior with respect to these aspects.


      @DisallowConcurrentExecution is an annotation that can be added to the Job class that tells Quartz not to execute multiple instances of a given job definition (that refers to the given job class) concurrently.

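      Placing the annotation on the job class from the earlier example would look roughly like this (the execute body is just a stub):

  import org.quartz.DisallowConcurrentExecution;
  import org.quartz.Job;
  import org.quartz.JobExecutionContext;
  import org.quartz.JobExecutionException;

  @DisallowConcurrentExecution
  public class SalesReportJob implements Job {

    public void execute(JobExecutionContext context) throws JobExecutionException {
      // build the report for the sales person named in the merged JobDataMap ...
    }
  }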

      Notice the wording there, as it was chosen very carefully. In the example from the previous section, if “SalesReportJob” has this annotation, then only one instance of “SalesReportForJoe” can execute at a given time, but it can execute concurrently with an instance of “SalesReportForMike”. The constraint is based upon an instance definition (JobDetail), not on instances of the job class. However, it was decided (during the design of Quartz) to have the annotation carried on the class itself, because it does often make a difference to how the class is coded.

      


      @PersistJobDataAfterExecution is an annotation that can be added to the Job class that tells Quartz to update the stored copy of the JobDetail’s JobDataMap after the execute() method completes successfully (without throwing an exception), such that the next execution of the same job (JobDetail) receives the updated values rather than the originally stored values. Like the @DisallowConcurrentExecution annotation, this applies to a job definition instance, not a job class instance, though it was decided to have the job class carry the attribute because it does often make a difference to how the class is coded (e.g. the ‘statefulness’ will need to be explicitly ‘understood’ by the code within the execute method).


      If you use the @PersistJobDataAfterExecution annotation, you should strongly consider also using the @DisallowConcurrentExecution annotation, in order to avoid possible confusion (race conditions) of what data was left stored when two instances of the same job (JobDetail) executed concurrently.

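      A common pattern that uses both annotations together is a “stateful” job that keeps a counter in its JobDataMap. The sketch below assumes a “count” entry was seeded when the JobDetail was built (e.g. with usingJobData("count", 0)):

  import org.quartz.DisallowConcurrentExecution;
  import org.quartz.Job;
  import org.quartz.JobDataMap;
  import org.quartz.JobExecutionContext;
  import org.quartz.JobExecutionException;
  import org.quartz.PersistJobDataAfterExecution;

  @PersistJobDataAfterExecution
  @DisallowConcurrentExecution
  public class CountingJob implements Job {

    public void execute(JobExecutionContext context) throws JobExecutionException {
      // update the JobDetail's map (not the merged map) so that the new value
      // is re-stored when execute() completes without throwing an exception
      JobDataMap dataMap = context.getJobDetail().getJobDataMap();
      int count = dataMap.getInt("count");
      dataMap.put("count", count + 1);
    }
  }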

      

Other Attributes Of Jobs

Here’s a quick summary of the other properties which can be defined for a job instance via the JobDetail object:


  • Durability - if a job is non-durable, it is automatically deleted from the scheduler once there are no longer any active triggers associated with it. In other words, non-durable jobs have a life span bounded by the existence of their triggers.
  • RequestsRecovery - if a job “requests recovery”, and it is executing during the time of a ‘hard shutdown’ of the scheduler (i.e. the process it is running within crashes, or the machine is shut off), then it is re-executed when the scheduler is started again. In this case, the JobExecutionContext.isRecovering() method will return true. Both of these attributes are set via the JobBuilder, as sketched below.
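      For example (reusing the job class and identity from the earlier example, which are only illustrative):

  JobDetail job = newJob(SalesReportJob.class)
      .withIdentity("SalesReportForJoe", "reports")
      .storeDurably()      // keep the JobDetail even when no triggers point at it
      .requestRecovery()   // re-execute if the scheduler dies while the job is running
      .build();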

 

JobExecutionException

      Finally, we need to inform you of a few details of the Job.execute(..) method. The only type of exception (including RuntimeExceptions) that you are allowed to throw from the execute method is the JobExecutionException. Because of this, you should generally wrap the entire contents of the execute method with a ‘try-catch’ block. You should also spend some time looking at the documentation for the JobExecutionException, as your job can use it to provide the scheduler various directives as to how you want the exception to be handled.

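      A minimal sketch of that pattern is shown below; the “refire immediately” directive is just one of the options described in the JobExecutionException documentation:

  public void execute(JobExecutionContext context) throws JobExecutionException {
    try {
      // do the actual work of the job here ...
    } catch (Exception e) {
      JobExecutionException jee = new JobExecutionException(e);
      // one possible directive: ask the scheduler to re-fire this job right away
      jee.setRefireImmediately(true);
      throw jee;
    }
  }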


Reposted from blog.csdn.net/qq_30336433/article/details/80929908