An introduction to Spring Batch, an excellent batch processing framework.

Add dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-batch</artifactId>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <scope>runtime</scope>
</dependency>

We need to add the Spring Batch dependency, and H2 is convenient as an in-memory database for the demo. In actual production you must use an external database, such as Oracle or PostgreSQL.
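For an external database, the connection would be configured in application.properties. A minimal sketch, assuming PostgreSQL and Spring Boot 2.x property names (the URL and credentials are placeholders):

spring.datasource.url=jdbc:postgresql://localhost:5432/batchdb
spring.datasource.username=batch_user
spring.datasource.password=batch_password
# Let Spring Boot create the Spring Batch metadata tables on startup
spring.batch.initialize-schema=always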

Entry main class:

@SpringBootApplication
@EnableBatchProcessing
public class PkslowBatchJobMain {
    public static void main(String[] args) {
        SpringApplication.run(PkslowBatchJobMain.class, args);
    }
}

It is very simple: just add the @EnableBatchProcessing annotation on top of a standard Spring Boot application.
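The reader, processor, writer, step and job beans shown below live in a configuration class that injects the two builder factories @EnableBatchProcessing provides. A minimal skeleton (the class name BatchConfig is an assumption, not from the original):

@Configuration
public class BatchConfig {

    // Provided automatically by @EnableBatchProcessing
    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    // reader, processor, writer, step and job beans go here
}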

Domain entity class Employee:

package com.pkslow.batch.entity;
public class Employee {
    private String id;
    private String firstName;
    private String lastName;
}
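BeanWrapperFieldSetMapper (used in the reader below) and BeanWrapperFieldExtractor (used in the writer) bind by JavaBean property, so Employee needs the standard accessors, omitted above for brevity; for example:

public String getLastName() {
    return lastName;
}

public void setLastName(String lastName) {
    this.lastName = lastName;
}

// ...and likewise for id and firstName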

The corresponding csv file content is as follows:

id,firstName,lastName
1,Lokesh,Gupta
2,Amit,Mishra
3,Pankaj,Kumar
4,David,Miller
3.2 Input-Processing-Output
3.2.1 Reading ItemReader
Because there are multiple input files, it is defined as follows:

@Value("input/inputData*.csv")
private Resource[] inputResources;

@Bean
public MultiResourceItemReader<Employee> multiResourceItemReader() {
    MultiResourceItemReader<Employee> resourceItemReader = new MultiResourceItemReader<>();
    resourceItemReader.setResources(inputResources);
    // Each matched file is parsed by the delegate FlatFileItemReader defined below
    resourceItemReader.setDelegate(reader());
    return resourceItemReader;
}

@Bean
public FlatFileItemReader<Employee> reader() {
    FlatFileItemReader<Employee> reader = new FlatFileItemReader<>();
    // Skip the first line of each csv file, as it is the header
    reader.setLinesToSkip(1);
    reader.setLineMapper(new DefaultLineMapper<Employee>() {
        {
            setLineTokenizer(new DelimitedLineTokenizer() {
                {
                    // Field names, matching the csv columns
                    setNames(new String[] {"id", "firstName", "lastName"});
                }
            });
            setFieldSetMapper(new BeanWrapperFieldSetMapper<Employee>() {
                {
                    // The target class each record is converted into
                    setTargetType(Employee.class);
                }
            });
        }
    });
    return reader;
}
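Since Spring Batch 4, the same reader can be written more compactly with FlatFileItemReaderBuilder; an equivalent sketch (not how the original article defines it):

@Bean
public FlatFileItemReader<Employee> reader() {
    return new FlatFileItemReaderBuilder<Employee>()
            .name("employeeReader")
            .linesToSkip(1)
            .delimited()
            .names(new String[] {"id", "firstName", "lastName"})
            .fieldSetMapper(new BeanWrapperFieldSetMapper<Employee>() {
                { setTargetType(Employee.class); }
            })
            .build();
}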

FlatFileItemReader is used here because it makes reading records from flat files straightforward.

3.2.2 Processing ItemProcessor
For demonstration purposes the processing is deliberately simple: it converts the last column to uppercase:

public ItemProcessor<Employee, Employee> itemProcessor() {
    return employee -> {
        employee.setLastName(employee.getLastName().toUpperCase());
        return employee;
    };
}
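A useful property of ItemProcessor, not used in this demo: returning null filters the item out of the chunk entirely. A hypothetical variant that also drops records with a missing last name:

public ItemProcessor<Employee, Employee> itemProcessor() {
    return employee -> {
        if (employee.getLastName() == null || employee.getLastName().isEmpty()) {
            return null; // filtered out: the item never reaches the writer
        }
        employee.setLastName(employee.getLastName().toUpperCase());
        return employee;
    };
}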

3.2.3 Writing ItemWriter
The output side is relatively simple; the code and comments are as follows:

private Resource outputResource = new FileSystemResource("output/outputData.csv");

@Bean
public FlatFileItemWriter<Employee> writer() {
    FlatFileItemWriter<Employee> writer = new FlatFileItemWriter<>();
    writer.setResource(outputResource);
    // Append to the output file instead of overwriting it
    writer.setAppendAllowed(true);
    writer.setLineAggregator(new DelimitedLineAggregator<Employee>() {
        {
            // Set the delimiter
            setDelimiter(",");
            setFieldExtractor(new BeanWrapperFieldExtractor<Employee>() {
                {
                    // The fields to extract from each Employee
                    setNames(new String[] {"id", "firstName", "lastName"});
                }
            });
        }
    });
    return writer;
}
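As configured, the writer emits data rows only. If the output should repeat the input's header line, FlatFileItemWriter's header callback could be added (an assumption on top of the original example, not part of it):

// Write "id,firstName,lastName" once at the top of the output file
writer.setHeaderCallback(w -> w.write("id,firstName,lastName"));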

3.3 Step
With the Reader, Processor and Writer in place, you can define the Step:

@Bean
public Step csvStep() {
    return stepBuilderFactory.get("csvStep").<Employee, Employee>chunk(5)
            .reader(multiResourceItemReader())
            .processor(itemProcessor())
            .writer(writer())
            .build();
}

The chunk size is set to 5 here, meaning the write (and transaction commit) happens after every 5 records; define it according to your needs.
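If occasional bad input lines should not fail the whole job, the step could also be made fault-tolerant. A hypothetical variant of csvStep, not part of the original example:

@Bean
public Step csvStep() {
    return stepBuilderFactory.get("csvStep").<Employee, Employee>chunk(5)
            .reader(multiResourceItemReader())
            .processor(itemProcessor())
            .writer(writer())
            .faultTolerant()
            // Tolerate up to 10 unparseable lines before failing the step
            .skipLimit(10)
            .skip(FlatFileParseException.class)
            .build();
}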

3.4 Job
With the Step completed, defining the Job is easy:

@Bean
public Job pkslowCsvJob() {
    return jobBuilderFactory.get("pkslowCsvJob")
            .incrementer(new RunIdIncrementer())
            .start(csvStep())
            .build();
}
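With Spring Boot, the job is launched automatically on startup. To trigger it programmatically instead, a sketch assuming an injected JobLauncher (the launch() helper is hypothetical):

@Autowired
private JobLauncher jobLauncher;

public void launch() throws Exception {
    // A unique parameter so each run gets a new JobInstance
    JobParameters params = new JobParametersBuilder()
            .addLong("timestamp", System.currentTimeMillis())
            .toJobParameters();
    jobLauncher.run(pkslowCsvJob(), params);
}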

3.5 Running
After completing the code above, execute the program.

The data is read successfully, the last field is converted to uppercase, and the result is written to the outputData.csv file.
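Given the sample input above, outputData.csv would contain something like the following (no header line, since no header callback was configured):

1,Lokesh,GUPTA
2,Amit,MISHRA
3,Pankaj,KUMAR
4,David,MILLER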

4 Listener
Through the Listener interfaces you can monitor specific events to implement more business functions. For example, record a failure log when processing fails, or notify a downstream system to fetch the data when processing completes.

We listen to the Read, Process and Write events by implementing the ItemReadListener, ItemProcessListener and ItemWriteListener interfaces respectively. Because the code is simple and only prints a log, only the ItemWriteListener implementation is shown here:

public class PkslowWriteListener implements ItemWriteListener<Employee> {
    private static final Log logger = LogFactory.getLog(PkslowWriteListener.class);

    @Override
    public void beforeWrite(List<? extends Employee> list) {
        logger.info("beforeWrite: " + list);
    }

    @Override
    public void afterWrite(List<? extends Employee> list) {
        logger.info("afterWrite: " + list);
    }

    @Override
    public void onWriteError(Exception e, List<? extends Employee> list) {
        logger.info("onWriteError: " + list);
    }
}
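For reference, the ItemReadListener implementation could look like this (a minimal sketch; like the others, it only logs):

public class PkslowReadListener implements ItemReadListener<Employee> {
    private static final Log logger = LogFactory.getLog(PkslowReadListener.class);

    @Override
    public void beforeRead() {
        logger.info("beforeRead");
    }

    @Override
    public void afterRead(Employee item) {
        logger.info("afterRead: " + item);
    }

    @Override
    public void onReadError(Exception ex) {
        logger.info("onReadError: " + ex);
    }
}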
Wire the implemented listeners into the Step:

@Bean
public Step csvStep() {
    return stepBuilderFactory.get("csvStep").<Employee, Employee>chunk(5)
            .reader(multiResourceItemReader())
            .listener(new PkslowReadListener())
            .processor(itemProcessor())
            .listener(new PkslowProcessListener())
            .writer(writer())
            .listener(new PkslowWriteListener())
            .build();
}
