Spring Boot -- MongoDB Integration 3 (MongoDB aggregation operations)

Previous article

About Version

Dependency      Version
springboot      2.0.8.RELEASE
mongodb         4.0.14

This article only covers the most basic usage and configuration of MongoDB. As a well-known database it has a large number of advanced features, and introducing them all would make this far too long. Of course, I am no expert in this field either; what follows is simply a write-up of what I have accumulated in daily use. It is my own experience, and I hope it can also help readers who come after me.

About the project

This series is my attempt to organize the various tools and methods I have used while working with Spring Boot. Every method described below comes with a test case. Because each example involves quite a bit of code, only part of it is shown in the article. All of the code is here: https://gitee.com/daifyutils/springboot-samples.

Aggregation pipeline operations

MongoTemplate provides the aggregate method to perform aggregation operations on data. Its core overload accepts a TypedAggregation parameter, which defines the specific aggregation pipeline to execute.

	@Override
	public <O> AggregationResults<O> aggregate(TypedAggregation<?> aggregation, Class<O> outputType) {
		return aggregate(aggregation, operations.determineCollectionName(aggregation.getInputType()), outputType);
	}

	/* (non-Javadoc)
	 * @see org.springframework.data.mongodb.core.MongoOperations#aggregate(org.springframework.data.mongodb.core.aggregation.TypedAggregation, java.lang.String, java.lang.Class)
	 */
	@Override
	public <O> AggregationResults<O> aggregate(TypedAggregation<?> aggregation, String inputCollectionName,
			Class<O> outputType) {

		Assert.notNull(aggregation, "Aggregation pipeline must not be null!");

		AggregationOperationContext context = new TypeBasedAggregationOperationContext(aggregation.getInputType(),
				mappingContext, queryMapper);
		return aggregate(aggregation, inputCollectionName, outputType, context);
	}

Supported aggregation pipeline operations

The aggregate method provided by MongoTemplate is essentially the Java counterpart of MongoDB's aggregate(). In fact, MongoTemplate's aggregate method generates a MongoDB aggregate() expression, which is then used to run the final query.

You can think of the aggregation pipeline as a series of pipes: when MongoDB documents pass through one pipe, the processed data is handed on to the next pipe. This has two consequences for how the pipeline is used: 1. The order of the stages matters; the same stages executed in a different order can produce different results. 2. Each stage can only process the data produced by the previous stage.
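
For instance, placing a match stage before or after a group stage leads to different results. The following is only a minimal sketch: it reuses the Order class and field names from the examples later in this article, and the stage classes come from org.springframework.data.mongodb.core.aggregation and org.springframework.data.mongodb.core.query.

    // Variant 1: filter first, then group -> only orders with type 1 are counted.
    MatchOperation matchType = Aggregation.match(Criteria.where("type").is(1));
    GroupOperation groupByUser = Aggregation.group("userId").count().as("num");
    TypedAggregation<Order> filterThenGroup =
        Aggregation.newAggregation(Order.class, matchType, groupByUser);

    // Variant 2: group first, then filter -> every order is counted first, and the
    // match now runs against the grouped output, where the type field no longer exists.
    TypedAggregation<Order> groupThenFilter =
        Aggregation.newAggregation(Order.class, groupByUser, matchType);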

MongoDB's aggregate()

Pipeline stage operations

The following lists the pipeline stages provided by MongoDB together with the corresponding Java interfaces.

Pipeline stage   Java interface         Description
$project         Aggregation.project    Modify the structure of the input documents
$match           Aggregation.match      Filter the data
$limit           Aggregation.limit      Limit the number of documents returned by the aggregation pipeline
$skip            Aggregation.skip       Skip a specified number of documents in the aggregation pipeline
$unwind          Aggregation.unwind     Split a document containing an array field into one document per array element
$group           Aggregation.group      Group the documents in the collection; can be used to compute statistics
$sort            Aggregation.sort       Sort the input documents and output them
$geoNear         Aggregation.geoNear    Output documents ordered by distance from a given location
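
As a quick sketch of how these stages are chained together (field names are only for illustration and follow the Order example used later in this article), a pipeline simply lists the stage objects in newAggregation in the order they should run:

    // match -> sort -> skip -> limit chained into one pipeline
    TypedAggregation<Order> pipeline = Aggregation.newAggregation(Order.class,
        Aggregation.match(Criteria.where("type").is(1)),        // $match
        Aggregation.sort(Sort.Direction.DESC, "totalMoney"),    // $sort
        Aggregation.skip(10L),                                  // $skip
        Aggregation.limit(5L));                                 // $limit
    List<Order> top = mongoTemplate
        .aggregate(pipeline, Order.class)
        .getMappedResults();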

Group accumulator operations

The following lists the accumulator expressions that MongoDB provides for the group stage (Aggregation.group).

Aggregation expression   Java interface                                         Description
$sum          Aggregation.group().sum("field").as("sum")               Sum the values
$avg          Aggregation.group().avg("field").as("avg")               Average the values
$min          Aggregation.group().min("field").as("min")               Get the minimum value of the field across all documents in the group
$max          Aggregation.group().max("field").as("max")               Get the maximum value of the field across all documents in the group
$push         Aggregation.group().push("field").as("push")             Insert the values into an array in the resulting document
$addToSet     Aggregation.group().addToSet("field").as("addToSet")     Insert the values into an array in the resulting document, without duplicates
$first        Aggregation.group().first("field").as("first")           Get the value from the first document according to the document ordering
$last         Aggregation.group().last("field").as("last")             Get the value from the last document according to the document ordering
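
A sketch of how several of these accumulators can be combined on a single group stage (again using the Order fields from the examples below; the output field names are arbitrary):

    // min, max, push and addToSet on one $group stage
    GroupOperation statsByUser = Aggregation.group("userId")
        .min("totalMoney").as("minMoney")        // $min
        .max("totalMoney").as("maxMoney")        // $max
        .push("type").as("types")                // $push: all type values, duplicates kept
        .addToSet("type").as("distinctTypes");   // $addToSet: distinct type values only
    TypedAggregation<Order> statsAggregation =
        Aggregation.newAggregation(Order.class, statsByUser);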

Aggregation pipelines in practice

Group operation

    @Override
    public List<GroupVo> getAggGroup() {
        
        GroupOperation noRepeatGroup = Aggregation.group( "userId","type")
            .count().as("num")
            .sum("totalProduct").as("count");
        TypedAggregation<Order> noRepeatAggregation =
            Aggregation.newAggregation(Order.class,noRepeatGroup);
        AggregationResults<GroupVo> noRepeatDataInfoVos = mongoTemplate.aggregate(noRepeatAggregation, GroupVo.class);
        List<GroupVo> noRepeatDataList = noRepeatDataInfoVos.getMappedResults();
        System.out.println(JSON.toJSONString(noRepeatDataList));
        return noRepeatDataList;
    }

The example above groups the order collection by userId and type, uses count to get the total number of documents and maps it to the num field, and sums the totalProduct field and maps the result to the count field. Among all group aggregation operations, apart from count, the other operations follow essentially the same usage format. In Java they are called with the following pattern:

GroupOperation noRepeatGroup = Aggregation.group({grouping field 1}, {grouping field 2})
            .count().as({field the count result is mapped to})
            .sum({field to sum}).as({field the sum result is mapped to});

project operation

    @Override
    public List<GroupVo> getAggProject() {

        // reproject the grouped result
        GroupOperation noRepeatGroup = Aggregation.group( "userId","type")
            .count().as("num")
            .sum("totalProduct").as("count");
        Field field = Fields.field("num2", "num");
        ProjectionOperation project = Aggregation.project("userId","type")
            .andInclude(Fields.from(field));

        TypedAggregation<Order> noRepeatAggregation =
            Aggregation.newAggregation(Order.class,noRepeatGroup,project);
        AggregationResults<GroupVo> noRepeatDataInfoVos = mongoTemplate.aggregate(noRepeatAggregation, GroupVo.class);
        List<GroupVo> noRepeatDataList = noRepeatDataInfoVos.getMappedResults();
        System.out.println(JSON.toJSONString(noRepeatDataList));
        return noRepeatDataList;
    }

The main role of project is to adjust the fields of a document. For example, the code above initially maps the number of entries to the num field, but the line Field field = Fields.field("num2", "num") then maps num onto num2. This method is very useful in combination with a subsequent lookup join query: in a normal join query the joined data appears as a nested subset of the result set, whereas project can pull that nested data up so it is presented at the same level as the rest of the document.
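
If all you need is to rename a single field, the same effect as Fields.from(field) can also be written with the and(...).as(...) style of ProjectionOperation; a small sketch under the same assumptions as the example above:

    // equivalent rename of "num" to "num2" on the projection
    ProjectionOperation project = Aggregation.project("userId", "type")
        .and("num").as("num2");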

I will come back to match, unwind, and sort later, together with join queries and deduplicated grouping.

Bucket operation

In an aggregation pipeline you can use Aggregation.bucket to divide the data into buckets according to specified ranges, or use Aggregation.bucketAuto to divide the data into a specified number of buckets automatically.

  1. Aggregation.bucket

The Aggregation.bucket method lets you specify a field and a series of range boundaries. MongoDB generates a bucket for each range and groups the data into buckets based on the value of that field, and we can then obtain aggregation results for each bucket. The following example divides the Order data into four buckets based on the value of the type field: [0,1), [1,2), [2,3), and [3, other).

    @Override
    public List<BucketVo> getAggBucket() {

        BucketOperation bucketOperation =
            // field to bucket on
            Aggregation.bucket("type")
                // bucket boundaries: [0,1), [1,2), [2,3)
                .withBoundaries(0,1,2,3)
                // default bucket for values outside the boundaries
                .withDefaultBucket("other")
                // count of documents in each bucket
                .andOutput("_id").count().as("count")
                // sum of totalProduct per bucket
                .andOutput("totalProduct").sum().as("sum")
                // average of totalMoney per bucket
                .andOutput("totalMoney").avg().as("avg");
        TypedAggregation<Order> newAggregation =
            Aggregation.newAggregation(Order.class, bucketOperation);

        AggregationResults<BucketVo> noRepeatDataInfoVos2 =
            mongoTemplate.aggregate(newAggregation, BucketVo.class);
        return noRepeatDataInfoVos2.getMappedResults();
    }

Method descriptions

  • bucket: the field on which the buckets are based
  • withBoundaries: the boundary values of the bucket ranges; note that each range includes its lower boundary but excludes its upper boundary
  • withDefaultBucket: any data that does not fall into the configured ranges is placed into the default bucket; the argument ("other" here) becomes that bucket's id
  • andOutput: the content to output, usually an aggregated result of some field
  2. Aggregation.bucketAuto

bucketAuto takes a field and a bucket count as parameters; based on the values of that field, it distributes the data evenly into the specified number of buckets. For example, the following divides the data into two buckets based on the value of the type field.

    @Override
    public List<BucketVo> getAggBucketAuto() {
            // field to bucket on and the number of buckets
        BucketAutoOperation autoOperation = Aggregation.bucketAuto("type", 2)
            // count of documents in each bucket
            .andOutput("_id").count().as("count")
            // sum of totalProduct per bucket
            .andOutput("totalProduct").sum().as("sum")
            // average of totalMoney per bucket
            .andOutput("totalMoney").avg().as("avg");
        TypedAggregation<Order> newAggregation =
            Aggregation.newAggregation(Order.class, autoOperation);

        AggregationResults<BucketVo> noRepeatDataInfoVos2 =
            mongoTemplate.aggregate(newAggregation, BucketVo.class);
        return noRepeatDataInfoVos2.getMappedResults();
    }

Method descriptions

  • bucketAuto: the field on which to bucket and the number of buckets to create

  • andOutput: the content to output, usually an aggregated result of some field
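
For reference, both bucket examples map their output onto a simple value object. The real BucketVo in the sample project is not shown here, but based on the andOutput aliases above a hypothetical version would look roughly like this:

    // Hypothetical sketch of the result object used by the bucket examples;
    // the field names mirror the as("count"), as("sum") and as("avg") aliases.
    public class BucketVo {
        private Object id;    // bucket identifier: a lower boundary, "other", or an auto range
        private Long count;   // result of andOutput("_id").count()
        private Long sum;     // result of andOutput("totalProduct").sum()
        private Double avg;   // result of andOutput("totalMoney").avg()
        // getters and setters omitted
    }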


Due to my limited personal level, some of the descriptions above may be unclear or wrong. If you spot a problem, please let me know and I will correct the content as soon as possible. If this helps you while you are stuck at home over the New Year holiday writing code, please give it a like. Your likes are my motivation to keep moving forward. I wish you all a happy New Year.

Origin blog.csdn.net/qq330983778/article/details/104079851