ES histogram aggregation

1 Introduction

The histogram aggregation is a multi-bucket aggregation over numeric values (or numeric range values) extracted from documents. It dynamically generates fixed-size buckets for the values taking part in the aggregation.

2. How to calculate bucket_key

Assuming we have a value of 32 and a bucket interval of 5, then 32 rounds down to 30, so the document falls into the bucket with key 30. The formula below determines exactly which bucket each document belongs to:

bucket_key = Math.floor((value - offset) / interval) * interval + offset

  1. offset: defaults to 0. Its value must lie in the range [0, interval) and must not be negative.
  2. value: the value taking part in the calculation, e.g. the price field of a document.
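The formula above can be sketched as a tiny plain-Java helper (`BucketKey` and `bucketKey` are illustrative names, not part of any Elasticsearch API):

```java
public class BucketKey {
    // bucket_key = Math.floor((value - offset) / interval) * interval + offset
    static double bucketKey(double value, double interval, double offset) {
        return Math.floor((value - offset) / interval) * interval + offset;
    }

    public static void main(String[] args) {
        // value 32, interval 5, offset 0 -> bucket key 30
        System.out.println(bucketKey(32, 5, 0)); // 30.0
    }
}
```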

3. Given a set of data, how to determine which bucket each value falls into

This is my own understanding; corrections are welcome.

Existing data: [3, 8, 15]
offset = 0
interval = 5

Then the data may be divided into the following buckets: [0,5) [5,10) [10,15) [15,20)

  1. The number 3: bucket_key = Math.floor((3 - 0) / 5) * 5 + 0 = 0, so it falls into the [0,5) bucket.
  2. The number 8: bucket_key = Math.floor((8 - 0) / 5) * 5 + 0 = 5, so it falls into the [5,10) bucket.
  3. The number 15: bucket_key = Math.floor((15 - 0) / 5) * 5 + 0 = 15, so it falls into the [15,20) bucket.
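Putting the three calculations together, here is a minimal plain-Java sketch that groups the sample values into their buckets (`HistogramSketch` is a hypothetical name, not Elasticsearch client code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class HistogramSketch {
    // Group each value under its computed bucket_key.
    static Map<Double, List<Double>> bucketize(double[] values, double interval, double offset) {
        Map<Double, List<Double>> buckets = new TreeMap<>();
        for (double v : values) {
            double key = Math.floor((v - offset) / interval) * interval + offset;
            buckets.computeIfAbsent(key, k -> new ArrayList<>()).add(v);
        }
        return buckets;
    }

    public static void main(String[] args) {
        // data [3, 8, 15], interval 5, offset 0
        System.out.println(bucketize(new double[]{3, 8, 15}, 5, 0));
        // {0.0=[3.0], 5.0=[8.0], 15.0=[15.0]}
    }
}
```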

4. Requirement

We have a set of API response-time data and will run histogram aggregations over it.

4.1 Prepare the mapping

PUT /index_api_response_time
{
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "properties": {
      "id": {
        "type": "long"
      },
      "api": {
        "type": "keyword"
      },
      "response_time": {
        "type": "integer"
      }
    }
  }
}

The mapping here is relatively simple: just 3 fields, id, api and response_time.

4.2 Prepare data

PUT /index_api_response_time/_bulk
{"index":{"_id":1}}
{"api":"/user/infos","response_time": 3}
{"index":{"_id":2}}
{"api":"/user/add"}
{"index":{"_id":3}}
{"api":"/user/update","response_time": 8}
{"index":{"_id":4}}
{"api":"/user/list","response_time": 15}
{"index":{"_id":5}}
{"api":"/user/export","response_time": 30}
{"index":{"_id":6}}
{"api":"/user/detail","response_time": 32}

Note that the document with id=2 has no response_time field; it will be handled separately in a later aggregation (see the missing parameter in 5.7).

5. Histogram aggregation operation

5.1. Aggregation based on response_time with an interval of 5

5.1.1 dsl

GET /index_api_response_time/_search
{
  "size": 0,
  "aggs": {
    "agg_01": {
      "histogram": {
        "field": "response_time",
        "interval": 5
      }
    }
  }
}

5.1.2 java code

@Test
@DisplayName("Aggregate on response_time with an interval of 5")
public void test01() throws IOException {
    SearchRequest request = SearchRequest.of(search ->
            search
                    .index("index_api_response_time")
                    .size(0)
                    .aggregations("agg_01", agg -> agg.histogram(histogram -> histogram.field("response_time")
                            .interval(5D))));
    System.out.println("request: " + request);
    SearchResponse<String> response = client.search(request, String.class);
    System.out.println("response: " + response);
}


5.1.3 Running results

operation result

5.2 Aggregate the total response time of each bucket on the basis of 5.1

The sub-aggregation here lets us cross-check the existing data and see whether each value fell into the expected bucket.
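Assuming the sample data from section 4.2, a plain-Java sketch of what the sum sub-aggregation computes per bucket (`BucketSums` is illustrative, not the Elasticsearch client):

```java
import java.util.Map;
import java.util.TreeMap;

public class BucketSums {
    // Sum the values that land in each histogram bucket (offset 0).
    static Map<Double, Double> sumPerBucket(double[] values, double interval) {
        Map<Double, Double> sums = new TreeMap<>();
        for (double v : values) {
            double key = Math.floor(v / interval) * interval;
            sums.merge(key, v, Double::sum);
        }
        return sums;
    }

    public static void main(String[] args) {
        // response_time values from 4.2 (id=2 has none and is skipped)
        System.out.println(sumPerBucket(new double[]{3, 8, 15, 30, 32}, 5));
        // {0.0=3.0, 5.0=8.0, 15.0=15.0, 30.0=62.0}
    }
}
```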

5.2.1 dsl

GET /index_api_response_time/_search
{
  "size": 0,
  "aggs": {
    "agg_01": {
      "histogram": {
        "field": "response_time",
        "interval": 5
      },
      "aggs": {
        "agg_sum": {
          "sum": {
            "field": "response_time"
          }
        }
      }
    }
  }
}


5.2.2 java code

@Test
@DisplayName("Aggregate total response time per bucket on top of test01")
public void test02() throws IOException {
    SearchRequest request = SearchRequest.of(search ->
            search
                    .index("index_api_response_time")
                    .size(0)
                    .aggregations("agg_01", agg ->
                            agg.histogram(histogram -> histogram.field("response_time").interval(5D))
                               .aggregations("agg_sum", aggSum -> aggSum.sum(sum -> sum.field("response_time")))
                    ));
    System.out.println("request: " + request);
    SearchResponse<String> response = client.search(request, String.class);
    System.out.println("response: " + response);
}

5.2.3 Running Results

operation result

5.3 Only return buckets containing at least one document - min_doc_count

From the result in 5.1 we can see that buckets are returned whether or not they contain data, i.e. many empty buckets come back. Put simply, the response contains buckets with doc_count=0; here we want to suppress them.

5.3.1 dsl

GET /index_api_response_time/_search
{
  "size": 0,
  "aggs": {
    "agg_01": {
      "histogram": {
        "field": "response_time",
        "interval": 5,
        "min_doc_count": 1
      }
    }
  }
}


5.3.2 java code

@Test
@DisplayName("Only return buckets containing at least one document - min_doc_count")
public void test03() throws IOException {
    SearchRequest request = SearchRequest.of(search ->
            search
                    .index("index_api_response_time")
                    .size(0)
                    .aggregations("agg_01", agg -> agg.histogram(
                            histogram -> histogram.field("response_time").interval(5D).minDocCount(1)
                            )
                    )
    );
    System.out.println("request: " + request);
    SearchResponse<String> response = client.search(request, String.class);
    System.out.println("response: " + response);
}


5.3.3 Running Results

operation result

5.4 Fill in empty buckets - extended_bounds

What does this mean? Suppose we filter with response_time >= 10 and interval=5. By default ES will then not return the buckets with bucket_key = 0, 5 and 10. If we want those buckets returned anyway, we can use extended_bounds. Note that extended_bounds only makes sense when min_doc_count=0, and it does not filter buckets.
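A plain-Java sketch of the bucket keys extended_bounds forces into the response (names are illustrative; the real work happens server-side in ES):

```java
import java.util.ArrayList;
import java.util.List;

public class ExtendedBoundsSketch {
    // Emit every bucket key from min up to and including the bucket containing max,
    // mimicking extended_bounds with min_doc_count=0: empty buckets are still returned.
    static List<Double> keysBetween(double min, double max, double interval) {
        List<Double> keys = new ArrayList<>();
        for (double k = Math.floor(min / interval) * interval; k <= max; k += interval) {
            keys.add(k);
        }
        return keys;
    }

    public static void main(String[] args) {
        // min=0, max=50, interval=5 -> keys 0, 5, 10, ..., 50
        System.out.println(keysBetween(0, 50, 5));
    }
}
```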

(figure: extended_bounds explained)

5.4.1 dsl

GET /index_api_response_time/_search
{
  "size": 0,
  "query": {
    "range": {
      "response_time": {
        "gte": 10
      }
    }
  }, 
  "aggs": {
    "agg_01": {
      "histogram": {
        "field": "response_time",
        "interval": 5,
        "min_doc_count": 0,
        "extended_bounds": {
          "min": 0,
          "max": 50
        }
      }
    }
  }
}

 
 
  
  
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 17
  • 18
  • 19
  • 20
  • 21
  • 22
  • 23
  • 24

5.4.2 java code

@Test
@DisplayName("Fill in empty buckets - extended_bounds")
public void test04() throws IOException {
    SearchRequest request = SearchRequest.of(search ->
            search
                    .index("index_api_response_time")
                    .size(0)
                    .query(query -> query.range(range -> range.field("response_time").gte(JsonData.of(10))))
                    .aggregations("agg_01", agg -> agg.histogram(
                            histogram -> histogram.field("response_time").interval(5D).minDocCount(0)
                                    .extendedBounds(bounds -> bounds.min(0D).max(50D))
                            )
                    )
    );
    System.out.println("request: " + request);
    SearchResponse<String> response = client.search(request, String.class);
    System.out.println("response: " + response);
}


5.4.3 Running Results

operation result

5.5 Only show buckets between min and max - hard_bounds

Only buckets between min and max are returned. The data used here:

PUT /index_api_response_time/_bulk
{"index":{"_id":1}}
{"api":"/user/infos","response_time": 3}
{"index":{"_id":2}}
{"api":"/user/add"}
{"index":{"_id":3}}
{"api":"/user/update","response_time": 8}
{"index":{"_id":4}}
{"api":"/user/list","response_time": 15}
{"index":{"_id":5}}
{"api":"/user/export","response_time": 25}
{"index":{"_id":6}}
{"api":"/user/detail","response_time": 32}


5.5.1 dsl

GET /index_api_response_time/_search
{
  "size": 0,
  "query": {
    "range": {
      "response_time": {
        "gte": 10
      }
    }
  }, 
  "aggs": {
    "agg_01": {
      "histogram": {
        "field": "response_time",
        "interval": 5,
        "min_doc_count": 0,
        "hard_bounds": {
          "min": 15,
          "max": 25
        }
      },
      "aggs": {
        "a_s": {
          "sum": {
            "field": "response_time"
          }
        }
      }
    }
  }
}

 
 
  
  
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 17
  • 18
  • 19
  • 20
  • 21
  • 22
  • 23
  • 24
  • 25
  • 26
  • 27
  • 28
  • 29
  • 30
  • 31

5.5.2 java code

@Test
@DisplayName("Only show buckets between min and max - hard_bounds")
public void test05() throws IOException {
    SearchRequest request = SearchRequest.of(search ->
            search
                    .index("index_api_response_time")
                    .size(0)
                    .query(query -> query.range(range -> range.field("response_time").gte(JsonData.of(10))))
                    .aggregations("agg_01", agg ->
                            agg.histogram(
                                histogram -> histogram.field("response_time").interval(5D).minDocCount(0)
                                        .hardBounds(bounds -> bounds.min(15D).max(25D))
                            )
                               .aggregations("a_s", sumAgg -> sumAgg.sum(sum -> sum.field("response_time")))
                    )
    );
    System.out.println("request: " + request);
    SearchResponse<String> response = client.search(request, String.class);
    System.out.println("response: " + response);
}


5.5.3 Running Results

operation result

5.6 Sorting - order

By default the returned buckets are sorted by their key ascending, though the order behaviour can be controlled using the order setting. Supports the same order functionality as the Terms Aggregation.
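A plain-Java sketch of what {"_count": "desc"} does to the returned buckets (`OrderSketch` is a hypothetical name, not client code):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class OrderSketch {
    // order: {"_count": "desc"} -> sort buckets by doc_count descending
    static List<Map.Entry<Double, Long>> byCountDesc(Map<Double, Long> counts) {
        List<Map.Entry<Double, Long>> ordered = new ArrayList<>(counts.entrySet());
        ordered.sort(Map.Entry.<Double, Long>comparingByValue().reversed());
        return ordered;
    }

    public static void main(String[] args) {
        // doc counts per bucket key, as the 5.6 histogram might return them
        Map<Double, Long> counts = new LinkedHashMap<>();
        counts.put(15.0, 1L);
        counts.put(30.0, 2L);
        System.out.println(byCountDesc(counts)); // bucket 30 (2 docs) first
    }
}
```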

5.6.1 dsl

GET /index_api_response_time/_search
{
  "size": 0,
  "query": {
    "range": {
      "response_time": {
        "gte": 10
      }
    }
  }, 
  "aggs": {
    "agg_01": {
      "histogram": {
        "field": "response_time",
        "interval": 5,
        "order": {
          "_count": "desc"
        }
      }
    }
  }
}


5.6.2 java code

@Test
@DisplayName("Sorting - order")
public void test06() throws IOException {
    SearchRequest request = SearchRequest.of(search ->
            search
                    .index("index_api_response_time")
                    .size(0)
                    .query(query -> query.range(range -> range.field("response_time").gte(JsonData.of(10))))
                    .aggregations("agg_01", agg ->
                            agg.histogram(
                                histogram -> histogram.field("response_time").interval(5D)
                                        .order(NamedValue.of("_count", SortOrder.Desc))
                            )
                    )
    );
    System.out.println("request: " + request);
    SearchResponse<String> response = client.search(request, String.class);
    System.out.println("response: " + response);
}


5.6.3 Running Results

operation result

5.7 How to handle documents missing the aggregation field - missing

The missing parameter defines how documents that do not have the aggregated field are treated: they are bucketed as if the field had this value.
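A plain-Java sketch of how missing affects the doc counts, using the sample data from 4.2 where id=2 has no response_time (`MissingSketch` is illustrative, not client code):

```java
import java.util.Map;
import java.util.TreeMap;

public class MissingSketch {
    // Documents missing the field (null here) are bucketed as if they had the
    // "missing" value; without it they would be skipped entirely.
    static Map<Double, Long> docCounts(Double[] values, double interval, double missing) {
        Map<Double, Long> counts = new TreeMap<>();
        for (Double v : values) {
            double effective = (v == null) ? missing : v;
            double key = Math.floor(effective / interval) * interval;
            counts.merge(key, 1L, Long::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // id=2 has no response_time; with "missing": 0 it lands in bucket 0
        System.out.println(docCounts(new Double[]{3.0, null, 8.0, 15.0, 30.0, 32.0}, 5, 0));
        // {0.0=2, 5.0=1, 15.0=1, 30.0=2}
    }
}
```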

5.7.1 dsl

GET /index_api_response_time/_search
{
  "size": 0,
  "aggs": {
    "agg_01": {
      "histogram": {
        "field": "response_time",
        "interval": 5,
        "missing": 0
      }
    }
  }
}


5.7.2 java code

@Test
@DisplayName("How to handle documents missing the aggregation field - missing")
public void test07() throws IOException {
    SearchRequest request = SearchRequest.of(search ->
            search
                    .index("index_api_response_time")
                    .size(0)
                    .aggregations("agg_01", agg ->
                            agg.histogram(
                                histogram -> histogram.field("response_time").interval(5D).missing(0D)
                            )
                    )
    );
    System.out.println("request: " + request);
    SearchResponse<String> response = client.search(request, String.class);
    System.out.println("response: " + response);
}


5.7.3 Running Results

operation result

6. Complete code

https://gitee.com/huan1993/spring-cloud-parent/blob/master/es/es8-api/src/main/java/com/huan/es8/aggregations/bucket/HistogramAggs.java

7. Reference documents

  1. https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-histogram-aggregation.html


Origin blog.csdn.net/rbx508780/article/details/131696368