day12 Elasticsearch

Table of Contents

One: installation and configuration

Install Elasticsearch

Kibana

Install ik tokenizer

Two: learning objectives

Write the data import function independently

Implement basic search independently

Implement result paging independently

Sort results independently

 

One: installation and configuration

1.2.1. Create a new user leyou

For security reasons, Elasticsearch refuses to run as the root user by default.
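The notes do not show the exact commands; a typical sketch on CentOS is below (run as root; the install path /usr/local/elasticsearch is an assumption, adjust it to yours):

```
# create the leyou user and hand over the install directory
useradd leyou
passwd leyou   # set a password interactively
chown -R leyou:leyou /usr/local/elasticsearch
```

After this, switch to the new user with su leyou before starting Elasticsearch.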

1.2.2. Upload the installation package and decompress it

1.2.3. Modify the configuration

We enter the config directory: cd config

jvm.options

Elasticsearch is based on Lucene, and Lucene is implemented in Java, so we need to configure JVM parameters.

elasticsearch.yml

(Lucene is not a complete full-text search application, but a full-text indexing engine toolkit written in Java, which can easily be embedded into applications to provide application-specific full-text indexing and retrieval.)

1.3. Running Elasticsearch

Enter the elasticsearch/bin directory, where you can see the following executable files:

1.3.1. Error 1: Kernel version too low

We are using CentOS 6, whose Linux kernel version is 2.6. Elasticsearch's system call filter requires at least kernel 3.5. That's no problem, though: we can disable the filter.

Modify the elasticsearch.yml file and add the following configuration at the bottom:

1.3.2. Error 2: Insufficient file permissions

We run Elasticsearch as the leyou user instead of root, so that user lacks permissions on the installation files.

First log in as the root user.

1.3.3. Error 3: Not enough threads

1.3.4. Error 4: Process virtual memory
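The concrete fixes are not reproduced in these notes; below is a sketch of the settings commonly used for these classic startup errors (run as root; the install path and exact limit values are assumptions based on standard Elasticsearch documentation):

```
# Error 1: disable the system call filter, at the bottom of elasticsearch.yml:
#   bootstrap.memory_lock: false
#   bootstrap.system_call_filter: false

# Error 2: give the leyou user ownership of the install directory
chown -R leyou:leyou /usr/local/elasticsearch

# Error 3: raise the open-file and thread limits in /etc/security/limits.conf
cat >> /etc/security/limits.conf <<'EOF'
* soft nofile 65536
* hard nofile 65536
* soft nproc 4096
* hard nproc 4096
EOF

# Error 4: raise the virtual memory map count (Elasticsearch needs >= 262144)
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
sysctl -p
```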

1.3.5. Restart the terminal window

After all the errors are fixed, you must restart your Xshell terminal session, otherwise the new settings will not take effect.

1.3.6. Start

Start again, and this time it finally succeeds!

 

1.4. Install Kibana

1.4.1. What is Kibana?

Kibana is a Node.js-based statistics and visualization tool for Elasticsearch index data. It can use Elasticsearch's aggregation features to generate various charts, such as bar charts, line charts, and pie charts.

It also provides a console for operating on Elasticsearch index data, with API hints that are very helpful for learning Elasticsearch syntax.

(To put it simply, Node.js is JavaScript running on the server: a platform built on Chrome's V8 JavaScript engine. It provides an event-driven, non-blocking I/O server-side JavaScript environment, and the V8 engine executes JavaScript very quickly.)

(JavaScript is a scripting language that implements complex behavior on web pages. Whenever a page shows more than static information, such as live content updates, interactive maps, 2D/3D animations, or scrolling video, JavaScript is involved.)

 

1.5. Install ik tokenizer

Lucene's IK tokenizer has not been maintained since 2012. We will use an upgraded fork of it that has been developed into an Elasticsearch plugin. It is maintained and released together with Elasticsearch, with matching version numbers; the latest version is 6.3.0.
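After installing the plugin into Elasticsearch's plugins directory and restarting, you can verify the tokenizer with the standard _analyze API from the Kibana console (a sketch; the sample text is arbitrary):

```
POST _analyze
{
  "analyzer": "ik_max_word",
  "text": "我是中国人"
}
```

ik_max_word splits the text into the finest-grained terms; the plugin also provides ik_smart for coarser splitting.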

 

Two: learning objectives

Write the data import function independently

Implement basic search independently

Implement result paging independently

Sort results independently

 

1. Index library data import

Yesterday we learned the basics of Elasticsearch. Today we apply what we learned and build a search microservice to implement the search function.

 

1.1. Create a search service

1.2. Analysis of the index library data format

Next, we need to import product data into the index library to facilitate user search.

So the question is: we have SPUs and SKUs. How do we save them to the index library?

1.2.1. Results-oriented

Therefore, the result of a search is an SPU, which is a collection of multiple SKUs.

Since the search result is an SPU, the documents stored in our index library should also be SPUs, each containing its SKU information.

1.2.2. What data is needed

1.2.3. The final data structure

We create a class, encapsulate the data to be saved to the index library, and set the mapping properties:

@Document(indexName = "goods", type = "docs", shards = 1, replicas = 0)
public class Goods {
    @Id
    private Long id; // spuId

    @Field(type = FieldType.Text, analyzer = "ik_max_word")
    private String all; // all information to be searched, including title, category, even brand

    @Field(type = FieldType.Keyword, index = false)
    private String subTitle; // selling point

    private Long brandId; // brand id
    private Long cid1; // level-1 category id
    private Long cid2; // level-2 category id
    private Long cid3; // level-3 category id
    private Date createTime; // creation time
    private List<Long> price; // prices

    @Field(type = FieldType.Keyword, index = false)
    private String skus; // JSON structure with the sku information

    private Map<String, Object> specs; // searchable spec parameters: key is the parameter name, value is the parameter value
}

1.3. Interfaces provided by the product microservice

 

The data in the index library comes from the database, but we cannot query the product database directly: in real development each microservice is independent of the others, including its database. So we can only call the interface services provided by the product microservice.

First think about the data we need:

SPU information

SKU information

Details of SPU

Product category names (to be concatenated into the searchable all field)

Rethink what services we need:

 

First: the service to query SPUs in batches, already written.

Second: the service to query SKUs by spuId, already written.

Third: the service to query SpuDetail by spuId, already written.

Fourth: the service to query product category names by category id, not yet written.

Fifth: the service to query the product's brand by brand id, not yet written.

Therefore, we need to provide an additional interface for querying product category names.

 

1.3.1. Product category name query

controller:

1.3.2. Writing FeignClient

1.3.2.1. Problem presentation

Operate leyou-search project

1.4. Import data

The data import is performed only once; subsequent updates and deletions will operate on the index library through a message queue.

1.4.1. Create GoodsRepository

java code:

public interface GoodsRepository extends ElasticsearchRepository<Goods, Long> {
}

1.4.2. Create Index

We create a new test class in which to operate on the data:

1.4.3. Import data

Importing data really just means querying the data and converting each queried Spu into a Goods object to save. So we first write a SearchService and define a method in it that converts a Spu into a Goods.
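As a rough illustration of that conversion, here is a minimal, self-contained sketch; the Spu and Goods classes below are simplified stand-ins for the course's real entities, and buildGoods is a hypothetical helper, not the course's exact code:

```java
import java.util.*;

// Sketch of the Spu -> Goods conversion; all class and field names
// here are simplified assumptions, not the real microservice entities.
public class GoodsBuilder {

    static class Spu {
        Long id; String title; String subTitle;
        Spu(Long id, String title, String subTitle) {
            this.id = id; this.title = title; this.subTitle = subTitle;
        }
    }

    static class Goods {
        Long id;          // spuId
        String all;       // searchable text: title + category names + brand
        String subTitle;  // selling point
        List<Long> price; // prices of all SKUs
    }

    // Collect the SKU prices and concatenate all searchable fields into "all"
    static Goods buildGoods(Spu spu, List<Long> skuPrices,
                            String categoryNames, String brandName) {
        Goods goods = new Goods();
        goods.id = spu.id;
        goods.all = spu.title + " " + categoryNames + " " + brandName;
        goods.subTitle = spu.subTitle;
        goods.price = skuPrices;
        return goods;
    }

    public static void main(String[] args) {
        Spu spu = new Spu(1L, "Mi Phone 8", "Flagship");
        Goods g = buildGoods(spu, Arrays.asList(1999L, 2499L), "Phones", "Xiaomi");
        System.out.println(g.all); // Mi Phone 8 Phones Xiaomi
    }
}
```

The real service would additionally serialize the SKU list to JSON for the skus field and extract the searchable spec parameters into the specs map.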

2. Implement basic search

2.1. Page analysis

2.1.1. Page Jump

2.1.2. Initiate an asynchronous request

We want to display search results as soon as the page loads: when the page loads, we read the request parameters from the address bar, initiate an asynchronous request to query the backend, and render the results on the page.

(Synchronous: the sender sends data, then waits for the receiver's response before sending the next packet.
Asynchronous: the sender sends data and sends the next packet without waiting for a response.)

Through the created hook function, we obtain and record the request parameters when the page loads.

created(){
    // check whether there are request parameters
    if(!location.search){
        return;
    }
    // convert the request parameters into an object
    const search = ly.parse(location.search.substring(1));
    // record it in the search object of data
    this.search = search;

    // initiate the request and search with these conditions
    this.loadData();
}

2.2. Provide a search interface in the backend

2.2.1.controller

2.2.2.service

2.2.3. Testing

2.3. Page rendering

2.3.1. Save search results

2.3.2. Circular display of goods

2.3.3. Multiple sku display

 

2.3.3.1. Analysis

2.3.3.2. Initialize sku

2.3.3.3. Multiple sku picture list

2.3.4. Display other attributes of sku

 

3. Page paging effect

In the query just now we hard-coded the page number and page size, so none of the paging features work yet. Next, let's see how to implement the paging bar.

There are two steps:

Step 1: how to generate the paging bar

Step 2: what to do when a paging button is clicked

3.1. How to generate a paging bar

3.1.1. Required data

The paging bar should be computed from the total number of pages, the current page, and the total number of records.

Current page: determined on the page itself; clicking a button switches to that page

Total pages: must be returned by the backend

Total records: must be returned by the backend

3.1.2. Provide the data from the backend

3.1.3. Calculating the paging bar on the page
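One common way to compute the buttons is a fixed-width window clamped to the valid page range. The sketch below is an assumption (a 5-button window), not the course's exact code:

```java
// Sketch of the paging-bar calculation: show at most 5 page buttons,
// centered on the current page and clamped to [1, totalPages].
// The window size of 5 is an assumption for illustration.
public class PagingBar {

    static int[] pages(int current, int totalPages) {
        int begin = Math.max(1, current - 2);       // try to center on the current page
        int end = Math.min(totalPages, begin + 4);  // at most 5 buttons
        begin = Math.max(1, end - 4);               // re-clamp when near the last page
        int[] result = new int[end - begin + 1];
        for (int i = 0; i < result.length; i++) {
            result[i] = begin + i;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(pages(1, 10)));  // [1, 2, 3, 4, 5]
        System.out.println(java.util.Arrays.toString(pages(6, 10)));  // [4, 5, 6, 7, 8]
        System.out.println(java.util.Arrays.toString(pages(10, 10))); // [6, 7, 8, 9, 10]
    }
}
```

The same logic can equally be written in the Vue component as a computed property; only the clamping arithmetic matters.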

3.2. What to do when clicking on the tab

After clicking a paging button, we naturally need to modify the value of page.

 

 

Therefore, we add click events to the previous and next buttons to modify page, and bind a click event to each number button that sets page directly:

Three ways to bind events in JavaScript:

1. Inline (an onclick attribute in the HTML)
2. The element's .onclick property
3. addEventListener (the first two can bind only one handler each; addEventListener can bind multiple)

Reference link: https://juejin.im/post/6844903720136736775

 

3.3. The paging bar at the top of the page

4. Sort (Homework)

 

4.1. Sort conditions on the search page

This area is used for sorting; by default results use the "comprehensive" sort. Clicking "newest" should sort by product creation time, and clicking "price" should sort by price. Since we have no statistics for sales volume or ratings, we take newest and price as examples; the same approach applies to the others.

 

Sorting needs to know two things:

The field to sort by

The sort direction (ascending or descending)

4.2. Add sorting logic in the backend

Next, the backend needs to read the sort information from the request parameters and add sort logic to the search.

Right now our request parameter object SearchRequest has only two fields, page and key. It needs to be extended:
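A sketch of what the extended request object might look like; the field names sortBy and descending (and the default page size) are assumptions for illustration, not the course's exact code:

```java
// Sketch of the extended SearchRequest: besides page and key, it carries
// the sort field and direction. Field names are assumptions.
public class SearchRequest {
    private static final int DEFAULT_SIZE = 20; // fixed page size, not chosen by the page
    private static final int DEFAULT_PAGE = 1;

    private String key;         // search keyword
    private Integer page;       // current page
    private String sortBy;      // field to sort by, e.g. "price" or "createTime"
    private Boolean descending; // true = descending, false = ascending

    public int getPage() {
        if (page == null || page < DEFAULT_PAGE) {
            return DEFAULT_PAGE; // guard against missing or invalid page numbers
        }
        return page;
    }
    public void setPage(Integer page) { this.page = page; }

    public int getSize() { return DEFAULT_SIZE; }

    public String getKey() { return key; }
    public void setKey(String key) { this.key = key; }

    public String getSortBy() { return sortBy; }
    public void setSortBy(String sortBy) { this.sortBy = sortBy; }

    public Boolean getDescending() { return descending == null ? false : descending; }
    public void setDescending(Boolean descending) { this.descending = descending; }

    public static void main(String[] args) {
        SearchRequest req = new SearchRequest();
        System.out.println(req.getPage() + " " + req.getSize()); // 1 20
    }
}
```

In the search service, when getSortBy() is non-empty, we add a sort clause to the query with the given field and direction.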

Then in the search business logic, add sorting conditions:

Note that because the price we store in the index library is an array, Elasticsearch picks the sort value from the array:

If sorting by price in descending order, it sorts by the maximum value in the array

If sorting by price in ascending order, it sorts by the minimum value in the array
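The rule can be expressed in a few lines; this sketch (with a hypothetical sortKey helper) just restates how Elasticsearch's default sort mode for array fields picks the value:

```java
import java.util.*;

// Sketch: which value of the price array acts as the sort key.
// Descending order sorts by the maximum price, ascending by the minimum,
// mirroring Elasticsearch's default sort mode for array fields.
public class PriceSortKey {

    static long sortKey(List<Long> prices, boolean descending) {
        return descending
                ? Collections.max(prices)  // descending: compare by highest price
                : Collections.min(prices); // ascending: compare by lowest price
    }

    public static void main(String[] args) {
        List<Long> prices = Arrays.asList(1999L, 2499L, 2299L);
        System.out.println(sortKey(prices, true));  // 2499
        System.out.println(sortKey(prices, false)); // 1999
    }
}
```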


Origin blog.csdn.net/qq_42198024/article/details/107890240