Elasticsearch full-text search: querying the full text by keyword, with the keyword itself also analyzed (tokenized)

I had previously looked at Solr's full-text search; its principles are fairly simple and quick to grasp. This time our project required full-text search with Elasticsearch, which is said to be more powerful, though I had not studied it in depth before. I will skip the Elasticsearch deployment steps and the Spring Boot integration here and go straight to my backend query code.
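For reference, the service below autowires a JestClient, so some configuration like the following is assumed to exist. This is only a minimal sketch; the ES address, timeouts, and bean style are my assumptions and are not taken from the original project:

import io.searchbox.client.JestClient;
import io.searchbox.client.JestClientFactory;
import io.searchbox.client.config.HttpClientConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JestConfig {

    // hypothetical bean: builds the JestClient injected into ElasticsearchServiceImpl below
    @Bean
    public JestClient jestClient() {
        JestClientFactory factory = new JestClientFactory();
        factory.setHttpClientConfig(new HttpClientConfig.Builder("http://localhost:9200") // assumed ES address
                .multiThreaded(true)
                .connTimeout(5000)
                .readTimeout(5000)
                .build());
        return factory.getObject();
    }
}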


import com.pridecn.file.domain.EsFileInfo;
import com.pridecn.file.service.ElasticsearchService;
import io.searchbox.client.JestClient;
import io.searchbox.core.Search;
import io.searchbox.core.SearchResult;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

@Service
public class ElasticsearchServiceImpl implements ElasticsearchService {

    @Autowired
    JestClient jestClient;

    @Override
    public List<EsFileInfo> findPublishedFileByKeyWord(String keyWord, int pageNum, int pageSize) {
        // escape special characters in the query keyword
        keyWord = QueryParser.escape(keyWord);
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        searchSourceBuilder.query(QueryBuilders.boolQuery()
                                                .should(/*QueryBuilders.queryStringQuery(keyWord).field("FILE_NAME")*/QueryBuilders.matchQuery("FILE_NAME",keyWord).analyzer("ik_smart"))
                                                .should(/*QueryBuilders.queryStringQuery(keyWord).field("attachment.content")*/QueryBuilders.matchQuery("attachment.content",keyWord).analyzer("ik_smart")));
        // initialize the highlight builder
        HighlightBuilder highlightBuilder = new HighlightBuilder();
        highlightBuilder.field("FILE_NAME"); // highlight the file name
        highlightBuilder.field("attachment.content"); // highlight the extracted attachment content
        highlightBuilder.preTags("<span style='color:red'>").postTags("</span>"); // tags wrapped around highlighted terms
        // attach highlighting to the query
        searchSourceBuilder.highlighter(highlightBuilder);
        // set the starting offset for pagination
        searchSourceBuilder.from((pageNum - 1) * pageSize);
        // set the page size
        searchSourceBuilder.size(pageSize);
        // specify the target index
        Search search = new Search.Builder(searchSourceBuilder.toString())
                .addIndex("book")
                .build();
        SearchResult result = null;
        List<EsFileInfo> list = new ArrayList<>();
        try {
            // execute the search
            result = jestClient.execute(search);
            System.out.println("This query matched " + result.getTotal() + " documents! " + result.getJsonObject());
            List<SearchResult.Hit<EsFileInfo,Void>> hits = result.getHits(EsFileInfo.class);
            for (SearchResult.Hit<EsFileInfo,Void> hit : hits) {
                EsFileInfo source = hit.source;
                // fetch the highlighted fragments
                Map<String, List<String>> highlight = hit.highlight;
                List<String> file_name = highlight.get("FILE_NAME"); // highlighted file name
                if(file_name!=null){
                    source.setFile_name(file_name.get(0));
                }
                List<String> content = highlight.get("attachment.content"); // highlighted attachment content
                if(content!=null){
                    source.getEsDoc().setContent(content.get(0));
                }
                System.out.println("姓名:"+source.getFile_name());
                System.out.println("作者:"+source.getEsDoc().getAuthor());
                System.out.println("内容:"+source.getEsDoc().getContent());
                list.add(source);
            }
            return list;
        } catch (IOException e) {
            e.printStackTrace();
            return new ArrayList<>();
        }
    }

    @Override
    public int findPublishedCountByKeyWord(String keyWord) {
        // escape special characters in the query keyword
        keyWord = QueryParser.escape(keyWord);
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        searchSourceBuilder.query(QueryBuilders.boolQuery()
                .should(QueryBuilders.queryStringQuery(keyWord).field("FILE_NAME"))
                .should(QueryBuilders.queryStringQuery(keyWord).field("attachment.content")));
        // initialize the highlight builder (not strictly needed for counting, kept for parity with the search above)
        HighlightBuilder highlightBuilder = new HighlightBuilder();
        highlightBuilder.field("FILE_NAME"); // highlight the file name
        highlightBuilder.field("attachment.content"); // highlight the extracted attachment content
        highlightBuilder.preTags("<span style='color:red'>").postTags("</span>"); // tags wrapped around highlighted terms
        // attach highlighting to the query
        searchSourceBuilder.highlighter(highlightBuilder);
        // fetch up to 10000 hits so they can be counted
        searchSourceBuilder.size(10000);
        // specify the target index
        Search search = new Search.Builder(searchSourceBuilder.toString())
                .addIndex("book")
                .build();
        SearchResult result = null;
        try {
            result = jestClient.execute(search);
            System.out.println("本次查询共查到:"+result.getTotal()+"个关键字!"+result.getJsonObject());
            List<SearchResult.Hit<EsFileInfo,Void>> hits = result.getHits(EsFileInfo.class);
            return hits.size();
        } catch (IOException e) {
            e.printStackTrace();
            return 0;
        }
    }
}
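The ElasticsearchService interface itself is not shown in the original post; judging from the implementation above it presumably looks like this (a sketch, with the method signatures taken from the code):

package com.pridecn.file.service;

import com.pridecn.file.domain.EsFileInfo;

import java.util.List;

public interface ElasticsearchService {

    // paged full-text search over the file name and the extracted attachment content
    List<EsFileInfo> findPublishedFileByKeyWord(String keyWord, int pageNum, int pageSize);

    // number of documents matching the keyword
    int findPublishedCountByKeyWord(String keyWord);
}

The two domain classes used above, EsFileInfo and EsDoc, follow.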


package com.pridecn.file.domain;

import com.google.gson.annotations.SerializedName;

/**
 * Result class for the file information returned by an Elasticsearch query.
 */
public class EsFileInfo {

    @SerializedName("FILE_ID")
    private String file_id;

    @SerializedName("FILE_NAME")
    private String file_name;

    @SerializedName("FILE_SAVE_NAME")
    private String file_save_name;
    // deserialized from a differently named JSON field ("attachment")
    @SerializedName("attachment")
    private EsDoc esDoc;

    public String getFile_id() {
        return file_id;
    }

    public void setFile_id(String file_id) {
        this.file_id = file_id;
    }

    public String getFile_name() {
        return file_name;
    }

    public void setFile_name(String file_name) {
        this.file_name = file_name;
    }

    public String getFile_save_name() {
        return file_save_name;
    }

    public void setFile_save_name(String file_save_name) {
        this.file_save_name = file_save_name;
    }

    public EsDoc getEsDoc() {
        return esDoc;
    }

    public void setEsDoc(EsDoc esDoc) {
        this.esDoc = esDoc;
    }
}
package com.pridecn.file.domain;

import com.google.gson.annotations.SerializedName;

/**
 * Attachment part of the file entity (metadata and text extracted from the document).
 */
public class EsDoc {

    //    @SerializedName("attachment.author")
    private String author;

    //    @SerializedName("attachment.content")
    private String content;

    //    @SerializedName("attachment.date")
    private String date;

    public String getAuthor() {
        return author;
    }

    public void setAuthor(String author) {
        this.author = author;
    }

    public String getContent() {
        return content;
    }

    public void setContent(String content) {
        this.content = content;
    }

    public String getDate() {
        return date;
    }

    public void setDate(String date) {
        this.date = date;
    }
}
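Jest uses Gson under the hood, which is why the @SerializedName annotations above decide how the _source JSON of a hit is mapped onto EsFileInfo and its nested EsDoc. A small illustration with a made-up document (all field values are hypothetical):

import com.google.gson.Gson;
import com.pridecn.file.domain.EsFileInfo;

public class EsFileInfoMappingDemo {

    public static void main(String[] args) {
        // hypothetical _source JSON as it might come back from the "book" index
        String source = "{"
                + "\"FILE_ID\":\"1001\","
                + "\"FILE_NAME\":\"report.pdf\","
                + "\"FILE_SAVE_NAME\":\"2018/report.pdf\","
                + "\"attachment\":{\"author\":\"someone\",\"content\":\"full text ...\",\"date\":\"2018-11-21\"}"
                + "}";

        // Gson maps the upper-case JSON keys onto the lower-case Java fields via @SerializedName
        EsFileInfo info = new Gson().fromJson(source, EsFileInfo.class);
        System.out.println(info.getFile_name());         // report.pdf
        System.out.println(info.getEsDoc().getAuthor()); // someone
    }
}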

The difference between several QueryBuilders query types with respect to analysis (tokenization):


matchQuery:

The default standard analyzer strips most punctuation, splits the original value into terms at those positions, lower-cases the terms, and puts them into the token set. For a not_analyzed field, the original value is placed into the token set unchanged.

matchQuery works like this: it first checks whether the field is analyzed. If it is, the query string is analyzed as well and the resulting terms are matched against the field's tokens; if it is not, the query string is matched against the tokens directly.

Suppose id and id2 hold the same value, where id uses the default analyzer and id2 is not_analyzed. Taking wwIF5-vP3J4l3GJ6VN3h as an example:

the token set of id is [wwif5, vp3j4l3gj6vn3h]
the token set of id2 is [wwIF5-vP3J4l3GJ6VN3h]

We can therefore expect:

1. matchQuery("id", "some string") matches whenever the analyzed query string contains wwif5 or vp3j4l3gj6vn3h, e.g. wwIF5-vP3J4l3GJ6VN3h, wwif5-vp3j4l3gj6vn3h, wwIF5, wwif5, wwIF5-6666, and so on.
2. matchQuery("id2", "wwIF5-vP3J4l3GJ6VN3h") matches.

Note: if "index": "not_analyzed" is not specified when the index is created, the default analyzer is applied (you can of course specify your own analyzer). Opening the following URL in a browser:

http://localhost:9200/_analyze?pretty&analyzer=standard&text=J4Kz1%26LbvjoQFE9gHC7H

shows that J4Kz1&LbvjoQFE9gHC7H is split into j4kz1 and lbvjoqfe9ghc7h.
termQuery:

The same standard analyzer rules apply to the indexed values. termQuery, however, does not analyze the query input at all; it matches the input directly against the field's tokens.

With the same setup (id analyzed, id2 not_analyzed, value wwIF5-vP3J4l3GJ6VN3h):

the token set of id is [wwif5, vp3j4l3gj6vn3h]
the token set of id2 is [wwIF5-vP3J4l3GJ6VN3h]

We can therefore expect:

1. termQuery("id", "wwif5") matches.
2. termQuery("id", "vp3j4l3gj6vn3h") matches.
3. termQuery("id2", "wwIF5-vP3J4l3GJ6VN3h") matches.


Summary: a match query first parses and analyzes the query string and then searches with the resulting terms, whereas a term query searches for exactly what you pass in, without parsing or analyzing it. QueryBuilders.queryStringQuery(keyWord).field("FILE_NAME") behaves similarly to matchQuery.
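A minimal sketch of this difference in code, reusing the wwIF5-vP3J4l3GJ6VN3h example from above (the index layout is assumed: an analyzed field id and a not_analyzed field id2; both are illustrative and not part of the project code):

import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class MatchVsTermDemo {

    public static void main(String[] args) {
        // match query: the query string is analyzed first, so querying the analyzed
        // field "id" with the full original value still hits its tokens [wwif5, vp3j4l3gj6vn3h]
        SearchSourceBuilder matchSearch = new SearchSourceBuilder()
                .query(QueryBuilders.matchQuery("id", "wwIF5-vP3J4l3GJ6VN3h"));

        // term query: the input is not analyzed; it only matches a token that equals
        // the input exactly, which is why it is paired with the not_analyzed field "id2"
        SearchSourceBuilder termSearch = new SearchSourceBuilder()
                .query(QueryBuilders.termQuery("id2", "wwIF5-vP3J4l3GJ6VN3h"));

        // the generated JSON bodies can be handed to Jest's Search.Builder,
        // exactly as in the service implementation above
        System.out.println(matchSearch.toString());
        System.out.println(termSearch.toString());
    }
}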


Reposted from blog.csdn.net/tonglei111/article/details/84320325