Example: build an inverted index for the following three documents after removing stop words
[figure: the three example documents and the resulting inverted index]
Query: find the documents that contain "搜索引擎" (search engine)
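A rough sketch of the same flow in ES (the index name and document contents below are made up, not the ones from the figure): index a few documents, then run a match query. ES maintains the inverted index per field automatically and answers the query by walking the posting list of each analyzed term.

PUT docs/doc/1
{ "content": "ElasticSearch 是一个分布式搜索引擎" }

PUT docs/doc/2
{ "content": "Solr 也是一个流行的搜索引擎" }

GET docs/_search
{
  "query": {
    "match": { "content": "搜索引擎" }
  }
}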
The term dictionary is usually implemented as a B+ tree; an animated construction process can be explored at: B+ Tree Visualization
About B-trees and B+ trees:
[figures: B-tree and B+ tree structure diagrams]
In a B+ tree, internal nodes hold the index and leaf nodes hold the data. Here the term dictionary is the B+ tree index and the posting lists are the data; combined, the structure looks like this:
Note: how are Chinese and English terms compared when ordering the B+ tree index? By Unicode code point, or by pinyin?
[figure: term dictionary (B+ tree) combined with posting lists]
ES stores documents in JSON format. A document contains multiple fields, and each field gets its own inverted index.
Tokenization is the process of converting text into a sequence of terms (tokens). It is also called text analysis, and in ES it is referred to as Analysis.
An analyzer is the ES component dedicated to tokenization. It is composed of Character Filters, a Tokenizer, and Token Filters.
They are invoked in that order: Character Filters preprocess the raw text, the Tokenizer splits it into tokens, and Token Filters then modify, add, or remove tokens, as the sketch below shows.
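The whole pipeline can be exercised in a single _analyze request. This sketch (sample text made up) strips HTML with a character filter, splits with the standard tokenizer, then lowercases with a token filter:

POST _analyze
{
  "char_filter": ["html_strip"],
  "tokenizer": "standard",
  "filter": ["lowercase"],
  "text": ["<b>Quick Foxes</b>"]
}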
ES provides an API for testing tokenization so you can verify analysis results; the endpoint is _analyze. You can analyze text either against a field of an existing index (using that field's mapped analyzer) or by naming an analyzer, tokenizer, and filters directly in the request:
POST test_index/doc
{
  "username": "whirly",
  "age": 22
}

POST test_index/_analyze
{
  "field": "username",
  "text": ["hello world"]
}
POST _analyze
{
  "tokenizer": "standard",
  "filter": ["lowercase"],
  "text": ["Hello World"]
}
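For the request above, the response should contain two lowercased tokens, roughly along these lines (offsets as expected for this input):

{
  "tokens": [
    { "token": "hello", "start_offset": 0, "end_offset": 5, "type": "<ALPHANUM>", "position": 0 },
    { "token": "world", "start_offset": 6, "end_offset": 11, "type": "<ALPHANUM>", "position": 1 }
  ]
}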
ES ships with several built-in analyzers, including Standard (the default), Simple, Whitespace, Stop, Keyword, Pattern, and a set of language-specific analyzers.
Example: the stop analyzer
POST _analyze
{
  "analyzer": "stop",
  "text": ["The 2 QUICK Brown Foxes jumped over the lazy dog's bone."]
}
# Result
{
  "tokens": [
    {
      "token": "quick",
      "start_offset": 6,
      "end_offset": 11,
      "type": "word",
      "position": 1
    },
    {
      "token": "brown",
      "start_offset": 12,
      "end_offset": 17,
      "type": "word",
      "position": 2
    },
    {
      "token": "foxes",
      "start_offset": 18,
      "end_offset": 23,
      "type": "word",
      "position": 3
    },
    {
      "token": "jumped",
      "start_offset": 24,
      "end_offset": 30,
      "type": "word",
      "position": 4
    },
    {
      "token": "over",
      "start_offset": 31,
      "end_offset": 35,
      "type": "word",
      "position": 5
    },
    {
      "token": "lazy",
      "start_offset": 40,
      "end_offset": 44,
      "type": "word",
      "position": 7
    },
    {
      "token": "dog",
      "start_offset": 45,
      "end_offset": 48,
      "type": "word",
      "position": 8
    },
    {
      "token": "s",
      "start_offset": 49,
      "end_offset": 50,
      "type": "word",
      "position": 9
    },
    {
      "token": "bone",
      "start_offset": 51,
      "end_offset": 55,
      "type": "word",
      "position": 10
    }
  ]
}
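The stop analyzer's stop-word list can also be customized in the index settings. A minimal sketch (the index and analyzer names here are made up):

PUT stop_test_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_stop_analyzer": {
          "type": "stop",
          "stopwords": ["the", "over", "lazy"]
        }
      }
    }
  }
}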
# To analyze Chinese text we can install the IK analysis plugin.
# Run this in the Elasticsearch installation directory, then restart ES
bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.3.0/elasticsearch-analysis-ik-6.3.0.zip

# If the install fails because of a slow network, download the zip archive first,
# point the command below at the actual path, run it, and then restart ES
bin/elasticsearch-plugin install file:///path/to/elasticsearch-analysis-ik-6.3.0.zip
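To confirm the plugin was registered, list the installed plugins before moving on:

bin/elasticsearch-plugin list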
IK provides two analyzers: ik_smart (coarse-grained) and ik_max_word (finest-grained). First ik_smart:

POST _analyze
{
  "analyzer": "ik_smart",
  "text": ["公安部:各地校车将享最高路权"]
}

# Result
{
  "tokens": [
    {
      "token": "公安部",
      "start_offset": 0,
      "end_offset": 3,
      "type": "CN_WORD",
      "position": 0
    },
    {
      "token": "各地",
      "start_offset": 4,
      "end_offset": 6,
      "type": "CN_WORD",
      "position": 1
    },
    {
      "token": "校车",
      "start_offset": 6,
      "end_offset": 8,
      "type": "CN_WORD",
      "position": 2
    },
    {
      "token": "将",
      "start_offset": 8,
      "end_offset": 9,
      "type": "CN_CHAR",
      "position": 3
    },
    {
      "token": "享",
      "start_offset": 9,
      "end_offset": 10,
      "type": "CN_CHAR",
      "position": 4
    },
    {
      "token": "最高",
      "start_offset": 10,
      "end_offset": 12,
      "type": "CN_WORD",
      "position": 5
    },
    {
      "token": "路",
      "start_offset": 12,
      "end_offset": 13,
      "type": "CN_CHAR",
      "position": 6
    },
    {
      "token": "权",
      "start_offset": 13,
      "end_offset": 14,
      "type": "CN_CHAR",
      "position": 7
    }
  ]
}
ik_max_word produces the finest-grained segmentation, emitting overlapping terms (note 公安部, 公安, and 部 below):

POST _analyze
{
  "analyzer": "ik_max_word",
  "text": ["公安部:各地校车将享最高路权"]
}

# Result
{
  "tokens": [
    {
      "token": "公安部",
      "start_offset": 0,
      "end_offset": 3,
      "type": "CN_WORD",
      "position": 0
    },
    {
      "token": "公安",
      "start_offset": 0,
      "end_offset": 2,
      "type": "CN_WORD",
      "position": 1
    },
    {
      "token": "部",
      "start_offset": 2,
      "end_offset": 3,
      "type": "CN_CHAR",
      "position": 2
    },
    {
      "token": "各地",
      "start_offset": 4,
      "end_offset": 6,
      "type": "CN_WORD",
      "position": 3
    },
    {
      "token": "校车",
      "start_offset": 6,
      "end_offset": 8,
      "type": "CN_WORD",
      "position": 4
    },
    {
      "token": "将",
      "start_offset": 8,
      "end_offset": 9,
      "type": "CN_CHAR",
      "position": 5
    },
    {
      "token": "享",
      "start_offset": 9,
      "end_offset": 10,
      "type": "CN_CHAR",
      "position": 6
    },
    {
      "token": "最高",
      "start_offset": 10,
      "end_offset": 12,
      "type": "CN_WORD",
      "position": 7
    },
    {
      "token": "路",
      "start_offset": 12,
      "end_offset": 13,
      "type": "CN_CHAR",
      "position": 8
    },
    {
      "token": "权",
      "start_offset": 13,
      "end_offset": 14,
      "type": "CN_CHAR",
      "position": 9
    }
  ]
}
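A common way to use the two together (the index, type, and field names below are hypothetical) is to index with ik_max_word for maximum recall and analyze queries with ik_smart:

PUT news_index
{
  "mappings": {
    "doc": {
      "properties": {
        "title": {
          "type": "text",
          "analyzer": "ik_max_word",
          "search_analyzer": "ik_smart"
        }
      }
    }
  }
}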
When the built-in analyzers cannot meet your needs, you can customize analysis by defining your own combination of Character Filters, Tokenizer, and Token Filters.
Example: the html_strip character filter with the keyword tokenizer

POST _analyze
{
  "tokenizer": "keyword",
  "char_filter": ["html_strip"],
  "text": ["<p>I'm so <b>happy</b>!</p>"]
}

# Result
{
  "tokens": [
    {
      "token": """
I'm so happy!
""",
      "start_offset": 0,
      "end_offset": 32,
      "type": "word",
      "position": 0
    }
  ]
}
Example: the path_hierarchy tokenizer

POST _analyze
{
  "tokenizer": "path_hierarchy",
  "text": ["/path/to/file"]
}

# Result
{
  "tokens": [
    {
      "token": "/path",
      "start_offset": 0,
      "end_offset": 5,
      "type": "word",
      "position": 0
    },
    {
      "token": "/path/to",
      "start_offset": 0,
      "end_offset": 8,
      "type": "word",
      "position": 0
    },
    {
      "token": "/path/to/file",
      "start_offset": 0,
      "end_offset": 13,
      "type": "word",
      "position": 0
    }
  ]
}
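The tokenizer also accepts parameters such as delimiter, replacement, and reverse. A sketch with reverse enabled, which tokenizes the hierarchy from the other end (parameters chosen purely for illustration; output not shown):

POST _analyze
{
  "tokenizer": {
    "type": "path_hierarchy",
    "reverse": true
  },
  "text": ["/path/to/file"]
}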
Example: chaining the stop and lowercase filters with a custom ngram filter

POST _analyze
{
  "text": [
    "a Hello World!"
  ],
  "tokenizer": "standard",
  "filter": [
    "stop",
    "lowercase",
    {
      "type": "ngram",
      "min_gram": 4,
      "max_gram": 4
    }
  ]
}

# Result
{
  "tokens": [
    {
      "token": "hell",
      "start_offset": 2,
      "end_offset": 7,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "ello",
      "start_offset": 2,
      "end_offset": 7,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "worl",
      "start_offset": 8,
      "end_offset": 13,
      "type": "<ALPHANUM>",
      "position": 2
    },
    {
      "token": "orld",
      "start_offset": 8,
      "end_offset": 13,
      "type": "<ALPHANUM>",
      "position": 2
    }
  ]
}
A custom analyzer is configured in the index settings, by defining char_filter, tokenizer, filter, and analyzer entries under the analysis section.
Custom analyzer example:
PUT test_index_1
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "char_filter": [
            "html_strip"
          ],
          "filter": [
            "uppercase",
            "asciifolding"
          ]
        }
      }
    }
  }
}
POST test_index_1/_analyze
{
  "analyzer": "my_custom_analyzer",
  "text": ["<p>I'm so <b>happy</b>!</p>"]
}

# Result
{
  "tokens": [
    {
      "token": "I'M",
      "start_offset": 3,
      "end_offset": 11,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "SO",
      "start_offset": 12,
      "end_offset": 14,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "HAPPY",
      "start_offset": 18,
      "end_offset": 27,
      "type": "<ALPHANUM>",
      "position": 2
    }
  ]
}
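To actually use the custom analyzer, reference it from a field mapping. A sketch against the index created above (the type and field names are hypothetical):

PUT test_index_1/_mapping/doc
{
  "properties": {
    "content": {
      "type": "text",
      "analyzer": "my_custom_analyzer"
    }
  }
}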
Analysis happens at two points: at index time, when the text fields of an incoming document are analyzed before being written to the inverted index, and at search time, when the query string is analyzed before being matched against the index.
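The search-time analyzer can even be overridden per query. A sketch reusing the earlier test_index example:

GET test_index/_search
{
  "query": {
    "match": {
      "username": {
        "query": "Hello World",
        "analyzer": "simple"
      }
    }
  }
}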
For more content, visit my personal website: http://laijianfeng.org