Suppose you want to fetch page 100,000 of a result set at 100 records per page. For Elasticsearch this means from=10,000,000, size=100, so each shard must collect its top 10,000,100 documents, and the coordinating node merges them all and keeps only the final 100. With 5 shards that is 5 × 10,000,100 ≈ 50 million documents for a single request. Now imagine 100 such concurrent requests: roughly 5 billion documents in flight, and at about 2 KB per document that is on the order of 9,000 GB of memory. No machine can serve a query like that. This is why ES pagination feels fast on the first pages, a query around page 100 may already take 4-5 s, and past page 1,000 (10,000 results at the default page size of 10) it simply fails with an error.
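The arithmetic above can be sketched as a quick back-of-the-envelope calculation (a minimal illustration; the shard count, concurrency, and per-document size are the assumptions from the text, not measured values):

```java
// Back-of-the-envelope memory cost of a deep from/size query, using the
// assumptions from the text: from=10,000,000, size=100, 5 shards,
// 100 concurrent requests, ~2 KB per document.
public class DeepPagingCost {
    static long memoryGb(long from, long size, long shards,
                         long concurrent, long docKb) {
        long docsPerShard = from + size;           // every shard must rank from+size docs
        long docsPerQuery = shards * docsPerShard; // the coordinating node merges them all
        long totalDocs = concurrent * docsPerQuery;
        return totalDocs * docKb / (1024L * 1024); // KB -> GB
    }

    public static void main(String[] args) {
        // roughly 9,000+ GB, which no single machine can serve
        System.out.println(memoryGb(10_000_000L, 100L, 5L, 100L, 2L));
    }
}
```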
NativeSearchQueryBuilder query = new NativeSearchQueryBuilder();
// Collect all conditions into one bool query: calling withQuery() twice
// would overwrite the first query instead of combining the two.
BoolQueryBuilder boolQuery = QueryBuilders.boolQuery();
if (!StringUtils.isEmpty(ulqBean.getStartTime()) && !StringUtils.isEmpty(ulqBean.getEndTime())) {
    boolQuery.must(QueryBuilders.rangeQuery("logTime").from(ulqBean.getStartTime()).to(ulqBean.getEndTime()));
}
if (!StringUtils.isEmpty(ulqBean.getSearch())) {
    boolQuery.must(QueryBuilders.boolQuery()
            .should(QueryBuilders.wildcardQuery("content", "*" + ulqBean.getSearch() + "*"))
            .should(QueryBuilders.wildcardQuery("code", "*" + ulqBean.getSearch() + "*"))
            .should(QueryBuilders.wildcardQuery("name", "*" + ulqBean.getSearch() + "*")));
}
query.withQuery(boolQuery);
query.withSort(new FieldSortBuilder("logTime").order(SortOrder.DESC));
if (ulqBean.getPageNo() != null && ulqBean.getPageSize() != null) {
    // ES pages are 0-based
    query.withPageable(new PageRequest(ulqBean.getPageNo() - 1, ulqBean.getPageSize()));
}
NativeSearchQuery build = query.build();
org.springframework.data.domain.Page<ConductAudits> conductAuditsPage = template.queryForPage(build, ConductAudits.class);
ulqBean.getPagination().setTotal((int) conductAuditsPage.getTotalElements());
ulqBean.getPagination().setList(conductAuditsPage.getContent());
[root@localhost elasticsearch-2.4.6]# curl -XGET 'http://11.12.84.126:9200/_audit_0102/_log_0102/_search?size=2&from=10000&pretty=true'
{
"error" : {
"root_cause" : [ {
"type" : "query_phase_execution_exception",
"reason" : "Result window is too large, from + size must be less than or equal to: [10000] but was [10002]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level parameter."
} ],
"type" : "search_phase_execution_exception",
"reason" : "all shards failed",
"phase" : "query",
"grouped" : true,
"failed_shards" : [ {
"shard" : 0,
"index" : "_audit_0102",
"node" : "f_CQitYESZedx8ZbyZ6bHA",
"reason" : {
"type" : "query_phase_execution_exception",
"reason" : "Result window is too large, from + size must be less than or equal to: [10000] but was [10002]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level parameter."
}
} ]
},
"status" : 500
}
If the size of your data is under your control and you really need deeper paging, you can enlarge the window with the following command:
curl -XPUT "http://11.12.84.126:9200/_audit_0102/_settings" -d '{
"index": {
"max_result_window": 100000
}
}'
This only lets you page somewhat deeper; it does not solve the deep-paging problem at its root, and as the page number grows, resource consumption climbs steeply until the node runs out of memory (OOM).
So if your product manager asks you to paginate the conventional way, you can tell him plainly that the system cannot support paging this deep: the deeper you go, the worse the performance.
Still, deep-paging scenarios do exist in practice. In some cases we can convince the product manager that almost nobody flips back through old history, but in others the system may produce millions of records a day, so we have to analyze each scenario on its own terms.
A scroll query takes a one-time snapshot on the first search, and each subsequent request uses the scroll id returned by the previous one, much like a cursor in a relational database: every "slide" fetches the next batch from that cursor. This performs far better than from/size paging; each batch typically comes back in milliseconds.
Note: scroll does not support jumping to an arbitrary page.
Use case: queries that do not need real-time results, such as the endless-scrolling feeds of Weibo or Toutiao.
The Java implementation:
BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
QueryBuilder builder = QueryBuilders.queryStringQuery("123456").field("code");
boolQueryBuilder.must(QueryBuilders.termQuery("logType", "10"))
        .must(builder);
// Initial search: returns the first batch plus a scroll id kept alive for 5 minutes
SearchResponse response1 = client.prepareSearch("_audit_0221").setTypes("_log_0221")
        .setQuery(boolQueryBuilder)
        .setSearchType(SearchType.DEFAULT)
        .setSize(10).setScroll(TimeValue.timeValueMinutes(5))
        .addSort("logTime", SortOrder.DESC)
        .execute().actionGet();
while (response1.getHits().hits().length > 0) {
    for (SearchHit searchHit : response1.getHits().hits()) {
        System.out.println(searchHit.getSource().toString()); // business handling goes here
    }
    // Fetch the next batch using the scroll id from the previous response
    response1 = client.prepareSearchScroll(response1.getScrollId()).setScroll(TimeValue.timeValueMinutes(5))
            .execute().actionGet();
}
If this is a one-off search, clear the scroll context afterwards; it releases the memory the snapshot holds on the cluster.
ClearScrollRequest request = new ClearScrollRequest();
request.addScrollId(scrollId); // the scroll id returned by the last search response
client.clearScroll(request);
Use case: we have 5 million users and need to iterate over all of them to push data, with no ordering requirement. Here we can use scroll-scan.
Usage:
// SCAN mode skips scoring and sorting entirely, so there is no addSort():
// documents come back in arbitrary order, which is fine for this use case
SearchResponse response = client.prepareSearch("_audit_0221").setTypes("_log_0221")
        .setQuery(boolQueryBuilder)
        .setSearchType(SearchType.SCAN)
        .setSize(5).setScroll(TimeValue.timeValueMinutes(5))
        .execute().actionGet();
// The first response of a scan contains no hits, only the scroll id
SearchResponse response1 = client.prepareSearchScroll(response.getScrollId()).setScroll(TimeValue.timeValueMinutes(5))
        .execute().actionGet();
while (response1.getHits().hits().length > 0) {
    for (SearchHit searchHit : response1.getHits().hits()) {
        System.out.println(searchHit.getSource().toString());
    }
    response1 = client.prepareSearchScroll(response1.getScrollId()).setScroll(TimeValue.timeValueMinutes(5))
            .execute().actionGet();
}
QueryBuilder builder = QueryBuilders.boolQuery().filter(QueryBuilders.termQuery("code", "123456"));
SearchQuery searchQuery = new NativeSearchQueryBuilder().withIndices("_audit_0221")
        .withTypes("_log_0221").withQuery(builder).withPageable(new PageRequest(0, 2)).build();
// scan() returns a scroll id; the second argument is the scroll keep-alive in milliseconds
String scrollId = template.scan(searchQuery, 100000, false);
while (true) {
    Page<ConductAudits> scroll = template.scroll(scrollId, 1000, ConductAudits.class);
    if (scroll.getContent().isEmpty()) {
        break;
    }
    for (ConductAudits conductAudits : scroll.getContent()) {
        System.out.println(JSON.toJSONString(conductAudits));
    }
}
## 7. Summary
PS: the Elasticsearch API differs slightly between versions, but the principles are the same. Most of the code in this article targets ES 2.4.6.