Chapter 4: Elasticsearch Advanced
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}
{
  "number_of_replicas": 2
}
Of course, simply adding more replica shards to a cluster with the same number of nodes does not improve performance, because each shard then gets a smaller share of each node's resources. To scale throughput you also need to add hardware.
The node we shut down was the master node, and a cluster must have a master node to work properly, so the first thing that happens is the election of a new master node.
Viewing the cluster through the elasticsearch-head plugin, we can see that our cluster is now one with three nodes and one index.
A new index has 1 replica shard by default, which means that to satisfy the required quorum there should be two active shard copies. However, these default settings would prevent us from doing anything on a single-node cluster.
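The quorum mentioned above is commonly described as `int((primary + number_of_replicas) / 2) + 1`. A minimal sketch of that arithmetic (the function name and structure are illustrative, not from the original text):

```python
def shard_quorum(number_of_replicas: int, primaries: int = 1) -> int:
    """Quorum of active shard copies required per shard group:
    int((primary + number_of_replicas) / 2) + 1."""
    return (primaries + number_of_replicas) // 2 + 1

# With the default of 1 replica, the quorum is 2 active copies,
# which a single-node cluster cannot provide.
print(shard_quorum(1))  # 2
```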
{
  "settings": {
    "refresh_interval": "30s"
  }
}
# Disable automatic refresh
PUT /users/_settings
{ "refresh_interval": -1 }

# Refresh every second
PUT /users/_settings
{ "refresh_interval": "1s" }
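A common use of these settings is a large bulk import: disable refresh first, load the data, then restore the interval and refresh once so the documents become searchable. A sketch of that sequence (the index name `users` follows the example above; the bulk request body is elided):

```
# Disable refresh before a large bulk load
PUT /users/_settings
{ "refresh_interval": -1 }

# ... bulk-index documents here ...

# Restore the interval, then make everything searchable at once
PUT /users/_settings
{ "refresh_interval": "1s" }

POST /users/_refresh
```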
GET http://localhost:9200/_analyze
{
  "analyzer": "standard",
  "text": "Text to analyze"
}
{
  "tokens": [
    { "token": "text", "start_offset": 0, "end_offset": 4, "type": "<ALPHANUM>", "position": 1 },
    { "token": "to", "start_offset": 5, "end_offset": 7, "type": "<ALPHANUM>", "position": 2 },
    { "token": "analyze", "start_offset": 8, "end_offset": 15, "type": "<ALPHANUM>", "position": 3 }
  ]
}
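For plain ASCII text, the standard analyzer's behavior can be approximated by splitting on non-alphanumeric boundaries and lowercasing. A rough sketch (positions here are 0-based, and this ignores the Unicode segmentation rules the real analyzer applies):

```python
import re

def standard_analyze_sketch(text: str):
    # Approximate the standard analyzer for ASCII input:
    # split into alphanumeric runs, lowercase, record character offsets.
    return [
        {
            "token": m.group().lower(),
            "start_offset": m.start(),
            "end_offset": m.end(),
            "position": i,
        }
        for i, m in enumerate(re.finditer(r"[A-Za-z0-9]+", text))
    ]

tokens = standard_analyze_sketch("Text to analyze")
print([t["token"] for t in tokens])  # ['text', 'to', 'analyze']
```

The offsets match the response above: `text` spans 0-4, `to` spans 5-7, `analyze` spans 8-15.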
# GET http://localhost:9200/_analyze
{
  "text": "测试单词"
}
{
  "tokens": [
    { "token": "测", "start_offset": 0, "end_offset": 1, "type": "<IDEOGRAPHIC>", "position": 0 },
    { "token": "试", "start_offset": 1, "end_offset": 2, "type": "<IDEOGRAPHIC>", "position": 1 },
    { "token": "单", "start_offset": 2, "end_offset": 3, "type": "<IDEOGRAPHIC>", "position": 2 },
    { "token": "词", "start_offset": 3, "end_offset": 4, "type": "<IDEOGRAPHIC>", "position": 3 }
  ]
}
# GET http://localhost:9200/_analyze
{
  "text": "测试单词",
  "analyzer": "ik_max_word"
}
{
  "tokens": [
    { "token": "测试", "start_offset": 0, "end_offset": 2, "type": "CN_WORD", "position": 0 },
    { "token": "单词", "start_offset": 2, "end_offset": 4, "type": "CN_WORD", "position": 1 }
  ]
}
# GET http://localhost:9200/_analyze
{
  "text": "弗雷尔卓德",
  "analyzer": "ik_max_word"
}
{
  "tokens": [
    { "token": "弗", "start_offset": 0, "end_offset": 1, "type": "CN_CHAR", "position": 0 },
    { "token": "雷", "start_offset": 1, "end_offset": 2, "type": "CN_CHAR", "position": 1 },
    { "token": "尔", "start_offset": 2, "end_offset": 3, "type": "CN_CHAR", "position": 2 },
    { "token": "卓", "start_offset": 3, "end_offset": 4, "type": "CN_CHAR", "position": 3 },
    { "token": "德", "start_offset": 4, "end_offset": 5, "type": "CN_CHAR", "position": 4 }
  ]
}
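When IK does not know a word (as with 弗雷尔卓德 above), it falls back to single characters. The elasticsearch-analysis-ik plugin supports extension dictionaries via its `IKAnalyzer.cfg.xml`; a sketch of that configuration (the dictionary file name `custom.dic` is illustrative — it is a plain-text file with one word per line, placed next to the config file):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>IK Analyzer extension configuration</comment>
    <!-- Extension dictionary: e.g. put 弗雷尔卓德 on its own line in custom.dic -->
    <entry key="ext_dict">custom.dic</entry>
    <!-- Extension stopword dictionary (left empty here) -->
    <entry key="ext_stopwords"></entry>
</properties>
```

After restarting Elasticsearch, ik_max_word would then keep the listed word as a single token.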
# PUT http://localhost:9200/my_index
{
  "settings": {
    "analysis": {
      "char_filter": {
        "&_to_and": {
          "type": "mapping",
          "mappings": [ "&=> and " ]
        }
      },
      "filter": {
        "my_stopwords": {
          "type": "stop",
          "stopwords": [ "the", "a" ]
        }
      },
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "char_filter": [ "html_strip", "&_to_and" ],
          "tokenizer": "standard",
          "filter": [ "lowercase", "my_stopwords" ]
        }
      }
    }
  }
}
# GET http://127.0.0.1:9200/my_index/_analyze
{
  "text": "The quick & brown fox",
  "analyzer": "my_analyzer"
}
{
  "tokens": [
    { "token": "quick", "start_offset": 4, "end_offset": 9, "type": "<ALPHANUM>", "position": 1 },
    { "token": "and", "start_offset": 10, "end_offset": 11, "type": "<ALPHANUM>", "position": 2 },
    { "token": "brown", "start_offset": 12, "end_offset": 17, "type": "<ALPHANUM>", "position": 3 },
    { "token": "fox", "start_offset": 18, "end_offset": 21, "type": "<ALPHANUM>", "position": 4 }
  ]
}
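The three stages of my_analyzer (char filter, tokenizer, token filters) can be mimicked in a few lines. A simplified sketch that assumes plain-text input (so `html_strip` is a no-op) and ignores offsets:

```python
import re

STOPWORDS = {"the", "a"}

def my_analyzer_sketch(text: str):
    # char_filter stage: map "&" to " and " (the "&_to_and" mapping above)
    text = text.replace("&", " and ")
    # tokenizer stage: rough stand-in for the standard tokenizer
    tokens = [m.group() for m in re.finditer(r"[A-Za-z0-9]+", text)]
    # token filter stage: lowercase, then drop stopwords
    return [t.lower() for t in tokens if t.lower() not in STOPWORDS]

print(my_analyzer_sketch("The quick & brown fox"))  # ['quick', 'and', 'brown', 'fox']
```

The surviving tokens match the response above: `the` is removed by the stop filter and `&` becomes `and`.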
{
  "error": {
    "root_cause": [
      {
        "type": "action_request_validation_exception",
        "reason": "Validation Failed: 1: internal versioning can not be used for optimistic concurrency control. Please use `if_seq_no` and `if_primary_term` instead;"
      }
    ],
    "type": "action_request_validation_exception",
    "reason": "Validation Failed: 1: internal versioning can not be used for optimistic concurrency control. Please use `if_seq_no` and `if_primary_term` instead;"
  },
  "status": 400
}
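This error appears when an update is sent with the legacy `?version=` parameter. As the message says, newer Elasticsearch versions use `if_seq_no` and `if_primary_term` for optimistic concurrency control. A sketch of the corrected form (the index name, document id, and the two values are illustrative; the real `_seq_no` and `_primary_term` come from a prior GET of the document):

```
# Take _seq_no and _primary_term from a previous read of the document
POST /users/_doc/1?if_seq_no=1&if_primary_term=1
{ "field": "updated value" }
```

If another writer has changed the document in the meantime, the request fails with a version conflict instead of silently overwriting.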
# Default port
server.port: 5601
# Address of the ES server
elasticsearch.hosts: ["http://localhost:9200"]
# Index name
kibana.index: ".kibana"
# Chinese locale support
i18n.locale: "zh-CN"