

| Host IP | Components | Role |
|---|---|---|
| 192.168.59.131 | elasticsearch, kibana | Master node |
| 192.168.59.138 | elasticsearch, logstash | Data node 1 |
| 192.168.59.139 | elasticsearch | Data node 2 |
Install JDK 8 on all three machines (OpenJDK is fine):
yum install -y java-1.8.0-openjdk
Configure the hosts file on all three machines:
vim /etc/hosts
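For reference, a minimal sketch of the /etc/hosts entries, mapping the hostnames used later in this article to the IPs from the table above:
# hypothetical /etc/hosts entries (IP-to-hostname mapping from this article)
192.168.59.131 zhdy01
192.168.59.138 zhdy02
192.168.59.139 zhdy03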
The following steps must be performed on all three machines. (I found two ways to install, one via yum and one via rpm; to use yum, you first need to add a yum repository as follows:)
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
vim /etc/yum.repos.d/elastic.repo    //add the following content
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
yum install -y elasticsearch
Since we are running a distributed architecture, the next step is to configure the cluster. (If you prefer the rpm method mentioned above, a sketch follows.)
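As an alternative to yum, a minimal sketch of the rpm-based install, assuming Elasticsearch 6.0.0 (adjust the version to match your environment):
# assumed version 6.0.0; adjust to the release you actually want
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.0.0.rpm
rpm -ivh elasticsearch-6.0.0.rpm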
After installation, let's check which configuration files the package provides:
[root@zhdy01 ~]# rpm -ql elasticsearch
/etc/elasticsearch/elasticsearch.yml    //main configuration file (cluster, ports, settings)
/etc/init.d/elasticsearch       //startup script
/etc/sysconfig/elasticsearch    //service-level settings such as the install location
Service configuration:
[root@zhdy01 ~]# vim /etc/elasticsearch/elasticsearch.yml

network.host sets the bind/listen address. If it is set to 0.0.0.0, Elasticsearch listens on all interfaces; without x-pack installed, anyone can reach this ES instance, which is very insecure. If the machine has an internal IP, it is recommended to bind the internal IP.

Since the hosts file is already configured, I use hostnames here; without that, writing IPs directly also works, as long as the nodes can reach one another. (A sketch of the resulting master-node configuration follows.)
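A minimal sketch of the master node's elasticsearch.yml, assembled from the settings discussed in this article; node.name is an assumption based on the node names shown later, so adjust values to your environment:
# sketch of /etc/elasticsearch/elasticsearch.yml on the master (zhdy01)
cluster.name: zhdya
node.name: zhdy01          # assumed; matches the node name shown in the cluster state output
node.master: true          # eligible to be elected master
node.data: false           # master only; the health output later shows 2 data nodes out of 3
network.host: zhdy01       # bind address (the hosts file already maps this name)
discovery.zen.ping.unicast.hosts: ["zhdy01", "zhdy02", "zhdy03"]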
Once this is set, we need to copy the same configuration to the other two cluster machines:
[root@zhdy01 ~]# scp /etc/elasticsearch/elasticsearch.yml zhdy02:/etc/elasticsearch/elasticsearch.yml 
elasticsearch.yml                                                                                                                    100% 3006     2.1MB/s   00:00    
[root@zhdy01 ~]# scp /etc/elasticsearch/elasticsearch.yml zhdy03:/etc/elasticsearch/elasticsearch.yml 
elasticsearch.yml
Then modify the following on the other two machines:
 cluster.name: zhdya
 node.master: false
 node.data: true
 network.host: set to the IP address (or hostname) bound on each respective node
 discovery.zen.ping.unicast.hosts: ["zhdy01", "zhdy02", "zhdy03"]
Start the service (start the master node first):
systemctl start elasticsearch
[root@zhdy01 ~]# ps aux | grep elasticsearch
elastic+   4045  105 12.1 1549892 186072 ?      Ssl  18:28   0:22 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
root       4074 11.0  0.0 112668   976 pts/0    S+   18:29   0:00 grep --color=auto elasticsearch
All three machines will listen on two ports, 9200 and 9300:
# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1239/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1907/master         
tcp6       0      0 192.168.161.163:9200    :::*                    LISTEN      2244/java           
tcp6       0      0 192.168.161.163:9300    :::*                    LISTEN      2244/java           
tcp6       0      0 :::22                   :::*                    LISTEN      1239/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      1907/master
If the service did not start properly, you will not see anything in the custom log directory; check /var/log/messages instead.
To view the log (the log file name is the one defined in the configuration file):
[root@zhdy01 ~]# less /var/log/elasticsearch/zhdya.log
Turn off the firewall (or, if you prefer to keep firewalld running, open only the required ports; a sketch follows):
[root@zhdy01 ~]# systemctl stop firewalld
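A sketch of opening only the Elasticsearch ports instead of disabling firewalld entirely (assuming a default firewalld setup):
# open only the HTTP (9200) and transport (9300) ports, then reload
firewall-cmd --permanent --add-port=9200/tcp
firewall-cmd --permanent --add-port=9300/tcp
firewall-cmd --reload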
Run the following on the master node.
//Health check
[root@zhdy01 ~]# curl '192.168.59.131:9200/_cluster/health?pretty'
{
  "cluster_name" : "zhdya",
  "status" : "green",       //green means healthy
  "timed_out" : false,      //no timeout
  "number_of_nodes" : 3,    //3 nodes
  "number_of_data_nodes" : 2,   //2 data nodes
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
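The shard counts are all 0 because no index exists yet. As a quick sanity check, you could create a test index (the name test_index here is just an example) and re-run the health check to see the active shard counts change:
# create an example index (arbitrary name), then re-check cluster health
curl -XPUT '192.168.59.131:9200/test_index?pretty'
curl '192.168.59.131:9200/_cluster/health?pretty'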
//Cluster details
[root@zhdy01 ~]# curl '192.168.59.131:9200/_cluster/state?pretty'
{
  "cluster_name" : "zhdya",
  "compressed_size_in_bytes" : 349,
  "version" : 4,
  "state_uuid" : "-hntbcN0TCi_bXklwFEOXg",
  "master_node" : "vHnd51RtQciqfKLdSW_cDg",
  "blocks" : { },
  "nodes" : {
    "B5FAVHyJSry1LTh-Nk8YhA" : {
      "name" : "zhdy03",
      "ephemeral_id" : "3_CsWdhrQXqq2fYBGlIEuw",
      "transport_address" : "192.168.59.139:9300",
      "attributes" : { }
    },
    "vHnd51RtQciqfKLdSW_cDg" : {
      "name" : "zhdy01",
      "ephemeral_id" : "XJOOZUP4T-icKCV9N_c1YA",
      "transport_address" : "192.168.59.131:9300",
      "attributes" : { }
    },
     "mROUnHp5QW6tZtmh2hjYcw" : {
      "name" : "zhdy02",
      "ephemeral_id" : "5qv7WGftTbe3GQu8nJJ7lA",
      "transport_address" : "192.168.59.138:9300",
      "attributes" : { }
    }
  },
//Only a portion of the output is shown above.
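For a more compact view of the cluster members, the _cat API can also be used (output will vary with your environment):
curl '192.168.59.131:9200/_cat/nodes?v'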
Reference: http://zhaoyanblog.com/archives/732.html
Run the following on all three machines:
cd /usr/share/elasticsearch/bin/    (optional)
./elasticsearch-plugin install x-pack    //if the download is slow, fetch the x-pack archive instead (optional)
 
cd /tmp/; wget https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-6.0.0.zip    (optional)
./elasticsearch-plugin install file:///tmp/x-pack-6.0.0.zip    (optional)
 
Start the elasticsearch service:
systemctl enable elasticsearch.service
systemctl start elasticsearch.service
The following only needs to be done on 192.168.59.131.
After x-pack is installed, you can set passwords for the built-in users, as follows:
/usr/share/elasticsearch/bin/x-pack/setup-passwords interactive    (optional)
curl 192.168.59.131:9200 -u elastic    //enter the password to see the output (optional)
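Once passwords are set, unauthenticated requests are rejected, so the earlier checks also need credentials; for example, the health check from above would become:
# -u makes curl prompt for the elastic user's password
curl -u elastic '192.168.59.131:9200/_cluster/health?pretty'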