This chapter covers how to build a Redis cluster.
Redis Cluster's scalability is roughly linear: adding nodes brings a real performance gain. Cluster nodes replicate data between each other asynchronously, which adds a degree of safety.
What Redis Cluster mainly provides is a certain level of availability (in terms of raw performance it is not as fast as a single instance, but that is true of any cluster, so it is hardly a drawback specific to Redis Cluster): when some nodes go down, the cluster remains usable.
Because data is sharded across nodes, even if data is lost, only part of it is lost. With nodes acting as masters and replicas for one another, losing some of them does not lose data either, and you should be taking backups anyway. For a cache database, performance is usually the more important concern.
If you do not have this many virtual machines, a pseudo-cluster works just as well (all instances on one server, distinguished by port and working directory).
Environment preparation is not covered here; see https://cloud.tencent.com/developer/article/1757503
Environment:
Node 1 : 192.168.1.31
Node 2 : 192.168.1.32
Node 3 : 192.168.1.33
Each node runs two instances (ports 6379 and 6380), and the nodes serve as masters and replicas for one another.
Redis only needs to be compiled once; the compiled binaries can then be copied to the other servers (recompiling is only necessary if the environments differ).
For this installation we use the install command (make install also works, but install lets us pick exactly the files we want).
On the first node:
wget https://download.redis.io/releases/redis-5.0.10.tar.gz #download
tar -xvf redis-5.0.10.tar.gz #extract
cd redis-5.0.10 #enter the source directory
make MALLOC=libc #compile
mkdir -p /usr/local/redis-cluster/bin #create the redis cluster directory
install ./src/{redis-benchmark,redis-check-aof,redis-check-rdb,redis-cli,redis-sentinel,redis-server} /usr/local/redis-cluster/bin/
cp -ra redis.conf /usr/local/redis-cluster/redis-cluster-6379.conf #copy the config file for instance 1
cp -ra redis.conf /usr/local/redis-cluster/redis-cluster-6380.conf #copy the config file for instance 2
cd /usr/local/redis-cluster/
mkdir -p /usr/local/redis-cluster/data
mkdir -p /usr/local/redis-cluster/log
Then edit the config file /usr/local/redis-cluster/redis-cluster-6379.conf as follows.
The config of the other instance is the same; just change the port and the file names (see the sed sketch after the listing).
#run as a daemon (i.e., in the background)
daemonize yes
#data (snapshot) directory
dir /usr/local/redis-cluster/data
#redis password. Leave it out while creating the cluster; set it once the cluster is up (the password must be identical on all nodes)
#requirepass 123456
#password used to authenticate against the master
#masterauth 123456
#disable protected mode
protected-mode no
#pid file; make sure the path is writable, otherwise the server will not start
pidfile /var/run/redis-cluster-6379.pid
#log file path
logfile /usr/local/redis-cluster/log/redis-cluster-6379.log
#bind address; ideally the host's specific IP, but to keep things simple we listen on all interfaces here
bind 0.0.0.0
#redis port; 6379 is the default and is best changed
port 6379
#enable AOF persistence
appendonly yes
#AOF file name; a relative path is resolved under dir
appendfilename "appendonly-6379.aof"
#RDB snapshot policy
save 900 1
save 300 10
save 60 10000
#RDB file name
dbfilename dump-6379.rdb
#cluster settings below
#enable cluster mode (commented out by default)
cluster-enabled yes
#cluster config file, in which redis records cluster state
cluster-config-file node-6379.conf
#node timeout (in milliseconds)
cluster-node-timeout 15000
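Rather than editing the 6380 config by hand, one option is to derive it from the finished 6379 config with sed. This is just a sketch under the paths used in this tutorial; in this particular file the string 6379 only appears in the port and the pid/log/AOF/RDB/cluster-config file names, so a blanket substitution is safe here:
cd /usr/local/redis-cluster
#regenerate the 6380 config from the edited 6379 one; every occurrence of 6379 (port and file names) becomes 6380
sed 's/6379/6380/g' redis-cluster-6379.conf > redis-cluster-6380.conf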
When done, you get the following directory structure:
[root@ddcw31 redis-cluster]# tree
.
├── bin
│ ├── redis-benchmark
│ ├── redis-check-aof
│ ├── redis-check-rdb
│ ├── redis-cli
│ ├── redis-sentinel
│ └── redis-server
├── data
├── log
├── redis-cluster-6379.conf
└── redis-cluster-6380.conf
3 directories, 8 files
[root@ddcw31 redis-cluster]#
Run the following on the first node to copy the installation to the other nodes (if SSH keys are not set up, you will be prompted for passwords):
cd /usr/local/
tar -cvf /tmp/redis-cluster-5.0.10.tar.gz redis-cluster
scp /tmp/redis-cluster-5.0.10.tar.gz 192.168.1.32:/tmp/
scp /tmp/redis-cluster-5.0.10.tar.gz 192.168.1.33:/tmp/
ssh 192.168.1.32 -C "tar -xvf /tmp/redis-cluster-5.0.10.tar.gz -C /usr/local"
ssh 192.168.1.33 -C "tar -xvf /tmp/redis-cluster-5.0.10.tar.gz -C /usr/local"
Then, still on the first node, run the following to start all the instances (each is started with its own config file):
/usr/local/redis-cluster/bin/redis-server /usr/local/redis-cluster/redis-cluster-6379.conf #start the first instance
/usr/local/redis-cluster/bin/redis-server /usr/local/redis-cluster/redis-cluster-6380.conf #start the second instance
#start the two instances on the second node
ssh 192.168.1.32 -C "/usr/local/redis-cluster/bin/redis-server /usr/local/redis-cluster/redis-cluster-6379.conf"
ssh 192.168.1.32 -C "/usr/local/redis-cluster/bin/redis-server /usr/local/redis-cluster/redis-cluster-6380.conf"
#start the two instances on the third node
ssh 192.168.1.33 -C "/usr/local/redis-cluster/bin/redis-server /usr/local/redis-cluster/redis-cluster-6379.conf"
ssh 192.168.1.33 -C "/usr/local/redis-cluster/bin/redis-server /usr/local/redis-cluster/redis-cluster-6380.conf"
Once you have confirmed that every instance started successfully, the cluster can be created (a quick sanity check is sketched below).
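A quick check before creating the cluster is to PING every instance; the loop below is a sketch (no password is set yet at this stage, so no -a option is needed):
#every instance should answer PONG
for h in 192.168.1.31 192.168.1.32 192.168.1.33; do
  for p in 6379 6380; do
    echo -n "$h:$p -> "
    /usr/local/redis-cluster/bin/redis-cli -h $h -p $p ping
  done
done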
Creating the cluster:
The official examples used the Ruby script ./src/redis-trib.rb, but that is cumbersome and requires a Ruby environment.
Since Redis 5.0, redis-cli itself can create the cluster, so that is what we use here.
From the help output, the create syntax is: redis-cli --cluster create host1:port1 ... hostN:portN --cluster-replicas <arg>
--cluster-replicas 1 means each master gets one replica (if you want more replicas per master, add more instances/nodes accordingly).
[root@ddcw31 redis-cluster]# /usr/local/redis-cluster/bin/redis-cli --cluster help
Cluster Manager Commands:
create host1:port1 ... hostN:portN
--cluster-replicas <arg>
check host:port
--cluster-search-multiple-owners
info host:port
fix host:port
--cluster-search-multiple-owners
reshard host:port
--cluster-from <arg>
--cluster-to <arg>
--cluster-slots <arg>
--cluster-yes
--cluster-timeout <arg>
--cluster-pipeline <arg>
--cluster-replace
rebalance host:port
--cluster-weight <node1=w1...nodeN=wN>
--cluster-use-empty-masters
--cluster-timeout <arg>
--cluster-simulate
--cluster-pipeline <arg>
--cluster-threshold <arg>
--cluster-replace
add-node new_host:new_port existing_host:existing_port
--cluster-slave
--cluster-master-id <arg>
del-node host:port node_id
call host:port command arg arg .. arg
set-timeout host:port milliseconds
import host:port
--cluster-from <arg>
--cluster-copy
--cluster-replace
help
For check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.
[root@ddcw31 redis-cluster]#
So in our environment the cluster is created with the following command:
/usr/local/redis-cluster/bin/redis-cli --cluster create \
192.168.1.31:6379 192.168.1.31:6380 \
192.168.1.32:6379 192.168.1.32:6380 \
192.168.1.33:6379 192.168.1.33:6380 \
--cluster-replicas 1
The process looks like this; when asked to confirm, type yes:
[root@ddcw31 redis-cluster]# /usr/local/redis-cluster/bin/redis-cli --cluster create \
> 192.168.1.31:6379 192.168.1.31:6380 \
> 192.168.1.32:6379 192.168.1.32:6380 \
> 192.168.1.33:6379 192.168.1.33:6380 \
> --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.1.32:6380 to 192.168.1.31:6379
Adding replica 192.168.1.33:6380 to 192.168.1.32:6379
Adding replica 192.168.1.31:6380 to 192.168.1.33:6379
M: 169b0df771d45f27383add0304df59d2fbae6c62 192.168.1.31:6379
slots:[0-5460] (5461 slots) master
S: 60bee426b74f78863ebde556ccdf3be318076e2a 192.168.1.31:6380
replicates 61a3730c0bc4f8dd0adc6cb8361468b111ae107f
M: 4b277b33572bfdfdae734da9a006ff5d7ee05d46 192.168.1.32:6379
slots:[5461-10922] (5462 slots) master
S: 6c41bb62bc3857d2c9549873d79f00f4a34475d2 192.168.1.32:6380
replicates 169b0df771d45f27383add0304df59d2fbae6c62
M: 61a3730c0bc4f8dd0adc6cb8361468b111ae107f 192.168.1.33:6379
slots:[10923-16383] (5461 slots) master
S: fc5978c802368c699e57405d3c1ba867bc5fe312 192.168.1.33:6380
replicates 4b277b33572bfdfdae734da9a006ff5d7ee05d46
Can I set the above configuration? (type 'yes' to accept): yes #type yes here manually
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
..
>>> Performing Cluster Check (using node 192.168.1.31:6379)
M: 169b0df771d45f27383add0304df59d2fbae6c62 192.168.1.31:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: 4b277b33572bfdfdae734da9a006ff5d7ee05d46 192.168.1.32:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: fc5978c802368c699e57405d3c1ba867bc5fe312 192.168.1.33:6380
slots: (0 slots) slave
replicates 4b277b33572bfdfdae734da9a006ff5d7ee05d46
S: 6c41bb62bc3857d2c9549873d79f00f4a34475d2 192.168.1.32:6380
slots: (0 slots) slave
replicates 169b0df771d45f27383add0304df59d2fbae6c62
M: 61a3730c0bc4f8dd0adc6cb8361468b111ae107f 192.168.1.33:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: 60bee426b74f78863ebde556ccdf3be318076e2a 192.168.1.31:6380
slots: (0 slots) slave
replicates 61a3730c0bc4f8dd0adc6cb8361468b111ae107f
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@ddcw31 redis-cluster]#
Setting the password: after setting it at runtime you must also persist it to the config file, otherwise it is gone after a restart (here I take the shortcut of just running config rewrite; in production it is advisable to disable the CONFIG command, e.g. with rename-command).
Set the password on every instance; otherwise some nodes may end up holding no data, and the cluster's high availability will also suffer. (A loop covering all six instances is sketched after the session below.)
[root@ddcw31 redis-cluster]# /usr/local/redis-cluster/bin/redis-cli -h 192.168.1.31 -p 6379 -c
192.168.1.31:6379> config set requirepass 123456
OK
192.168.1.31:6379> config set masterauth 123456
(error) NOAUTH Authentication required.
192.168.1.31:6379> auth 123456
OK
192.168.1.31:6379> config set requirepass 123456
OK
192.168.1.31:6379> config rewrite
OK
192.168.1.31:6379>
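Repeating the interactive session above on all six instances is tedious; the loop below is a sketch that should achieve the same thing (it assumes the instances are still passwordless when it runs, and sets masterauth before requirepass so that each subsequent call can still connect):
#set the same password on every instance and persist it into the config files
for h in 192.168.1.31 192.168.1.32 192.168.1.33; do
  for p in 6379 6380; do
    /usr/local/redis-cluster/bin/redis-cli -h $h -p $p config set masterauth 123456
    /usr/local/redis-cluster/bin/redis-cli -h $h -p $p config set requirepass 123456
    /usr/local/redis-cluster/bin/redis-cli -h $h -p $p -a 123456 config rewrite
  done
done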
The cluster information now looks like this:
192.168.1.33:6379> cluster nodes
fc5978c802368c699e57405d3c1ba867bc5fe312 192.168.1.33:6380@16380 slave 4b277b33572bfdfdae734da9a006ff5d7ee05d46 0 1608532872000 6 connected
6c41bb62bc3857d2c9549873d79f00f4a34475d2 192.168.1.32:6380@16380 slave 169b0df771d45f27383add0304df59d2fbae6c62 0 1608532872000 4 connected
60bee426b74f78863ebde556ccdf3be318076e2a 192.168.1.31:6380@16380 slave 61a3730c0bc4f8dd0adc6cb8361468b111ae107f 0 1608532875050 5 connected
4b277b33572bfdfdae734da9a006ff5d7ee05d46 192.168.1.32:6379@16379 master - 0 1608532873000 3 connected 5461-10922
61a3730c0bc4f8dd0adc6cb8361468b111ae107f 192.168.1.33:6379@16379 myself,master - 0 1608532870000 5 connected 10923-16383
169b0df771d45f27383add0304df59d2fbae6c62 192.168.1.31:6379@16379 master - 0 1608532874034 1 connected 0-5460
192.168.1.33:6379>
For the meaning of each field, see: http://www.redis.cn/commands/cluster-nodes.html
In this environment:
fc5978c802368c699e57405d3c1ba867bc5fe312 is the node ID of 192.168.1.33:6380 (the others follow the same pattern); node IDs are used when adding or removing cluster nodes.
Master               Replica              Hash slots
192.168.1.31:6379    192.168.1.32:6380    0-5460
192.168.1.32:6379    192.168.1.33:6380    5461-10922
192.168.1.33:6379    192.168.1.31:6380    10923-16383
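The same mapping can be cross-checked with redis-cli's cluster manager (the check subcommand from the help output above), run against any node:
#verify slot coverage and replica assignment; add -a 123456 once the password has been set
/usr/local/redis-cluster/bin/redis-cli -a 123456 --cluster check 192.168.1.31:6379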
Starting the cluster simply means starting all the nodes, so on every node run:
/usr/local/redis-cluster/bin/redis-server /usr/local/redis-cluster/redis-cluster-6379.conf
/usr/local/redis-cluster/bin/redis-server /usr/local/redis-cluster/redis-cluster-6380.conf
Stopping the cluster means shutting down every instance:
[root@ddcw31 redis-cluster]# /usr/local/redis-cluster/bin/redis-cli -h 192.168.1.31 -p 6379 -c
192.168.1.31:6379> auth 123456
OK
192.168.1.31:6379> shutdown
not connected> exit
[root@ddcw31 redis-cluster]#
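Shutting the instances down one by one works, but a loop like this sketch (using the password set earlier) stops the whole cluster in one go:
#gracefully shut down all 6 instances
for h in 192.168.1.31 192.168.1.32 192.168.1.33; do
  for p in 6379 6380; do
    /usr/local/redis-cluster/bin/redis-cli -h $h -p $p -a 123456 shutdown
  done
done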
To check cluster status, you can look at the file specified by cluster-config-file,
or log in and run cluster nodes:
[root@ddcw31 redis-cluster]# tail /usr/local/redis-cluster/data/node-6379.conf
169b0df771d45f27383add0304df59d2fbae6c62 192.168.1.31:6379@16379 myself,master - 0 1608532289000 1 connected 0-5460
4b277b33572bfdfdae734da9a006ff5d7ee05d46 192.168.1.32:6379@16379 master - 0 1608532289453 3 connected 5461-10922
fc5978c802368c699e57405d3c1ba867bc5fe312 192.168.1.33:6380@16380 slave 4b277b33572bfdfdae734da9a006ff5d7ee05d46 0 1608532290000 6 connected
6c41bb62bc3857d2c9549873d79f00f4a34475d2 192.168.1.32:6380@16380 slave 169b0df771d45f27383add0304df59d2fbae6c62 0 1608532290000 4 connected
61a3730c0bc4f8dd0adc6cb8361468b111ae107f 192.168.1.33:6379@16379 master - 0 1608532291000 5 connected 10923-16383
60bee426b74f78863ebde556ccdf3be318076e2a 192.168.1.31:6380@16380 slave 61a3730c0bc4f8dd0adc6cb8361468b111ae107f 0 1608532291485 5 connected
vars currentEpoch 6 lastVoteEpoch 0
[root@ddcw31 redis-cluster]# /usr/local/redis-cluster/bin/redis-cli -h 192.168.1.31 -p 6379 -c
192.168.1.31:6379> auth 123456
OK
192.168.1.31:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:1336
cluster_stats_messages_pong_sent:1483
cluster_stats_messages_sent:2819
cluster_stats_messages_ping_received:1478
cluster_stats_messages_pong_received:1336
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:2819
192.168.1.31:6379> cluster nodes
169b0df771d45f27383add0304df59d2fbae6c62 192.168.1.31:6379@16379 myself,master - 0 1608533632000 1 connected 0-5460
4b277b33572bfdfdae734da9a006ff5d7ee05d46 192.168.1.32:6379@16379 master - 0 1608533633000 3 connected 5461-10922
fc5978c802368c699e57405d3c1ba867bc5fe312 192.168.1.33:6380@16380 slave 4b277b33572bfdfdae734da9a006ff5d7ee05d46 0 1608533631000 6 connected
6c41bb62bc3857d2c9549873d79f00f4a34475d2 192.168.1.32:6380@16380 slave 169b0df771d45f27383add0304df59d2fbae6c62 0 1608533632330 4 connected
61a3730c0bc4f8dd0adc6cb8361468b111ae107f 192.168.1.33:6379@16379 master - 0 1608533634347 5 connected 10923-16383
60bee426b74f78863ebde556ccdf3be318076e2a 192.168.1.31:6380@16380 slave 61a3730c0bc4f8dd0adc6cb8361468b111ae107f 0 1608533633339 5 connected
192.168.1.31:6379>
Redis computes a CRC16 of the key to work out which slot (and therefore which node) the key belongs to:
HASH_SLOT = CRC16(key) mod 16384
[root@ddcw31 redis-cluster]# /usr/local/redis-cluster/bin/redis-cli -h 192.168.1.31 -p 6379 -c
192.168.1.31:6379> auth 123456
OK
192.168.1.31:6379> set test "i am 31"
-> Redirected to slot [6918] located at 192.168.1.32:6379
(error) NOAUTH Authentication required.
192.168.1.32:6379> auth 123456
OK
192.168.1.32:6379> set test "i am 31"
OK
192.168.1.32:6379> get test
"i am 31"
192.168.1.32:6379>
I was connected to node 31; when I ran set test, the key test was hashed, yielding slot 6918, which belongs to the second node, so the client was redirected to the second node automatically.
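You can also ask any node which slot a key maps to, without writing it, via the CLUSTER KEYSLOT command:
#compute the slot for the key "test"; it reports 6918, which falls in the 5461-10922 range
/usr/local/redis-cluster/bin/redis-cli -h 192.168.1.31 -p 6379 -a 123456 cluster keyslot test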
Failover test: connect to 31:6379 and view the cluster node information --> set the key test (which hashes to 32:6379) --> manually kill 32:6379, after which the value of test just written can be read from 33:6380 (its slave, now promoted to master) --> the log shows that once the failure is detected (cluster-node-timeout, 15 seconds), 33:6380 becomes the master.
(When the downed instance is restarted, it rejoins as a slave.)
[root@ddcw31 redis-cluster]# /usr/local/redis-cluster/bin/redis-cli -h 192.168.1.31 -p 6379 -c
192.168.1.31:6379> auth 123456
OK
192.168.1.31:6379> cluster nodes
61a3730c0bc4f8dd0adc6cb8361468b111ae107f 192.168.1.33:6379@16379 slave 60bee426b74f78863ebde556ccdf3be318076e2a 0 1608540875000 8 connected
6c41bb62bc3857d2c9549873d79f00f4a34475d2 192.168.1.32:6380@16380 master - 0 1608540875491 7 connected 0-5460
169b0df771d45f27383add0304df59d2fbae6c62 192.168.1.31:6379@16379 myself,slave 6c41bb62bc3857d2c9549873d79f00f4a34475d2 0 1608540876000 1 connected
4b277b33572bfdfdae734da9a006ff5d7ee05d46 192.168.1.32:6379@16379 master - 0 1608540877517 3 connected 5461-10922
fc5978c802368c699e57405d3c1ba867bc5fe312 192.168.1.33:6380@16380 slave 4b277b33572bfdfdae734da9a006ff5d7ee05d46 0 1608540872449 6 connected
60bee426b74f78863ebde556ccdf3be318076e2a 192.168.1.31:6380@16380 master - 0 1608540877000 8 connected 10923-16383
192.168.1.31:6379> set test "awsl20201221"
-> Redirected to slot [6918] located at 192.168.1.32:6379
(error) NOAUTH Authentication required.
192.168.1.32:6379> auth 123456
OK
192.168.1.32:6379> get test
"i am 31"
192.168.1.32:6379> set test "awsl20201221"
OK
192.168.1.32:6379> get test
"awsl20201221"
192.168.1.32:6379> exit
[root@ddcw31 redis-cluster]# ssh 192.168.1.32
root@192.168.1.32's password:
Last login: Mon Dec 21 12:14:28 2020 from pc-202004152311
[root@ddcw32 ~]# ps -ef | grep redis
root 2226 1 0 16:54 ? 00:00:00 /usr/local/redis-cluster/bin/redis-server 0.0.0.0:6379 [cluster]
root 2231 1 0 16:54 ? 00:00:00 /usr/local/redis-cluster/bin/redis-server 0.0.0.0:6380 [cluster]
root 2256 2240 0 16:56 pts/1 00:00:00 grep --color=auto redis
[root@ddcw32 ~]# kill -9 2226
[root@ddcw32 ~]# /usr/local/redis-cluster/bin/redis-cli -h 192.168.1.31 -p 6379 -c
192.168.1.31:6379> auth 123456
OK
192.168.1.31:6379> get test
-> Redirected to slot [6918] located at 192.168.1.33:6380
(error) NOAUTH Authentication required.
192.168.1.33:6380> auth 123456
OK
192.168.1.33:6380> get test
"awsl20201221"
192.168.1.33:6380> exit
[root@ddcw32 ~]# ssh 192.168.1.33
The authenticity of host '192.168.1.33 (192.168.1.33)' can't be established.
ECDSA key fingerprint is SHA256:Nt3xEe5pKXcjs46teMTKGFZ5E55B+IF9rSVdIw2fYTc.
ECDSA key fingerprint is MD5:84:6d:67:d9:eb:c2:67:b9:27:bd:27:e3:3b:68:c1:11.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.33' (ECDSA) to the list of known hosts.
root@192.168.1.33's password:
Last login: Mon Dec 21 12:14:44 2020 from pc-202004152311
[root@ddcw33 ~]# tail /usr/local/redis-cluster/log/redis-cluster-6380.log
1887:S 21 Dec 2020 16:57:07.107 # Error condition on socket for SYNC: Connection refused
1887:S 21 Dec 2020 16:57:07.171 * FAIL message received from 6c41bb62bc3857d2c9549873d79f00f4a34475d2 about 4b277b33572bfdfdae734da9a006ff5d7ee05d46
1887:S 21 Dec 2020 16:57:07.171 # Cluster state changed: fail
1887:S 21 Dec 2020 16:57:07.207 # Start of election delayed for 515 milliseconds (rank #0, offset 261).
1887:S 21 Dec 2020 16:57:07.815 # Starting a failover election for epoch 9.
1887:S 21 Dec 2020 16:57:07.821 # Failover election won: I'm the new master.
1887:S 21 Dec 2020 16:57:07.822 # configEpoch set to 9 after successful failover
1887:M 21 Dec 2020 16:57:07.822 # Setting secondary replication ID to fb19496140888dd50683a812b6f4dc4c6475a003, valid up to offset: 262. New replication ID is 64e8015d75d3e07350c8edcc16abf9b4b820534e
1887:M 21 Dec 2020 16:57:07.822 * Discarding previously cached master state.
1887:M 21 Dec 2020 16:57:07.822 # Cluster state changed: ok
[root@ddcw33 ~]# /usr/local/redis-cluster/bin/redis-cli -h 192.168.1.31 -p 6379 -c
192.168.1.31:6379> auth 123456
OK
192.168.1.31:6379> cluster nodes
61a3730c0bc4f8dd0adc6cb8361468b111ae107f 192.168.1.33:6379@16379 slave 60bee426b74f78863ebde556ccdf3be318076e2a 0 1608541383634 8 connected
6c41bb62bc3857d2c9549873d79f00f4a34475d2 192.168.1.32:6380@16380 master - 0 1608541379586 7 connected 0-5460
169b0df771d45f27383add0304df59d2fbae6c62 192.168.1.31:6379@16379 myself,slave 6c41bb62bc3857d2c9549873d79f00f4a34475d2 0 1608541382000 1 connected
4b277b33572bfdfdae734da9a006ff5d7ee05d46 192.168.1.32:6379@16379 master,fail - 1608541011887 1608541009000 3 disconnected
fc5978c802368c699e57405d3c1ba867bc5fe312 192.168.1.33:6380@16380 master - 0 1608541382000 9 connected 5461-10922
60bee426b74f78863ebde556ccdf3be318076e2a 192.168.1.31:6380@16380 master - 0 1608541382618 8 connected 10923-16383
192.168.1.31:6379>
In other words, with three nodes acting as masters and replicas for one another, any single node can go down (taking one master and another master's replica with it) and the cluster survives.
Test:
If this test fails for you, it is most likely a password problem; every master and replica must have the same password set.
[root@ddcw31 redis-cluster]# /usr/local/redis-cluster/bin/redis-cli -h 192.168.1.31 -p 6379 -c
192.168.1.31:6379> auth 123456
OK
192.168.1.31:6379> cluster nodes
61a3730c0bc4f8dd0adc6cb8361468b111ae107f 192.168.1.33:6379@16379 slave 60bee426b74f78863ebde556ccdf3be318076e2a 0 1608541557000 8 connected
6c41bb62bc3857d2c9549873d79f00f4a34475d2 192.168.1.32:6380@16380 master - 0 1608541559021 7 connected 0-5460
169b0df771d45f27383add0304df59d2fbae6c62 192.168.1.31:6379@16379 myself,slave 6c41bb62bc3857d2c9549873d79f00f4a34475d2 0 1608541555000 1 connected
4b277b33572bfdfdae734da9a006ff5d7ee05d46 192.168.1.32:6379@16379 slave fc5978c802368c699e57405d3c1ba867bc5fe312 0 1608541560031 9 connected
fc5978c802368c699e57405d3c1ba867bc5fe312 192.168.1.33:6380@16380 master - 0 1608541557000 9 connected 5461-10922
60bee426b74f78863ebde556ccdf3be318076e2a 192.168.1.31:6380@16380 master - 0 1608541558009 8 connected 10923-16383
192.168.1.31:6379> exit
[root@ddcw31 redis-cluster]# ps -ef | grep redis | grep cluster | awk '{print $2}' | xargs -t -i kill -9 {}
kill -9 3367
kill -9 3372
[root@ddcw31 redis-cluster]# /usr/local/redis-cluster/bin/redis-cli -h 192.168.1.32 -p 6379 -c
192.168.1.32:6379> auth 123456
OK
192.168.1.32:6379> cluster nodes
6c41bb62bc3857d2c9549873d79f00f4a34475d2 192.168.1.32:6380@16380 master - 0 1608541641823 7 connected 0-5460
4b277b33572bfdfdae734da9a006ff5d7ee05d46 192.168.1.32:6379@16379 myself,slave fc5978c802368c699e57405d3c1ba867bc5fe312 0 1608541636000 3 connected
61a3730c0bc4f8dd0adc6cb8361468b111ae107f 192.168.1.33:6379@16379 master - 0 1608541643852 10 connected 10923-16383
fc5978c802368c699e57405d3c1ba867bc5fe312 192.168.1.33:6380@16380 master - 0 1608541642837 9 connected 5461-10922
169b0df771d45f27383add0304df59d2fbae6c62 192.168.1.31:6379@16379 slave,fail 6c41bb62bc3857d2c9549873d79f00f4a34475d2 1608541618211 1608541616000 7 disconnected
60bee426b74f78863ebde556ccdf3be318076e2a 192.168.1.31:6380@16380 master,fail - 1608541618211 1608541617299 8 disconnected
192.168.1.32:6379>
This is best tested from an application, where the effect is clearer: to the application the failover is almost transparent.
I will not test the other features here.
Since this environment has a password set, the -a password option also has to be added to redis-cli commands (example below).
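For example, checking the cluster from the command line with the password would look like this (the info subcommand comes from the --cluster help output above):
/usr/local/redis-cluster/bin/redis-cli -a 123456 --cluster info 192.168.1.31:6379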
Adding and removing cluster nodes is not covered in detail either, since it involves re-sharding hash slots. A few hints:
#add a replica (slave) node:
#existing_host:existing_port is any existing node of the cluster (after all, you have to connect to one to add a node, so the password matters here too)
#new_host:new_port is the IP:port of the node being added
/usr/local/redis-cluster/bin/redis-cli --cluster add-node --cluster-slave --cluster-master-id NODE-ID new_host:new_port existing_host:existing_port
#add a master node:
/usr/local/redis-cluster/bin/redis-cli --cluster add-node new_host:new_port existing_host:existing_port
#remove a node (masters and slaves are handled the same way); make sure the node holds no hash slots before removing it
/usr/local/redis-cluster/bin/redis-cli --cluster del-node host:port node_id
A newly added master holds no hash slots,
so slots must be re-sharded onto it; as a hint:
redis-cli --cluster reshard host:port -a 123456
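If you prefer a non-interactive reshard, the options listed in the --cluster help output above can be combined roughly like this (a sketch; the node IDs are the ones shown by cluster nodes):
#move 4096 slots from an existing master to the newly added master, without prompting
/usr/local/redis-cluster/bin/redis-cli -a 123456 --cluster reshard 192.168.1.31:6379 \
    --cluster-from <source-node-id> --cluster-to <target-node-id> \
    --cluster-slots 4096 --cluster-yes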
The next chapter covers using Redis.