A RAM node keeps all of its metadata in memory only, trading a degree of safety for performance: because nothing is written to disk, all data is lost on power failure or restart, but for the same reason the node avoids disk I/O and is very fast.
RAM nodes are typically used to dynamically scale a cluster's performance (a cluster made up of only RAM nodes is fragile).
RAM nodes keep their metadata only in memory. As RAM nodes don’t have to write to disc as much as disc nodes, they can perform better. However, note that since persistent queue data is always stored on disc, the performance improvements will affect only resource management (e.g. adding/removing queues, exchanges, or vhosts), but not publishing or consuming speed. RAM nodes are an advanced use case; when setting up your first cluster you should simply not use them. You should have enough disc nodes to handle your redundancy requirements, then if necessary add additional RAM nodes for scale. A cluster containing only RAM nodes is fragile; if the cluster stops you will not be able to start it again and will lose all data. RabbitMQ will prevent the creation of a RAM-node-only cluster in many situations, but it can’t absolutely prevent it. The examples here show a cluster with one disc and one RAM node for simplicity only; such a cluster is a poor design choice.
A RAM node can be created as follows:
[root@h101 ~]# rabbitmqctl -n rabbit cluster_status
Cluster status of node rabbit@h101 ...
[{nodes,[{disc,[rabbit@h101]}]},
{running_nodes,[rabbit@h101]},
{cluster_name,<<"hare@h101.temp">>},
{partitions,[]}]
[root@h101 ~]# rabbitmqctl -n hare cluster_status
Cluster status of node hare@h101 ...
[{nodes,[{disc,[hare@h101]}]},
{running_nodes,[hare@h101]},
{cluster_name,<<"hare@h101.temp">>},
{partitions,[]}]
[root@h101 ~]# rabbitmqctl -n rabbit stop_app
Stopping node rabbit@h101 ...
[root@h101 ~]# rabbitmqctl -n rabbit join_cluster --ram hare@h101
Clustering node rabbit@h101 with hare@h101 ...
[root@h101 ~]# rabbitmqctl -n rabbit start_app
Starting node rabbit@h101 ...
[root@h101 ~]# rabbitmqctl -n rabbit cluster_status
Cluster status of node rabbit@h101 ...
[{nodes,[{disc,[hare@h101]},{ram,[rabbit@h101]}]},
{running_nodes,[hare@h101,rabbit@h101]},
{cluster_name,<<"hare@h101.temp">>},
{partitions,[]}]
[root@h101 ~]#
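If you later need to change a node's type, for example to restore a disc node so the cluster does not consist of RAM nodes only, `rabbitmqctl` provides the `change_cluster_node_type` command. A sketch using the same node names as the transcript above (adjust to your environment); the app must be stopped on the node before its type can be changed:

```shell
# Convert the RAM node rabbit@h101 back into a disc node.
rabbitmqctl -n rabbit stop_app
rabbitmqctl -n rabbit change_cluster_node_type disc
rabbitmqctl -n rabbit start_app

# Verify: rabbit@h101 should now be listed under {disc,[...]}.
rabbitmqctl -n rabbit cluster_status
```

Passing `ram` instead of `disc` performs the reverse conversion, subject to the constraint that the cluster keeps at least one disc node.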
This article is a repost. In case of infringement, please contact cloudcommunity@tencent.com for removal.