Setting Up a Kafka Cluster (Version 2.8.0), Part 1

By 程裕强 · Published 2021-09-08

1. Start the ZooKeeper cluster

The latest Kafka release, 2.8.0, can run without ZooKeeper (the early-access KRaft mode), but that mode is still experimental and not recommended by the project, so this setup still uses a ZooKeeper cluster.

[root@node1 ~]# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@node1 ~]#
[root@node2 ~]# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@node2 ~]# 
[root@node3 ~]# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@node3 ~]# 
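
Before moving on, it is worth confirming that the ensemble has actually formed. A quick check with zkServer.sh status on each node; the Mode line below is only illustrative, and exactly one node should report leader while the others report follower:

[root@node1 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower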

2. Download and upload

The downloads page is at http://kafka.apache.org/downloads; the current latest release there is 2.8.0.

The file downloaded here is kafka_2.12-2.8.0.tgz: 2.12 is the Scala version and 2.8.0 is the Kafka version.

Then upload the archive to the server.
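
Alternatively, if the node has direct internet access, the archive can be fetched on the server itself instead of uploading it; a small sketch, assuming the Apache archive mirror is reachable from the host:

[root@node1 opt]# wget https://archive.apache.org/dist/kafka/2.8.0/kafka_2.12-2.8.0.tgz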

[root@node1 opt]# tar -zxvf kafka_2.12-2.8.0.tgz
[root@node1 opt]# mv kafka_2.12-2.8.0 kafka-2.8.0
[root@node1 opt]# cd kafka-2.8.0/
[root@node1 kafka-2.8.0]# ls
bin  config  libs  LICENSE  licenses  NOTICE  site-docs
[root@node1 kafka-2.8.0]# 
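
Optionally, Kafka's bin directory can be put on the PATH so its scripts can be invoked from any directory, the same way zkServer.sh is already available above; a sketch that assumes /etc/profile is where environment variables are kept on these nodes:

[root@node1 kafka-2.8.0]# cat >> /etc/profile <<'EOF'
export KAFKA_HOME=/opt/kafka-2.8.0
export PATH=$PATH:$KAFKA_HOME/bin
EOF
[root@node1 kafka-2.8.0]# source /etc/profile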

3. Configure the Kafka cluster

(1) Basic configuration

[root@node1 kafka-2.8.0]# vi config/server.properties

Configuration contents:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1

############################# Socket Server Settings #############################
# Reference: https://www.codercto.com/a/68756.html, https://www.cnblogs.com/ElEGenT/p/12891114.html

# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#  The address and port the broker listens on; the default is localhost:9092; 0.0.0.0 means listen on all local IP addresses.
listeners=PLAINTEXT://node1:9092

# Hostname and port the broker will advertise to producers and consumers. If not set, 
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net/InetAddress.getCanonicalHostName().
# The address producers and consumers connect to; Kafka registers it in ZooKeeper, so it must be a valid IP or hostname other than 0.0.0.0. Defaults to the value of listeners.
# advertised.listeners=PLAINTEXT://:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/var/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
# Each topic defaults to 3 partitions
num.partitions=3

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
# Automatically create topics
auto.create.topics.enable=true
# Enable topic deletion
delete.topic.enable=true


############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=node1:2181,node2:2181,node3:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

Settings modified from the defaults:

  • broker.id
  • listeners
  • advertised.listeners
  • log.dirs
  • num.partitions
  • zookeeper.connect

Settings added:

  • auto.create.topics.enable
  • delete.topic.enable
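
A quick way to confirm that these items ended up in the file is to grep the active (non-comment) lines; the output below reflects the node1 file shown above, where advertised.listeners is still commented out and therefore falls back to listeners:

[root@node1 kafka-2.8.0]# grep -E '^(broker\.id|listeners|advertised\.listeners|log\.dirs|num\.partitions|zookeeper\.connect|auto\.create\.topics\.enable|delete\.topic\.enable)=' config/server.properties
broker.id=1
listeners=PLAINTEXT://node1:9092
log.dirs=/var/kafka-logs
num.partitions=3
auto.create.topics.enable=true
delete.topic.enable=true
zookeeper.connect=node1:2181,node2:2181,node3:2181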

(2) Distribute the files

[root@node1 opt]# scp -r kafka-2.8.0/ node2:/opt
[root@node1 opt]# scp -r kafka-2.8.0/ node3:/opt

(3) Modify the configuration on node2 and node3

On node2 and node3, change broker.id, listeners, and advertised.listeners. The copied file still carries node1's listener address, so listeners must also be pointed at the local hostname, otherwise the broker cannot bind to it.

node2:

broker.id=2
listeners=PLAINTEXT://node2:9092
advertised.listeners=PLAINTEXT://node2:9092

node3:

broker.id=3
listeners=PLAINTEXT://node3:9092
advertised.listeners=PLAINTEXT://node3:9092
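
Instead of editing the two files by hand, the per-node changes can also be scripted; a minimal sketch run on node2 (node3 is analogous), assuming the file was copied unmodified from node1 in step (2):

[root@node2 kafka-2.8.0]# sed -i 's/^broker.id=1/broker.id=2/' config/server.properties
[root@node2 kafka-2.8.0]# sed -i 's|^listeners=PLAINTEXT://node1:9092|listeners=PLAINTEXT://node2:9092|' config/server.properties
[root@node2 kafka-2.8.0]# echo 'advertised.listeners=PLAINTEXT://node2:9092' >> config/server.properties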

4. Start Kafka

(1) Start node1

[root@node1 kafka-2.8.0]# bin/kafka-server-start.sh -daemon config/server.properties 
[root@node1 kafka-2.8.0]# jps
29808 DFSZKFailoverController
10400 PaloFe
25697 QuorumPeerMain
29105 DataNode
29492 JournalNode
6199 Jps
12744 Worker
14153 BrokerBootstrap
12249 Master
28889 NameNode
5822 Kafka
[root@node1 kafka-2.8.0]# 

The -daemon flag runs the broker as a background daemon process.
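
Because -daemon detaches the process, startup errors are only visible in the log files. A quick sanity check is to look at the broker log under the installation's logs directory; a successful start ends with a "started (kafka.server.KafkaServer)" line:

[root@node1 kafka-2.8.0]# tail -n 20 logs/server.log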

(2) Start node2

[root@node2 kafka-2.8.0]#  bin/kafka-server-start.sh -daemon config/server.properties
[root@node2 kafka-2.8.0]# jps
6144 Worker
9744 Kafka
5921 DFSZKFailoverController
5667 JournalNode
23955 BrokerBootstrap
18516 PaloFe
30726 QuorumPeerMain
5433 DataNode
10571 Jps
5259 NameNode
[root@node2 kafka-2.8.0]# 

(3) Start node3

[root@node3 kafka-2.8.0]# bin/kafka-server-start.sh -daemon config/server.properties 
[root@node3 kafka-2.8.0]# jps
5971 PaloFe
1940 Worker
6807 Jps
9545 ZooKeeperMain
22474 QuorumPeerMain
26011 DataNode
22940 BrokerBootstrap
26252 JournalNode
6684 Kafka
[root@node3 kafka-2.8.0]# 
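
At this point every broker should be listening on port 9092 on its own hostname. A quick way to verify on each node, using ss (part of iproute2 and normally present on CentOS 7):

[root@node1 kafka-2.8.0]# ss -lntp | grep 9092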

(4) Check the Kafka cluster in ZooKeeper

[root@node3 zookeeper-3.4.10]# bin/zkCli.sh
Connecting to localhost:2181
2021-09-03 21:28:40,630 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2021-09-03 21:28:40,637 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=node3
2021-09-03 21:28:40,637 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_161
2021-09-03 21:28:40,641 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2021-09-03 21:28:40,641 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/opt/jdk1.8.0_161/jre
2021-09-03 21:28:40,641 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/opt/zookeeper-3.4.10/bin/../build/classes:/opt/zookeeper-3.4.10/bin/../build/lib/*.jar:/opt/zookeeper-3.4.10/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/zookeeper-3.4.10/bin/../lib/slf4j-api-1.6.1.jar:/opt/zookeeper-3.4.10/bin/../lib/netty-3.10.5.Final.jar:/opt/zookeeper-3.4.10/bin/../lib/log4j-1.2.16.jar:/opt/zookeeper-3.4.10/bin/../lib/jline-0.9.94.jar:/opt/zookeeper-3.4.10/bin/../zookeeper-3.4.10.jar:/opt/zookeeper-3.4.10/bin/../src/java/lib/*.jar:/opt/zookeeper-3.4.10/bin/../conf:.::/opt/jdk1.8.0_161/lib
2021-09-03 21:28:40,641 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2021-09-03 21:28:40,641 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2021-09-03 21:28:40,642 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
2021-09-03 21:28:40,642 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
2021-09-03 21:28:40,642 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
2021-09-03 21:28:40,642 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=3.10.0-514.el7.x86_64
2021-09-03 21:28:40,642 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root
2021-09-03 21:28:40,643 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root
2021-09-03 21:28:40,643 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/opt/zookeeper-3.4.10
2021-09-03 21:28:40,645 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@7aec35a
Welcome to ZooKeeper!
2021-09-03 21:28:40,683 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2021-09-03 21:28:40,789 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@876] - Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session
2021-09-03 21:28:40,804 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x37baa7072a20004, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /brokers
[ids, topics, seqid]
[zk: localhost:2181(CONNECTED) 1] ls /brokers/ids
[1, 2, 3]
[zk: localhost:2181(CONNECTED) 2] 
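
The same check can be done without a local ZooKeeper client by using the zookeeper-shell.sh script that ships with Kafka; the JSON registered under /brokers/ids/1 (host, port, endpoints, and so on) depends on the environment, so it is not reproduced here:

[root@node1 kafka-2.8.0]# bin/zookeeper-shell.sh node1:2181 ls /brokers/ids
[root@node1 kafka-2.8.0]# bin/zookeeper-shell.sh node1:2181 get /brokers/ids/1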

(5) Stop Kafka

To stop Kafka, run the kafka-server-stop.sh script from the bin directory on each node:

[root@node1 kafka-2.8.0]# bin/kafka-server-stop.sh
[root@node2 kafka-2.8.0]# bin/kafka-server-stop.sh
[root@node3 kafka-2.8.0]# bin/kafka-server-stop.sh
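
Since the nodes already reach each other for scp, the whole cluster can also be stopped from a single node; a small sketch, assuming passwordless SSH as root and the same installation path on every host:

[root@node1 ~]# for h in node1 node2 node3; do ssh $h /opt/kafka-2.8.0/bin/kafka-server-stop.sh; done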