Hadoop2.7.6_08_Federation (HDFS Federation)

By 踏歌行, published 2020-10-15

Preface:

This article builds on Hadoop2.7.6_07_HA (the HA high-availability setup); refer to that article for anything not explained here.

1. Hadoop's Federation mechanism

File metadata lives on the NameNode, and a single NameNode holds only one Namespace. As the data in HDFS grows, the resource usage of that single NameNode inevitably hits its ceiling and its load keeps increasing, which limits HDFS performance and scalability.

Federation allows one HDFS cluster to run multiple NameNodes that serve clients at the same time. Each NameNode manages part of the directory tree (a horizontal split of the namespace); the NameNodes are isolated from one another but share the underlying DataNode storage.

1.1. HDFS Federation diagram

2. Host planning

| Hostname | Public IP | Private IP | OS | Notes | Installed software | Running processes |
| --- | --- | --- | --- | --- | --- | --- |
| mini01 | 10.0.0.111 | 172.16.1.111 | CentOS 7.4 | ssh port: 22 | jdk, hadoop | NameNode, DFSZKFailoverController (zkfc) |
| mini02 | 10.0.0.112 | 172.16.1.112 | CentOS 7.4 | ssh port: 22 | jdk, hadoop | NameNode, DFSZKFailoverController (zkfc) |
| mini03 | 10.0.0.113 | 172.16.1.113 | CentOS 7.4 | ssh port: 22 | jdk, hadoop, zookeeper | NameNode, ResourceManager |
| mini04 | 10.0.0.114 | 172.16.1.114 | CentOS 7.4 | ssh port: 22 | jdk, hadoop, zookeeper | NameNode, ResourceManager |
| mini05 | 10.0.0.115 | 172.16.1.115 | CentOS 7.4 | ssh port: 22 | jdk, hadoop, zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |
| mini06 | 10.0.0.116 | 172.16.1.116 | CentOS 7.4 | ssh port: 22 | jdk, hadoop, zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |
| mini07 | 10.0.0.117 | 172.16.1.117 | CentOS 7.4 | ssh port: 22 | jdk, hadoop, zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |

Note: this plan builds Federation on top of HA. mini01 and mini02 form one HA pair, mini03 and mini04 form another, and Federation is then configured across the two pairs.
For the HA setup itself, see: Hadoop2.7.6_07_HA.

Add hosts entries on Linux so that every machine can ping the others by name.

[root@mini01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.0.0.111    mini01
10.0.0.112    mini02
10.0.0.113    mini03
10.0.0.114    mini04
10.0.0.115    mini05
10.0.0.116    mini06
10.0.0.117    mini07

Edit the Windows hosts file as well.

# File location: C:\Windows\System32\drivers\etc   append the following to the hosts file
…………………………………………
10.0.0.111    mini01
10.0.0.112    mini02
10.0.0.113    mini03
10.0.0.114    mini04
10.0.0.115    mini05
10.0.0.116    mini06
10.0.0.117    mini07

2.1. Notes

Adding the yun user, setting up passwordless SSH for the yun user, installing the JDK (Java 8), and deploying ZooKeeper are all covered in Hadoop2.7.6_07_HA.

3. Hadoop deployment and configuration changes

Note: the Hadoop installation and configuration are essentially the same on every machine; the only difference is the dfs.namenode.shared.edits.dir setting in hdfs-site.xml (see 3.4).

3.1. Deployment

[yun@mini01 software]$ pwd
/app/software
[yun@mini01 software]$ ll
total 194152
-rw-r--r-- 1 yun yun 198811365 Jun  8 16:36 CentOS-7.4_hadoop-2.7.6.tar.gz
[yun@mini01 software]$ tar xf CentOS-7.4_hadoop-2.7.6.tar.gz
[yun@mini01 software]$ mv hadoop-2.7.6/ /app/
[yun@mini01 software]$ cd
[yun@mini01 ~]$ ln -s hadoop-2.7.6/ hadoop
[yun@mini01 ~]$ ll
total 4
lrwxrwxrwx  1 yun yun   13 Jun  9 16:21 hadoop -> hadoop-2.7.6/
drwxr-xr-x  9 yun yun  149 Jun  8 16:36 hadoop-2.7.6
lrwxrwxrwx  1 yun yun   12 May 26 11:18 jdk -> jdk1.8.0_112
drwxr-xr-x  8 yun yun  255 Sep 23  2016 jdk1.8.0_112

3.2. Environment variables

[root@mini01 profile.d]# pwd
/etc/profile.d
[root@mini01 profile.d]# vim hadoop.sh
export HADOOP_HOME="/app/hadoop"
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

[root@mini01 profile.d]# source /etc/profile  # apply the change
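A quick way to confirm the environment variables took effect is to check which binary the shell resolves and the version it reports; something like the following is expected (a sketch, paths following from the install above):

[root@mini01 profile.d]# which hadoop
/app/hadoop/bin/hadoop
[root@mini01 profile.d]# hadoop version | head -1
Hadoop 2.7.6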

3.3. core-site.xml

[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ vim core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  ………………
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>viewfs:///</value>
  </property>

  <property>
    <name>fs.viewfs.mounttable.default.link./bi</name>
    <value>hdfs://bi/</value>
  </property>

  <property>
    <name>fs.viewfs.mounttable.default.link./dt</name>
    <value>hdfs://dt/</value>
  </property>


  <!-- Hadoop temporary directory -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
  </property>

  <!-- ZooKeeper quorum -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>mini03:2181,mini04:2181,mini05:2181,mini06:2181,mini07:2181</value>
  </property>

</configuration>
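With fs.defaultFS set to viewfs:///, client paths are resolved through the mount table above: /bi maps to the bi nameservice and /dt maps to the dt nameservice. A minimal check that the mount table is actually being read (hdfs getconf prints the effective value of a configuration key; a sketch):

[yun@mini01 hadoop]$ hdfs getconf -confKey fs.defaultFS
viewfs:///
[yun@mini01 hadoop]$ hdfs getconf -confKey fs.viewfs.mounttable.default.link./bi
hdfs://bi/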

3.4. hdfs-site.xml

3.4.1. On mini01 and mini02

# The following configuration is used on mini01, mini02, mini05, mini06 and mini07
[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ vim hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  ……………………
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <!-- The HDFS nameservices are bi and dt; the names must match those used in core-site.xml -->
  <property>
    <name>dfs.nameservices</name>
    <value>bi,dt</value>
  </property>

  <!-- bi has two NameNodes: nn1 and nn2 -->
  <property>
    <name>dfs.ha.namenodes.bi</name>
    <value>nn1,nn2</value>
  </property>

  <!-- dt has two NameNodes: nn3 and nn4 -->
  <property>
    <name>dfs.ha.namenodes.dt</name>
    <value>nn3,nn4</value>
  </property>


  <!-- RPC address of nn1 -->
  <property>
    <name>dfs.namenode.rpc-address.bi.nn1</name>
    <value>mini01:9000</value>
  </property>
  <!-- HTTP address of nn1 -->
  <property>
    <name>dfs.namenode.http-address.bi.nn1</name>
    <value>mini01:50070</value>
  </property>

  <!-- RPC address of nn2 -->
  <property>
    <name>dfs.namenode.rpc-address.bi.nn2</name>
    <value>mini02:9000</value>
  </property>
  <!-- HTTP address of nn2 -->
  <property>
    <name>dfs.namenode.http-address.bi.nn2</name>
    <value>mini02:50070</value>
  </property>


  <!-- RPC address of nn3 -->
  <property>
    <name>dfs.namenode.rpc-address.dt.nn3</name>
    <value>mini03:9000</value>
  </property>
  <!-- HTTP address of nn3 -->
  <property>
    <name>dfs.namenode.http-address.dt.nn3</name>
    <value>mini03:50070</value>
  </property>

  <!-- RPC address of nn4 -->
  <property>
    <name>dfs.namenode.rpc-address.dt.nn4</name>
    <value>mini04:9000</value>
  </property>
  <!-- HTTP address of nn4 -->
  <property>
    <name>dfs.namenode.http-address.dt.nn4</name>
    <value>mini04:50070</value>
  </property>


  <!-- Where the NameNode edits are stored on the JournalNodes -->
  <!-- Use this value for the two NameNodes of the bi namespace -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://mini05:8485;mini06:8485;mini07:8485/bi</value>
  </property>
  <!-- Value for the two NameNodes of the dt namespace; commented out on mini01 and mini02 -->
  <!--
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://mini05:8485;mini06:8485;mini07:8485/dt</value>
  </property>
   -->


  <!-- Where the JournalNode stores its data on local disk -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/app/hadoop/journaldata</value>
  </property>

  <!-- Enable automatic NameNode failover -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>

  <!-- Failover proxy provider for each nameservice -->
  <property>
    <name>dfs.client.failover.proxy.provider.bi</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.dt</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>


  <!-- Fencing methods, one per line -->
  <!-- shell(/bin/true) means a script can be executed instead, e.g. shell(/app/yunwei/hadoop_fence.sh) -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>
      sshfence
      shell(/bin/true)
    </value>
  </property>

  <!-- sshfence requires passwordless SSH -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/app/.ssh/id_rsa</value>
  </property>

  <!-- sshfence connect timeout, in milliseconds -->
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>


</configuration>
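Since dfs.namenode.shared.edits.dir is the one property that differs between the two HA pairs, it is the easiest thing to get wrong when copying files around. A quick sanity check on each NameNode host (a sketch; on mini01/mini02 the value should end in /bi, on mini03/mini04 in /dt):

[yun@mini01 hadoop]$ hdfs getconf -confKey dfs.namenode.shared.edits.dir
qjournal://mini05:8485;mini06:8485;mini07:8485/bi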

3.4.2. On mini03 and mini04

# The following configuration is used on mini03 and mini04
[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ vim hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  ……………………
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <!-- The HDFS nameservices are bi and dt; the names must match those used in core-site.xml -->
  <property>
    <name>dfs.nameservices</name>
    <value>bi,dt</value>
  </property>

  <!-- bi has two NameNodes: nn1 and nn2 -->
  <property>
    <name>dfs.ha.namenodes.bi</name>
    <value>nn1,nn2</value>
  </property>

  <!-- dt has two NameNodes: nn3 and nn4 -->
  <property>
    <name>dfs.ha.namenodes.dt</name>
    <value>nn3,nn4</value>
  </property>


  <!-- RPC address of nn1 -->
  <property>
    <name>dfs.namenode.rpc-address.bi.nn1</name>
    <value>mini01:9000</value>
  </property>
  <!-- HTTP address of nn1 -->
  <property>
    <name>dfs.namenode.http-address.bi.nn1</name>
    <value>mini01:50070</value>
  </property>

  <!-- RPC address of nn2 -->
  <property>
    <name>dfs.namenode.rpc-address.bi.nn2</name>
    <value>mini02:9000</value>
  </property>
  <!-- HTTP address of nn2 -->
  <property>
    <name>dfs.namenode.http-address.bi.nn2</name>
    <value>mini02:50070</value>
  </property>


  <!-- RPC address of nn3 -->
  <property>
    <name>dfs.namenode.rpc-address.dt.nn3</name>
    <value>mini03:9000</value>
  </property>
  <!-- HTTP address of nn3 -->
  <property>
    <name>dfs.namenode.http-address.dt.nn3</name>
    <value>mini03:50070</value>
  </property>

  <!-- RPC address of nn4 -->
  <property>
    <name>dfs.namenode.rpc-address.dt.nn4</name>
    <value>mini04:9000</value>
  </property>
  <!-- HTTP address of nn4 -->
  <property>
    <name>dfs.namenode.http-address.dt.nn4</name>
    <value>mini04:50070</value>
  </property>


  <!-- Where the NameNode edits are stored on the JournalNodes -->
  <!-- Value for the two NameNodes of the bi namespace; commented out on mini03 and mini04 -->
  <!--
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://mini05:8485;mini06:8485;mini07:8485/bi</value>
  </property>
   -->
  <!-- Use this value for the two NameNodes of the dt namespace -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://mini05:8485;mini06:8485;mini07:8485/dt</value>
  </property>


  <!-- Where the JournalNode stores its data on local disk -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/app/hadoop/journaldata</value>
  </property>

  <!-- Enable automatic NameNode failover -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>

  <!-- Failover proxy provider for each nameservice -->
  <property>
    <name>dfs.client.failover.proxy.provider.bi</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.dt</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>


  <!-- Fencing methods, one per line -->
  <!-- shell(/bin/true) means a script can be executed instead, e.g. shell(/app/yunwei/hadoop_fence.sh) -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>
      sshfence
      shell(/bin/true)
    </value>
  </property>

  <!-- sshfence requires passwordless SSH -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/app/.ssh/id_rsa</value>
  </property>

  <!-- sshfence connect timeout, in milliseconds -->
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>


</configuration>

3.5. mapred-site.xml

# Unchanged from the previous article
[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ cp -a mapred-site.xml.template mapred-site.xml
[yun@mini01 hadoop]$ vim mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  ……………………
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <!-- Run MapReduce on YARN -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

</configuration>

3.6. yarn-site.xml

# Unchanged from the previous article
[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ vim yarn-site.xml
<?xml version="1.0"?>
<!--
  ……………………
-->
<configuration>

<!-- Site specific YARN configuration properties -->
  <!-- Enable ResourceManager HA -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>

  <!-- ResourceManager cluster id -->
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yrc</value>
  </property>

  <!-- Logical ids of the ResourceManagers -->
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>

  <!-- Host of each ResourceManager -->
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>mini03</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>mini04</value>
  </property>

  <!-- ZooKeeper quorum -->
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>mini03:2181,mini04:2181,mini05:2181,mini06:2181,mini07:2181</value>
  </property>

  <!-- Shuffle service used by reducers to fetch data -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>


</configuration>

3.7. Edit slaves

The slaves file lists the worker nodes. Because HDFS is started from mini01 and YARN from mini03, the slaves file on mini01 determines where the DataNodes run, while the slaves file on mini03 determines where the NodeManagers run.
# Unchanged from the previous article
[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ vim slaves
mini05
mini06
mini07

PS: after changing the configuration, copy it to the other Hadoop machines; a sketch of that step follows below.
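A minimal sketch of the distribution step, assuming passwordless SSH for the yun user and the same /app/hadoop layout on every node. Remember that hdfs-site.xml on mini03 and mini04 must keep the dt variant of dfs.namenode.shared.edits.dir, so do not blindly overwrite it there:

[yun@mini01 ~]$ cd /app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ for host in mini02 mini03 mini04 mini05 mini06 mini07; do
>   scp core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml slaves yun@${host}:/app/hadoop/etc/hadoop/
> done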

4. Starting the services

Note: for the first start-up, follow the steps below strictly!

4.1. Start the ZooKeeper cluster

The ZooKeeper cluster was already started earlier (see the HA article), so this step is not repeated here.

4.2. Start the JournalNodes

1 # Started on mini05, mini06 and mini07, per the host plan
2 # The JournalNodes only need to be started manually before the first format; afterwards start-dfs.sh handles them
3 [yun@mini05 ~]$ hadoop-daemon.sh start journalnode  # the environment variables are set, so there is no need to cd into the Hadoop directory
4 starting journalnode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-journalnode-mini05.out
5 [yun@mini05 ~]$ jps 
6 1281 QuorumPeerMain
7 1817 Jps
8 1759 JournalNode
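If you would rather start all three JournalNodes from a single terminal, a loop over SSH works as well (a sketch, assuming passwordless SSH and the environment variables configured above on each node):

[yun@mini01 ~]$ for host in mini05 mini06 mini07; do
>   ssh yun@${host} "source /etc/profile; hadoop-daemon.sh start journalnode"
> done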

4.3. Formatting the bi nameservice

4.3.1. Format HDFS

 1 # Run on mini01
 2 [yun@mini01 ~]$ hdfs namenode -format -clusterID CID-8c9c1d4b-5aa3-4d12-b717-31a56884da7f   # the cluster ID can be any UUID-style string, e.g. one generated online
 3 18/07/01 12:11:03 INFO namenode.NameNode: STARTUP_MSG: 
 4 /************************************************************
 5 STARTUP_MSG: Starting NameNode
 6 STARTUP_MSG:   host = mini01/10.0.0.111
 7 STARTUP_MSG:   args = [-format, -clusterID, CID-8c9c1d4b-5aa3-4d12-b717-31a56884da7f]
 8 STARTUP_MSG:   version = 2.7.6
 9 STARTUP_MSG:   classpath = ………………
10 STARTUP_MSG:   build = Unknown -r Unknown; compiled by 'root' on 2018-06-08T08:30Z
11 STARTUP_MSG:   java = 1.8.0_112
12 ************************************************************/
13 18/07/01 12:11:03 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
14 18/07/01 12:11:03 INFO namenode.NameNode: createNameNode [-format, -clusterID, CID-8c9c1d4b-5aa3-4d12-b717-31a56884da7f]
15 Formatting using clusterid: CID-8c9c1d4b-5aa3-4d12-b717-31a56884da7f
16 18/07/01 12:11:03 INFO namenode.FSNamesystem: No KeyProvider found.
17 18/07/01 12:11:03 INFO namenode.FSNamesystem: fsLock is fair: true
18 18/07/01 12:11:03 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
19 18/07/01 12:11:03 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
20 18/07/01 12:11:03 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
21 18/07/01 12:11:03 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
22 18/07/01 12:11:03 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Jul 01 12:11:03
23 18/07/01 12:11:03 INFO util.GSet: Computing capacity for map BlocksMap
24 18/07/01 12:11:03 INFO util.GSet: VM type       = 64-bit
25 18/07/01 12:11:03 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
26 18/07/01 12:11:03 INFO util.GSet: capacity      = 2^21 = 2097152 entries
27 18/07/01 12:11:03 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
28 18/07/01 12:11:03 INFO blockmanagement.BlockManager: defaultReplication         = 3
29 18/07/01 12:11:03 INFO blockmanagement.BlockManager: maxReplication             = 512
30 18/07/01 12:11:03 INFO blockmanagement.BlockManager: minReplication             = 1
31 18/07/01 12:11:03 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
32 18/07/01 12:11:03 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
33 18/07/01 12:11:03 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
34 18/07/01 12:11:03 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
35 18/07/01 12:11:03 INFO namenode.FSNamesystem: fsOwner             = yun (auth:SIMPLE)
36 18/07/01 12:11:03 INFO namenode.FSNamesystem: supergroup          = supergroup
37 18/07/01 12:11:03 INFO namenode.FSNamesystem: isPermissionEnabled = true
38 18/07/01 12:11:03 INFO namenode.FSNamesystem: Determined nameservice ID: bi
39 18/07/01 12:11:03 INFO namenode.FSNamesystem: HA Enabled: true
40 18/07/01 12:11:03 INFO namenode.FSNamesystem: Append Enabled: true
41 18/07/01 12:11:04 INFO util.GSet: Computing capacity for map INodeMap
42 18/07/01 12:11:04 INFO util.GSet: VM type       = 64-bit
43 18/07/01 12:11:04 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
44 18/07/01 12:11:04 INFO util.GSet: capacity      = 2^20 = 1048576 entries
45 18/07/01 12:11:04 INFO namenode.FSDirectory: ACLs enabled? false
46 18/07/01 12:11:04 INFO namenode.FSDirectory: XAttrs enabled? true
47 18/07/01 12:11:04 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
48 18/07/01 12:11:04 INFO namenode.NameNode: Caching file names occuring more than 10 times
49 18/07/01 12:11:04 INFO util.GSet: Computing capacity for map cachedBlocks
50 18/07/01 12:11:04 INFO util.GSet: VM type       = 64-bit
51 18/07/01 12:11:04 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
52 18/07/01 12:11:04 INFO util.GSet: capacity      = 2^18 = 262144 entries
53 18/07/01 12:11:04 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
54 18/07/01 12:11:04 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
55 18/07/01 12:11:04 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
56 18/07/01 12:11:04 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
57 18/07/01 12:11:04 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
58 18/07/01 12:11:04 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
59 18/07/01 12:11:04 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
60 18/07/01 12:11:04 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
61 18/07/01 12:11:04 INFO util.GSet: Computing capacity for map NameNodeRetryCache
62 18/07/01 12:11:04 INFO util.GSet: VM type       = 64-bit
63 18/07/01 12:11:04 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
64 18/07/01 12:11:04 INFO util.GSet: capacity      = 2^15 = 32768 entries
65 18/07/01 12:11:04 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1764062562-10.0.0.111-1530418264789
66 18/07/01 12:11:04 INFO common.Storage: Storage directory /app/hadoop/tmp/dfs/name has been successfully formatted.
67 18/07/01 12:11:05 INFO namenode.FSImageFormatProtobuf: Saving image file /app/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
68 18/07/01 12:11:05 INFO namenode.FSImageFormatProtobuf: Image file /app/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 320 bytes saved in 0 seconds.
69 18/07/01 12:11:05 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
70 18/07/01 12:11:05 INFO util.ExitUtil: Exiting with status 0
71 18/07/01 12:11:05 INFO namenode.NameNode: SHUTDOWN_MSG: 
72 /************************************************************
73 SHUTDOWN_MSG: Shutting down NameNode at mini01/10.0.0.111
74 ************************************************************/
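The -clusterID passed above is what ties the two nameservices into one federated cluster, so it is worth confirming it was recorded before formatting dt. The ID is stored in the VERSION file under the name directory (the path follows from hadoop.tmp.dir); a quick check:

[yun@mini01 ~]$ grep clusterID /app/hadoop/tmp/dfs/name/current/VERSION
clusterID=CID-8c9c1d4b-5aa3-4d12-b717-31a56884da7f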

Copy to mini02

 1 # Formatting generates files under the directory set by hadoop.tmp.dir in core-site.xml (/app/hadoop/tmp here); copy /app/hadoop/tmp to /app/hadoop/ on mini02.
 2 # Method 1:
 3 [yun@mini01 hadoop]$ pwd
 4 /app/hadoop
 5 [yun@mini01 hadoop]$ scp -r tmp/ yun@mini02:/app/hadoop  
 6 VERSION                           100%  202   189.4KB/s   00:00 
 7 seen_txid                         100%    2     1.0KB/s   00:00 
 8 fsimage_0000000000000000000.md5   100%   62    39.7KB/s   00:00 
 9 fsimage_0000000000000000000       100%  320   156.1KB/s   00:00 
10 
11 ##########################
12 # Method 2: alternatively run hdfs namenode -bootstrapStandby on mini02 (recommended), but this requires the NameNode on mini01 to be running first
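A sketch of method 2, for reference; it copies the formatted metadata over RPC instead of scp, so the freshly formatted NameNode on mini01 has to be running:

# on mini01
[yun@mini01 ~]$ hadoop-daemon.sh start namenode
# on mini02
[yun@mini02 ~]$ hdfs namenode -bootstrapStandby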

4.3.2. Format ZKFC

 1 # Run once, on mini01
 2 [yun@mini01 current]$ hdfs zkfc -formatZK  
 3 18/07/01 12:12:45 INFO tools.DFSZKFailoverController: Failover controller configured for NameNode NameNode at mini01/10.0.0.111:9000
 4 18/07/01 12:12:46 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
 5 18/07/01 12:12:46 INFO zookeeper.ZooKeeper: Client environment:host.name=mini01
 6 18/07/01 12:12:46 INFO zookeeper.ZooKeeper: Client environment:java.version=1.8.0_112
 7 18/07/01 12:12:46 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
 8 18/07/01 12:12:46 INFO zookeeper.ZooKeeper: Client environment:java.home=/app/jdk1.8.0_112/jre
 9 18/07/01 12:12:46 INFO zookeeper.ZooKeeper: Client environment:java.class.path=……………………
10 18/07/01 12:12:46 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/app/hadoop-2.7.6/lib/native
11 18/07/01 12:12:46 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
12 18/07/01 12:12:46 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
13 18/07/01 12:12:46 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
14 18/07/01 12:12:46 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
15 18/07/01 12:12:46 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-693.el7.x86_64
16 18/07/01 12:12:46 INFO zookeeper.ZooKeeper: Client environment:user.name=yun
17 18/07/01 12:12:46 INFO zookeeper.ZooKeeper: Client environment:user.home=/app
18 18/07/01 12:12:46 INFO zookeeper.ZooKeeper: Client environment:user.dir=/app/hadoop-2.7.6
19 18/07/01 12:12:46 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=mini03:2181,mini04:2181,mini05:2181,mini06:2181,mini07:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@7f3b84b8
20 18/07/01 12:12:46 INFO zookeeper.ClientCnxn: Opening socket connection to server mini04/10.0.0.114:2181. Will not attempt to authenticate using SASL (unknown error)
21 18/07/01 12:12:46 INFO zookeeper.ClientCnxn: Socket connection established to mini04/10.0.0.114:2181, initiating session
22 18/07/01 12:12:46 INFO zookeeper.ClientCnxn: Session establishment complete on server mini04/10.0.0.114:2181, sessionid = 0x464538f087b0003, negotiated timeout = 5000
23 18/07/01 12:12:46 INFO ha.ActiveStandbyElector: Session connected.
24 18/07/01 12:13:02 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/bi in ZK.
25 18/07/01 12:13:02 INFO zookeeper.ZooKeeper: Session: 0x464538f087b0003 closed
26 18/07/01 12:13:02 INFO zookeeper.ClientCnxn: EventThread shut down

4.4. Formatting the dt nameservice

4.4.1. Format HDFS

 1 # Run on mini03
 2 [yun@mini03 hadoop]$ hdfs namenode -format -clusterID CID-8c9c1d4b-5aa3-4d12-b717-31a56884da7f    # the clusterID must be the same as the one used for bi
 3 18/07/01 12:14:30 INFO namenode.NameNode: STARTUP_MSG: 
 4 /************************************************************
 5 STARTUP_MSG: Starting NameNode
 6 STARTUP_MSG:   host = mini03/10.0.0.113
 7 STARTUP_MSG:   args = [-format, -clusterID, CID-8c9c1d4b-5aa3-4d12-b717-31a56884da7f]
 8 STARTUP_MSG:   version = 2.7.6
 9 STARTUP_MSG:   classpath = ……………………
10 STARTUP_MSG:   build = Unknown -r Unknown; compiled by 'root' on 2018-06-08T08:30Z
11 STARTUP_MSG:   java = 1.8.0_112
12 ************************************************************/
13 18/07/01 12:14:30 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
14 18/07/01 12:14:30 INFO namenode.NameNode: createNameNode [-format, -clusterID, CID-8c9c1d4b-5aa3-4d12-b717-31a56884da7f]
15 Formatting using clusterid: CID-8c9c1d4b-5aa3-4d12-b717-31a56884da7f
16 18/07/01 12:14:31 INFO namenode.FSNamesystem: No KeyProvider found.
17 18/07/01 12:14:31 INFO namenode.FSNamesystem: fsLock is fair: true
18 18/07/01 12:14:31 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
19 18/07/01 12:14:31 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
20 18/07/01 12:14:31 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
21 18/07/01 12:14:31 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
22 18/07/01 12:14:31 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Jul 01 12:14:31
23 18/07/01 12:14:31 INFO util.GSet: Computing capacity for map BlocksMap
24 18/07/01 12:14:31 INFO util.GSet: VM type       = 64-bit
25 18/07/01 12:14:31 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
26 18/07/01 12:14:31 INFO util.GSet: capacity      = 2^21 = 2097152 entries
27 18/07/01 12:14:31 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
28 18/07/01 12:14:31 INFO blockmanagement.BlockManager: defaultReplication         = 3
29 18/07/01 12:14:31 INFO blockmanagement.BlockManager: maxReplication             = 512
30 18/07/01 12:14:31 INFO blockmanagement.BlockManager: minReplication             = 1
31 18/07/01 12:14:31 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
32 18/07/01 12:14:31 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
33 18/07/01 12:14:31 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
34 18/07/01 12:14:31 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
35 18/07/01 12:14:31 INFO namenode.FSNamesystem: fsOwner             = yun (auth:SIMPLE)
36 18/07/01 12:14:31 INFO namenode.FSNamesystem: supergroup          = supergroup
37 18/07/01 12:14:31 INFO namenode.FSNamesystem: isPermissionEnabled = true
38 18/07/01 12:14:31 INFO namenode.FSNamesystem: Determined nameservice ID: dt
39 18/07/01 12:14:31 INFO namenode.FSNamesystem: HA Enabled: true
40 18/07/01 12:14:31 INFO namenode.FSNamesystem: Append Enabled: true
41 18/07/01 12:14:31 INFO util.GSet: Computing capacity for map INodeMap
42 18/07/01 12:14:31 INFO util.GSet: VM type       = 64-bit
43 18/07/01 12:14:31 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
44 18/07/01 12:14:31 INFO util.GSet: capacity      = 2^20 = 1048576 entries
45 18/07/01 12:14:31 INFO namenode.FSDirectory: ACLs enabled? false
46 18/07/01 12:14:31 INFO namenode.FSDirectory: XAttrs enabled? true
47 18/07/01 12:14:31 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
48 18/07/01 12:14:31 INFO namenode.NameNode: Caching file names occuring more than 10 times
49 18/07/01 12:14:31 INFO util.GSet: Computing capacity for map cachedBlocks
50 18/07/01 12:14:31 INFO util.GSet: VM type       = 64-bit
51 18/07/01 12:14:31 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
52 18/07/01 12:14:31 INFO util.GSet: capacity      = 2^18 = 262144 entries
53 18/07/01 12:14:31 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
54 18/07/01 12:14:31 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
55 18/07/01 12:14:31 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
56 18/07/01 12:14:31 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
57 18/07/01 12:14:31 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
58 18/07/01 12:14:31 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
59 18/07/01 12:14:31 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
60 18/07/01 12:14:31 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
61 18/07/01 12:14:31 INFO util.GSet: Computing capacity for map NameNodeRetryCache
62 18/07/01 12:14:31 INFO util.GSet: VM type       = 64-bit
63 18/07/01 12:14:31 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
64 18/07/01 12:14:31 INFO util.GSet: capacity      = 2^15 = 32768 entries
65 18/07/01 12:14:39 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1084901829-10.0.0.113-1530418479529
66 18/07/01 12:14:39 INFO common.Storage: Storage directory /app/hadoop/tmp/dfs/name has been successfully formatted.
67 18/07/01 12:14:39 INFO namenode.FSImageFormatProtobuf: Saving image file /app/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
68 18/07/01 12:14:39 INFO namenode.FSImageFormatProtobuf: Image file /app/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 320 bytes saved in 0 seconds.
69 18/07/01 12:14:39 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
70 18/07/01 12:14:39 INFO util.ExitUtil: Exiting with status 0
71 18/07/01 12:14:39 INFO namenode.NameNode: SHUTDOWN_MSG: 
72 /************************************************************
73 SHUTDOWN_MSG: Shutting down NameNode at mini03/10.0.0.113
74 ************************************************************/

Copy to mini04

 1 # Formatting generates files under the directory set by hadoop.tmp.dir in core-site.xml (/app/hadoop/tmp here); copy /app/hadoop/tmp to /app/hadoop/ on mini04.
 2 # Method 1:
 3 [yun@mini03 hadoop]$ pwd
 4 /app/hadoop
 5 [yun@mini03 hadoop]$ scp -r tmp/ yun@mini04:/app/hadoop  
 6 VERSION                           100%  202   189.4KB/s   00:00 
 7 seen_txid                         100%    2     1.0KB/s   00:00 
 8 fsimage_0000000000000000000.md5   100%   62    39.7KB/s   00:00 
 9 fsimage_0000000000000000000       100%  320   156.1KB/s   00:00 
10 
11 ##########################
12 # Method 2: alternatively run hdfs namenode -bootstrapStandby on mini04 (recommended), but this requires the NameNode on mini03 to be running first

4.4.2. Format ZKFC

 1 # Run once, on mini03
 2 [yun@mini03 hadoop]$ hdfs zkfc -formatZK
 3 18/07/01 12:14:58 INFO tools.DFSZKFailoverController: Failover controller configured for NameNode NameNode at mini03/10.0.0.113:9000
 4 18/07/01 12:14:58 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
 5 18/07/01 12:14:58 INFO zookeeper.ZooKeeper: Client environment:host.name=mini03
 6 18/07/01 12:14:58 INFO zookeeper.ZooKeeper: Client environment:java.version=1.8.0_112
 7 18/07/01 12:14:58 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
 8 18/07/01 12:14:58 INFO zookeeper.ZooKeeper: Client environment:java.home=/app/jdk1.8.0_112/jre
 9 18/07/01 12:14:58 INFO zookeeper.ZooKeeper: Client environment:java.class.path=……………………
10 18/07/01 12:14:58 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/app/hadoop-2.7.6/lib/native
11 18/07/01 12:14:58 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
12 18/07/01 12:14:58 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
13 18/07/01 12:14:58 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
14 18/07/01 12:14:58 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
15 18/07/01 12:14:58 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-693.el7.x86_64
16 18/07/01 12:14:58 INFO zookeeper.ZooKeeper: Client environment:user.name=yun
17 18/07/01 12:14:58 INFO zookeeper.ZooKeeper: Client environment:user.home=/app
18 18/07/01 12:14:58 INFO zookeeper.ZooKeeper: Client environment:user.dir=/app/hadoop-2.7.6
19 18/07/01 12:14:58 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=mini03:2181,mini04:2181,mini05:2181,mini06:2181,mini07:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@7f3b84b8
20 18/07/01 12:14:58 INFO zookeeper.ClientCnxn: Opening socket connection to server mini05/10.0.0.115:2181. Will not attempt to authenticate using SASL (unknown error)
21 18/07/01 12:14:58 INFO zookeeper.ClientCnxn: Socket connection established to mini05/10.0.0.115:2181, initiating session
22 18/07/01 12:14:58 INFO zookeeper.ClientCnxn: Session establishment complete on server mini05/10.0.0.115:2181, sessionid = 0x56455467ae50003, negotiated timeout = 5000
23 18/07/01 12:14:58 INFO ha.ActiveStandbyElector: Session connected.
24 18/07/01 12:15:00 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/dt in ZK.
25 18/07/01 12:15:00 INFO zookeeper.ZooKeeper: Session: 0x56455467ae50003 closed
26 18/07/01 12:15:00 INFO zookeeper.ClientCnxn: EventThread shut down

4.5. Start HDFS

 1 # Run once, on mini01
 2 [yun@mini01 hadoop]$ start-dfs.sh 
 3 Starting namenodes on [mini01 mini02 mini03 mini04]
 4 mini01: starting namenode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-namenode-mini01.out
 5 mini02: starting namenode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-namenode-mini02.out
 6 mini04: starting namenode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-namenode-mini04.out
 7 mini03: starting namenode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-namenode-mini03.out
 8 mini06: starting datanode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-datanode-mini06.out
 9 mini07: starting datanode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-datanode-mini07.out
10 mini05: starting datanode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-datanode-mini05.out
11 Starting journal nodes [mini05 mini06 mini07]
12 mini05: journalnode running as process 6922. Stop it first.
13 mini07: journalnode running as process 6370. Stop it first.
14 mini06: journalnode running as process 7078. Stop it first.
15 Starting ZK Failover Controllers on NN hosts [mini01 mini02 mini03 mini04]
16 mini04: starting zkfc, logging to /app/hadoop-2.7.6/logs/hadoop-yun-zkfc-mini04.out
17 mini01: starting zkfc, logging to /app/hadoop-2.7.6/logs/hadoop-yun-zkfc-mini01.out
18 mini02: starting zkfc, logging to /app/hadoop-2.7.6/logs/hadoop-yun-zkfc-mini02.out
19 mini03: starting zkfc, logging to /app/hadoop-2.7.6/logs/hadoop-yun-zkfc-mini03.out
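After start-dfs.sh finishes, the HA state of each NameNode can be queried per nameservice. The commands below are a sketch: the -ns option selects the nameservice since two are configured, and which node of a pair comes up active may vary.

[yun@mini01 ~]$ hdfs haadmin -ns bi -getServiceState nn1
active
[yun@mini01 ~]$ hdfs haadmin -ns bi -getServiceState nn2
standby
[yun@mini01 ~]$ hdfs haadmin -ns dt -getServiceState nn3
active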

4.6. Start YARN

 1 ##### Note #####: start-yarn.sh is executed on mini03. The NameNodes and ResourceManagers are placed on different machines for performance reasons,
 2 # since both consume a lot of resources; because they are separated, they have to be started on their respective machines
 3 [yun@mini03 ~]$ start-yarn.sh  
 4 starting yarn daemons
 5 starting resourcemanager, logging to /app/hadoop-2.7.6/logs/yarn-yun-resourcemanager-mini03.out
 6 mini07: starting nodemanager, logging to /app/hadoop-2.7.6/logs/yarn-yun-nodemanager-mini07.out
 7 mini06: starting nodemanager, logging to /app/hadoop-2.7.6/logs/yarn-yun-nodemanager-mini06.out
 8 mini05: starting nodemanager, logging to /app/hadoop-2.7.6/logs/yarn-yun-nodemanager-mini05.out
 9 
10 
11 ################################
12 # Start the second ResourceManager on mini04
13 [yun@mini04 ~]$ yarn-daemon.sh start resourcemanager  # start-yarn.sh would also work
14 starting resourcemanager, logging to /app/hadoop-2.7.6/logs/yarn-yun-resourcemanager-mini04.out
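ResourceManager HA can be verified the same way; rm1 and rm2 are the ids defined in yarn-site.xml (a sketch; which one is active depends on start order):

[yun@mini03 ~]$ yarn rmadmin -getServiceState rm1
active
[yun@mini03 ~]$ yarn rmadmin -getServiceState rm2
standby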

4.7. Start-up notes

# For the first start-up, follow the steps above strictly (the first start involves formatting)
# From the second start-up onward the order is simply: ZooKeeper, then HDFS, then YARN

5. Browser access

5.1. Accessing bi

http://mini01:50070
http://mini02:50070

5.2. Accessing dt

http://mini03:50070
http://mini04:50070

6. Basic operations

6.1. Creating directories and uploading files in HDFS

 1 [yun@mini02 software]$ pwd
 2 /app/software
 3 [yun@mini02 software]$ ll
 4 total 715008
 5 -rw-r--r-- 1 yun yun 198811365 Jun  8 16:36 CentOS-7.4_hadoop-2.7.6.tar.gz
 6 -rw-r--r-- 1 yun yun 350098461 May 25 20:18 eclipse-jee-oxygen-3a-win32-x86_64.zip
 7 -rw-r--r-- 1 yun yun 183249642 Oct 27  2017 jdk1.8.0_112.tar.gz
 8 -rw-rw-r-- 1 yun yun         4 Jul  1 09:37 test
 9 [yun@mini02 software]$ hadoop fs -ls /   # the command line shows that the namespace is logically partitioned
10 Found 2 items
11 -r-xr-xr-x   - yun yun          0 2018-07-01 17:26 /bi
12 -r-xr-xr-x   - yun yun          0 2018-07-01 17:26 /dt
13 [yun@mini02 software]$ hadoop fs -mkdir /bi/software   # create a directory under bi
14 [yun@mini02 software]$ 
15 [yun@mini02 software]$ hadoop fs -mkdir /dt/software   # create a directory under dt
16 [yun@mini02 software]$ 
17 [yun@mini02 software]$ hadoop fs -put jdk1.8.0_112.tar.gz /bi/software # upload a file to bi
18 [yun@mini02 software]$ hadoop fs -put eclipse-jee-oxygen-3a-win32-x86_64.zip /dt/software # upload a file to dt
19 [yun@mini02 software]$ 
20 [yun@mini02 software]$ 
21 [yun@mini02 software]$ hadoop fs -ls /bi/software
22 Found 1 items
23 -rw-r--r--   3 yun supergroup  183249642 2018-07-01 17:28 /bi/software/jdk1.8.0_112.tar.gz
24 [yun@mini02 software]$ 
25 [yun@mini02 software]$ 
26 [yun@mini02 software]$ hadoop fs -ls /dt/software
27 Found 1 items
28 -rw-r--r--   3 yun supergroup  350098461 2018-07-01 17:29 /dt/software/eclipse-jee-oxygen-3a-win32-x86_64.zip
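The mounted paths can also be addressed through the underlying nameservices directly, which makes the mapping visible: /bi on the ViewFS side is simply the root of the bi nameservice. A sketch (the listings should show the files uploaded above):

[yun@mini02 software]$ hadoop fs -ls hdfs://bi/software    # should list jdk1.8.0_112.tar.gz
[yun@mini02 software]$ hadoop fs -ls hdfs://dt/software    # should list eclipse-jee-oxygen-3a-win32-x86_64.zip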

6.2. DataNode directory layout

Each DataNode now holds two block pools (the two BP-* directories below), one for each nameservice (bi and dt), side by side under the same data directory.

 1 [yun@mini05 current]$ pwd
 2 /app/hadoop/tmp/dfs/data/current
 3 [yun@mini05 current]$ ll
 4 total 4
 5 drwx------ 4 yun yun  54 Jul  1 17:14 BP-153647176-10.0.0.111-1530436323095
 6 drwx------ 4 yun yun  54 Jul  1 17:14 BP-282900577-10.0.0.113-1530436402230
 7 -rw-rw-r-- 1 yun yun 229 Jul  1 17:14 VERSION
 8 [yun@mini05 current]$ tree 
 9 .
10 ├── BP-153647176-10.0.0.111-1530436323095
11 │   ├── current
12 │   │   ├── finalized
13 │   │   │   └── subdir0
14 │   │   │       └── subdir0
15 │   │   │           ├── blk_1073741825
16 │   │   │           ├── blk_1073741825_1001.meta
17 │   │   │           ├── blk_1073741826
18 │   │   │           └── blk_1073741826_1002.meta
19 │   │   ├── rbw
20 │   │   └── VERSION
21 │   ├── scanner.cursor
22 │   └── tmp
23 ├── BP-282900577-10.0.0.113-1530436402230
24 │   ├── current
25 │   │   ├── finalized
26 │   │   │   └── subdir0
27 │   │   │       └── subdir0
28 │   │   │           ├── blk_1073741825
29 │   │   │           ├── blk_1073741825_1001.meta
30 │   │   │           ├── blk_1073741826
31 │   │   │           ├── blk_1073741826_1002.meta
32 │   │   │           ├── blk_1073741827
33 │   │   │           └── blk_1073741827_1003.meta
34 │   │   ├── rbw
35 │   │   └── VERSION
36 │   ├── scanner.cursor
37 │   └── tmp
38 └── VERSION
39 
40 14 directories, 15 files