1: Environment Requirements
1: CentOS 6.8
2: JDK 1.8
3: Hadoop 2.7.x
[The above are unified company requirements.]
2: Configuration Requirements
1: Stop and disable the firewall
# service iptables stop
# chkconfig iptables off
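To confirm the firewall is stopped and will not start again on boot, the standard CentOS 6 checks are:
# service iptables status
# chkconfig --list iptables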
2: Configure the hostname
# vim /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=hyxy201
After setting this, reboot CentOS for the change to take effect.
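After the reboot, the hostname command should print the new name:
$ hostname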
3: Configure a static IP address
So that multiple virtual machines can communicate with one another, we still need to configure a Host-Only network adapter. Configure a static address such as 192.168.56.201, as sketched below.
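A minimal interface configuration sketch for CentOS 6, assuming the Host-Only adapter appears as eth1 (the device name may differ on your machine):
# vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.56.201
NETMASK=255.255.255.0
Afterwards, restart the network service with # service network restart.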
4: Configure the hosts file:
192.168.56.201 hadoop201
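Note that the cluster built below uses three nodes, hadoop31 through hadoop33, so every machine's /etc/hosts needs an entry for each of them. A sketch, assuming the addresses 192.168.56.31-33 (only 192.168.56.33 is confirmed by the report in section 6):
192.168.56.31 hadoop31
192.168.56.32 hadoop32
192.168.56.33 hadoop33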
5: Configure passwordless SSH login
$ ssh-keygen -t rsa
$ ssh-copy-id hostname
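ssh-copy-id must be run once per target host. A loop sketch, assuming the three cluster hostnames above:
$ for h in hadoop31 hadoop32 hadoop33; do ssh-copy-id $h; done
$ ssh hadoop32
The second command should log in without prompting for a password.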
3: Install the JDK
$ tar -zxf jdk1.8.162.tar.gz -C /usr/
Configure the JDK environment variables in /etc/profile:
export JAVA_HOME=/usr/jdk1.8.0_162
export PATH=$PATH:$JAVA_HOME/bin
Then make the configuration take effect:
$ source /etc/profile
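Verify that the JDK is on the PATH:
$ java -version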
4: Configure fully distributed mode
The deployment plan for the fully distributed cluster:

Host     | HDFS daemons                 | YARN daemons
hadoop31 | NameNode, DataNode           | NodeManager
hadoop32 | DataNode                     | ResourceManager, NodeManager
hadoop33 | SecondaryNameNode, DataNode  | NodeManager

The contents of the configuration files are as follows:
hadoop-env.sh:
export JAVA_HOME=/usr/jdk1.8.0_162
core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop31:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop-tmp-dir</value>
  </property>
</configuration>
hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/app/datas/datanode-data-dir</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/app/datas/namenode-name-dir</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop33:50090</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.https-address</name>
    <value>hadoop33:50091</value>
  </property>
</configuration>
mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
yarn-site.xml:
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop32</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
slaves:
hadoop31
hadoop32
hadoop33
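After editing, the same configuration files must be copied to all three nodes, the NameNode formatted once, and the daemons started. A sketch, assuming Hadoop is installed at the same $HADOOP_HOME on every node and its bin/sbin directories are on the PATH:
$ scp -r $HADOOP_HOME/etc/hadoop hadoop32:$HADOOP_HOME/etc/
$ scp -r $HADOOP_HOME/etc/hadoop hadoop33:$HADOOP_HOME/etc/
$ hdfs namenode -format    # run once, on hadoop31 only
$ start-dfs.sh             # starts NameNode, DataNodes and SecondaryNameNode
$ start-yarn.sh            # run on hadoop32, where the ResourceManager lives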
5: Check the processes on each server
On hadoop31:
[wangjian@hadoop31 /]$ jps
2482 Jps
2328 NodeManager
1530 NameNode
1629 DataNode
On hadoop32:
[wangjian@hadoop32 /]$ jps
3552 NodeManager
3446 ResourceManager
3932 Jps
2125 DataNode
On hadoop33:
[wangjian@hadoop33 /]$ jps
2209 SecondaryNameNode
2113 DataNode
2613 NodeManager
2783 Jps
As the output shows, each node is running exactly the daemons from the deployment plan above.
6: Test HDFS commands and MapReduce programs
[wangjian@hadoop33 /]$ hdfs dfsadmin -report
DFS Remaining: 87526440960 (81.52 GB)
DFS Used: 86016 (84 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
-------------------------------------------------
Live datanodes (3):
Name: 192.168.56.33:50010 (hadoop33)
Hostname: hadoop33
Decommission Status : Normal
DFS Used: 28672 (28 KB)
Non DFS Used: 2007281664 (1.87 GB)
DFS Remaining: 29175828480 (27.17 GB)
DFS Used%: 0.00%
DFS Remaining%: 88.79%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Mar 16 23:30:46 CST 2018
Finally, test more HDFS operations and run a MapReduce job; a sketch follows.
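A quick end-to-end check using the examples jar that ships with Hadoop 2.7.x; the exact jar file name depends on your minor version, so adjust it as needed:
$ hdfs dfs -mkdir -p /user/wangjian/input
$ hdfs dfs -put /etc/profile /user/wangjian/input
$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.x.jar wordcount /user/wangjian/input /user/wangjian/output
$ hdfs dfs -cat /user/wangjian/output/part-r-00000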
OVER