Hadoop ports (Hadoop 2.x defaults)
50070 //namenode http port
50075 //datanode http port
50090 //secondarynamenode (2NN) http port
8020 //namenode rpc port
50010 //datanode rpc port
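A quick way to probe these ports from another shell, assuming the NameNode host s250 used later in these notes:
$>curl -s -o /dev/null -w "%{http_code}\n" http://s250:50070   # 200 means the namenode web UI is up
$>netstat -tln | grep 8020                                     # run on s250: the namenode rpc port should be in LISTEN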
The four Hadoop modules
common //shared utilities
hdfs //namenode + datanode + secondarynamenode
mapred //MapReduce framework
yarn //resourcemanager + nodemanager
Startup scripts
1.start-all.sh //start all daemons
2.stop-all.sh //stop all daemons
3.start-dfs.sh //start all HDFS daemons [3 roles: NN, DN, 2NN]
stop-dfs.sh //stop all HDFS daemons [3 roles: NN, DN, 2NN]
4.start-yarn.sh //start the managers [2 roles: RM, NM]
stop-yarn.sh //stop the managers
[hdfs] start-dfs.sh stop-dfs.sh
NN (NameNode)
DN (DataNode)
2NN (SecondaryNameNode, the auxiliary/checkpoint name node)
[yarn] start-yarn.sh stop-yarn.sh
RM (ResourceManager)
NM (NodeManager)
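After running these scripts, jps shows which daemons actually came up; a minimal check, assuming the layout used later in these notes (NN/2NN/RM on master s250, DN/NM on the workers):
$>jps              # on s250: expect NameNode, SecondaryNameNode, ResourceManager
$>ssh s251 jps     # on a worker: expect DataNode, NodeManager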
Changing the hostname
1./etc/hostname
s250
2./etc/hosts [why configure it: pinging a name is simpler than pinging an IP]
127.0.0.1 localhost
192.168.77.250 s250
192.168.77.251 s251
192.168.77.252 s252
192.168.77.253 s253
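Editing /etc/hostname alone only takes effect after a reboot; on CentOS 7, hostnamectl applies it immediately. A minimal sketch, assuming host s250:
$>sudo hostnamectl set-hostname s250   # updates /etc/hostname and the live hostname
$>ping -c 1 s251                       # confirm the /etc/hosts entries resolve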
Fully distributed mode
1.Clone 3 clients (CentOS 7)
Right-click centos-7 --> Manage -> Clone -> ... -> Full Clone
2.Start the clients
3.Enable shared folders for the guest machines.
4.Edit the hostname and IP address files (a sample ifcfg-ens33 follows this list)
[/etc/hostname]
s251
[/etc/sysconfig/network-scripts/ifcfg-ens33]
...
IPADDR=..
5.Restart the network service
$>sudo service network restart
6.Edit the /etc/resolv.conf file
nameserver 192.168.77.2
7.Repeat steps 3 ~ 6 on each clone.
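For reference, a minimal static-IP ifcfg-ens33 for s251; the addresses come from the /etc/hosts table above, while the /24 prefix and the 192.168.77.2 gateway (matching the nameserver in step 6) are assumptions, not from the original notes:
[/etc/sysconfig/network-scripts/ifcfg-ens33]
TYPE=Ethernet
BOOTPROTO=static         # fixed IP instead of DHCP
NAME=ens33
DEVICE=ens33
ONBOOT=yes               # bring the interface up at boot
IPADDR=192.168.77.251    # s251; use .252/.253 on the other clones
PREFIX=24                # assumed /24 netmask
GATEWAY=192.168.77.2     # assumed: same box as the nameserver in step 6
DNS1=192.168.77.2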
Preparing SSH for the fully distributed hosts
1.Delete /home/centos/.ssh/* on all hosts
1. cd ~/.ssh [on host s250]
rm -rf *
2.ssh s251 rm -rf /home/centos/.ssh/*
3.ssh s252 rm -rf /home/centos/.ssh/*
4.ssh s253 rm -rf /home/centos/.ssh/*
2.Generate a key pair on host s250
$>ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
3.Copy s250's public key file id_rsa.pub to hosts s250 ~ s253,
placing it as /home/centos/.ssh/authorized_keys (s250 itself is included so it can also SSH into itself).
$>scp id_rsa.pub centos@s250:/home/centos/.ssh/authorized_keys
$>scp id_rsa.pub centos@s251:/home/centos/.ssh/authorized_keys
$>scp id_rsa.pub centos@s252:/home/centos/.ssh/authorized_keys
$>scp id_rsa.pub centos@s253:/home/centos/.ssh/authorized_keys
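Before moving on, it is worth confirming the passwordless login actually works; a quick check, assuming the centos account on every host:
$>ssh s251 hostname   # should print s251 without any password prompt
$>ssh s252 hostname
$>ssh s253 hostname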
4.Configure fully distributed mode ($HADOOP_HOME/etc/hadoop/, here /soft/hadoop/etc/full/; full XML sketches follow this block)
[core-site.xml]
fs.defaultFS
hdfs://s250/
[hdfs-site.xml]
dfs.replication
3
[mapred-site.xml]
unchanged
[yarn-site.xml]
yarn.resourcemanager.hostname
s250
yarn.nodemanager.aux-services
mapreduce_shuffle
[slaves] the workers file: the hosts the master controls
s251
s252
s253
[hadoop-env.sh] double-check that JAVA_HOME in hadoop-env.sh is an absolute path in fully distributed mode
...
export JAVA_HOME=/soft/jdk
...
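For reference, the properties above expanded into Hadoop's XML form; a minimal sketch containing exactly the values listed, with nothing extra assumed:
[core-site.xml]
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://s250/</value>
  </property>
</configuration>
[hdfs-site.xml]
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
[yarn-site.xml]
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>s250</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>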
5.Distribute the configuration
$>cd /soft/hadoop/etc/
$>scp -r full centos@s251:/soft/hadoop/etc/
$>scp -r full centos@s252:/soft/hadoop/etc/
$>scp -r full centos@s253:/soft/hadoop/etc/
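The three copies can also be done in one loop; a sketch assuming the same hosts and paths:
$>cd /soft/hadoop/etc/
$>for h in s251 s252 s253 ; do scp -r full centos@$h:/soft/hadoop/etc/ ; done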
6.Delete the symlinks
$>cd /soft/hadoop/etc
$>rm hadoop
$>ssh s251 rm /soft/hadoop/etc/hadoop
$>ssh s252 rm /soft/hadoop/etc/hadoop
$>ssh s253 rm /soft/hadoop/etc/hadoop
7.Create the symlinks
$>cd /soft/hadoop/etc/
$>ln -s full hadoop
$>ssh s251 ln -s /soft/hadoop/etc/full /soft/hadoop/etc/hadoop
$>ssh s252 ln -s /soft/hadoop/etc/full /soft/hadoop/etc/hadoop
$>ssh s253 ln -s /soft/hadoop/etc/full /soft/hadoop/etc/hadoop
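To confirm each host now resolves hadoop to the full directory:
$>ls -ld /soft/hadoop/etc/hadoop          # expect: hadoop -> full
$>ssh s251 ls -ld /soft/hadoop/etc/hadoop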
8.Delete the temporary directory files
$>sudo rm -rf /tmp/*
$>ssh s251
sudo rm -rf /tmp/*
$>ssh s252
sudo rm -rf /tmp/*
$>ssh s253
sudo rm -rf /tmp/*
9.Delete the Hadoop logs
$>cd /soft/hadoop/logs
$>rm -rf *
$>ssh s251 rm -rf /soft/hadoop/logs/*
$>ssh s252 rm -rf /soft/hadoop/logs/*
$>ssh s253 rm -rf /soft/hadoop/logs/*
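Steps 8 and 9 can also be collapsed into one loop over the workers; a sketch that assumes the centos account may run sudo non-interactively (otherwise log in per host as above):
$>for h in s251 s252 s253 ; do ssh $h 'sudo rm -rf /tmp/*' ; ssh $h 'rm -rf /soft/hadoop/logs/*' ; done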
10.Format the filesystem
$>hadoop namenode -format //deprecated alias in 2.x; equivalent to: hdfs namenode -format
11.Start the Hadoop daemons
$>start-all.sh
12.Open the web UI at http://<master-ip>:50070 to check whether the cluster came up.
(screenshots omitted) The correct state shows the worker nodes as live datanodes in the web UI; if they are missing, repeat steps 8 ~ 11.
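Beyond the web UI, cluster health can also be checked from the shell:
$>jps                      # on s250: NameNode, SecondaryNameNode, ResourceManager
$>hdfs dfsadmin -report    # should list s251 ~ s253 as live datanodes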