To learn Hive SQL, I set up a single-machine Hive installation. I consulted a lot of material online and hit quite a few pitfalls along the way; this post summarizes the process.
Installing Hive requires installing the JDK, Hadoop, and MySQL first, and Hive last.
1. Install the JDK
Oracle JDK 8 is recommended. The official docs do not mandate the Oracle build of JDK 8, but I found that non-Oracle JDKs, or Oracle JDKs newer than 8, need extra configuration, and online material for them is scarce.
1.1 Obtain jdk-8u161-linux-x64.tar.gz
1.2 Run
#tar -xzf jdk-8u161-linux-x64.tar.gz -C /usr/local/
# cd /usr/local/
#mv jdk1.8.0_161 java
#gedit ~/.bashrc
export JAVA_HOME=/usr/local/java
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin
# source ~/.bashrc
# java -version
java version "1.8.0_161"
Java(TM) SE Runtime Environment (build 1.8.0_161-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)
2. Install Hadoop
2.1 Install and configure SSH
2.1.1 Install SSH
# sudo apt-get install openssh-server
After installation, try logging in:
# ssh localhost
This login requires a password; the next step configures passwordless login.
2.1.2 Configure passwordless SSH login
#exit # exit the ssh localhost session from above
# cd ~/.ssh/ # if this directory does not exist, run ssh localhost once more
# ssh-keygen -t rsa # press Enter at every prompt
# cat ./id_rsa.pub >> ./authorized_keys # authorize the key
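If passwordless login still fails after these steps, the usual culprit is file permissions: ssh silently ignores keys whose files are too permissive. A quick check (a sketch, not required on every system):

```shell
# ssh ignores authorized_keys if permissions are too loose
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
ssh localhost 'echo ok'   # should print ok without asking for a password
```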
2.2 Install and configure Hadoop
2.2.1 Download and install Hadoop
Download hadoop-3.3.6.tar.gz
#tar -xzf hadoop-3.3.6.tar.gz -C /usr/local/
#cd /usr/local/
#mv hadoop-3.3.6 hadoop
#chown -R hadoop ./hadoop # the first "hadoop" is the user name; replace it with your own
#gedit ~/.bashrc
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
# source ~/.bashrc
# hadoop version
Hadoop 3.3.6
Source code repository https://github.com/apache/hadoop.git -r 1be78238728da9266a4f88195058f08fd012bf9c
Compiled by ubuntu on 2023-06-18T08:22Z
Compiled on platform linux-x86_64
Compiled with protoc 3.7.1
From source with checksum 5652179ad55f76cb287d9c633bb53bbd
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-3.3.6.jar
2.2.2 Hadoop standalone mode
After installation Hadoop runs in standalone (local) mode by default, with no extra configuration needed. Try it with the bundled MapReduce grep example:
#cd /usr/local/hadoop
# mkdir ./input
# cp ./etc/hadoop/*.xml ./input # use the config files as input
#./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep ./input ./output 'dfs[a-z.]+'
# cat ./output/* # view the results
Output:
1 dfsadmin
2.2.3 Hadoop pseudo-distributed configuration
#gedit /usr/local/hadoop/etc/hadoop/core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/usr/local/hadoop/tmp</value>
<description>Abase for other temporary directories.</description>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
#gedit /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop/tmp/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop/tmp/dfs/data</value>
</property>
</configuration>
3) In /usr/local/hadoop, run
# ./bin/hdfs namenode -format # format the NameNode
4) Allow the root user to start Hadoop
Hadoop does not allow the root user to start it by default, but for convenience while learning you can enable it.
Edit /etc/profile (for example #gedit /etc/profile), add the lines below, then apply them with # source /etc/profile:
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
5) Configure Hadoop's JAVA_HOME
#gedit /usr/local/hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/local/java
export HADOOP_HOME=/usr/local/hadoop
6) Start HDFS
# ./sbin/start-dfs.sh # starts the NameNode and DataNode
7) Check the daemons
#jps
37153 DataNode
43796 Jps
37380 SecondaryNameNode # checkpoint helper for the NameNode, not a standby NameNode
37013 NameNode
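With the daemons up, a quick HDFS round trip confirms the pseudo-distributed setup works. This is a sketch; the /user/root home directory matches starting the cluster as root:

```shell
cd /usr/local/hadoop
./bin/hdfs dfs -mkdir -p /user/root                       # create a home directory in HDFS
./bin/hdfs dfs -put etc/hadoop/core-site.xml /user/root   # upload a file
./bin/hdfs dfs -ls /user/root                             # the uploaded file should be listed
```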
8) Configure YARN (optional)
Work in /usr/local/hadoop.
#gedit etc/hadoop/mapred-site.xml # in Hadoop 3.x this file already exists; the "cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml" step applies only to Hadoop 2.x
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
9) Configure the shuffle service
#gedit etc/hadoop/yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
10) Start the resource manager
#./sbin/start-yarn.sh
#./sbin/mr-jobhistory-daemon.sh start historyserver # enables viewing job history
11) Once everything is up, the cluster resource manager web UI is available at http://localhost:8088/cluster.
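Besides the web UI, YARN's health can be checked from the command line (a sketch, assuming start-yarn.sh succeeded):

```shell
cd /usr/local/hadoop
./bin/yarn node -list   # should report one NodeManager in RUNNING state
```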
3. Install MySQL (this guide actually installs MariaDB, a drop-in MySQL replacement)
3.1 Check whether MySQL or MariaDB is already installed
Check for MariaDB:
#sudo dpkg -l | grep maria
Check for MySQL:
#sudo dpkg -l | grep mysql
3.2 Remove any existing MySQL or MariaDB
If MySQL or MariaDB was installed previously, removing it first is recommended.
Remove a previous MySQL installation:
#sudo apt autoremove mysql-*
Remove a previous MariaDB installation:
#sudo apt autoremove mariadb-*
3.3 Install MariaDB
#sudo apt update
#sudo apt install mariadb-server
3.4 Check the MariaDB service status
#sudo systemctl status mariadb
3.5 Basic configuration
#sudo mysql_secure_installation
1) Enter current password for root (enter for none):
(a fresh MariaDB install has no root password; just press Enter)
2) Switch to unix_socket authentication [Y/n] n
(whether to switch to unix_socket authentication; n means do not switch)
3) Change the root password? [Y/n] y
(whether to change the root account's password; y means change it.
Notes:
- Use a strong root password, otherwise you may need sudo every time you connect to MariaDB.
- The cursor does not move while you type the password.
- This guide sets the password to 12345.)
4) Remove anonymous users? [Y/n] y
(whether to remove anonymous users; y means remove them.
By default MariaDB ships with anonymous users that let anyone log in without an account; always remove them in production.)
5) Disallow root login remotely? [Y/n] y
(whether to disallow remote root login; y means root can only log in from localhost)
6) Remove test database and access to it? [Y/n] y
(whether to remove the test database; y means remove it.
By default MariaDB has a test database that any user can access.)
7) Reload privilege tables now? [Y/n] y
(whether to reload the privilege tables; y means reload immediately)
4. Install Hive
4.1 Download the Hive package
apache-hive-2.3.9-bin.tar.gz
4.2 Install Hive
#tar -xzf apache-hive-2.3.9-bin.tar.gz -C /usr/local/
#cd /usr/local/
#mv apache-hive-2.3.9-bin hive
#gedit ~/.bashrc
export HIVE_HOME=/usr/local/hive
export PATH=$HIVE_HOME/bin:$PATH
#source ~/.bashrc
4.3 Install the MySQL JDBC driver
Download mysql-connector-java-5.1.49.tar.gz
#tar -xvf mysql-connector-java-5.1.49.tar.gz
#cp mysql-connector-java-5.1.49/mysql-connector-java-5.1.49.jar /usr/local/hive/lib
4.4 Edit the configuration file hive-site.xml
Note: for a standalone setup, hive-site.xml should contain only the properties below; remove everything else. This is where I tripped up.
#gedit /usr/local/hive/conf/hive-site.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- MySQL connection URL -->
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<!-- MySQL JDBC driver -->
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<!-- MySQL account -->
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<!-- MySQL password -->
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hive_pwd</value>
</property>
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
</property>
<property>
<name>datanucleus.schema.autoCreateAll</name>
<value>true</value>
</property>
</configuration>
4.5 Initialize the metastore
1) Grant privileges to the Hive account
#mysql -uroot -p12345 # use the root password set during mysql_secure_installation
mysql>grant all privileges on hive.* to 'hive'@'%' identified by 'hive_pwd';
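The grant can be sanity-checked by connecting as the hive account before initializing the schema (a sketch; hive_pwd is the password chosen in the grant above):

```shell
# should connect and print the authenticated user without error
mysql -uhive -phive_pwd -e "SELECT CURRENT_USER();"
```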
2) Initialize the metastore schema
#schematool -dbType mysql -initSchema
3) Run Hive
# hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-2.3.9.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive>
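Once the hive> prompt appears, a short smoke test confirms that the metastore and HDFS are wired together. The demo database and table names here are made up for illustration; the INSERT launches a MapReduce job, so it takes a while:

```shell
# Hive smoke test run from the shell (hypothetical demo names)
hive -e "
CREATE DATABASE IF NOT EXISTS demo;
CREATE TABLE IF NOT EXISTS demo.t (id INT, name STRING);
INSERT INTO demo.t VALUES (1, 'hello');
SELECT * FROM demo.t;
"
```

If the SELECT returns the inserted row, both the MySQL-backed metastore and the HDFS warehouse directory are working.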