1) Using the explode function to split Map and Array fields of a Hive table
Requirement
We have data in the following format:
zhangsan child1,child2,child3,child4 k1:v1,k2:v2
lisi child5,child6,child7,child8 k3:v3,k4:v4
Fields are separated by \t. The requirement is to split all the child values out into a single column:
+----------+--+
| mychild |
+----------+--+
| child1 |
| child2 |
| child3 |
| child4 |
| child5 |
| child6 |
| child7 |
| child8 |
+----------+--+
Also split the map keys and values apart, producing the following result:
+-----------+-------------+
| mymapkey  | mymapvalue  |
+-----------+-------------+
| k1        | v1          |
| k2        | v2          |
| k3        | v3          |
| k4        | v4          |
+-----------+-------------+
Implementation
Create the Hive database:
hive (default)> CREATE database hive_explode;
hive (default)> use hive_explode;
hive (hive_explode)> CREATE table t3(name string,
children array<string>,
address map<string,string>)
row format delimited fields terminated by '\t'
collection items terminated by ','
map keys terminated by ':'
stored as textfile;
Load the data
Run the following commands to create the data file:
#mkdir -p /export/servers/hivedatas/
#cd /export/servers/hivedatas/
#gedit maparray
zhangsan child1,child2,child3,child4 k1:v1,k2:v2
lisi child5,child6,child7,child8 k3:v3,k4:v4
Load the data into the Hive table:
hive (hive_explode)> LOAD DATA LOCAL INPATH '/export/servers/hivedatas/maparray' into table t3;
Use explode to split the data in Hive
Split the array data apart:
hive (hive_explode)> SELECT explode(children) AS myChild FROM t3;
Split the map data apart:
hive (hive_explode)> SELECT explode(address) AS (myMapKey, myMapValue) FROM t3;
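As a sanity check, here is a plain-Python sketch (not Hive) of what these two explode queries produce for the sample rows:

```python
# Each array element, and each map entry, becomes an output row of its own.
rows = [
    ("zhangsan", ["child1", "child2", "child3", "child4"], {"k1": "v1", "k2": "v2"}),
    ("lisi",     ["child5", "child6", "child7", "child8"], {"k3": "v3", "k4": "v4"}),
]

# SELECT explode(children) AS myChild FROM t3
my_child = [child for _, children, _ in rows for child in children]

# SELECT explode(address) AS (myMapKey, myMapValue) FROM t3
my_map = [(k, v) for _, _, address in rows for k, v in address.items()]

print(my_child)  # 8 one-column rows
print(my_map)    # 4 (key, value) rows
```

The row counts (8 and 4) match the expected result tables above.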
2) Using explode to split a JSON string
Requirement
We have data in the following format:
a:shandong,b:beijing,c:hebei|1,2,3,4,5,6,7,8,9|[{"source":"7fresh","monthSales":4900,"userCount":1900,"score":"9.9"},{"source":"jd","monthSales":2090,"userCount":78981,"score":"9.8"},{"source":"jdmart","monthSales":6987,"userCount":1600,"score":"9.0"}]
The separator between fields is |.
We want to parse out all the monthSales values as the following column:
4900
2090
6987
Implementation
Create the Hive table:
hive (hive_explode)> CREATE table explode_lateral_view
(`area` string,
`goods_id` string,
`sale_info` string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
STORED AS textfile;
Prepare the data:
#cd /export/servers/hivedatas
#gedit explode_json
a:shandong,b:beijing,c:hebei|1,2,3,4,5,6,7,8,9|[{"source":"7fresh","monthSales":4900,"userCount":1900,"score":"9.9"},{"source":"jd","monthSales":2090,"userCount":78981,"score":"9.8"},{"source":"jdmart","monthSales":6987,"userCount":1600,"score":"9.0"}]
Load the data into the Hive table:
hive (hive_explode)> LOAD DATA LOCAL INPATH '/export/servers/hivedatas/explode_json' overwrite into table explode_lateral_view;
Use explode to split the Array:
hive (hive_explode)> SELECT explode(split(goods_id,',')) as goods_id FROM explode_lateral_view;
Use explode to split the Map:
hive (hive_explode)> SELECT explode(split(area,',')) as area FROM explode_lateral_view;
Split the JSON field:
hive (hive_explode)> SELECT explode(split(regexp_replace(regexp_replace(sale_info,'\\[\\{',''),'}]',''),'},\\{')) as sale_info FROM explode_lateral_view;
Then try get_json_object to extract the values for the key monthSales:
hive (hive_explode)> SELECT get_json_object(explode(split(regexp_replace(regexp_replace(sale_info,'\\[\\{',''),'}]',''),'},\\{')),'$.monthSales') as sale_info FROM explode_lateral_view;
This fails with: FAILED: SemanticException [Error 10081]: UDTF's are not supported outside the SELECT clause, nor nested in expressions
The UDTF explode cannot be nested inside another function.
Likewise, if you try to query two fields like this: SELECT explode(split(area,',')) as area,good_id FROM explode_lateral_view;
it fails with: FAILED: SemanticException 1:40 Only a single expression in the SELECT clause is supported with UDTF's. Error encountered near token 'good_id'
A UDTF supports only a single expression in the SELECT clause. This is where LATERAL VIEW comes in.
3) Using explode with LATERAL VIEW
Query multiple fields with LATERAL VIEW:
hive (hive_explode)> SELECT goods_id2,sale_info FROM explode_lateral_view LATERAL VIEW explode(split(goods_id,',')) goods as goods_id2;
Here LATERAL VIEW explode(split(goods_id,',')) goods acts as a virtual table that is joined, Cartesian-product style, with each row of the original table explode_lateral_view.
LATERAL VIEW can also be applied multiple times:
hive (hive_explode)> SELECT goods_id2,sale_info,area2
> FROM explode_lateral_view
> LATERAL VIEW explode(split(goods_id,',')) goods as goods_id2
> LATERAL VIEW explode(split(area,',')) area as area2;
Finally, the following statement converts each JSON-formatted row completely into a two-dimensional table:
hive (hive_explode)> SELECT get_json_object(concat('{',sale_info_1,'}'),'$.source') as source,
> get_json_object(concat('{',sale_info_1,'}'),'$.monthSales') as monthSales,
> get_json_object(concat('{',sale_info_1,'}'),'$.userCount') as userCount,
> get_json_object(concat('{',sale_info_1,'}'),'$.score') as score
> FROM explode_lateral_view
> LATERAL VIEW explode(split(regexp_replace(regexp_replace(sale_info,'\\[\\{',''),'}]',''),'},\\{')) sale_info as sale_info_1;
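The same strip-split-parse pipeline can be reproduced in plain Python to verify the string handling (a sketch using the sample sale_info value; Python's json module stands in for get_json_object):

```python
import json
import re

sale_info = ('[{"source":"7fresh","monthSales":4900,"userCount":1900,"score":"9.9"},'
             '{"source":"jd","monthSales":2090,"userCount":78981,"score":"9.8"},'
             '{"source":"jdmart","monthSales":6987,"userCount":1600,"score":"9.0"}]')

# regexp_replace(regexp_replace(sale_info,'\\[\\{',''),'}]','') : strip '[{' and '}]'
stripped = re.sub(r"\}\]$", "", re.sub(r"^\[\{", "", sale_info))
# split(..., '},\\{') : cut the remaining string into one piece per JSON object
pieces = stripped.split("},{")
# concat('{', piece, '}') + get_json_object : rebuild and parse each object
objs = [json.loads("{" + p + "}") for p in pieces]

month_sales = [o["monthSales"] for o in objs]
print(month_sales)  # [4900, 2090, 6987]
```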
4) Rows to column
Related functions
CONCAT(string A/col, string B/col…): returns the concatenation of its input strings; accepts any number of inputs.
CONCAT_WS(separator, str1, str2,...): a special form of CONCAT(). The first argument is the separator placed between the remaining arguments. The separator can be any string, including one equal to the remaining arguments. If the separator is NULL, the result is NULL. The function skips any NULL or empty strings after the separator argument; the separator is inserted between the concatenated strings.
COLLECT_SET(col): accepts only primitive types; it de-duplicates and aggregates the values of a column into an array-typed field.
Requirement
name | constellation | blood_type
---|---|---
孙悟空 | 白羊座 | A |
老王 | 射手座 | A |
宋宋 | 白羊座 | B |
猪八戒 | 白羊座 | A |
凤姐 | 射手座 | A |
Group together the people with the same constellation and blood type. Expected result:
射手座,A 老王|凤姐
白羊座,A 孙悟空|猪八戒
白羊座,B 宋宋
Create a local file constellation.txt and import the data
Run the following commands to create the file; note that fields are separated by \t:
cd /export/servers/hivedatas
gedit constellation.txt
孙悟空 白羊座 A
老王 射手座 A
宋宋 白羊座 B
猪八戒 白羊座 A
凤姐 射手座 A
Create the Hive table:
hive (hive_explode)> CREATE table person_info(
> name string,
> constellation string,
> blood_type string)
> row format delimited fields terminated by "\t";
Load the data:
hive (hive_explode)> LOAD DATA LOCAL INPATH '/export/servers/hivedatas/constellation.txt' into table person_info;
Query the data as required:
hive (hive_explode)> SELECT t1.base,concat_ws('|', collect_set(t1.name)) name FROM (SELECT name,concat(constellation, "," , blood_type) base FROM person_info) t1 group by t1.base;
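The concat / collect_set / concat_ws combination can be mirrored in plain Python (a sketch of the aggregation logic, not Hive):

```python
people = [
    ("孙悟空", "白羊座", "A"),
    ("老王",   "射手座", "A"),
    ("宋宋",   "白羊座", "B"),
    ("猪八戒", "白羊座", "A"),
    ("凤姐",   "射手座", "A"),
]

groups = {}
for name, constellation, blood_type in people:
    base = constellation + "," + blood_type      # concat(constellation, ",", blood_type)
    bucket = groups.setdefault(base, [])
    if name not in bucket:                       # collect_set de-duplicates
        bucket.append(name)

# concat_ws('|', collect_set(name)) per group
result = {base: "|".join(names) for base, names in groups.items()}
print(result)
```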
5) Column to rows
Required functions:
EXPLODE(col): splits a complex array or map column in Hive into multiple rows.
LATERAL VIEW
Usage:
LATERAL VIEW udtf(expression) tableAlias AS columnAlias
Explanation: used together with UDTFs such as split and explode; it splits one column of data into multiple rows, and the split data can then be aggregated.
Requirement
cd /export/servers/hivedatas
gedit movie.txt
《疑犯追踪》 悬疑,动作,科幻,剧情
《Lie to me》 悬疑,警匪,动作,心理,剧情
《战狼2》 战争,动作,灾难
Fields are separated by \t.
Expand the array of movie categories. Expected result:
《疑犯追踪》 悬疑
《疑犯追踪》 动作
《疑犯追踪》 科幻
《疑犯追踪》 剧情
《Lie to me》 悬疑
《Lie to me》 警匪
《Lie to me》 动作
《Lie to me》 心理
《Lie to me》 剧情
《战狼2》 战争
《战狼2》 动作
《战狼2》 灾难
Create the Hive table and load the data (the category field is an array<string>, with the elements separated by commas):
hive> CREATE table movie_info(
movie string,
category array<string>)
row format delimited fields terminated by "\t"
collection items terminated by ",";
hive> LOAD DATA LOCAL INPATH "/export/servers/hivedatas/movie.txt" into table movie_info;
hive> SELECT movie,category_name FROM movie_info lateral view explode(category) table_tmp as category_name;
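LATERAL VIEW explode pairs each exploded element with its source row; in plain Python the movie query amounts to (a sketch, not Hive):

```python
movies = [
    ("《疑犯追踪》", ["悬疑", "动作", "科幻", "剧情"]),
    ("《Lie to me》", ["悬疑", "警匪", "动作", "心理", "剧情"]),
    ("《战狼2》", ["战争", "动作", "灾难"]),
]

# SELECT movie, category_name FROM movie_info
# LATERAL VIEW explode(category) table_tmp AS category_name
flat = [(movie, category_name) for movie, categories in movies
        for category_name in categories]

for movie, category_name in flat:
    print(movie, category_name)  # 12 rows, as in the expected result above
```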
6) The reflect function
The reflect function uses Java reflection: reflect(class_name, method_name, args...) calls the named static method of the named class, passing the remaining columns as arguments.
Load the data:
hive (hive_explode)> LOAD DATA LOCAL INPATH '/export/servers/hivedatas/test_udf2' overwrite into table test_udf2;
Run the query:
hive (hive_explode)> SELECT reflect(class_name,method_name,col1,col2) FROM test_udf2;
2 Window functions
1) Window functions with the aggregates sum, avg, min, max
Create the table:
hive> CREATE table test_t1(
cookieid string,
createtime string, --day
pv int
) row format delimited
fields terminated by ',';
Prepare the data:
cookie1,2024-04-10,1
cookie1,2024-04-11,5
cookie1,2024-04-12,7
cookie1,2024-04-13,3
cookie1,2024-04-14,2
cookie1,2024-04-15,4
cookie1,2024-04-16,4
Load the data:
hive> LOAD DATA LOCAL INPATH '/home/jerry/hive/test_t1.dat' into table test_t1;
Enable automatic local mode:
set hive.exec.mode.local.auto=true;
Using sum together with the window clause. The result depends on ORDER BY, which defaults to ascending:
hive> SELECT cookieid,createtime,pv,
sum(pv) over(partition by cookieid order by createtime) as pv1
FROM test_t1;
hive> SELECT cookieid,createtime,pv,
sum(pv) over(partition by cookieid order by createtime rows between unbounded preceding and current row) as pv2
FROM test_t1;
hive> SELECT cookieid,createtime,pv,
sum(pv) over(partition by cookieid) as pv3
FROM test_t1;
hive> SELECT cookieid,createtime,pv,
sum(pv) over(partition by cookieid order by createtime rows between 3 preceding and current row) as pv4
FROM test_t1;
hive> SELECT cookieid,createtime,pv,
sum(pv) over(partition by cookieid order by createtime rows between 3 preceding and 1 following) as pv5
FROM test_t1;
hive> SELECT cookieid,createtime,pv,
sum(pv) over(partition by cookieid order by createtime rows between current row and unbounded following) as pv6
FROM test_t1;
- pv1: cumulative pv within the group from the first row to the current row; e.g., pv1 on the 11th = pv on the 10th + pv on the 11th; on the 12th = 10th + 11th + 12th
- pv2: same as pv1
- pv3: sum of all pv within the group (cookie1)
- pv4: current row plus the previous 3 rows within the group; e.g., 11th = 10th + 11th; 12th = 10th + 11th + 12th; 13th = 10th + 11th + 12th + 13th; 14th = 11th + 12th + 13th + 14th
- pv5: previous 3 rows plus the current row plus the following 1 row; e.g., 14th = 11th + 12th + 13th + 14th + 15th = 5+7+3+2+4 = 21
- pv6: current row plus all following rows within the group; e.g., 13th = 13th + 14th + 15th + 16th = 3+2+4+4 = 13; 14th = 14th + 15th + 16th = 2+4+4 = 10
If rows between is not specified, the default frame runs from the first row to the current row;
if order by is not specified, all values in the group are summed.
The key is understanding rows between, also called the window clause:
- preceding: rows before the current row
- following: rows after the current row
- current row: the current row
- unbounded: no bound
- unbounded preceding: from the first row of the partition
- unbounded following: to the last row of the partition
AVG, MIN and MAX are used in the same way as SUM.
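The six frames can be checked against cookie1's data in plain Python; each list below reproduces one pvN column (a sketch of the frame semantics, not Hive):

```python
pv = [1, 5, 7, 3, 2, 4, 4]  # cookie1, 2024-04-10 .. 2024-04-16, ordered by createtime
n = len(pv)

# default frame / rows between unbounded preceding and current row (pv1, pv2)
pv1 = [sum(pv[:i + 1]) for i in range(n)]
# no order by: the whole partition (pv3)
pv3 = [sum(pv)] * n
# rows between 3 preceding and current row (pv4)
pv4 = [sum(pv[max(0, i - 3):i + 1]) for i in range(n)]
# rows between 3 preceding and 1 following (pv5)
pv5 = [sum(pv[max(0, i - 3):i + 2]) for i in range(n)]
# rows between current row and unbounded following (pv6)
pv6 = [sum(pv[i:]) for i in range(n)]

print(pv1)  # [1, 6, 13, 16, 18, 22, 26]
```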
2) Window functions row_number, rank, dense_rank, ntile
Prepare the data:
cookie1,2024-04-10,1
cookie1,2024-04-11,5
cookie1,2024-04-12,7
cookie1,2024-04-13,3
cookie1,2024-04-14,2
cookie1,2024-04-15,4
cookie1,2024-04-16,4
cookie2,2024-04-10,2
cookie2,2024-04-11,3
cookie2,2024-04-12,5
cookie2,2024-04-13,6
cookie2,2024-04-14,3
cookie2,2024-04-15,9
cookie2,2024-04-16,7
CREATE TABLE test_t2 (
cookieid string,
createtime string, --day
pv INT
) ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
stored as textfile;
Load the data:
LOAD DATA LOCAL INPATH '/home/jerry/hive/test_t2.dat' into table test_t2;
Using ROW_NUMBER()
ROW_NUMBER() generates a sequence number for the records within each group, starting from 1.
hive> SELECT cookieid,createtime,pv,ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY pv desc) AS rn
FROM test_t2;
Using RANK and DENSE_RANK
RANK() generates the rank of each item within its group; ties share the same rank and leave a gap in the numbering.
DENSE_RANK() generates the rank of each item within its group; ties share the same rank without leaving a gap.
hive> SELECT cookieid,createtime,pv,RANK() OVER(PARTITION BY cookieid ORDER BY pv desc) AS rn1,DENSE_RANK() OVER(PARTITION BY cookieid ORDER BY pv desc) AS rn2,ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY pv DESC) AS rn3 FROM test_t2 WHERE cookieid = 'cookie1';
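Running the three ranking functions over cookie1's pv values by hand, in a plain-Python sketch:

```python
pv_desc = sorted([1, 5, 7, 3, 2, 4, 4], reverse=True)  # cookie1 pv, ORDER BY pv DESC

# ROW_NUMBER: plain 1..n sequence
rn3 = list(range(1, len(pv_desc) + 1))

# RANK: ties share a rank; the rank after a tie leaves a gap
rn1 = [1 + sum(1 for other in pv_desc if other > v) for v in pv_desc]

# DENSE_RANK: ties share a rank; no gap afterwards
distinct_desc = sorted(set(pv_desc), reverse=True)
rn2 = [1 + distinct_desc.index(v) for v in pv_desc]

print(pv_desc, rn1, rn2, rn3)
```

Note how the two pv=4 rows get rank 3 under both functions, but the next row gets 5 under RANK and 4 under DENSE_RANK.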
Using NTILE
A common requirement: the sorted data is split into three parts and the business only cares about one of them; how do we pull out, say, the middle third? NTILE answers exactly this.
NTILE distributes the ordered rows evenly into the specified number (num) of buckets and assigns each row its bucket number. If the rows cannot be distributed evenly, the lower-numbered buckets are filled first, and bucket sizes differ by at most 1.
You can then select the first or last n-th fraction of the data by bucket number. All rows are returned; the function only tags each row, so extracting a given fraction requires one more nested query filtering on the tag.
hive> SELECT cookieid, createtime,pv,
NTILE(2) OVER(PARTITION BY cookieid ORDER BY createtime) AS rn1,
NTILE(3) OVER(PARTITION BY cookieid ORDER BY createtime) AS rn2,
NTILE(4) OVER(ORDER BY createtime) AS rn3
FROM test_t2
ORDER BY cookieid,createtime;
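The bucket-assignment rule is easy to state in code; a plain-Python sketch of NTILE over an already-sorted partition:

```python
def ntile(n_rows, buckets):
    """Assign bucket numbers 1..buckets to n_rows ordered rows.
    Extra rows go to the lower-numbered buckets, so sizes differ by at most 1."""
    base, extra = divmod(n_rows, buckets)
    out = []
    for b in range(1, buckets + 1):
        out.extend([b] * (base + (1 if b <= extra else 0)))
    return out

# cookie1 has 7 rows
print(ntile(7, 2))  # [1, 1, 1, 1, 2, 2, 2]
print(ntile(7, 3))  # [1, 1, 1, 2, 2, 3, 3]
```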
Using LAG
LAG(col, n, DEFAULT) returns the value of col from the n-th row above the current row within the window. The first argument is the column name; the second is the offset n (optional, default 1); the third is the default value (used when the n-th preceding row is NULL; if not specified, NULL is returned).
hive> SELECT cookieid,createtime,url,
ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
LAG(createtime,1,'1970-01-01 00:00:00') OVER(PARTITION BY cookieid ORDER BY createtime) AS last_1_time,
LAG(createtime,2) OVER(PARTITION BY cookieid ORDER BY createtime) AS last_2_time
FROM test_t4;
3) Important window functions: lag, lead, first_value, last_value
last_1_time: the value from 1 row above, with default '1970-01-01 00:00:00'
For cookie1 row 1, the row above is NULL, so the default 1970-01-01 00:00:00 is used
For cookie1 row 3, the value 1 row above is row 2's value, 2015-04-10 10:00:02
For cookie1 row 6, the value 1 row above is row 5's value, 2015-04-10 10:50:01
last_2_time: the value from 2 rows above, with no default specified
For cookie1 row 1, 2 rows above is NULL
For cookie1 row 2, 2 rows above is NULL
For cookie1 row 4, 2 rows above is row 2's value, 2015-04-10 10:00:02
For cookie1 row 7, 2 rows above is row 5's value, 2015-04-10 10:50:01
Using LEAD
The opposite of LAG: LEAD(col, n, DEFAULT) returns the value of col from the n-th row below the current row within the window. The first argument is the column name; the second is the offset n (optional, default 1); the third is the default value (used when the n-th following row is NULL; if not specified, NULL is returned).
hive> SELECT cookieid,createtime,url,ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
LEAD(createtime,1,'1970-01-01 00:00:00') OVER(PARTITION BY cookieid ORDER BY createtime) AS next_1_time,
LEAD(createtime,2) OVER(PARTITION BY cookieid ORDER BY createtime) AS next_2_time
FROM test_t4;
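LAG and LEAD are just index shifts over the ordered partition. A plain-Python sketch (the test_t4 rows are not listed in these notes, so the timestamps below are made-up illustration data):

```python
def lag(col, n=1, default=None):
    """LAG(col, n, default): the value n rows above, else the default (NULL)."""
    return [col[i - n] if i - n >= 0 else default for i in range(len(col))]

def lead(col, n=1, default=None):
    """LEAD(col, n, default): the value n rows below, else the default (NULL)."""
    return [col[i + n] if i + n < len(col) else default for i in range(len(col))]

times = ["10:00:00", "10:00:02", "10:03:04", "10:50:01"]  # hypothetical createtime order
last_1 = lag(times, 1, "1970-01-01 00:00:00")
next_2 = lead(times, 2)  # no default, so missing rows become None (NULL)
print(last_1, next_2)
```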
FIRST_VALUE
Returns the first value within the sorted group, up to the current row.
hive> SELECT cookieid,createtime,url,
ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
FIRST_VALUE(url) OVER(PARTITION BY cookieid ORDER BY createtime) AS first1
FROM test_t4;
LAST_VALUE
Returns the last value within the sorted group, up to the current row (so with the default frame it is the current row's own value).
hive> SELECT cookieid,createtime,url,
ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
LAST_VALUE(url) OVER(PARTITION BY cookieid ORDER BY createtime) AS last1
FROM test_t4;
If you want the last value of the entire sorted group, a workaround is needed: take FIRST_VALUE over the reversed order.
hive> SELECT cookieid,createtime,url,
ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
LAST_VALUE(url) OVER(PARTITION BY cookieid ORDER BY createtime) AS last1,
FIRST_VALUE(url) OVER(PARTITION BY cookieid ORDER BY createtime DESC) AS last2
FROM test_t4
ORDER BY cookieid,createtime;
Pay special attention to ORDER BY
If ORDER BY is omitted, the ordering is undefined and you may get wrong results:
hive> SELECT cookieid,createtime,url,
FIRST_VALUE(url) OVER(PARTITION BY cookieid) AS first2
FROM test_t4;
4) Important window functions: cume_dist, percent_rank
These two sequence analysis functions are less commonly used. Note: sequence functions do not support the WHERE clause.
Prepare the data:
d1,user1,1000
d1,user2,2000
d1,user3,3000
d2,user4,4000
d2,user5,5000
hive> CREATE EXTERNAL TABLE test_t3 (dept STRING,userid string,sal INT) ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
stored as textfile;
Load the data:
hive> LOAD DATA LOCAL INPATH '/home/jerry/hive/test_t3.dat' into table test_t3;
CUME_DIST
The result of CUME_DIST depends on the ORDER BY sort order.
CUME_DIST = (number of rows less than or equal to the current value) / (total number of rows in the group), with ascending order by default. For example: for each row, the proportion of people whose salary is less than or equal to the current salary.
hive>SELECT dept,userid,sal,
CUME_DIST() OVER(ORDER BY sal) AS rn1,
CUME_DIST() OVER(PARTITION BY dept ORDER BY sal) AS rn2
FROM test_t3;
rn1: no partition, so all 5 rows form one group with 5 total rows;
row 1: 1 row has sal <= 1000, so 1/5 = 0.2
row 3: 3 rows have sal <= 3000, so 3/5 = 0.6
rn2: grouped by department; dept=d1 has 3 rows;
row 2: 2 rows have sal <= 2000, so 2/3 = 0.6666666666666666
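The worked numbers above can be reproduced with a plain-Python sketch of CUME_DIST:

```python
rows = [("d1", 1000), ("d1", 2000), ("d1", 3000), ("d2", 4000), ("d2", 5000)]

# rn1: no PARTITION BY, one group of all 5 rows
all_sal = [sal for _, sal in rows]
rn1 = [sum(1 for x in all_sal if x <= sal) / len(all_sal) for sal in all_sal]

# rn2: PARTITION BY dept
rn2 = []
for dept, sal in rows:
    group = [x for d, x in rows if d == dept]
    rn2.append(sum(1 for x in group if x <= sal) / len(group))

print(rn1)  # [0.2, 0.4, 0.6, 0.8, 1.0]
print(rn2)
```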
5) Analysis functions: grouping sets, grouping__id, cube, rollup
Prepare the data:
2024-03,2024-03-10,cookie1
2024-03,2024-03-10,cookie5
2024-03,2024-03-12,cookie7
2024-04,2024-04-12,cookie3
2024-04,2024-04-13,cookie2
2024-04,2024-04-13,cookie4
2024-04,2024-04-16,cookie4
2024-03,2024-03-10,cookie2
2024-03,2024-03-10,cookie3
2024-04,2024-04-12,cookie5
2024-04,2024-04-13,cookie6
2024-04,2024-04-15,cookie3
2024-04,2024-04-15,cookie2
2024-04,2024-04-16,cookie1
hive> CREATE TABLE test_t5 (
month STRING,
day STRING,
cookieid STRING
) ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
stored as textfile;
Load the data:
hive> LOAD DATA LOCAL INPATH '/home/jerry/hive/test_t5.dat' into table test_t5;
GROUPING SETS
GROUPING SETS is a convenient way to write multiple GROUP BY logics in a single SQL statement.
It is equivalent to UNION ALL of the GROUP BY result sets for the different dimensions.
hive> SELECT month,day,
COUNT(DISTINCT cookieid) AS uv,
GROUPING__ID
FROM test_t5
GROUP BY month,day
GROUPING SETS (month,day)
ORDER BY GROUPING__ID;
grouping__id indicates which grouping set a result row belongs to.
For the grouping conditions month and day in GROUPING SETS here, 1 represents month and 2 represents day.
Equivalent to:
hive> SELECT month,NULL,COUNT(DISTINCT cookieid) AS uv,1 AS GROUPING__ID
FROM test_t5
GROUP BY month
UNION ALL
SELECT NULL as month,day,COUNT(DISTINCT cookieid) AS uv,2 AS GROUPING__ID FROM test_t5
GROUP BY day;
hive> SELECT month,day,
COUNT(DISTINCT cookieid) AS uv,
GROUPING__ID
FROM test_t5
GROUP BY month,day
GROUPING SETS (month,day,(month,day))
ORDER BY GROUPING__ID;
Equivalent to:
hive> SELECT month,NULL,COUNT(DISTINCT cookieid) AS uv,1 AS GROUPING__ID FROM test_t5 GROUP BY month
UNION ALL
SELECT NULL,day,COUNT(DISTINCT cookieid) AS uv,2 AS GROUPING__ID FROM test_t5 GROUP BY day
UNION ALL
SELECT month,day,COUNT(DISTINCT cookieid) AS uv,3 AS GROUPING__ID FROM test_t5 GROUP BY month,day;
CUBE
Aggregates over all combinations of the GROUP BY dimensions.
hive> SELECT month,day,
COUNT(DISTINCT cookieid) AS uv,
GROUPING__ID
FROM test_t5
GROUP BY month,day
WITH CUBE
ORDER BY GROUPING__ID;
Equivalent to:
hive> SELECT NULL,NULL,COUNT(DISTINCT cookieid) AS uv,0 AS GROUPING__ID FROM test_t5
UNION ALL
SELECT month,NULL,COUNT(DISTINCT cookieid) AS uv,1 AS GROUPING__ID FROM test_t5 GROUP BY month
UNION ALL
SELECT NULL,day,COUNT(DISTINCT cookieid) AS uv,2 AS GROUPING__ID FROM test_t5 GROUP BY day
UNION ALL
SELECT month,day,COUNT(DISTINCT cookieid) AS uv,3 AS GROUPING__ID FROM test_t5 GROUP BY month,day;
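To check this equivalence numerically, a plain-Python computation of COUNT(DISTINCT cookieid) for each of the four grouping combinations over the test_t5 sample rows:

```python
rows = [
    ("2024-03", "2024-03-10", "cookie1"), ("2024-03", "2024-03-10", "cookie5"),
    ("2024-03", "2024-03-12", "cookie7"), ("2024-04", "2024-04-12", "cookie3"),
    ("2024-04", "2024-04-13", "cookie2"), ("2024-04", "2024-04-13", "cookie4"),
    ("2024-04", "2024-04-16", "cookie4"), ("2024-03", "2024-03-10", "cookie2"),
    ("2024-03", "2024-03-10", "cookie3"), ("2024-04", "2024-04-12", "cookie5"),
    ("2024-04", "2024-04-13", "cookie6"), ("2024-04", "2024-04-15", "cookie3"),
    ("2024-04", "2024-04-15", "cookie2"), ("2024-04", "2024-04-16", "cookie1"),
]

def uv_by(keyfunc):
    """COUNT(DISTINCT cookieid) for each group produced by keyfunc."""
    groups = {}
    for month, day, cookie in rows:
        groups.setdefault(keyfunc(month, day), set()).add(cookie)
    return {k: len(v) for k, v in groups.items()}

# WITH CUBE = UNION ALL of the four GROUP BY combinations of (month, day)
overall  = uv_by(lambda m, d: (None, None))
by_month = uv_by(lambda m, d: (m, None))
by_day   = uv_by(lambda m, d: (None, d))
by_both  = uv_by(lambda m, d: (m, d))
print(overall, by_month)
```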
ROLLUP
A subset of CUBE: it aggregates hierarchically, starting from the leftmost dimension.
For example, hierarchical aggregation by the month dimension:
hive> SELECT month,day,
COUNT(DISTINCT cookieid) AS uv,
GROUPING__ID
FROM test_t5
GROUP BY month,day
WITH ROLLUP
ORDER BY GROUPING__ID;
Swapping the order of month and day makes the hierarchy start from the day dimension instead:
hive> SELECT day,month,
COUNT(DISTINCT cookieid) AS uv,
GROUPING__ID
FROM test_t5
GROUP BY day,month
WITH ROLLUP
ORDER BY GROUPING__ID;
Here, aggregating by day and month together gives the same result as aggregating by day alone, because day and month have a parent-child relationship; with other dimension combinations the results would differ.
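The rule can be stated compactly: GROUP BY c1, ..., cn WITH ROLLUP aggregates over the grouping sets (c1..cn), (c1..cn-1), ..., (c1), (). A small Python sketch of this rule:

```python
def rollup_sets(cols):
    """Grouping sets produced by GROUP BY col1, ..., colN WITH ROLLUP:
    successively drop dimensions from the right, down to the grand total ()."""
    return [tuple(cols[:i]) for i in range(len(cols), -1, -1)]

print(rollup_sets(["month", "day"]))   # hierarchy rooted at month
print(rollup_sets(["day", "month"]))   # swapped order: hierarchy rooted at day
```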