This post works through a classic MapReduce example: counting the total occurrences of each word in a given text file, then packaging the program as a jar and testing it on a Hadoop cluster.
1. Add the packaging plugin dependencies
<build>
    <plugins>
        <plugin>
            <artifactId>maven-compiler-plugin</artifactId>
            <!-- replace with the version that matches your environment -->
            <version>3.6.2</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>
        <plugin>
            <artifactId>maven-assembly-plugin</artifactId>
            <configuration>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
                <archive>
                    <manifest>
                        <!-- replace with the driver class of your own project -->
                        <mainClass>com.lizhengi.mr.WordcountDriver</mainClass>
                    </manifest>
                </archive>
            </configuration>
            <executions>
                <execution>
                    <id>make-assembly</id>
                    <phase>package</phase>
                    <goals>
                        <goal>single</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
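The mainClass above points at com.lizhengi.mr.WordcountDriver. For reference, here is a minimal sketch of the Mapper and Reducer a standard WordCount job like this one typically pairs with; only the package name comes from the manifest, the class names below are assumptions:

// WordcountMapper.java
package com.lizhengi.mr;

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordcountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Split each input line on whitespace and emit (word, 1) for every token
        StringTokenizer tokens = new StringTokenizer(value.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);
        }
    }
}

// WordcountReducer.java
package com.lizhengi.mr;

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordcountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable total = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum the counts emitted for this word across all mappers
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        total.set(sum);
        context.write(key, total);
    }
}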
2. Modify WordcountDriver
Change the hard-coded paths
FileInputFormat.setInputPaths(job, "/Users/marron27/test/input");
FileOutputFormat.setOutputPath(job, new Path("/Users/marron27/test/output"));
to
FileInputFormat.setInputPaths(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
so that the input and output paths come from the command line.
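For context, a minimal sketch of the full driver after this change, assuming the standard WordCount job structure; only the class name WordcountDriver and its package come from the manifest above, and the mapper/reducer names match the sketch after step 1:

// WordcountDriver.java
package com.lizhengi.mr;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordcountDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration());

        job.setJarByClass(WordcountDriver.class);
        job.setMapperClass(WordcountMapper.class);
        job.setReducerClass(WordcountReducer.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Input and output now come from the command line (args[0], args[1])
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}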
3. Package the program into a jar, then copy it to the Hadoop cluster
Run mvn clean package. When it finishes, target/ contains both Hadoop-API-1.0-SNAPSHOT.jar (without dependencies) and Hadoop-API-1.0-SNAPSHOT-jar-with-dependencies.jar.
4. Rename the jar without dependencies to wc.jar and copy it to the Hadoop cluster
mv Hadoop-API-1.0-SNAPSHOT.jar wc.jar
scp wc.jar root@Carlota1:/root/test/input
5. Create a test file and upload it to HDFS
Create a hello.txt containing some whitespace-separated words, then upload it (the -mkdir -p is only needed if the target directory does not exist yet):
ssh root@Carlota1
hadoop fs -mkdir -p /demo/test/input
hadoop fs -copyFromLocal hello.txt /demo/test/input
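As an aside, the same upload can be done programmatically with the HDFS Java API instead of hadoop fs. This is only a sketch; the helper class is hypothetical, and the NameNode URI hdfs://Carlota1:9000 is an assumption that must match your cluster:

// UploadTestFile.java (hypothetical helper, not part of the original project)
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UploadTestFile {
    public static void main(String[] args) throws Exception {
        // hdfs://Carlota1:9000 is an assumed NameNode address; adjust to your cluster
        FileSystem fs = FileSystem.get(URI.create("hdfs://Carlota1:9000"),
                                       new Configuration(), "root");
        fs.copyFromLocalFile(new Path("hello.txt"), new Path("/demo/test/input"));
        fs.close();
    }
}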
6. Run the WordCount program
Since wc.jar is the jar without dependencies, its manifest carries no Main-Class, so the driver class must be named explicitly:
hadoop jar wc.jar com.lizhengi.mr.WordcountDriver /demo/test/input /demo/test/output
Then fetch and inspect the result:
hadoop fs -copyToLocal /demo/test/output /root/test/output
cat /root/test/output/part-r-00000
flume 2
hadoop 2
hdfs 1
hive 1
kafka 2
mapreduce 1
spark 1
spring 1
take 2
tomcat 2
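Alternatively, the result file can be read straight from HDFS without copying the output directory out first. Again just a sketch under the same assumptions (hypothetical helper class, assumed NameNode address):

// PrintWordCounts.java (hypothetical helper, not part of the original project)
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PrintWordCounts {
    public static void main(String[] args) throws Exception {
        // hdfs://Carlota1:9000 is an assumed NameNode address; adjust to your cluster
        FileSystem fs = FileSystem.get(URI.create("hdfs://Carlota1:9000"),
                                       new Configuration(), "root");
        Path result = new Path("/demo/test/output/part-r-00000");
        try (BufferedReader reader =
                 new BufferedReader(new InputStreamReader(fs.open(result)))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // each line is "word<TAB>count"
            }
        }
        fs.close();
    }
}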