I'm new to Hadoop and have just started trying to connect to HDFS using Scala and Spark, but I don't know what is wrong with my configuration. Please help me fix and understand it.
Hadoop Version is 2.7.3
Scala Version is 2.12.1
Spark Version is 2.1.1
pom.xml (dependencies)
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.1.1</version>
</dependency>
I am trying to run the function below in Scala:
import java.io.{PrintWriter, StringWriter}
import sys.process._
def runCommand(cmd: String): (Int, String) = {
  try {
    logger.info(String.format("Trying to run the following bash command: [%s]", cmd))
    val intResult: Int = cmd.!  // run the command; ! returns its exit code
    (intResult, "")
  } catch {
    case e: Exception =>
      val sw = new StringWriter()
      e.printStackTrace(new PrintWriter(sw))
      (1, sw.toString)  // return the stack trace as the error string
  }
}
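Side note for anyone answering: as I understand it, the exit code alone does not carry the command's output, so I also tried capturing stdout with sys.process.ProcessLogger. This is just a sketch of that idea, not my actual code; runAndCapture is an illustrative name, and it assumes a Unix environment where echo is available:

```scala
import sys.process._
import scala.collection.mutable.ArrayBuffer

// Hypothetical helper: run a command and return (exit code, captured stdout)
def runAndCapture(cmd: Seq[String]): (Int, String) = {
  val out = ArrayBuffer.empty[String]
  // ProcessLogger(fout, ferr): first function receives stdout lines, second stderr lines
  val plog = ProcessLogger(line => out += line, _ => ())
  val code = cmd.!(plog)  // run the command, feeding its output to plog
  (code, out.mkString("\n"))
}

val (code, text) = runAndCapture(Seq("echo", "hello"))
println(s"exit=$code, output=$text")  // prints: exit=0, output=hello
```

Passing the command as a Seq avoids shell tokenization issues with arguments that contain spaces.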