Can't start local instance of Spark-Jobserver

I'm trying to create a local instance of spark-jobserver to test jobs on, and I can't even get it to run.

The first thing I do after getting into my vagrant instance is start Spark. I know this works because I can submit jobs to Spark with the submit-job utility it provides. I then go to my local spark-jobserver clone and run:

vagrant@cassandra-spark:~/spark-jobserver$ sudo sbt
[info] Loading project definition from /home/vagrant/spark-jobserver/project
Missing bintray credentials /root/.bintray/.credentials. Some bintray features depend on this.
Missing bintray credentials /root/.bintray/.credentials. Some bintray features depend on this.
Missing bintray credentials /root/.bintray/.credentials. Some bintray features depend on this.
Missing bintray credentials /root/.bintray/.credentials. Some bintray features depend on this.
[info] Set current project to root (in build file:/home/vagrant/spark-jobserver/)
> reStart /home/vagrant/spark-jobserver/config/local.conf
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 21 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 35 ms
[success] created output: /home/vagrant/spark-jobserver/job-server/target
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 6 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 6 ms
[success] created output: /home/vagrant/spark-jobserver/job-server-extras/target
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[warn] Credentials file /root/.bintray/.credentials does not exist
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 3 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 8 ms
[success] created output: /home/vagrant/spark-jobserver/job-server-api/target
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 11 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 7 ms
[success] created output: /home/vagrant/spark-jobserver/akka-app/target
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 3 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 9 ms
[success] created output: /home/vagrant/spark-jobserver/job-server-api/target
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 11 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 6 ms
[success] created output: /home/vagrant/spark-jobserver/akka-app/target
[info] scalastyle using config /home/vagrant/spark-jobserver/scalastyle-config.xml
[info] Processed 21 file(s)
[info] Found 0 errors
[info] Found 0 warnings
[info] Found 0 infos
[info] Finished in 2 ms
[success] created output: /home/vagrant/spark-jobserver/job-server/target
[info] Application job-server not yet started
[info] Starting application job-server in the background ...
job-server Starting spark.jobserver.JobServer.main(/home/vagrant/spark-jobserver/config/local.conf)
job-server[ERROR] Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
[warn] No main class detected
[info] Application job-server-extras not yet started
[info] Starting application job-server-extras in the background ...
job-server-extras Starting spark.jobserver.JobServer.main(/home/vagrant/spark-jobserver/config/local.conf)
job-server-extras[ERROR] Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
[success] Total time: 6 s, completed Jun 12, 2015 2:28:32 PM
> job-server-extras[ERROR] log4j:WARN No appenders could be found for logger (spark.jobserver.JobServer$).
job-server-extras[ERROR] log4j:WARN Please initialize the log4j system properly.
job-server-extras[ERROR] log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
>
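The log4j:WARN lines at the end mean the job server's JVM found no log4j configuration on its classpath, so the server can start yet produce no useful log output. As a sketch only (exactly where spark-jobserver picks the file up depends on how it is launched, so treat the path in the comment as an assumption), a minimal log4j 1.2 properties file that sends everything to the console looks like this:

```properties
# Minimal log4j 1.2 configuration: log INFO and above to the console.
# Assumption: this file must land on the job server's classpath,
# e.g. job-server/src/main/resources/log4j.properties in the clone.
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p %c - %m%n
```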

In another terminal I ssh into the vagrant instance and run:

vagrant@cassandra-spark:~$ curl --data-binary @/home/vagrant/SQLJob/target/scala-2.10/CassSparkTest-assembly-1.0.jar localhost:8090/jars
The requested resource could not be found.

This is what is in my config/local.conf:

# Template for a Spark Job Server configuration file
# When deployed these settings are loaded when job server starts
#
# Spark Cluster / Job Server configuration
spark {
  # spark.master will be passed to each job's JobContext
  master = "spark://192.168.10.11:7077"
  # master = "mesos://vm28-hulk-pub:5050"
  # master = "yarn-client"

  # Default # of CPUs for jobs to use for Spark standalone cluster
  job-number-cpus = 1

  # predefined Spark contexts
  # contexts {
  #   my-low-latency-context {
  #     num-cpu-cores = 1        # Number of cores to allocate. Required.
  #     memory-per-node = 512m   # Executor memory per node, -Xmx style eg 512m, 1G, etc.
  #   }
  #   # define additional contexts here
  # }

  # universal context configuration. These settings can be overridden, see README.md
  context-settings {
    num-cpu-cores = 1        # Number of cores to allocate. Required.
    memory-per-node = 512m   # Executor memory per node, -Xmx style eg 512m, 1G, etc.
    spark.cassandra.connection.host = "127.0.0.1"

    # in case spark distribution should be accessed from HDFS
    # (as opposed to being installed on every mesos slave)
    # spark.executor.uri = "hdfs://namenode:8020/apps/spark/spark.tgz"

    # uris of jars to be loaded into the classpath for this context.
    # Uris is a string list, or a string separated by commas ','
    dependent-jar-uris = ["file:///home/vagrant/lib/spark-cassandra-connector-assembly-1.3.0-M2-SNAPSHOT.jar"]

    # If you wish to pass any settings directly to the sparkConf as-is, add them here in passthrough,
    # such as hadoop connection settings that don't use the "spark." prefix
    passthrough {
      #es.nodes = "192.1.1.1"
    }
  }

  # This needs to match SPARK_HOME for cluster SparkContexts to be created successfully
  home = "/home/vagrant/spark"
}

# Note that you can use this file to define settings not only for job server,
# but for your Spark jobs as well. Spark job configuration merges with this configuration file as defaults.

Accepted answer

Figured out what the problem was: the server was starting correctly (although not logging correctly).

The problem was that I didn't have a "/" at the end of the path passed to curl.

So to fix it, change the curl statement to this:

vagrant@cassandra-spark:~$ curl --data-binary @/home/vagrant/SQLJob/target/scala-2.10/CassSparkTest-assembly-1.0.jar localhost:8090/jars/
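For reference, the spark-jobserver REST API also accepts an app name as a path segment after /jars when uploading a binary, and jobs are then started against that name via the /jobs endpoint. The app name "CassSparkTest" and the class path "sql.SQLJob" below are assumptions for illustration, not values taken from the question; a sketch of the URL shapes involved:

```shell
# Upload the assembly jar under an app name ("CassSparkTest" is an assumed example):
#   curl --data-binary @CassSparkTest-assembly-1.0.jar localhost:8090/jars/CassSparkTest
# Then start a job against that app name (classPath is also an assumed example):
#   curl -d "" "localhost:8090/jobs?appName=CassSparkTest&classPath=sql.SQLJob"
# The URL shapes are the important part:
APP_NAME=CassSparkTest
UPLOAD_URL="localhost:8090/jars/${APP_NAME}"
JOBS_URL="localhost:8090/jobs?appName=${APP_NAME}&classPath=sql.SQLJob"
echo "$UPLOAD_URL"
echo "$JOBS_URL"
```

Once an upload succeeds, GET localhost:8090/jars lists the binaries the server knows about, which is a quick way to confirm the jar actually landed.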
