I'm trying to run a custom HDFS reader class in PySpark. This class is written in Java and I need to access it from PySpark, either from the shell or with spark-submit.
In PySpark, I retrieve the JavaGateway from the SparkContext (sc._gateway).
Say I have a class:

```java
package org.foo.module;

public class Foo {
    public int fooMethod() {
        return 1;
    }
}
```
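Before wiring the class into Spark, it can be sanity-checked on its own. Below is a compilable version of the class above with a small `main` method added for local testing (the `main` method is my addition, not part of the original class):

```java
package org.foo.module;

public class Foo {
    public int fooMethod() {
        return 1;
    }

    // main() is added here only as a local sanity check before packaging the jar;
    // it is not required for use from PySpark.
    public static void main(String[] args) {
        Foo foo = new Foo();
        System.out.println(foo.fooMethod()); // prints 1
    }
}
```

Compiling this file and packaging it into a jar gives you something to pass to `--jars` in the steps below.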
I've tried packaging it into a jar and passing it with the --jars option to pyspark, then running:
```python
from py4j.java_gateway import java_import

jvm = sc._gateway.jvm
java_import(jvm, "org.foo.module.*")
foo = jvm.foo.module.Foo()
```
But I get the error:
```
Py4JError: Trying to call a package.
```
Can someone help with this? Thanks.
Answer:
In PySpark, try the following:
```python
from py4j.java_gateway import java_import

java_import(sc._gateway.jvm, "org.foo.module.Foo")
func = sc._gateway.jvm.Foo()
func.fooMethod()
```

Make sure that you have compiled your Java code into a runnable jar and submit the Spark job like so:
```shell
spark-submit --driver-class-path "name_of_your_jar_file.jar" --jars "name_of_your_jar_file.jar" name_of_your_python_file.py
```