I'm trying to run a custom HDFS reader class in PySpark. This class is written in Java and I need to access it from PySpark, either from the shell or with spark-submit.
In PySpark, I retrieve the JavaGateway from the SparkContext (sc._gateway).
Suppose I have a class:
package org.foo.module;

public class Foo {
    public int fooMethod() {
        return 1;
    }
}
I've tried packaging it into a jar and passing it to pyspark with the --jars option, then running:
from py4j.java_gateway import java_import

jvm = sc._gateway.jvm
java_import(jvm, "org.foo.module.*")
foo = jvm.foo.module.Foo()

But I get the error:
Py4JError: Trying to call a package
Can someone help with this? Thanks.
Recommended answer:

The "Trying to call a package" error typically means py4j resolved the dotted name as a package rather than a class, either because the jar is not on the driver classpath or because the path is wrong (the attempt above drops the org. prefix, and after a wildcard import the class would be reachable by its simple name). In PySpark, try the following:
from py4j.java_gateway import java_import

# Import the class into the gateway's JVM view so it is reachable by its simple name
java_import(sc._gateway.jvm, "org.foo.module.Foo")
func = sc._gateway.jvm.Foo()
func.fooMethod()
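As an aside, the class can also be reached through its fully qualified name on the JVM view, without java_import. A minimal sketch, assuming the jar containing org.foo.module.Foo is already on the driver classpath:

# Assumes the jar with org.foo.module.Foo is on the driver classpath
foo = sc._jvm.org.foo.module.Foo()
foo.fooMethod()  # returns 1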
Make sure that you have compiled your Java code into a runnable jar, and submit the Spark job like so:
spark-submit --driver-class-path "name_of_your_jar_file.jar" --jars "name_of_your_jar_file.jar" name_of_your_python_file.py
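For the interactive shell case mentioned in the question, the same flags work with pyspark. A minimal sketch, assuming the class is compiled and packaged with the standard JDK tools (the source path and jar name are placeholders):

# Compile the class and package it into a jar (placeholder paths)
javac -d classes src/org/foo/module/Foo.java
jar cf name_of_your_jar_file.jar -C classes .

# Launch the PySpark shell with the jar on both the driver classpath and the executor jars
pyspark --driver-class-path "name_of_your_jar_file.jar" --jars "name_of_your_jar_file.jar"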