Error connecting PySpark to AWS Redshift

Problem Description

    I have been trying to connect Spark 2.2.1 on my EMR 5.11.0 cluster to our Redshift store.

    The approach I followed was:

  • Use the inbuilt Redshift JDBC

    pyspark --jars /usr/share/aws/redshift/jdbc/RedshiftJDBC41.jar

    from pyspark.sql import SQLContext

    # sc is the SparkContext provided by the pyspark shell
    sql_context = SQLContext(sc)

    redshift_url = "jdbc:redshift://HOST:PORT/DATABASE?user=USER&password=PASSWORD"
    redshift_query = "select * from table;"
    redshift_query_tempdir_storage = "s3://personal_warehouse/wip_dumps/"

    # Read data from a query
    df_users = sql_context.read \
        .format("com.databricks.spark.redshift") \
        .option("url", redshift_url) \
        .option("query", redshift_query) \
        .option("tempdir", redshift_query_tempdir_storage) \
        .option("forward_spark_s3_credentials", "true") \
        .load()

    This gives me the following error:

    Traceback (most recent call last):
      File "<stdin>", line 7, in <module>
      File "/usr/lib/spark/python/pyspark/sql/readwriter.py", line 165, in load
        return self._df(self._jreader.load())
      File "/usr/lib/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
      File "/usr/lib/spark/python/pyspark/sql/utils.py", line 63, in deco
        return f(*a, **kw)
      File "/usr/lib/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
    py4j.protocol.Py4JJavaError: An error occurred while calling o63.load.
    : java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.redshift. Please find packages at http://spark.apache.org/third-party-projects.html
        at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:546)
        at org.apache.spark.sql.execution.datasources.DataSource.providingClass$lzycompute(DataSource.scala:87)
        at org.apache.spark.sql.execution.datasources.DataSource.providingClass(DataSource.scala:87)
        at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:302)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:146)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:280)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:214)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: java.lang.ClassNotFoundException: com.databricks.spark.redshift.DefaultSource
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$22$$anonfun$apply$14.apply(DataSource.scala:530)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$22$$anonfun$apply$14.apply(DataSource.scala:530)
        at scala.util.Try$.apply(Try.scala:192)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$22.apply(DataSource.scala:530)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$22.apply(DataSource.scala:530)
        at scala.util.Try.orElse(Try.scala:84)
        at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:530)
        ... 16 more

    Can someone please point out where I've missed something / made a silly mistake?

    Thanks!

    Solution

    The ClassNotFoundException above means Spark cannot find the com.databricks.spark.redshift data source on the classpath; the built-in Redshift JDBC driver alone does not provide it. I had to include 4 jar files in the EMR spark-submit options to get this working.

    List of jar files:

    1. RedshiftJDBC41-1.2.12.1017.jar
    2. spark-redshift_2.10-2.0.0.jar
    3. minimal-json-0.9.4.jar
    4. spark-avro_2.11-3.0.0.jar

    You can download the jar files, store them in an S3 bucket, and point to them in the spark-submit options, for example:

    --jars s3://<pathToJarFile>/RedshiftJDBC41-1.2.12.1017.jar,s3://<pathToJarFile>/minimal-json-0.9.4.jar,s3://<pathToJarFile>/spark-avro_2.11-3.0.0.jar,s3://<pathToJarFile>/spark-redshift_2.10-2.0.0.jar
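
    Putting it together, a full submit command might look like the sketch below; my_redshift_job.py and <pathToJarFile> are placeholders for your own script and bucket path:

    spark-submit \
        --jars s3://<pathToJarFile>/RedshiftJDBC41-1.2.12.1017.jar,s3://<pathToJarFile>/minimal-json-0.9.4.jar,s3://<pathToJarFile>/spark-avro_2.11-3.0.0.jar,s3://<pathToJarFile>/spark-redshift_2.10-2.0.0.jar \
        my_redshift_job.py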

    And then finally, query Redshift from your Spark code as shown in this example: spark-redshift-example.
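
    For completeness, here is a minimal sketch of writing a DataFrame back to Redshift with the same data source, reusing the variables from the read snippet above; the target table name "my_users_copy" is a hypothetical placeholder:

    # Write df_users back to Redshift, staging through the same S3 tempdir.
    # "my_users_copy" is a hypothetical target table name.
    df_users.write \
        .format("com.databricks.spark.redshift") \
        .option("url", redshift_url) \
        .option("dbtable", "my_users_copy") \
        .option("tempdir", redshift_query_tempdir_storage) \
        .option("forward_spark_s3_credentials", "true") \
        .mode("error") \
        .save()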
