I use
df.write.mode("append").jdbc("jdbc:mysql://ip:port/database", "table_name", properties)
to insert into a table in MySQL.
Also, I have added Class.forName("com.mysql.jdbc.Driver") in my code.
When I submit my Spark application with:
spark-submit --class MY_MAIN_CLASS --master yarn-client --jars /path/to/mysql-connector-java-5.0.8-bin.jar --driver-class-path /path/to/mysql-connector-java-5.0.8-bin.jar MY_APPLICATION.jar
this yarn-client mode works for me.
But when I use yarn-cluster mode:
spark-submit --class MY_MAIN_CLASS --master yarn-cluster --jars /path/to/mysql-connector-java-5.0.8-bin.jar --driver-class-path /path/to/mysql-connector-java-5.0.8-bin.jar MY_APPLICATION.jar
it doesn't work. I also tried setting --conf:
spark-submit --class MY_MAIN_CLASS --master yarn-cluster --jars /path/to/mysql-connector-java-5.0.8-bin.jar --driver-class-path /path/to/mysql-connector-java-5.0.8-bin.jar --conf spark.executor.extraClassPath=/path/to/mysql-connector-java-5.0.8-bin.jar MY_APPLICATION.jar
but still get the "No suitable driver found for jdbc" error.
Answer
There are 3 possible solutions:
You can use the following option in your spark-submit CLI: --jars $(echo ./lib/*.jar | tr ' ' ',')
Explanation: assuming that you have all your jars in a lib directory in your project root, this will read all the libraries and add them to the application submit.
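The glob-and-tr trick above can be checked in isolation. Here is a minimal sketch, using two hypothetical jar file names, that shows how the shell expands ./lib/*.jar to a space-separated list and how tr converts it to the comma-separated list that --jars expects:

```shell
# Create a lib/ directory with two placeholder jar files (hypothetical names)
mkdir -p lib
touch lib/mysql-connector-java-5.0.8-bin.jar lib/other-dependency.jar

# The glob expands to a space-separated list of paths; tr replaces the
# spaces with commas, which is the separator --jars expects
JARS=$(echo ./lib/*.jar | tr ' ' ',')
echo "$JARS"
# → ./lib/mysql-connector-java-5.0.8-bin.jar,./lib/other-dependency.jar
```

Note that this simple form breaks if any jar path contains spaces, since tr would also turn those into commas.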
You can also try to configure these 2 variables, spark.driver.extraClassPath and spark.executor.extraClassPath, in the SPARK_HOME/conf/spark-defaults.conf file, and specify the value of these variables as the path of the jar file. Ensure that the same path exists on worker nodes.