Problem Description
I am using

df.write.mode("append").jdbc("jdbc:mysql://ip:port/database", "table_name", properties)

to insert into a table in MySQL.
Also, I have added Class.forName("com.mysql.jdbc.Driver") in my code.
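For reference, a minimal sketch of the write described above, using the Spark 1.x API to match the yarn-client/yarn-cluster flags below; the class name, input path, and credentials are placeholders, not the asker's actual code:

import java.util.Properties
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object MyMainClass {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("MyApplication"))
    val sqlContext = new SQLContext(sc)

    val df = sqlContext.read.json("/path/to/input.json") // placeholder input

    val properties = new Properties()
    properties.setProperty("user", "USERNAME")     // placeholder
    properties.setProperty("password", "PASSWORD") // placeholder
    // Setting the driver class in the connection properties, in addition to
    // (or instead of) Class.forName(...), is a common fix for "No suitable
    // driver found": the properties travel with the connection wherever it
    // is opened, including on executors in cluster mode.
    properties.setProperty("driver", "com.mysql.jdbc.Driver")

    df.write.mode("append")
      .jdbc("jdbc:mysql://ip:port/database", "table_name", properties)

    sc.stop()
  }
}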
When I submit my Spark application:
spark-submit --class MY_MAIN_CLASS \
  --master yarn-client \
  --jars /path/to/mysql-connector-java-5.0.8-bin.jar \
  --driver-class-path /path/to/mysql-connector-java-5.0.8-bin.jar \
  MY_APPLICATION.jar
This yarn-client mode works for me.

But when I use yarn-cluster mode:
spark-submit --class MY_MAIN_CLASS \
  --master yarn-cluster \
  --jars /path/to/mysql-connector-java-5.0.8-bin.jar \
  --driver-class-path /path/to/mysql-connector-java-5.0.8-bin.jar \
  MY_APPLICATION.jar
It doesn't work. I also tried setting "--conf":
spark-submit --class MY_MAIN_CLASS \
  --master yarn-cluster \
  --jars /path/to/mysql-connector-java-5.0.8-bin.jar \
  --driver-class-path /path/to/mysql-connector-java-5.0.8-bin.jar \
  --conf spark.executor.extraClassPath=/path/to/mysql-connector-java-5.0.8-bin.jar \
  MY_APPLICATION.jar
but I still get the "No suitable driver found for jdbc" error.
Recommended Answer
There are 3 possible solutions:

1. You might want to assemble your application with your build manager (Maven, SBT) so that you do not need to add the dependencies on the spark-submit command line; a build sketch follows below this list.

2. You can use the following option on the spark-submit command line:

--jars $(echo ./lib/*.jar | tr ' ' ',')

Explanation: supposing that you have all your jars in a lib directory in your project root, this will list all the libraries and add them to the application submit.
3. You can also try to configure these 2 properties, spark.driver.extraClassPath and spark.executor.extraClassPath, in the SPARK_HOME/conf/spark-defaults.conf file, setting their values to the path of the jar file. Ensure that the same path exists on the worker nodes; a config example follows below.
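For option 1, a minimal sbt-assembly sketch; all versions here are assumptions and should be matched to your cluster:

// build.sbt
name := "MY_APPLICATION"
scalaVersion := "2.10.5" // assumed; use your cluster's Scala version

libraryDependencies ++= Seq(
  // Spark itself is supplied by the cluster, so mark it "provided"
  "org.apache.spark" %% "spark-sql" % "1.6.0" % "provided",
  // The MySQL driver gets bundled into the fat jar
  "mysql" % "mysql-connector-java" % "5.1.38"
)

// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.3")

Running sbt assembly then produces a single fat jar that you can pass to spark-submit with no --jars flag.

For option 3, the two lines in SPARK_HOME/conf/spark-defaults.conf would look like this, using the jar path from the question, which must exist at the same location on every node:

spark.driver.extraClassPath   /path/to/mysql-connector-java-5.0.8-bin.jar
spark.executor.extraClassPath /path/to/mysql-connector-java-5.0.8-bin.jar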