pyspark mysql jdbc load: An error occurred while calling o23.load, No suitable driver
This article describes how to deal with the pyspark mysql jdbc load error "An error occurred while calling o23.load: No suitable driver". It should be a useful reference if you have hit the same problem; follow along below.

Problem description

I use the docker image sequenceiq/spark on my Mac to study these Spark examples. During the study process I upgraded the Spark inside that image to 1.6.1 according to this answer, and the error occurred when I started the Simple Data Operations example. Here is what happened:

When I run df = sqlContext.read.format("jdbc").option("url",url).option("dbtable","people").load() it raises an error, and the full stack trace from the pyspark console is as follows:

                  Python 2.6.6 (r266:84292, Jul 23 2015, 15:22:56)
                  [GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] on linux2
                  Type "help", "copyright", "credits" or "license" for more information.
                  16/04/12 22:45:28 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
                  Welcome to
                        ____              __
                       / __/__  ___ _____/ /__
                      _\ \/ _ \/ _ `/ __/  '_/
                     /__ / .__/\_,_/_/ /_/\_\   version 1.6.1
                        /_/
                  
                  Using Python version 2.6.6 (r266:84292, Jul 23 2015 15:22:56)
                  SparkContext available as sc, HiveContext available as sqlContext.
                  >>> url = "jdbc:mysql://localhost:3306/test?user=root;password=myPassWord"
                  >>> df = sqlContext.read.format("jdbc").option("url",url).option("dbtable","people").load()
                  16/04/12 22:46:05 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
                  16/04/12 22:46:06 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
                  16/04/12 22:46:11 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
                  16/04/12 22:46:11 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
                  16/04/12 22:46:16 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
                  16/04/12 22:46:17 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
                  Traceback (most recent call last):
                    File "<stdin>", line 1, in <module>
                    File "/usr/local/spark/python/pyspark/sql/readwriter.py", line 139, in load
                      return self._df(self._jreader.load())
                    File "/usr/local/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
                    File "/usr/local/spark/python/pyspark/sql/utils.py", line 45, in deco
                      return f(*a, **kw)
                    File "/usr/local/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
                  py4j.protocol.Py4JJavaError: An error occurred while calling o23.load.
                  : java.sql.SQLException: No suitable driver
                      at java.sql.DriverManager.getDriver(DriverManager.java:278)
                      at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$2.apply(JdbcUtils.scala:50)
                      at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$2.apply(JdbcUtils.scala:50)
                      at scala.Option.getOrElse(Option.scala:120)
                      at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.createConnectionFactory(JdbcUtils.scala:49)
                      at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:120)
                      at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
                      at org.apache.spark.sql.execution.datasources.jdbc.DefaultSource.createRelation(DefaultSource.scala:57)
                      at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
                      at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
                      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
                      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                      at java.lang.reflect.Method.invoke(Method.java:606)
                      at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
                      at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
                      at py4j.Gateway.invoke(Gateway.java:259)
                      at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
                      at py4j.commands.CallCommand.execute(CallCommand.java:79)
                      at py4j.GatewayConnection.run(GatewayConnection.java:209)
                      at java.lang.Thread.run(Thread.java:744)
                  
                  >>>
                  

Here is what I have tried so far:

1. I downloaded mysql-connector-java-5.0.8-bin.jar and put it into /usr/local/spark/lib/. The error is still the same.

2. I created t.py like this:

    from pyspark import SparkContext
    from pyspark.sql import SQLContext

    sc = SparkContext(appName="PythonSQL")
    sqlContext = SQLContext(sc)

    # NOTE: url is not defined in this script; it is meant to be the JDBC URL
    # used in the pyspark session above.
    df = sqlContext.read.format("jdbc").option("url", url).option("dbtable", "people").load()

    df.printSchema()
    countsByAge = df.groupBy("age").count()
    countsByAge.show()
    countsByAge.write.format("json").save("file:///usr/local/mysql/mysql-connector-java-5.0.8/db.json")

Then I ran spark-submit --conf spark.executor.extraClassPath=mysql-connector-java-5.0.8-bin.jar --driver-class-path mysql-connector-java-5.0.8-bin.jar --jars mysql-connector-java-5.0.8-bin.jar --master local[4] t.py. The result is still the same.

3. Then I tried pyspark --conf spark.executor.extraClassPath=mysql-connector-java-5.0.8-bin.jar --driver-class-path mysql-connector-java-5.0.8-bin.jar --jars mysql-connector-java-5.0.8-bin.jar --master local[4] t.py, both with and without the t.py above, and it is still the same.
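
Since every attempt above revolves around getting the connector jar onto the classpath, a quick sanity check is to ask the driver JVM whether it can see the driver class at all. The following is a minimal diagnostic sketch, not part of the original question; it assumes the Connector/J 5.x class name and a running pyspark shell where sc already exists:

    # Check from the pyspark shell whether the MySQL JDBC driver class is
    # visible to the driver JVM (assumes the Connector/J 5.x class name).
    try:
        sc._jvm.java.lang.Class.forName("com.mysql.jdbc.Driver")
        print("com.mysql.jdbc.Driver is on the driver classpath")
    except Exception as e:
        print("driver class not found: %s" % e)

If this fails, the jar never made it onto the driver's classpath; if it succeeds but the load still fails, naming the driver class explicitly, as in the recommended answer below, is the usual fix.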

During all of this, MySQL is running. And here is my OS info:

                  # rpm --query centos-release  
                  centos-release-6-5.el6.centos.11.2.x86_64
                  

And the Hadoop version is 2.6.

Now I don't know where to go next, so I hope someone can give some advice. Thanks!

Recommended answer

I ran into "java.sql.SQLException: No suitable driver" when I tried to have my script write to MySQL.

Here's what I did to fix that.

In script.py:

                  df.write.jdbc(url="jdbc:mysql://localhost:3333/my_database"
                                    "?user=my_user&password=my_password",
                                table="my_table",
                                mode="append",
                                properties={"driver": 'com.mysql.jdbc.Driver'})
                  
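The same explicit driver property also works when reading. The following is a minimal read-side sketch under the same assumptions (host, port, database, table, and credentials are placeholders, as above):

    # Read-side counterpart: the explicit "driver" property lets Spark load the
    # class instead of relying on java.sql.DriverManager discovery.
    df = sqlContext.read.jdbc(
        url="jdbc:mysql://localhost:3333/my_database"
            "?user=my_user&password=my_password",
        table="my_table",
        properties={"driver": "com.mysql.jdbc.Driver"})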

Then I ran spark-submit this way:

                  SPARK_HOME=/usr/local/Cellar/apache-spark/1.6.1/libexec spark-submit --packages mysql:mysql-connector-java:5.1.39 ./script.py
                  

Note that SPARK_HOME is specific to where Spark is installed. For your environment, https://github.com/sequenceiq/docker-spark/blob/master/README.md might help.

In case all of the above is confusing, try this: in t.py, replace

                  sqlContext.read.format("jdbc").option("url",url).option("dbtable","people").load()
                  
with
                  sqlContext.read.format("jdbc").option("dbtable","people").option("driver", 'com.mysql.jdbc.Driver').load()
                  

Then run:

                  spark-submit --packages mysql:mysql-connector-java:5.1.39 --master local[4] t.py
                  
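Putting the pieces together, a complete corrected t.py might look like the sketch below. The URL, credentials, database test, and table people are the ones from the question and are assumptions about your setup; note the explicit url assignment (which the original t.py lacked) and the & separator between the user and password parameters:

    # t.py -- a minimal corrected sketch; the database "test", table "people",
    # and credentials are taken from the question and are assumptions here.
    from pyspark import SparkContext
    from pyspark.sql import SQLContext

    sc = SparkContext(appName="PythonSQL")
    sqlContext = SQLContext(sc)

    url = "jdbc:mysql://localhost:3306/test?user=root&password=myPassWord"

    # Naming the driver class explicitly is what avoids "No suitable driver".
    df = (sqlContext.read.format("jdbc")
          .option("url", url)
          .option("dbtable", "people")
          .option("driver", "com.mysql.jdbc.Driver")
          .load())

    df.printSchema()
    df.groupBy("age").count().show()

Run it with the spark-submit --packages command shown above so that the connector is downloaded and placed on both the driver and executor classpaths.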

This concludes this article on the pyspark mysql jdbc load error "An error occurred while calling o23.load: No suitable driver". We hope the recommended answer helps, and thank you for your continued support of html5模板网!

