Problem Description
I am trying to get the row count and column count of all the tables in a schema in SQL Server using Spark SQL.
When I execute the query below using Sqoop, it gives me the correct results:
sqoop eval --connect "jdbc:sqlserver://<hostname>;database=<dbname>" \
--username=<username> --password=<pwd> \
--query """SELECT
       ta.name TableName,
       pa.rows RowCnt,
       COUNT(ins.COLUMN_NAME) ColCnt
FROM <db>.sys.tables ta
INNER JOIN <db>.sys.partitions pa ON pa.OBJECT_ID = ta.OBJECT_ID
INNER JOIN <db>.sys.schemas sc ON ta.schema_id = sc.schema_id
JOIN <db>.INFORMATION_SCHEMA.COLUMNS ins ON ins.TABLE_SCHEMA = sc.name AND ins.TABLE_NAME = ta.name
WHERE ta.is_ms_shipped = 0 AND pa.index_id IN (1,0) AND sc.name = '<schema>'
GROUP BY sc.name, ta.name, pa.rows
ORDER BY TableName"""
But when I try to execute the same query from Spark SQL, I get the error "com.microsoft.sqlserver.jdbc.SQLServerException: Incorrect syntax near the keyword 'WHERE'". Please help me out if anyone has an idea about this error.
Below is the Spark SQL command I executed:

spark-shell --jars "/var/lib/sqoop/sqljdbc42.jar"
sqlContext.read.format("jdbc")
  .option("url", "jdbc:sqlserver://<hostname>;database=<dbname>;user=<user>;password=<pwd>")
  .option("dbtable", """(SELECT
    ta.name TableName,
    pa.rows RowCnt,
    COUNT(ins.COLUMN_NAME) ColCnt
  FROM <db>.sys.tables ta
  INNER JOIN <db>.sys.partitions pa ON pa.OBJECT_ID = ta.OBJECT_ID
  INNER JOIN <db>.sys.schemas sc ON ta.schema_id = sc.schema_id
  JOIN <db>.INFORMATION_SCHEMA.COLUMNS ins ON ins.TABLE_SCHEMA = sc.name AND ins.TABLE_NAME = ta.name
  WHERE ta.is_ms_shipped = 0 AND pa.index_id IN (1,0) AND sc.name = '<schema>'
  GROUP BY sc.name, ta.name, pa.rows)""")
  .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
  .load()
Expected output:

TableName, RowCnt, ColCnt
table A, 62, 30
table B, 3846, 76
Recommended Answer
The problem in your Spark SQL command is with the dbtable option.

dbtable accepts anything that is valid in the FROM clause of a SQL query. For example, instead of a full table you can also use a subquery in parentheses. However, a subquery in parentheses must have an alias. Your command should therefore be modified as follows:
sqlContext
.read
.format("jdbc")
.option("url", "jdbc:sqlserver://<hostname>;database=<dbname>;user=<user>;password=<pwd>")
.option("dbtable",
"""(SELECT
ta.name TableName ,
pa.rows RowCnt,
COUNT(ins.COLUMN_NAME) ColCnt
FROM <db>.sys.tables ta
INNER JOIN
<db>.sys.partitions pa
ON pa.OBJECT_ID = ta.OBJECT_ID
INNER JOIN
<db>.sys.schemas sc
ON ta.schema_id = sc.schema_id
JOIN
<db>.INFORMATION_SCHEMA.COLUMNS ins
ON ins.TABLE_SCHEMA = sc.name AND ins.TABLE_NAME = ta.name
WHERE ta.is_ms_shipped = 0
AND pa.index_id IN (1,0)
AND sc.name = '<schema>'
GROUP BY sc.name, ta.name, pa.rows) AS TEMP""")
.option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
.load()
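Why the alias matters: before fetching any rows, Spark's JDBC data source resolves the result schema by wrapping the dbtable value in a zero-row probe query, roughly of the form SELECT * FROM <dbtable> WHERE 1=0. If the dbtable value is a parenthesized subquery with no alias, SQL Server receives "... ) WHERE 1=0", a derived table with no name, which is precisely an "Incorrect syntax near the keyword 'WHERE'" error. The sketch below illustrates the idea in plain Python; the exact probe-query text is an assumption about Spark's internals, not copied from its source:

```python
# Sketch (assumption, not Spark's actual source) of how the JDBC reader
# builds its schema-probe query from the dbtable option.

def schema_probe_query(dbtable: str) -> str:
    """Build the zero-row query used to resolve column names and types."""
    return f"SELECT * FROM {dbtable} WHERE 1=0"

# Without an alias, SQL Server sees "... ) WHERE 1=0" -- a derived table
# with no name -- and rejects the statement near the WHERE keyword.
bad = schema_probe_query("(SELECT ta.name FROM sys.tables ta)")

# With the alias, the derived table is named and the statement parses.
good = schema_probe_query("(SELECT ta.name FROM sys.tables ta) AS TEMP")

print(bad)
print(good)
```

This is also why the fix is purely syntactic: adding "AS TEMP" (any identifier works) changes nothing about the result set, it only gives SQL Server a name for the derived table.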
Just a hunch. Hope this helps!