
Flowfile absolute path Nifi


Problem description


I'm trying to load flow files into a MySQL database using the bulk load option. Below is the query I build in the UpdateAttribute processor; after updating the parameters, I pass that query to PutSQL to do the bulk load.

                  LOAD DATA INFILE '${absolute.path}${filename}' INTO TABLE ${dest.database}.${db.table.name} FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'
                  

When I run the flow, it fails with a file-not-found exception.

There were a total of 1 FlowFiles that failed, 0 that succeeded, and 0 that were not execute and will be routed to retry; : java.sql.BatchUpdateException: Unable to open file 'data.csv' for 'LOAD DATA INFILE' command. Due to underlying IOException:

** BEGIN NESTED EXCEPTION **

java.io.FileNotFoundException
MESSAGE: data.csv (No such file or directory)
java.io.FileNotFoundException: data.csv (No such file or directory)

Here the MySQL server and NiFi are on different nodes, so I can't use the LOAD DATA LOCAL INFILE query.

I'm not sure why I'm getting a file-not-found exception even though I put the complete absolute path of the flow file in the SQL query.

When I use the query with a hard-coded file name, providing the absolute path of the file on the NiFi node, it works as expected.

                  Working:

LOAD DATA LOCAL INFILE '/path/in/nifi/node/to/file/data.csv' INTO TABLE ${dest.database}.${db.table.name} FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'
                  

The question is how to get the absolute path of the flow file and load that same flow file into MySQL.

                  Flow:

Solution

• Stop the PutSQL processor and let the flow files queue up.
• Once they are queued up, right-click the success relationship
  between UpdateAttribute and PutSQL and select List Queue.
• Select any one flow file, navigate to the Attributes tab, and check whether the attributes absolute.path and flowfilename exist; if
  they do, verify that they have the expected values set. In your case absolute.path should have the value /path/in/nifi/node/to/file and flowfilename should have the value /data.csv.

Question for you: are you setting these attributes yourself using UpdateAttribute? I ask because NiFi doesn't generate an attribute named flowfilename; it generates one named filename.

One more thing: make sure that either the value of absolute.path ends with a / or the value of flowfilename begins with a /. If neither does, the two values will be concatenated directly and the result will be /path/in/nifi/node/to/filedata.csv. You can try the append function that @Mahendra suggested, or you can simply use ${absolute.path}/${flowfilename}; both variants are sketched below.
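As a concrete sketch (treating flowfilename as the custom attribute discussed above; the append variant assumes neither attribute value already supplies the slash):

-- Variant 1: put the slash between the two attributes explicitly
LOAD DATA INFILE '${absolute.path}/${flowfilename}' INTO TABLE ${dest.database}.${db.table.name} FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'

-- Variant 2: build the path with the Expression Language append function
LOAD DATA INFILE '${absolute.path:append('/'):append(${flowfilename})}' INTO TABLE ${dest.database}.${db.table.name} FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'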

                  Update

I just realized that absolute.path is a core attribute like filename, filesize, mime.type, etc. Some processors use all the core attributes, while others use only the few they need. GenerateTableFetch writes absolute.path but doesn't set anything for it; that's why it holds ./, which is the default value.
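To make that concrete: with absolute.path left at its default ./ and filename set to data.csv, the original expression would expand to something like the line below (a sketch, with the table-name attributes left unexpanded). MySQL resolves this relative path on the server host, not on the NiFi node, which matches the file-not-found error above:

-- Expansion with absolute.path = './' (default) and filename = 'data.csv'
LOAD DATA INFILE './data.csv' INTO TABLE ${dest.database}.${db.table.name} FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'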

So my suggestion to make your approach work is: manually set/overwrite the absolute.path attribute using UpdateAttribute (just like you overwrote filename) and set the desired value, which is /path/in/nifi/node/to/file. A minimal configuration sketch follows.
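A minimal sketch of that configuration (the property values are hypothetical; note the trailing slash on absolute.path, per the concatenation caveat above):

-- UpdateAttribute dynamic properties (hypothetical values):
--   absolute.path = /path/in/nifi/node/to/file/
--   filename      = data.csv
-- The original query then expands to:
LOAD DATA INFILE '/path/in/nifi/node/to/file/data.csv' INTO TABLE ${dest.database}.${db.table.name} FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'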

                            主站蜘蛛池模板: 亚洲激精日韩激精欧美精品 | 国产综合视频 | 日韩一区二区三区视频 | 精品久久久久久久人人人人传媒 | 影音先锋成人资源 | 黄色av观看| 99热.com | 天天综合网7799精品 | 亚洲精品日韩在线 | 亚洲精品久久区二区三区蜜桃臀 | 亚洲综合电影 | 亚洲 自拍 另类 欧美 丝袜 | 91久久 | 成人精品鲁一区一区二区 | 久久国产亚洲 | 欧美区在线| 天天影视网天天综合色在线播放 | 日韩亚洲一区二区 | 国产网站在线免费观看 | 国产一区在线免费 | 久久综合伊人 | 亚洲精品福利视频 | www.久 | 欧美精品在线观看 | 超碰人人91| 欧美日韩在线高清 | 狠狠av | 国产精品污www一区二区三区 | 欧美精品在线观看 | 亚洲精品美女在线观看 | 二区亚洲| 色吧久久| 成人国产精品一级毛片视频毛片 | 国产精品久久久乱弄 | 国产精品福利网站 | 国产成人99久久亚洲综合精品 | 一级黄a视频 | 国产在线观看不卡一区二区三区 | av香港经典三级级 在线 | 国产精品久久久久久久久久久久午夜片 | 免费成人高清 |